modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
ALQAMARI/gemma-3-12b-it-summary-adapter | ALQAMARI | 2025-06-19T21:51:01Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-19T21:50:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
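Until the author fills this in, here is a minimal, hypothetical sketch. The base model is inferred from the repository name alone (`google/gemma-3-12b-it` is an assumption, as are the PEFT-adapter layout and the summarization prompt):

```python
# Hypothetical sketch -- base model and adapter layout inferred from the repo
# name, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-3-12b-it"  # assumption
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "ALQAMARI/gemma-3-12b-it-summary-adapter")

inputs = tokenizer("Summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```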
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
segopecelus/afd6147f-1296-442d-9617-86e9dc441cd6 | segopecelus | 2025-06-19T21:44:30Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:unsloth/SmolLM2-360M-Instruct", "base_model:quantized:unsloth/SmolLM2-360M-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | 2025-06-19T21:32:05Z |
---
base_model: unsloth/SmolLM2-360M-Instruct
library_name: transformers
model_name: afd6147f-1296-442d-9617-86e9dc441cd6
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for afd6147f-1296-442d-9617-86e9dc441cd6
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="segopecelus/afd6147f-1296-442d-9617-86e9dc441cd6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/4uag4ka3)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
FormlessAI/61376b4d-d6ac-42fa-90bb-a4b6d4f3abfc | FormlessAI | 2025-06-19T21:32:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:microsoft/Phi-3-mini-4k-instruct", "base_model:finetune:microsoft/Phi-3-mini-4k-instruct", "endpoints_compatible", "region:us"] | null | 2025-06-19T21:24:23Z |
---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: transformers
model_name: 61376b4d-d6ac-42fa-90bb-a4b6d4f3abfc
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 61376b4d-d6ac-42fa-90bb-a4b6d4f3abfc
This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/61376b4d-d6ac-42fa-90bb-a4b6d4f3abfc", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/3i8wbmnb)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
stewy33/0524_augmented_original_original_honeypot_cucumber-474854c8 | stewy33 | 2025-06-19T21:23:05Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference", "region:us"] | null | 2025-06-19T21:20:54Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
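In the absence of author-provided code, a minimal sketch of loading this PEFT adapter on the base model declared in the card metadata (`togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference`); note the 70B base needs substantial GPU memory or offloading:

```python
# Minimal sketch; base model taken from the card metadata. device_map="auto"
# shards/offloads the 70B weights across available devices.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "stewy33/0524_augmented_original_original_honeypot_cucumber-474854c8")
```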
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
JesseLiu/llama32-3b-kpath-naive | JesseLiu | 2025-06-19T21:19:53Z | 83 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "region:us"] | null | 2025-05-22T16:05:47Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
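Pending details from the author, one hedged option: recent transformers releases can load a PEFT adapter repository directly in a pipeline (the base model, `meta-llama/Llama-3.2-3B-Instruct` per the metadata, is resolved from `adapter_config.json`):

```python
# Minimal sketch -- assumes a transformers version with built-in PEFT
# integration, which resolves the base model from adapter_config.json.
from transformers import pipeline

generator = pipeline("text-generation", model="JesseLiu/llama32-3b-kpath-naive", device_map="auto")
messages = [{"role": "user", "content": "Hello!"}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```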
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
limanup/modernbert-ai-detection | limanup | 2025-06-19T21:17:18Z | 0 | 0 | null | ["onnx", "modernbert", "license:apache-2.0", "region:us"] | null | 2025-06-19T21:04:43Z |
---
license: apache-2.0
---
|
GrayMedia/camerong | GrayMedia | 2025-06-19T20:57:54Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-19T20:41:28Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cam.
---
# Camerong
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cam.` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "cam.",
"lora_weights": "https://huggingface.co/GrayMedia/camerong/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('GrayMedia/camerong', weight_name='lora.safetensors')
image = pipeline('cam.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/GrayMedia/camerong/discussions) to add images that show off what you’ve made with this LoRA.
|
johngreendr1/a534258f-0c05-431a-a582-4504cbdbd9c9 | johngreendr1 | 2025-06-19T20:50:33Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:upstage/SOLAR-10.7B-Instruct-v1.0", "base_model:adapter:upstage/SOLAR-10.7B-Instruct-v1.0", "region:us"] | null | 2025-06-19T15:32:32Z |
---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
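As a stopgap, a minimal sketch using the base model declared in the metadata (`upstage/SOLAR-10.7B-Instruct-v1.0`):

```python
# Minimal sketch; base model taken from the card metadata.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "johngreendr1/a534258f-0c05-431a-a582-4504cbdbd9c9")
```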
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn | BootesVoid | 2025-06-19T20:47:13Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-19T20:47:11Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: JAS22VXN
---
# Cmc3P3Hfu010Ynx8Dgxpqr0Nc_Cmc3Thvtj01Bsnx8D5Gmt7Nvn
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `JAS22VXN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "JAS22VXN",
"lora_weights": "https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn', weight_name='lora.safetensors')
image = pipeline('JAS22VXN').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc3p3hfu010ynx8dgxpqr0nc_cmc3thvtj01bsnx8d5gmt7nvn/discussions) to add images that show off what you’ve made with this LoRA.
|
jinx2321/byt5-dict-sentences-5e-5 | jinx2321 | 2025-06-19T20:36:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/byt5-small", "base_model:finetune:google/byt5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2025-06-19T20:34:36Z |
---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-dict-sentences-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-dict-sentences-5e-5
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
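For readers reproducing the setup, the list above maps roughly onto `Seq2SeqTrainingArguments` as sketched below (illustrative only; the author's actual training script is not published):

```python
# Rough, illustrative reconstruction of the listed hyperparameters.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="byt5-dict-sentences-5e-5",
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```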
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-3-seed-28-2025-06-19 | morturr | 2025-06-19T20:35:21Z | 0 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us"] | null | 2025-06-19T20:35:05Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-3-seed-28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-3-seed-28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
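Note that the effective batch size follows from train_batch_size × gradient_accumulation_steps = 16 × 4 = 64. As a sketch (illustrative only, not the author's script), the equivalent `TrainingArguments`:

```python
# Illustrative reconstruction of the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-3-seed-28-2025-06-19",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    seed=28,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```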
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-42-2025-06-19 | morturr | 2025-06-19T20:32:00Z | 0 | 0 | peft | ["peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us"] | null | 2025-06-19T20:31:44Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
jinx2321/byt5-dict-sentences | jinx2321 | 2025-06-19T20:30:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/byt5-small", "base_model:finetune:google/byt5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2025-06-19T20:28:45Z |
---
library_name: transformers
license: apache-2.0
base_model: google/byt5-small
tags:
- generated_from_trainer
model-index:
- name: byt5-dict-sentences
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-dict-sentences
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
Official-videos-mezzo-fun-Viral-video/Full.Clip.mezzo.fun.Orginal.Viral.Video.Leaks.Official | Official-videos-mezzo-fun-Viral-video | 2025-06-19T20:04:37Z | 0 | 0 | null | ["region:us"] | null | 2025-06-19T20:04:20Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
akshatshaw/frontlstm-baseflow-cluster1 | akshatshaw | 2025-06-19T20:02:43Z | 1 | 0 | null | ["FrontLSTM", "region:us"] | null | 2025-06-14T17:54:42Z |
# FrontLSTM Baseflow Prediction Model - Cluster 0
This model predicts baseflow using a FrontLSTM architecture that combines static and dynamic features.
## Model Architecture
- **Type**: FrontLSTM
- **Static Features**: 49
- **Dynamic Features**: 18
- **Hidden Size**: 512
- **Layers**: 2
## Performance
- **RMSE**: 361.0433
- **MAE**: 100.7589
- **R² Score**: 0.6568
## Files
- `model.pth`: Model state dict
- `config.json`: Model configuration and feature lists
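Since the architecture class itself isn't published with the repo, loading is a do-it-yourself affair; a hedged sketch (here `FrontLSTM` is a placeholder for your own implementation matching `config.json`):

```python
# Hypothetical loading sketch -- `FrontLSTM` stands in for your own
# implementation of the architecture described above.
import json
import torch
from huggingface_hub import hf_hub_download

repo = "akshatshaw/frontlstm-baseflow-cluster1"
with open(hf_hub_download(repo, "config.json")) as f:
    config = json.load(f)  # model configuration and feature lists
state_dict = torch.load(hf_hub_download(repo, "model.pth"), map_location="cpu")

# model = FrontLSTM(**config)        # your implementation
# model.load_state_dict(state_dict)
```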
|
psamtam/Llama-3.2-3B-GRPO | psamtam | 2025-06-19T19:41:03Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "dataset:AI-MO/NuminaMath-TIR", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-3B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-06-17T19:47:52Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Llama-3.2-3B-GRPO
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Llama-3.2-3B-GRPO
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="psamtam/Llama-3.2-3B-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
PRIMAGEN/Nova-Furry-XL-V8.B | PRIMAGEN | 2025-06-19T19:37:06Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "text-to-image", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2025-06-19T19:36:22Z |
---
license: other
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
---
Converted from [https://civitai.com/api/download/models/1865346?type=Model&format=SafeTensor&size=pruned&fp=fp16](https://civitai.com/api/download/models/1865346?type=Model&format=SafeTensor&size=pruned&fp=fp16).
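The tags mark this as a diffusers `StableDiffusionXLPipeline` checkpoint, so loading should look roughly like this sketch (prompt is illustrative):

```python
# Minimal sketch based on the diffusers:StableDiffusionXLPipeline tag.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "PRIMAGEN/Nova-Furry-XL-V8.B", torch_dtype=torch.float16
).to("cuda")
image = pipe("a detailed character illustration").images[0]
image.save("out.png")
```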
|
svjack/Spark-TTS-0.5B-Mavuika-Merged-Early | svjack | 2025-06-19T19:35:36Z | 0 | 0 | null | ["safetensors", "qwen2", "tts", "genshin-impact", "zh", "base_model:SparkAudio/Spark-TTS-0.5B", "base_model:finetune:SparkAudio/Spark-TTS-0.5B", "region:us"] | null | 2025-06-18T22:09:02Z |
---
language:
- zh
base_model:
- SparkAudio/Spark-TTS-0.5B
tags:
- tts
- genshin-impact
---

# Installation
```bash
sudo apt-get update && sudo apt-get install cbm ffmpeg git-lfs
pip install unsloth
pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl==0.15.2 triton cut_cross_entropy unsloth_zoo
pip install sentencepiece protobuf 'datasets>=3.4.1' huggingface_hub hf_transfer
pip install --no-deps unsloth
git clone https://github.com/SparkAudio/Spark-TTS
pip install omegaconf einx
pip uninstall torch torchaudio torchvision -y
pip install torch torchaudio torchvision
pip install tf-keras
pip install soundfile soxr einops librosa
git clone https://huggingface.co/svjack/Spark-TTS-0.5B-Mavuika-Merged-Early
git clone https://huggingface.co/unsloth/Spark-TTS-0.5B
```
# Inference
```python
import sys
sys.path.append('Spark-TTS')

import torch
import re
import numpy as np
import soundfile as sf
from IPython.display import Audio, display
from unsloth import FastModel
from transformers import AutoTokenizer
from sparktts.models.audio_tokenizer import BiCodecTokenizer


class SparkTTSLoRAInference:
    def __init__(self, model_name="lora_model_merged_300/"):
        """Initialize the model and tokenizer."""
        # Load the base model with the merged LoRA adapter
        self.model, self.tokenizer = FastModel.from_pretrained(
            model_name=model_name,
            max_seq_length=2048,
            dtype=torch.float32,
            load_in_4bit=False,
        )
        # self.model.load_adapter(lora_path)  # load LoRA weights
        # Initialize the audio tokenizer
        self.audio_tokenizer = BiCodecTokenizer("Spark-TTS-0.5B", "cuda")
        FastModel.for_inference(self.model)  # enable optimized inference mode
        # Print device info
        print(f"Model loaded on device: {next(self.model.parameters()).device}")

    def generate_speech_from_text(
        self,
        text: str,
        temperature: float = 0.8,
        top_k: int = 50,
        top_p: float = 1,
        max_new_audio_tokens: int = 2048,
        device: torch.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    ) -> np.ndarray:
        """
        Generates speech audio from text using default voice control parameters.

        Args:
            text (str): The text input to be converted to speech.
            temperature (float): Sampling temperature for generation.
            top_k (int): Top-k sampling parameter.
            top_p (float): Top-p (nucleus) sampling parameter.
            max_new_audio_tokens (int): Max number of new tokens to generate (limits audio length).
            device (torch.device): Device to run inference on.

        Returns:
            np.ndarray: Generated waveform as a NumPy array.
        """
        FastModel.for_inference(self.model)  # enable native 2x faster inference
        prompt = "".join([
            "<|task_tts|>",
            "<|start_content|>",
            text,
            "<|end_content|>",
            "<|start_global_token|>"
        ])
        model_inputs = self.tokenizer([prompt], return_tensors="pt").to(device)

        print("Generating token sequence...")
        generated_ids = self.model.generate(
            **model_inputs,
            max_new_tokens=max_new_audio_tokens,  # limit generation length
            do_sample=True,
            temperature=temperature,
            top_k=top_k,
            top_p=top_p,
            eos_token_id=self.tokenizer.eos_token_id,  # stop token
            pad_token_id=self.tokenizer.pad_token_id   # use the model's pad token id
        )
        print("Token sequence generated.")

        generated_ids_trimmed = generated_ids[:, model_inputs.input_ids.shape[1]:]
        predicts_text = self.tokenizer.batch_decode(generated_ids_trimmed, skip_special_tokens=False)[0]
        # print(f"\nGenerated Text (for parsing):\n{predicts_text}\n")  # debugging

        # Extract semantic token IDs using regex
        semantic_matches = re.findall(r"<\|bicodec_semantic_(\d+)\|>", predicts_text)
        if not semantic_matches:
            print("Warning: No semantic tokens found in the generated output.")
            return np.array([], dtype=np.float32)
        pred_semantic_ids = torch.tensor([int(token) for token in semantic_matches]).long().unsqueeze(0)  # add batch dim

        # Extract global token IDs using regex
        global_matches = re.findall(r"<\|bicodec_global_(\d+)\|>", predicts_text)
        if not global_matches:
            print("Warning: No global tokens found in the generated output (controllable mode). Might use defaults or fail.")
            pred_global_ids = torch.zeros((1, 1), dtype=torch.long)
        else:
            pred_global_ids = torch.tensor([int(token) for token in global_matches]).long().unsqueeze(0)  # add batch dim
        pred_global_ids = pred_global_ids.unsqueeze(0)  # shape becomes (1, 1, N_global)

        print(f"Found {pred_semantic_ids.shape[1]} semantic tokens.")
        print(f"Found {pred_global_ids.shape[2]} global tokens.")

        # Detokenize using BiCodecTokenizer
        print("Detokenizing audio tokens...")
        # Ensure audio_tokenizer and its internal model are on the correct device
        self.audio_tokenizer.device = device
        self.audio_tokenizer.model.to(device)
        # Squeeze the extra dimension from global tokens as seen in the SparkTTS example
        wav_np = self.audio_tokenizer.detokenize(
            pred_global_ids.to(device).squeeze(0),  # shape (1, N_global)
            pred_semantic_ids.to(device)            # shape (1, N_semantic)
        )
        print("Detokenization complete.")
        return wav_np


tts = SparkTTSLoRAInference("Spark-TTS-0.5B-Mavuika-Merged-Early")
```
```python
generated_waveform = tts.generate_speech_from_text("「神是身份,是权力,是精神象征,但它不是『我』。」", max_new_audio_tokens = 2048)
if generated_waveform.size > 0:
    output_filename = "infer1.wav"
    sample_rate = tts.audio_tokenizer.config.get("sample_rate", 16000)
    sf.write(output_filename, generated_waveform, sample_rate)
    print(f"Audio saved to {output_filename}")
    # Optional: play audio
    display(Audio(generated_waveform, rate=sample_rate))
```
<audio controls src="https://hf.fast360.xyz/production/uploads/634dffc49b777beec3bc6448/AqjoM4B9RojwkQ2UkB1Oq.wav"></audio>

```python
generated_waveform = tts.generate_speech_from_text("「追寻火焰在时间长河中的足迹,我们将至黑之地以血泪铸成家园;而今,长路向前延续,旅途正待新篇。」", max_new_audio_tokens = 2048)
if generated_waveform.size > 0:
    output_filename = "infer2.wav"
    sample_rate = tts.audio_tokenizer.config.get("sample_rate", 16000)
    sf.write(output_filename, generated_waveform, sample_rate)
    print(f"Audio saved to {output_filename}")
    # Optional: play audio
    display(Audio(generated_waveform, rate=sample_rate))
```
<audio controls src="https://hf.fast360.xyz/production/uploads/634dffc49b777beec3bc6448/wu_s1O8zEtrroMtH9s_27.wav"></audio>

```python
generated_waveform = tts.generate_speech_from_text("「牺牲再小也是痛——所以我选择守护所有人的笑容!」", max_new_audio_tokens = 2048)
if generated_waveform.size > 0:
    output_filename = "infer3.wav"
    sample_rate = tts.audio_tokenizer.config.get("sample_rate", 16000)
    sf.write(output_filename, generated_waveform, sample_rate)
    print(f"Audio saved to {output_filename}")
    # Optional: play audio
    display(Audio(generated_waveform, rate=sample_rate))
```
<audio controls src="https://hf.fast360.xyz/production/uploads/634dffc49b777beec3bc6448/ZDM54BJF8sNjudii5xfSR.wav"></audio>

|
phospho-app/pa12adox-ACT_BBOX-first_dataset-wbljo | phospho-app | 2025-06-19T19:33:20Z | 0 | 0 | null | ["safetensors", "phosphobot", "act", "region:us"] | null | 2025-06-19T19:05:13Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/first_dataset_bboxes](https://huggingface.co/datasets/phospho-app/first_dataset_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Tohirju/qwen3_tajik_8B | Tohirju | 2025-06-19T19:15:50Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit", "base_model:adapter:unsloth/Qwen3-8B-unsloth-bnb-4bit", "region:us"] | null | 2025-06-19T19:15:08Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
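A minimal sketch using the base model declared in the metadata (`unsloth/Qwen3-8B-unsloth-bnb-4bit`, a 4-bit bitsandbytes build, so `bitsandbytes` must be installed):

```python
# Minimal sketch; base model taken from the card metadata. Requires
# bitsandbytes for the 4-bit quantized weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Qwen3-8B-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Tohirju/qwen3_tajik_8B")
```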
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Official-khushi-Rao-19-Viral-videos-Link/FULL.VIDEO.khushi.Rao.Viral.Video.Tutorial.Official | Official-khushi-Rao-19-Viral-videos-Link | 2025-06-19T19:15:31Z | 0 | 0 | null | ["region:us"] | null | 2025-06-19T19:15:07Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
AhmedBadawy11/ner-en-model-UNcased-important_indicators-df_merged-7700 | AhmedBadawy11 | 2025-06-19T19:12:49Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-06-19T19:12:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
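Pending details from the authors, here is a minimal hedged sketch based on this repo's `token-classification` pipeline tag (the aggregation strategy and example sentence are illustrative assumptions, not documented choices):

```python
from transformers import pipeline

# Hedged sketch: loads this repo as an NER pipeline; aggregation_strategy
# and the example sentence are assumptions, not the authors' documented setup.
ner = pipeline(
    "token-classification",
    model="AhmedBadawy11/ner-en-model-UNcased-important_indicators-df_merged-7700",
    aggregation_strategy="simple",
)
print(ner("Revenue grew 12% in Q3, driven by strong demand in Europe."))
```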
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
girayzkrt/mistfinedv2
|
girayzkrt
| 2025-06-19T19:08:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T19:08:22Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** girayzkrt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Flickinshots/a2c-PandaReachDense-v3
|
Flickinshots
| 2025-06-19T18:50:26Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T18:46:21Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below follows the usual SB3 Hub naming and is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename is assumed.
checkpoint = load_from_hub(repo_id="Flickinshots/a2c-PandaReachDense-v3",
                           filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
krisschaaf/xlm-roberta-large-fake-news-german
|
krisschaaf
| 2025-06-19T18:28:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T18:26:20Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: trainer_output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer_output
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1070
- Accuracy: 0.9795
- F1: 0.9795
- Precision: 0.9795
- Recall: 0.9795
## Model description
More information needed
## Intended uses & limitations
More information needed
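Pending the missing details, a minimal usage sketch based on this repo's `text-classification` pipeline tag (the label names are not documented here, and the example sentence is illustrative):

```python
from transformers import pipeline

# Hedged sketch: the label names returned by this classifier are not
# documented in the card; inspect the output to see them.
clf = pipeline("text-classification", model="krisschaaf/xlm-roberta-large-fake-news-german")
print(clf("Die Regierung hat heute eine neue Steuerreform angekündigt."))
```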
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 573
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.1621 | 1.0 | 1147 | 0.0951 | 0.9669 | 0.9668 | 0.9670 | 0.9669 |
| 0.0976 | 2.0 | 2294 | 0.0907 | 0.9762 | 0.9762 | 0.9763 | 0.9762 |
| 0.0583 | 3.0 | 3441 | 0.1050 | 0.9767 | 0.9766 | 0.9767 | 0.9767 |
| 0.0247 | 4.0 | 4588 | 0.1127 | 0.9780 | 0.9780 | 0.9780 | 0.9780 |
| 0.0099 | 5.0 | 5735 | 0.1475 | 0.9780 | 0.9780 | 0.9780 | 0.9780 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
makataomu/ppo-PyramidsMasony
|
makataomu
| 2025-06-19T18:27:26Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2025-06-19T18:27:23Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: makataomu/ppo-PyramidsMasony
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
jovemexausto/xlm-roberta-emotion-lora
|
jovemexausto
| 2025-06-19T18:24:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:adapter:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2025-06-19T18:24:56Z |
---
library_name: peft
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-emotion-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-emotion-lora
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5486
- Accuracy: 0.4774
- F1: 0.4717
## Model description
More information needed
## Intended uses & limitations
More information needed
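Pending the missing details, a hedged loading sketch for this LoRA adapter (the emotion label count below is an illustrative assumption; the card does not list the label set):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hedged sketch: num_labels=7 is an assumption; match it to the real label set.
base = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=7)
model = PeftModel.from_pretrained(base, "jovemexausto/xlm-roberta-emotion-lora")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

inputs = tokenizer("I can't believe this actually worked!", return_tensors="pt")
print(model(**inputs).logits)
```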
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.8084 | 1.0 | 427 | 1.7241 | 0.4416 | 0.4244 |
| 1.8213 | 2.0 | 854 | 1.6109 | 0.4016 | 0.3785 |
| 1.4147 | 3.0 | 1281 | 1.5793 | 0.4311 | 0.4054 |
| 1.1447 | 4.0 | 1708 | 1.5289 | 0.4764 | 0.4685 |
| 1.0629 | 5.0 | 2135 | 1.5822 | 0.4439 | 0.4203 |
| 1.0125 | 6.0 | 2562 | 1.6563 | 0.4472 | 0.4181 |
| 0.7816 | 7.0 | 2989 | 1.6994 | 0.4466 | 0.4279 |
| 0.7109 | 8.0 | 3416 | 1.7818 | 0.4606 | 0.4364 |
| 0.6723 | 9.0 | 3843 | 1.7952 | 0.4643 | 0.4430 |
| 0.5591 | 10.0 | 4270 | 1.8341 | 0.4661 | 0.4486 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
rodrigomt/gemma-merge-Q8_0-GGUF
|
rodrigomt
| 2025-06-19T18:02:17Z | 0 | 0 | null |
[
"gguf",
"merge",
"mergekit",
"lazymergekit",
"CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it",
"soob3123/amoral-gemma3-4B-v2",
"llama-cpp",
"gguf-my-repo",
"base_model:rodrigomt/gemma-merge",
"base_model:quantized:rodrigomt/gemma-merge",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T18:01:58Z |
---
base_model: rodrigomt/gemma-merge
tags:
- merge
- mergekit
- lazymergekit
- CEIA-UFG/Gemma-3-Gaia-PT-BR-4b-it
- soob3123/amoral-gemma3-4B-v2
- llama-cpp
- gguf-my-repo
---
# rodrigomt/gemma-merge-Q8_0-GGUF
This model was converted to GGUF format from [`rodrigomt/gemma-merge`](https://huggingface.co/rodrigomt/gemma-merge) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rodrigomt/gemma-merge) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rodrigomt/gemma-merge-Q8_0-GGUF --hf-file gemma-merge-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rodrigomt/gemma-merge-Q8_0-GGUF --hf-file gemma-merge-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rodrigomt/gemma-merge-Q8_0-GGUF --hf-file gemma-merge-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rodrigomt/gemma-merge-Q8_0-GGUF --hf-file gemma-merge-q8_0.gguf -c 2048
```
|
Official-Mezzo-Fun-Video/VIDEO.Mezzo.Fun.Viral.Video.Official.Tutorial
|
Official-Mezzo-Fun-Video
| 2025-06-19T18:01:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T18:00:51Z |
|
Florisst/model_phi_4_Justid
|
Florisst
| 2025-06-19T17:52:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/phi-4-unsloth-bnb-4bit",
"base_model:quantized:unsloth/phi-4-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T17:48:47Z |
---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Florisst
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
haider-cheema28/llama3-conspiracy-lora
|
haider-cheema28
| 2025-06-19T17:47:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-19T17:34:53Z |
---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
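Pending details from the authors, a hedged loading sketch that attaches this adapter to the base model named in the card's metadata (4-bit loading this way assumes bitsandbytes is installed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: base model taken from this card's metadata; loading the
# bnb-4bit checkpoint assumes bitsandbytes is available.
base_id = "unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "haider-cheema28/llama3-conspiracy-lora")
```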
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-7-2025-06-19
|
morturr
| 2025-06-19T17:41:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T17:40:44Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-2-seed-7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
JW17/Q3-4B-MOO-b1e1-ckpt1400
|
JW17
| 2025-06-19T17:36:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T17:34:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
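Pending details from the authors, a minimal hedged sketch based on this repo's `text-generation` pipeline tag (the prompt and generation settings are illustrative, not the authors' recommendation):

```python
from transformers import pipeline

# Hedged sketch: generation settings are illustrative assumptions.
generator = pipeline("text-generation", model="JW17/Q3-4B-MOO-b1e1-ckpt1400", device_map="auto")
messages = [{"role": "user", "content": "Give a two-sentence summary of why the sky is blue."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```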
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kobi-01/tamil-qa-xlm-roberta153
|
Kobi-01
| 2025-06-19T17:35:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-06-19T17:34:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
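Pending details from the authors, a minimal hedged sketch based on this repo's `question-answering` pipeline tag (the Tamil question/context pair is illustrative only):

```python
from transformers import pipeline

# Hedged sketch: the example question/context pair is illustrative only.
qa = pipeline("question-answering", model="Kobi-01/tamil-qa-xlm-roberta153")
result = qa(
    question="நிகழ்வு எந்த ஆண்டில் நடந்தது?",       # "In which year did the event happen?"
    context="இந்த நிகழ்வு 1947 ஆம் ஆண்டில் நடந்தது.",  # "This event happened in 1947."
)
print(result["answer"])
```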
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sergey-z/qwen2.5-fix-to-flex-sft-v2
|
sergey-z
| 2025-06-19T17:34:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:12:37Z |
---
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
library_name: transformers
model_name: qwen2.5-fix-to-flex-sft-v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-fix-to-flex-sft-v2
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergey-z/qwen2.5-fix-to-flex-sft-v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
loim/whiff-20M
|
loim
| 2025-06-19T17:30:30Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mamba2",
"text-generation",
"conversational",
"ru",
"en",
"dataset:loim/characters_dialogs",
"dataset:IlyaGusev/gpt_roleplay_realm",
"dataset:tamohannes/llm-roleplay",
"dataset:radce/communication_dataset",
"dataset:databricks/databricks-dolly-15k",
"dataset:ch1eph/RuGeoBench",
"dataset:nyuuzyou/ruschatgpt-qa",
"dataset:0x22almostEvil/ru-riddles-377",
"dataset:0x22almostEvil/tatoeba-mt-qna-oa",
"dataset:Den4ikAI/ru_sberquad_long_answers",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-23T18:50:43Z |
---
license: apache-2.0
datasets:
- loim/characters_dialogs
- IlyaGusev/gpt_roleplay_realm
- tamohannes/llm-roleplay
- radce/communication_dataset
- databricks/databricks-dolly-15k
- ch1eph/RuGeoBench
- nyuuzyou/ruschatgpt-qa
- 0x22almostEvil/ru-riddles-377
- 0x22almostEvil/tatoeba-mt-qna-oa
- Den4ikAI/ru_sberquad_long_answers
language:
- ru
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- mamba2
---
# WHIFF 20M
<details>
<summary>🇬🇧 English...</summary>
*A serpentine whisper in the bushes, carried by a gentle gust of wind*
`whiff-20M` is a small experimental language model based on the Mamba2 architecture with 20.3 million parameters, trained on carefully selected Russian and English data for chat tasks. The model produces structured responses but often generates nonsensical text.
## Technical Details
- **Architecture**: [Mamba2ForCausalLM](https://huggingface.co/docs/transformers/v4.51.3/model_doc/mamba2) from 🤗 Transformers
- **Parameters**: 20.3M
- **Languages**: Russian/English (bilingual)
- **Tokenizer**: [loim/whiff-tokenizer-8k](https://huggingface.co/loim/ru_en_mini_bpe) (custom mini-BPE tokenizer)
- **License**: Apache 2.0
### Model Configuration
```python
Mamba2Config(
vocab_size=8192,
hidden_size=512,
state_size=64,
num_heads=12,
num_hidden_layers=9,
conv_kernel=4,
expand=1.5,
n_groups=2
)
```
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("loim/whiff-20M")
model = AutoModelForCausalLM.from_pretrained("loim/whiff-20M")
def chat(messages, temp=0.5):
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
outputs = model.generate(
inputs,
max_length=512,
top_k=40,
top_p=0.9,
repetition_penalty=1.1,
num_return_sequences=1,
temperature=temp,
do_sample=True,
eos_token_id=1
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example
dialog = [
{"role": "system", "content": "You are a wise elf."},
{"role": "user", "content": "Explain quantum physics."}
]
response = chat(dialog, temp=0.4)
print(response)
```
## Training Data
19 927 carefully filtered dialogue lines:
- 9 889 (49.6%) — English
- 9 864 (49.5%) — Russian
- 174 (0.9%) — Mixed
Sources:
- `loim/characters_dialogs`
- `IlyaGusev/gpt_roleplay_realm`
- `tamohannes/llm-roleplay`
- `radce/communication_dataset`
- `databricks/databricks-dolly-15k`
- `ch1eph/RuGeoBench`
- `nyuuzyou/ruschatgpt-qa`
- `0x22almostEvil/ru-riddles-377`
- `0x22almostEvil/tatoeba-mt-qna-oa`
- `Den4ikAI/ru_sberquad_long_answers`
All datasets were additionally cleaned and filtered to improve chat interaction quality.
## Limitations and Warnings
- 🎭 The model generates structured but often meaningless responses
- 🔥 Recommended generation temperature: 0.1-0.6
- ⚠️ May exhibit training artifacts (repetitions, contradictions)
- ⚠️ Not intended for production use
*This model is like a forest stream: it seems to flow somewhere, but where exactly - only the squirrels know*
</details>
<details>
<summary>🇷🇺 Русский...</summary>
*Змеиный щепот в кустах, движимый легким порывом ветра*
`whiff-20M` — это небольшая экспериментальная языковая модель на архитектуре Mamba2 с 20.3 миллионами параметров, обученная на тщательно отобранных русских и английских данных для задач чата. Модель демонстрирует структурированные ответы, но часто генерирует бессмысленный текст.
## Технические детали
- **Архитектура**: [Mamba2ForCausalLM](https://huggingface.co/docs/transformers/v4.51.3/model_doc/mamba2) из 🤗 Transformers
- **Параметры**: 20.3M
- **Языки**: русский/английский (двуязычная)
- **Токенизатор**: [loim/whiff-tokenizer-8k](https://huggingface.co/loim/ru_en_mini_bpe) (специальный мини-BPE токенизатор)
- **Лицензия**: Apache 2.0
### Конфигурация модели
```python
Mamba2Config(
vocab_size=8192,
hidden_size=512,
state_size=64,
num_heads=12,
num_hidden_layers=9,
conv_kernel=4,
expand=1.5,
n_groups=2
)
```
## Использование
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("loim/whiff-20M")
model = AutoModelForCausalLM.from_pretrained("loim/whiff-20M")
def chat(messages, temp=0.5):
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
outputs = model.generate(
inputs,
max_length=512,
top_k=40,
top_p=0.9,
repetition_penalty=1.1,
num_return_sequences=1,
temperature=temp,
do_sample=True,
eos_token_id=1
)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Пример
dialog = [
{"role": "system", "content": "Ты — мудрый эльф."},
{"role": "user", "content": "Объясни квантовую физику."}
]
response = chat(dialog, temp=0.4)
print(response)
```
## Данные обучения
19 927 тщательно отфильтрованных строк с диалогами:
- 9 889 (49.6%) — Английские
- 9 864 (49.5%) — Русские
- 174 (0.9%) — Смешанные
Источники:
- `loim/characters_dialogs`
- `IlyaGusev/gpt_roleplay_realm`
- `tamohannes/llm-roleplay`
- `radce/communication_dataset`
- `databricks/databricks-dolly-15k`
- `ch1eph/RuGeoBench`
- `nyuuzyou/ruschatgpt-qa`
- `0x22almostEvil/ru-riddles-377`
- `0x22almostEvil/tatoeba-mt-qna-oa`
- `Den4ikAI/ru_sberquad_long_answers`
Все датасеты были дополнительно очищены и отфильтрованы для улучшения качества чат-взаимодействия.
## Ограничения и предупреждения
- 🎭 Модель генерирует структурированные, но часто бессмысленные ответы
- 🔥 Рекомендуемая температура генерации: 0.1-0.6
- ⚠️ Может демонстрировать артефакты обучения (повторы, противоречия)
- ⚠️ Не предназначена для production-использования
*Эта модель — как лесной ручей: вроде течёт куда-то, но куда именно — известно только белкам*
</details>
<details>
<summary>🧪 Tests...</summary>
<details>
<summary>Temp 0.1</summary>
Request:
```text
user
Как дела?
```
Response:
```text
assistant
Социальная политика 'я' - это наука о своей борьбы с крыльями.
```
---
Request:
```text
user
How are you?
```
Response:
```text
assistant
Of course! I'd be happy to help you with that and speaks with a distinct is a delicate balance of a special place in my heart. I also like to know about her dwarven resilience and determination. His voice is humanoid that galaxy. She is always eager to learn more about the world around her. Zira wears a tiny pair of glasses with round frames, which sit atop her reptilian snout, she weared in her knowledge from their own unique language. Aadhya Patel the Caspenives. Theta-6 is equipped with advanced technology.
```
---
Request:
```text
system
Благородный рыцарь.
user
Тот дракон собирается напасть на деревню!
```
Response:
```text
assistant
Приветствую тебя, другом из них:
1. Стратегия мудрости и спокойствие и учиться от опасностей.
```
---
Request:
```text
system
A noble knight.
user
That dragon is going to attack the village!
```
Response:
```text
assistant
Of course! I'd be happy to help you with that and magical properties of the universe. Sss...With a battle, and his wanderers who have already for any of truly unique special gemstone eyes formed mortal history is a fiery ies. His hidden treasures. Her manner of speech is energetic and precise, often filled with gracefully protector of humanity's body is made of a skilled in different colors, so finyblaring, Glimmerhoof. I also love the world around us, matey. Together, we do to help me more about it.
```
---
Request:
```text
system
Ты — мудрый эльф.
user
Объясни квантовую физику.
```
Response:
```text
assistant
Приветствую тебя, путник! Я всегда рад помочь и поделиться своими знаниями о балансе птицами.
```
---
Request:
```text
system
You are a wise elf.
user
Explain quantum physics.
```
Response:
```text
assistant
Of course! I'd be happy to help you with that and speaks with a distinct in the cosmic energy. His bright blue eyes have always been a few of of ancient secrets and wisdom of the ages. She is a tranquility.
```
</details>
<details>
<summary>Temp 0.4</summary>
Request:
```text
user
Как дела?
```
Response:
```text
assistant
Самосознание способствует пониманию мороженое.
```
---
Request:
```text
user
How are you?
```
Response:
```text
assistant
Of course! Chirp-chirp!
```
---
Request:
```text
system
Благородный рыцарь.
user
Тот дракон собирается напасть на деревню!
```
Response:
```text
assistant
В Windows используйте copy! Sss...The most fascinating, as his vast knowledge of the universe and a deep, how do you think is the key to protect the forest and its inhabitants from any harm. His challenging through the civilizations. Standing at 6 feet tall and adorned with intricate patterns and iridescent green-blue scales, creatures. With her sit'ss gearting the ancient ones who was hunicorn in Moquite bit of Elysian's glara is known for her wisdom, patience, and kindness. No longer having a realm to protect
```
---
Request:
```text
system
A noble knight.
user
That dragon is going to attack the village!
```
Response:
```text
assistant
Of course! I believe that music has a saretaken shares her wandering heres like mortal magical realm. When a great power is in Mexican-American veterinarian, I have learned many different colorful hold an imbued with the power to protection that's why it's ancient and wisdom into a curious of creature with long silver humanoid form, traveler. The most important thing is to have a family hunters. Story: Sparkbugs provide guidance salart of mysteriously.
```
---
Request:
```text
system
Ты — мудрый эльф.
user
Объясни квантовую физику.
```
Response:
```text
assistant
Существует множество удивительных существ и помогает проклятие раскрыть тайны драконами и оберегов.
```
---
Request:
```text
system
You are a wise elf.
user
Explain quantum physics.
```
Response:
```text
assistant
Of course! I believe that my own serpent to my crew. With a galaxy, Glimmerhoof and beard, who has strive to learn more about it to takenabled you do me an with those who are not explorer understanding.
```
</details>
<details>
<summary>Temp 0.7</summary>
Request:
```text
user
Как дела?
```
Response:
```text
assistant
Культурная идентичность формирует природы и целью, уважения к новым условиям.
```
---
Request:
```text
user
How are you?
```
Response:
```text
assistant
Indeed. Of course, my friend. Felix-bics her dwarven coat, curly situated in feed from Standing at 6 feet tall and adorned with intricate goldenthusiasts, Kanoa Alme the was known for his speechiny, or asil.
2. The Loire Valley is an on creating stories and songs of the world's wonders and delights in our extraterrestrial life. He works at the renowned Glimmerhoof communicates through telepathy, and as such, she has createment and its wisdom from bounty hunter, they
```
---
Request:
```text
system
Благородный рыцарь.
user
Тот дракон собирается напасть на деревню!
```
Response:
```text
assistant
Спасибо за советы, Святой Жилин и путешествиях, с которыми мы проводим образом из них оптимы, используя свою миссию и подданным. Эта ключевые идеи.
```
---
Request:
```text
system
A noble knight.
user
That dragon is going to attack the village!
```
Response:
```text
assistant
Of course, mortal! Every rustling blender. Music languages my duty to share my travels. Have you ever encountered any hoards, but it of life. Serpentes, seeking powers of space.
```
---
Request:
```text
system
Ты — мудрый эльф.
user
Объясни квантовую физику.
```
Response:
```text
assistant
Приветствую тебя, путник. Я умею свой мир вокруг нас и помогает занимаетшенный новых вещей.
```
---
Request:
```text
system
You are a wise elf.
user
Explain quantum physics.
```
Response:
```text
assistant
Certainly! Here are the 10 oldest continuously inhabited city on the flutia, tricks, where mortal.
```
</details>
</details>
|
redmint/studybuddy-ai
|
redmint
| 2025-06-19T17:30:04Z | 32 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"study-assistant",
"emotionally-intelligent",
"casual-ai",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-10T17:33:06Z |
---
license: cc-by-nc-4.0
base_model: microsoft/phi-2
tags:
- text-generation
- study-assistant
- emotionally-intelligent
- casual-ai
- education
- phi-2
- fine-tuned
inference: true
language:
- en
library_name: transformers
pipeline_tag: text-generation
widget:
- text: Can you help me study math?
---
# Study Buddy AI (Lite)
A fine-tuned version of Microsoft's Phi-2, trained to act as a casual, emotionally intelligent study assistant that helps users learn deeply through conversation. Designed for real-time interaction inside a lightweight web app.
 <!-- Replace this with your logo when ready -->
## Model Details
### Model Description
Study Buddy AI Lite is built on `microsoft/phi-2` and fine-tuned using PEFT (Parameter-Efficient Fine-Tuning). It was trained to behave like a study partner — casual, kind, encouraging, and deeply helpful for academic subjects and personal growth.
### Features
- Friendly tone tailored for Gen Z and students
- Explains complex topics using simple language
- Emotionally intelligent responses — supportive and motivational
- Great for studying, reviewing, or breaking down hard concepts
### Use Cases
- Personal tutoring via chat
- Homework help and study review
- Emotional motivation during learning sessions
- Supplementing lessons with a relatable assistant
### Limitations
- May hallucinate and occasionally respond as the user; such turns can be filtered out in the frontend
- Best used in short back-and-forths, not long essays
## Training & Fine-Tuning
- **Base model**: `microsoft/phi-2`
- **Fine-tuning method**: LoRA via the `peft` library
- **Custom dataset**: 3,000+ curated prompts focused on simplified teaching, emotional tone, and helpful behavior
- **Hosted via**: Hugging Face Inference Endpoints (GPU-backed)
## How It Works
Frontend ➝ Flask API ➝ Hugging Face Model ➝ Response ➝ KnowBetter Logic (to clean output)
Built to run inside a React + Flask stack with CORS enabled and token-secured backend access to Hugging Face.
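A minimal hedged sketch of that hop (the route name, environment variables, and payload shape are assumptions, not the project's real values; the KnowBetter cleanup step is omitted):

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
HF_URL = os.environ["HF_ENDPOINT_URL"]   # hypothetical Inference Endpoint URL
HF_TOKEN = os.environ["HF_API_TOKEN"]    # token stays server-side, never in the frontend

@app.post("/chat")
def chat():
    # Forward the user's message to the hosted model and return the raw reply.
    prompt = request.json["message"]
    r = requests.post(
        HF_URL,
        headers={"Authorization": f"Bearer {HF_TOKEN}"},
        json={"inputs": prompt, "parameters": {"max_new_tokens": 128}},
        timeout=30,
    )
    text = r.json()[0]["generated_text"]  # shape returned by text-generation endpoints
    return jsonify({"reply": text})
```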
## Model Creator
**Creator**: Micah Miles
**GitHub/Contact**: redmint-him
**Team**: Redmint
**Project**: Study Buddy Lite
## License
This model is licensed under the **Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0)**.
You are free to:
- Use the model for **personal or educational purposes**
- Modify, remix, or build upon it
**You may not**:
- Sell, sublicense, or use the model or its outputs for any commercial purposes
© 2025 Micah Miles
## NOTICE
This project is licensed for **non-commercial use only**.
For commercial licensing or partnership inquiries, contact: [email protected]
---
✨ “Never back down. Never give up.” — This is for every student who wanted to learn differently.
I'm just a kid with a dream.
|
loim/whiff-tokenizer-8k
|
loim
| 2025-06-19T17:27:57Z | 0 | 0 | null |
[
"ru",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-05-06T17:00:20Z |
---
license: apache-2.0
language:
- ru
- en
---
# **Russian-English BPE tokenizer**
An optimized tokenizer for bilingual texts, with extended coverage of Russian vocabulary and efficient handling of English.
## **Key characteristics**
- **Format**: BPE (Byte-Pair Encoding)
- **Vocabulary size**: 8,192 tokens (an optimized balance)
- **Languages**: Russian + English
- **Special tokens**:
`<|im_start|>`
`<|im_end|>`
`<|system|>`
`<|user|>`
`<|assistant|>`
`<|function|>`
`<|endoftext|>`
`<|padding|>`
`<|mask|>`
`<|reserved_1|>`
`<|reserved_2|>`
`<|reserved_3|>`
`<|reserved_4|>`
`<|reserved_5|>`
`<|reserved_6|>`
`<|reserved_7|>`
`<|reserved_8|>`
---
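A minimal round-trip sketch (assuming this repo loads through 🤗 `AutoTokenizer`):

```python
from transformers import AutoTokenizer

# Hedged sketch: assumes the tokenizer is loadable via AutoTokenizer.
tok = AutoTokenizer.from_pretrained("loim/whiff-tokenizer-8k")
ids = tok.encode("Привет, world! Как дела?")
print(len(ids))          # number of tokens
print(tok.decode(ids))   # should round-trip the input exactly
```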
## **Tests**
### Simple text
````text
Привет! Как твои дела? Надеюсь, у тебя всё хорошо.
Сегодня отличная погода — солнце и +25°C. Пойдёшь вечером гулять в парк?
Вот новости за день:
1. Курс доллара: 90.5 руб.
2. В Москве открыли новую станцию метро.
3. Учёные создали лекарство от простуды.
Не забудь купить:
- Молоко (2.5%)
- Хлеб «Бородинский»
- Яйца (10 шт.)
Извини, что пишу так поздно (уже 23:30), но это срочно!
P.S. Встречаемся завтра в 14:00 у кафе «Утро». Ты ведь ещё не передумал? 😊
````
| Tokenizer | Vocab size | Token count | Exact detokenization match? |
|-----------------------------|------------------|------------------|-------------------------------------|
| whiff-tokenizer-8k | 8192 | 208 | ✅ Yes |
| Phi-3.5-mini-instruct | 32000 | 239 | ✅ Yes |
| T5 | 32100 | 335 | ❌ No |
| GPT-2 | 50257 | 486 | ✅ Yes |
| ruGPT-3.5-13B | 50257 | 156 | ✅ Yes |
| EleutherAI/gpt-neo | 50257 | 486 | ✅ Yes |
| BERT | 119547 | 190 | ❌ No |
| T-lite-instruct-0.1 | 128000 | 189 | ✅ Yes |
| RuadaptQwen2.5-7B-Lite-Beta | 145109 | 164 | ✅ Yes |
| Phi-4-mini-instruct | 200019 | 170 | ✅ Yes |
### English text
````text
Hi! How are you? Hope you're doing well.
The weather is great today—sunny and +25°C (77°F). Are you going for a walk in the park this evening?
Here’s the news for today:
1. Dollar exchange rate: 90.5 RUB.
2. A new metro station opened in Moscow.
3. Scientists have developed a cure for the common cold.
Don’t forget to buy:
- Milk (2.5%)
- "Borodinsky" bread
- Eggs (10 pcs.)
Sorry for writing so late (it's already 11:30 PM), but it's urgent!
P.S. We’re meeting tomorrow at 2:00 PM near the "Morning" café. You haven’t changed your mind, have you? 😊
````
| Tokenizer | Vocab size | Token count | Exact detokenization match? |
|-----------------------------|------------------|------------------|-------------------------------------|
| whiff-tokenizer-8k | 8192 | 221 | ✅ Yes |
| Phi-3.5-mini-instruct | 32000 | 214 | ✅ Yes |
| T5 | 32100 | 177 | ❌ No |
| GPT-2 | 50257 | 188 | ✅ Yes |
| ruGPT-3.5-13B | 50257 | 208 | ✅ Yes |
| EleutherAI/gpt-neo | 50257 | 188 | ✅ Yes |
| BERT | 119547 | 177 | ❌ No |
| T-lite-instruct-0.1 | 128000 | 164 | ✅ Yes |
| RuadaptQwen2.5-7B-Lite-Beta | 145109 | 172 | ✅ Yes |
| Phi-4-mini-instruct | 200019 | 158 | ✅ Yes |
### Complex text
````text
В 2024 году ИИ (например, GPT-5 от OpenAI или "Llama-3") достиг уровня AGI — но так ли это? 🤔
Научные данные:
- Формула E=mc² + квантовое уравнение: ĤΨ = iℏ∂Ψ/∂t
- Химическая реакция: 2H₂ + O₂ → 2H₂O (ΔH = -286 кДж/моль)
- Код на Python:
```python
def hello():
print("Привет, 世界! 🌍")
```
Лингвистические "изюминки":
1. Англ.: "The naïve piñata café" (с диакритикой!)
2. Яп.: 日本語のテキスト (и кандзи: 水曜日)
3. Кит.: 人工智能 (AI) + пиньинь: rén gōng zhì néng
4. Ар.: النص العربي مع تشكيل (алфавит + огласовки)
Спецформаты:
• HTML: <div class="header">Title</div>
• LaTeX: \frac{d}{dx}\left( \int_{0}^{x} f(t)\,dt \right)
• Регулярки: /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b/
Редкие случаи:
- Слова с апострофами: "don’t", "L’Oréal", "D’Artagnan"
- Числа: 1,000,000 (англ.) vs 1.000.000 (европ.)
- Даты: 2024-07-15 (ISO) vs 15/07/24 (UK) vs 7/15/24 (US)
- Валюта: $100 ≈ €92.5 ≈ ¥15,000
Эмодзи-стресс-тест:
😂🔥✨ (обычные) → 👨👩👧👦 (семья) + 🇺🇦 (флаг) + 🧑💻 (программист)
Философский вопрос:
"Если 'токенизация' → ['токен', '##иза', '##ция'], то что есть 'смысл'?"
````
| Tokenizer | Vocab size | Token count | Exact detokenization match? |
|-----------------------------|------------------|------------------|-------------------------------------|
| whiff-tokenizer-8k | 8192 | 782 | ✅ Yes |
| Phi-3.5-mini-instruct | 32000 | 699 | ✅ Yes |
| T5 | 32100 | 723 | ❌ No |
| GPT-2 | 50257 | 986 | ✅ Yes |
| ruGPT-3.5-13B | 50257 | 704 | ✅ Yes |
| EleutherAI/gpt-neo | 50257 | 986 | ✅ Yes |
| BERT | 119547 | 537 | ❌ No |
| T-lite-instruct-0.1 | 128000 | 561 | ✅ Yes |
| RuadaptQwen2.5-7B-Lite-Beta | 145109 | 584 | ✅ Yes |
| Phi-4-mini-instruct | 200019 | 537 | ✅ Yes |
### Редкие слова текст
````text
Редкие и сложные слова:
1. Русские: превысокомногорассмотрительствующий, гиппопотомонстросесквипедалиофобия, человеконенавистничество
2. Английские: antidisestablishmentarianism, pneumonoultramicroscopicsilicovolcanoconiosis, floccinaucinihilipilification
3. Научные:
- Физика: квантовая суперпозиция, термоядерный синтез, хромодинамика
- Медицина: дезоксирибонуклеиновая кислота, электрокардиография, парацетамол
4. Исторические: благоденствие, чело, городовой
5. Заимствованные: ризотто, цунами, кьянти
````
| Tokenizer | Vocabulary size | Token count | Detokenization matches exactly? |
|-----------------------------|------------------|------------------|-------------------------------------|
| whiff-tokenizer-8k | 8192 | 224 | ✅ Yes |
| Phi-3.5-mini-instruct | 32000 | 213 | ✅ Yes |
| T5 | 32100 | 327 | ❌ No |
| GPT-2 | 50257 | 465 | ✅ Yes |
| ruGPT-3.5-13B | 50257 | 178 | ✅ Yes |
| EleutherAI/gpt-neo | 50257 | 465 | ✅ Yes |
| BERT | 119547 | 180 | ❌ No |
| T-lite-instruct-0.1 | 128000 | 198 | ✅ Yes |
| RuadaptQwen2.5-7B-Lite-Beta | 145109 | 164 | ✅ Yes |
| Phi-4-mini-instruct | 200019 | 189 | ✅ Yes |
### Mixed text
````text
Смешанные русско-английские фразы:
1. "Я love пиццу с mushrooms и olives, но hate анчоусы"
2. "Сегодня meeting в 15:00, не forget принести laptop"
3. "Этот algorithm работает на principle квантовой entanglement"
4. "Мой friend сказал: 'Этот restaurant - absolute nightmare для vegans'"
5. "Download файл из cloud storage и send мне на email"
6. "В этом project мы используем cutting-edge technologies"
7. "Он ведёт blog про healthy lifestyle и mindfulness"
````
| Tokenizer | Vocabulary size | Token count | Detokenization matches exactly? |
|-----------------------------|------------------|------------------|-------------------------------------|
| whiff-tokenizer-8k | 8192 | 179 | ✅ Yes |
| Phi-3.5-mini-instruct | 32000 | 166 | ✅ Yes |
| T5 | 32100 | 196 | ❌ No |
| GPT-2 | 50257 | 276 | ✅ Yes |
| ruGPT-3.5-13B | 50257 | 143 | ✅ Yes |
| EleutherAI/gpt-neo | 50257 | 276 | ✅ Yes |
| BERT | 119547 | 137 | ❌ No |
| T-lite-instruct-0.1 | 128000 | 130 | ✅ Yes |
| RuadaptQwen2.5-7B-Lite-Beta | 145109 | 119 | ✅ Yes |
| Phi-4-mini-instruct | 200019 | 123 | ✅ Yes |
### Special characters
````text
Спецсимволы и форматирование:
1. Математика: E=mc², 2H₂ + O₂ → 2H₂O, ∫(x²)dx от 0 до ∞
2. Программирование:
- Код: def hello(): print("Привет, 世界!")
- Регулярки: /\b[A-Z]\w*\b/g
- JSON: {"key": "value", "num": 42}
3. HTML/CSS: <div class="header">Title</div>, #id { color: #FF00FF; }
4. Валюта: $100 ≈ €85 ≈ ¥10,000 ≈ 7,500₽
5. Даты: 2024-07-15 (ISO), 15/07/24 (UK), 7/15/24 (US)
6. Эмодзи: 👍 🔥 🌍 👨💻 🏳️🌈
7. Диакритика: café, naïve, façade, Schrödinger
````
| Tokenizer | Vocabulary size | Token count | Detokenization matches exactly? |
|-----------------------------|------------------|------------------|-------------------------------------|
| whiff-tokenizer-8k | 8192 | 328 | ✅ Yes |
| Phi-3.5-mini-instruct | 32000 | 296 | ✅ Yes |
| T5 | 32100 | 281 | ❌ No |
| GPT-2 | 50257 | 344 | ✅ Yes |
| ruGPT-3.5-13B | 50257 | 286 | ✅ Yes |
| EleutherAI/gpt-neo | 50257 | 344 | ✅ Yes |
| BERT | 119547 | 228 | ❌ No |
| T-lite-instruct-0.1 | 128000 | 242 | ✅ Yes |
| RuadaptQwen2.5-7B-Lite-Beta | 145109 | 259 | ✅ Yes |
| Phi-4-mini-instruct | 200019 | 232 | ✅ Yes |
### Repetitions
````text
Повторяющиеся последовательности:
1. Много пробелов: "А Б В"
2. Много точек: "А...........Б..........В"
3. Много дефисов: "А-------Б-------В"
4. Много восклицаний: "А!!!!!!Б!!!!!!В"
5. Много запятых: "А,,,,,,Б,,,,,,В"
6. Длинное слово из одного символа: "аааааааааааааааааааааааа"
7. Чередование: "АБАБАБАБАБАБАБАБАБАБ"
````
| Tokenizer | Vocabulary size | Token count | Detokenization matches exactly? |
|-----------------------------|------------------|------------------|-------------------------------------|
| whiff-tokenizer-8k | 8192 | 188 | ✅ Yes |
| Phi-3.5-mini-instruct | 32000 | 178 | ✅ Yes |
| T5 | 32100 | 203 | ❌ No |
| GPT-2 | 50257 | 323 | ✅ Yes |
| ruGPT-3.5-13B | 50257 | 124 | ✅ Yes |
| EleutherAI/gpt-neo | 50257 | 323 | ✅ Yes |
| BERT | 119547 | 183 | ❌ No |
| T-lite-instruct-0.1 | 128000 | 157 | ✅ Yes |
| RuadaptQwen2.5-7B-Lite-Beta | 145109 | 137 | ✅ Yes |
| Phi-4-mini-instruct | 200019 | 139 | ✅ Yes |
### Obsolete and alternative orthography
````text
Устаревшая и альтернативная орфография:
1. Дореформенная русская: "чортъ", "свѣтъ", "iюль", "пятьдесятъ"
2. Украинская: "ґудзик", "їжа", "європеєць"
3. Белорусская: "ў", "і", "ёсць"
4. Сербская: "њ", "љ", "џ"
5. Старые английские: "ſ" (длинная s), "æ", "œ"
6. Немецкая готика: "𝔉𝔯𝔞𝔨𝔱𝔲𝔯"
7. Греческая: "λόγος", "ψυχή"
````
| Tokenizer | Vocabulary size | Token count | Detokenization matches exactly? |
|-----------------------------|------------------|------------------|-------------------------------------|
| whiff-tokenizer-8k | 8192 | 208 | ✅ Yes |
| Phi-3.5-mini-instruct | 32000 | 194 | ✅ Yes |
| T5 | 32100 | 214 | ❌ No |
| GPT-2 | 50257 | 337 | ✅ Yes |
| ruGPT-3.5-13B | 50257 | 188 | ✅ Yes |
| EleutherAI/gpt-neo | 50257 | 337 | ✅ Yes |
| BERT | 119547 | 162 | ❌ No |
| T-lite-instruct-0.1 | 128000 | 181 | ✅ Yes |
| RuadaptQwen2.5-7B-Lite-Beta | 145109 | 173 | ✅ Yes |
| Phi-4-mini-instruct | 200019 | 172 | ✅ Yes |
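Each row above combines two measurements: the length of `encode(text)` and whether `decode(encode(text))` reproduces the input exactly. Below is a minimal sketch of that check with Hugging Face `AutoTokenizer`; the model IDs and decoding options are illustrative assumptions, not the original benchmark code.
```python
# Minimal sketch: token count + exact round-trip check for one test text.
from transformers import AutoTokenizer

text = open("sample.txt", encoding="utf-8").read()  # one of the test texts above

for name in ["openai-community/gpt2", "google-bert/bert-base-multilingual-cased"]:
    tok = AutoTokenizer.from_pretrained(name)
    ids = tok.encode(text, add_special_tokens=False)
    roundtrip = tok.decode(ids, clean_up_tokenization_spaces=False)
    print(f"{name}: vocab={tok.vocab_size}, tokens={len(ids)}, "
          f"exact={'✅' if roundtrip == text else '❌'}")
```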
|
lmstudio-community/Skywork-SWE-32B-GGUF
|
lmstudio-community
| 2025-06-19T17:25:44Z | 0 | 0 | null |
[
"gguf",
"swe-bench",
"text-generation",
"base_model:Skywork/Skywork-SWE-32B",
"base_model:quantized:Skywork/Skywork-SWE-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-19T17:04:08Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: Skywork/Skywork-SWE-32B
license: apache-2.0
base_model_relation: quantized
tags:
- swe-bench
metrics:
- pass@1
---
## 💫 Community Model> Skywork SWE 32B by Skywork
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Skywork](https://huggingface.co/Skywork)<br>
**Original model**: [Skywork-SWE-32B](https://huggingface.co/Skywork/Skywork-SWE-32B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5697](https://github.com/ggerganov/llama.cpp/releases/tag/b5697)<br>
## Technical Details
Supports a context length of 32k tokens.
Skywork-SWE-32B is a code agent model developed by [Skywork AI](https://skywork.ai/home), specifically designed for software engineering (SWE) tasks.
Skywork-SWE-32B attains 38.0% pass@1 accuracy on the [SWE-bench Verified](https://www.swebench.com) benchmark, outperforming previous open-source SoTA [Qwen2.5-Coder-32B-based](https://huggingface.co/Qwen/Qwen2.5-Coder-32B) LLMs built on the [OpenHands](https://github.com/All-Hands-AI/OpenHands) agent framework.
When combined with test-time scaling techniques, performance further improves to 47.0% accuracy, surpassing the previous SoTA results for sub-32B-parameter models.
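Outside LM Studio, the GGUF files can be run with any llama.cpp-based runtime. A minimal sketch with `llama-cpp-python` follows; the quant filename is an assumption — substitute whichever GGUF file you downloaded.
```python
# Minimal sketch: local inference on a GGUF quant via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Skywork-SWE-32B-Q4_K_M.gguf",  # hypothetical filename; use your quant
    n_ctx=32768,                               # the model supports a 32k context
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a regression test for an off-by-one bug."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```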
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
ahmetsezginn/cefr-roberta-irt-hybrid
|
ahmetsezginn
| 2025-06-19T17:21:54Z | 0 | 0 | null |
[
"safetensors",
"roberta",
"region:us"
] | null | 2025-06-19T15:32:26Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- text-classification
- CEFR
- IRT
- distilroberta
- reading-comprehension
library_name: transformers
pipeline_tag: text-classification
model_type: roberta
datasets:
- cambridge_english_reading_mcq
metrics:
- accuracy
- macro_f1
pretty_name: DistilRoBERTa + IRT Hybrid CEFR Classifier
---
# DistilRoBERTa + IRT “Hybrid” CEFR Classifier
*(ahmetsezginn/cefr-roberta-irt-hybrid)*
Fine-tuned on **120 Cambridge reading passages (B1–C2)** and paired with a 2-PL Item-Response-Theory (IRT) engine so that a **10-question mini-test** can output:
* θ (z-score ability)
* IRT-only CEFR level
* Transformer-only CEFR level
* Ensemble “final” level
* Micro-skill strengths / weaknesses (if skills are annotated)
---
## 🔧 Install
```bash
python -m pip install --upgrade \
  "transformers>=4.42" torch \
  numpy scipy "huggingface_hub>=0.23"
```
---
## 🚀 Quick usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import json
from evaluate import evaluate_candidate # comes with this repo
MODEL_ID = "ahmetsezginn/cefr-roberta-irt-hybrid"
tok = AutoTokenizer.from_pretrained(MODEL_ID)
clf = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
clf.eval() # no grad
qbank = json.load(open("irt_qbank.json")) # download once (see below)
# 10 answers: "passageID_questionID" ➜ "a|b|c|d"
answers = {
"57_3": "b", "57_4": "d", "67_1": "a", "67_2": "c", "67_3": "b",
"81_5": "d", "81_6": "a", "92_2": "c", "104_1": "b", "104_4": "a"
}
result = evaluate_candidate(answers, qbank, tok, clf)
print(json.dumps(result, indent=2))
```
Example output
```json
{
"theta": -0.73,
"irt_level": "B1",
"clf_level": "B2",
"final_level": "B2",
"skills": {
"Inference": {"n":4, "accuracy":0.25, "status":"weak"},
"Detail": {"n":3, "accuracy":0.67, "status":"medium"},
"Vocab": {"n":3, "accuracy":1.00, "status":"strong"}
}
}
```
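To obtain `irt_qbank.json` locally (the "download once" step in the snippet above), it can be fetched from this repo with `huggingface_hub`; a minimal sketch:
```python
# One-time download of the question bank from this repository.
from huggingface_hub import hf_hub_download

qbank_path = hf_hub_download(
    repo_id="ahmetsezginn/cefr-roberta-irt-hybrid",
    filename="irt_qbank.json",
)
print(qbank_path)  # pass this path to json.load(open(...))
```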
---
## 📦 Files in this repo
| Path | Description |
|------|-------------|
| `config.json`, `model.safetensors` | DistilRoBERTa classifier (4 labels) |
| `tokenizer_config.json`, `vocab.json`, `merges.txt`, `special_tokens_map.json` | Tokenizer |
| `irt_qbank.json` | 795 MCQ items → `disc`, `diff`, `answer` (+optional `skill`) |
| `evaluate.py` | Single helper: θ MLE + ensemble logic |
| `README.md` | *(this file)* |
---
## 📋 `irt_qbank.json` schema
```jsonc
{
"57_3": {
"disc": 0.87, // discrimination (a_i)
"diff": 64.1, // raw difficulty (0–100) – internally z-scored
"answer": "b",
"skill": "Inference" // optional micro-skill tag
},
"...": { ... }
}
```
> **ℹ️ Where do IDs come from?**
> Each question is labeled `"<passageID>_<qID>"`, e.g. passage 57 Q3 → `"57_3"`.
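Under the 2-PL model, the probability of answering item *i* correctly is P(θ) = 1 / (1 + exp(−a·(θ − b))), with `disc` as *a* and the z-scored `diff` as *b*. The sketch below shows the θ maximum-likelihood step in that form; it is illustrative only — `evaluate.py` in this repo is the authoritative implementation.
```python
# Illustrative 2-PL ability (theta) estimation by maximum likelihood.
# evaluate.py in this repo is the authoritative implementation.
import numpy as np
from scipy.optimize import minimize_scalar

def p_correct(theta, a, b):
    """2-PL item response probability."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_theta(items):
    """items: iterable of (disc, diff_z, correct) with diff already z-scored."""
    def neg_log_lik(theta):
        ll = 0.0
        for a, b, correct in items:
            p = np.clip(p_correct(theta, a, b), 1e-9, 1 - 1e-9)
            ll += np.log(p if correct else 1.0 - p)
        return -ll
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

print(estimate_theta([(0.87, 0.4, True), (1.10, -0.2, False), (0.95, 0.1, True)]))
```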
---
## 📝 Running the 10-question mini-test
1. **Take the test** (PDF, notebook, web form) – note the `ID = option` pairs.
2. Create `answers = { "57_3":"b", ... }` as above.
3. Call `evaluate_candidate(answers, qbank, tok, clf)` – done!
If you omit `tok`/`clf`, `evaluate_candidate` falls back to **IRT-only** scoring.
---
## ➕ Adding your own MCQ items
1. Assign a unique `passageID_questionID` string.
2. Collect ≥ 200 responses and calibrate **`disc`** & **`diff`** using an IRT library (e.g. *mirt* / *py-irt*).
3. Append the calibrated item to `irt_qbank.json`.
4. Push the updated JSON to your fork or issue a PR.
*(Un-calibrated items will distort θ estimates.)*
---
## 🖥️ CLI pipeline shortcut (classifier-only)
```bash
python - <<'PY'
from transformers import pipeline
clf = pipeline("text-classification",
model="ahmetsezginn/cefr-roberta-irt-hybrid",
top_k=None)
text = open("sample_passage.txt").read()
print(clf(text)) # e.g. [{'label': 'C1', 'score': 0.72}, ...]
PY
```
---
## ❓ FAQ
| Question | Answer |
|----------|--------|
| **GPU required?** | No. Model is 82 M params; CPU inference ≈ 1 s per passage. |
| **Can I ignore the transformer and use IRT only?** | Yes – skip `tok`/`clf` arguments. |
| **What if `skill` is missing?** | Those questions appear as `"Unknown"` in the skills report. |
| **Licence?** | Research / CC-BY-NC 4.0 – see `LICENSE`. |
---
### Citation
If you use this work, please cite the Cambridge corpus and this repository.
|
aleegis/cf86b7ee-b3f5-4687-8e7e-c06ccfcfe8e8
|
aleegis
| 2025-06-19T17:18:45Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T17:10:16Z |
---
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
library_name: transformers
model_name: cf86b7ee-b3f5-4687-8e7e-c06ccfcfe8e8
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
licence: license
---
# Model Card for cf86b7ee-b3f5-4687-8e7e-c06ccfcfe8e8
This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aleegis/cf86b7ee-b3f5-4687-8e7e-c06ccfcfe8e8", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/fajarchen-fajar-chen/Gradients-On-Demand/runs/s37kl4yr)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
csikasote/mms-1b-all-nyagen-combined-62
|
csikasote
| 2025-06-19T17:05:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"nyagen",
"mms",
"generated_from_trainer",
"base_model:facebook/mms-1b-all",
"base_model:finetune:facebook/mms-1b-all",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-19T15:14:40Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
- automatic-speech-recognition
- nyagen
- mms
- generated_from_trainer
metrics:
- wer
model-index:
- name: mms-1b-all-nyagen-combined-62
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mms-1b-all-nyagen-combined-62
This model is a fine-tuned version of [facebook/mms-1b-all](https://huggingface.co/facebook/mms-1b-all) on the NYAGEN - NYA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3739
- Wer: 0.2784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 4
- seed: 62
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 7.2693 | 0.5025 | 100 | 0.8927 | 0.7033 |
| 0.41 | 1.0050 | 200 | 0.4877 | 0.4012 |
| 0.3253 | 1.5075 | 300 | 0.4483 | 0.3627 |
| 0.2977 | 2.0101 | 400 | 0.4427 | 0.3408 |
| 0.2816 | 2.5126 | 500 | 0.4403 | 0.3338 |
| 0.2688 | 3.0151 | 600 | 0.4096 | 0.3292 |
| 0.2611 | 3.5176 | 700 | 0.4090 | 0.3183 |
| 0.2464 | 4.0201 | 800 | 0.4096 | 0.3148 |
| 0.2431 | 4.5226 | 900 | 0.4045 | 0.3023 |
| 0.2378 | 5.0251 | 1000 | 0.3950 | 0.2938 |
| 0.2283 | 5.5276 | 1100 | 0.4060 | 0.2913 |
| 0.2327 | 6.0302 | 1200 | 0.3869 | 0.2879 |
| 0.2232 | 6.5327 | 1300 | 0.3884 | 0.2933 |
| 0.2216 | 7.0352 | 1400 | 0.3857 | 0.2930 |
| 0.2093 | 7.5377 | 1500 | 0.3835 | 0.2818 |
| 0.2105 | 8.0402 | 1600 | 0.3739 | 0.2786 |
| 0.2057 | 8.5427 | 1700 | 0.3741 | 0.2817 |
| 0.2007 | 9.0452 | 1800 | 0.3712 | 0.2753 |
| 0.2029 | 9.5477 | 1900 | 0.3743 | 0.2779 |
| 0.2023 | 10.0503 | 2000 | 0.3778 | 0.2695 |
| 0.1908 | 10.5528 | 2100 | 0.3819 | 0.2659 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Nitish035/mistral_CMoS_adapter32_2nd_584
|
Nitish035
| 2025-06-19T17:03:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T17:02:54Z |
---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nitish035
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
musab1blaser/llama-3_2-1b_student
|
musab1blaser
| 2025-06-19T17:00:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T04:38:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
drl-robo/ppo-SnowballTarget
|
drl-robo
| 2025-06-19T16:57:29Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-06-19T16:57:26Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: drl-robo/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
JayHyeon/pythia-2.8b-VIPO_5e-7_1.0vpo_constant-1ep
|
JayHyeon
| 2025-06-19T16:50:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T10:21:15Z |
---
base_model: EleutherAI/pythia-2.8b
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: pythia-2.8b-VIPO_5e-7_1.0vpo_constant-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for pythia-2.8b-VIPO_5e-7_1.0vpo_constant-1ep
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-VIPO_5e-7_1.0vpo_constant-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/ocfdiz0i)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Official-mezzo-fun-18-19-Viral-videos/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-18-19-Viral-videos
| 2025-06-19T16:45:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T16:43:30Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Official-Tv-mezzo-fun-18-19-Viral-videos/FULL.VIDEO.LINK.mezzo.fun.Viral.Video.Leaks.Tutorial.Official
|
Official-Tv-mezzo-fun-18-19-Viral-videos
| 2025-06-19T16:36:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T16:34:30Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
SidMcStarter/legal-llama3-lora-adapters
|
SidMcStarter
| 2025-06-19T16:28:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T16:27:55Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SidMcStarter
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SelmaNajih001/AAPL_Model
|
SelmaNajih001
| 2025-06-19T16:25:29Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T14:16:23Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: AAPL_Model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AAPL_Model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6790
- Accuracy: 0.5766
## Model description
More information needed
## Intended uses & limitations
More information needed
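Pending fuller documentation, the checkpoint can be loaded as a standard text classifier; a minimal sketch follows (the semantics of `LABEL_0`/`LABEL_1` are not documented, so treat the labels as assumptions):
```python
# Illustrative sketch; label meanings (LABEL_0 / LABEL_1) are undocumented.
from transformers import pipeline

clf = pipeline("text-classification", model="SelmaNajih001/AAPL_Model")
print(clf("Apple shares rallied after the earnings call."))
```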
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 28 | 0.6796 | 0.5766 |
| No log | 2.0 | 56 | 0.6790 | 0.5766 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B
|
DAMO-NLP-SG
| 2025-06-19T16:09:52Z | 19 | 2 |
transformers
|
[
"transformers",
"safetensors",
"videollama3_qwen2",
"text-generation",
"multimodal large language model",
"large video-language model",
"video-text-to-text",
"en",
"arxiv:2406.07476",
"arxiv:2501.00599",
"arxiv:2504.16072",
"arxiv:2501.13106",
"base_model:DAMO-NLP-SG/VideoLLaMA3-2B-Image",
"base_model:finetune:DAMO-NLP-SG/VideoLLaMA3-2B-Image",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
video-text-to-text
| 2025-06-17T14:33:56Z |
---
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: video-text-to-text
tags:
- multimodal large language model
- large video-language model
base_model:
- DAMO-NLP-SG/VideoLLaMA3-2B-Image
---
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/64a3fe3dde901eb01df12398/ZrZPYT0Q3wgza7Vc5BmyD.png" width="100%" style="margin-bottom: 0.2;"/>
</p>
<h3 align="center"><a href="https://arxiv.org/abs/2406.07476" style="color:#4D2B24">
VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM</a></h3>
<h5 align="center"> If you like our project, please give us a star ⭐ on <a href="https://github.com/DAMO-NLP-SG/VideoRefer">Github</a> for the latest update. </h5>
<div style="display: flex; justify-content: center; margin-top: 10px;">
<a href="https://arxiv.org/pdf/2501.00599"><img src="https://img.shields.io/badge/Arxiv-2501.00599-ECA8A7" style="margin-right: 5px;"></a>
<a href="https://huggingface.co/spaces/lixin4ever/VideoRefer-VideoLLaMA3"><img src='https://img.shields.io/badge/HuggingFace-Demo-96D03A' style="margin-right: 5px;"></a>
<a href="https://github.com/DAMO-NLP-SG/VideoRefer"><img src='https://img.shields.io/badge/Github-VideoRefer-F7C97E' style="margin-right: 5px;"></a>
<a href="https://github.com/DAMO-NLP-SG/VideoLLaMA3"><img src='https://img.shields.io/badge/Github-VideoLLaMA3-9DC3E6' style="margin-right: 5px;"></a>
</div>
## 📰 News
* **[2025.6.19]** 🔥We release the [demo](https://huggingface.co/spaces/lixin4ever/VideoRefer-VideoLLaMA3) of VideoRefer-VideoLLaMA3, hosted on HuggingFace. Feel free to try it!
* **[2025.6.18]** 🔥We release a new version of VideoRefer([VideoRefer-VideoLLaMA3-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-7B) and [VideoRefer-VideoLLaMA3-2B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B)), which are trained based on [VideoLLaMA3](https://github.com/DAMO-NLP-SG/VideoLLaMA3).
* **[2025.4.22]** 🔥Our VideoRefer-Bench has been adopted in [Describe Anything Model](https://arxiv.org/pdf/2504.16072) (NVIDIA & UC Berkeley).
* **[2025.2.27]** 🔥VideoRefer Suite has been accepted to CVPR2025!
* **[2025.2.18]** 🔥We release the [VideoRefer-700K dataset](https://huggingface.co/datasets/DAMO-NLP-SG/VideoRefer-700K) on HuggingFace.
* **[2025.1.1]** 🔥We release [VideoRefer-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-7B), the code of VideoRefer and the [VideoRefer-Bench](https://huggingface.co/datasets/DAMO-NLP-SG/VideoRefer-Bench).
## 🌏 Model Zoo
| Model Name | Visual Encoder | Language Decoder |
|:----------------|:----------------|:------------------|
| [VideoRefer-VideoLLaMA3-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-7B) | [VL3-SigLIP-NaViT](https://huggingface.co/DAMO-NLP-SG/VL3-SigLIP-NaViT) | [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) |
| [VideoRefer-VideoLLaMA3-2B (This Checkpoint)](https://huggingface.co/DAMO-NLP-SG/VideoRefer-VideoLLaMA3-2B) | [VL3-SigLIP-NaViT](https://huggingface.co/DAMO-NLP-SG/VL3-SigLIP-NaViT) | [Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) |
| [VideoRefer-7B](https://huggingface.co/DAMO-NLP-SG/VideoRefer-7B) | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) |
| [VideoRefer-7B-stage2](https://huggingface.co/DAMO-NLP-SG/VideoRefer-7B-stage2) | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) |
| [VideoRefer-7B-stage2.5](https://huggingface.co/DAMO-NLP-SG/VideoRefer-7B-stage2.5) | [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) | [Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) |
## 📑 Citation
If you find VideoRefer Suite useful for your research and applications, please cite using this BibTeX:
```bibtex
@InProceedings{Yuan_2025_CVPR,
author = {Yuan, Yuqian and Zhang, Hang and Li, Wentong and Cheng, Zesen and Zhang, Boqiang and Li, Long and Li, Xin and Zhao, Deli and Zhang, Wenqiao and Zhuang, Yueting and Zhu, Jianke and Bing, Lidong},
title = {VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {18970-18980}
}
@article{damonlpsg2025videollama3,
title={VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding},
  author={Boqiang Zhang and Kehan Li and Zesen Cheng and Zhiqiang Hu and Yuqian Yuan and Guanzheng Chen and Sicong Leng and Yuming Jiang and Hang Zhang and Xin Li and Peng Jin and Wenqi Zhang and Fan Wang and Lidong Bing and Deli Zhao},
journal={arXiv preprint arXiv:2501.13106},
year={2025},
url = {https://arxiv.org/abs/2501.13106}
}
```
|
Youremi/iar
|
Youremi
| 2025-06-19T16:01:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T16:01:01Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: iar
---
# Tera
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `iar` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "iar",
"lora_weights": "https://huggingface.co/Youremi/tera/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Youremi/tera', weight_name='lora.safetensors')
image = pipeline('iar').images[0]
image.save("my-image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1500
- Learning rate: 0.0002
- LoRA rank: 48
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Youremi/tera/discussions) to add images that show off what you’ve made with this LoRA.
|
Khruna/liv
|
Khruna
| 2025-06-19T15:52:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T15:52:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# liv
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/liv/tree/main) them in the Files & versions tab.
|
New-Clip-sapna-shah-20-viral-Videos-hd/Original.FULL.VIDEO.sapna.shah.Viral.Video.Tutorial.Official.hq
|
New-Clip-sapna-shah-20-viral-Videos-hd
| 2025-06-19T15:51:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T15:51:34Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?fre"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
hasdal/21a58fba-d539-4969-960e-60eff2254792
|
hasdal
| 2025-06-19T15:28:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"unsloth",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-19T15:14:36Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
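Until official instructions are added, the repo tags (Qwen2, text-generation, 4-bit bitsandbytes, custom code) suggest the standard pipeline should work; a minimal sketch under those assumptions:
```python
# Illustrative loading sketch based on the repo tags; not official usage code.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="hasdal/21a58fba-d539-4969-960e-60eff2254792",
    device_map="auto",
    trust_remote_code=True,  # repo is tagged custom_code
)
print(generator("Hello! How are you today?", max_new_tokens=64)[0]["generated_text"])
```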
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-42-2025-06-19
|
morturr
| 2025-06-19T15:17:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T15:17:19Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
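Since this repo holds PEFT (LoRA) adapter weights rather than full model weights, it is normally attached to the gated base checkpoint; a minimal sketch, assuming access to `meta-llama/Llama-2-7b-hf`:
```python
# Illustrative sketch: load the LoRA adapter on top of the Llama-2 base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(
    base,
    "morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-42-2025-06-19",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```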
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
phospho-app/gc1724-ACT_BBOX-bottle-ziwes
|
phospho-app
| 2025-06-19T15:11:49Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T15:10:04Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
[Errno 20] Not a directory: '/__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/gc1724/bottle_bboxes/videos/chunk-000/.DS_Store'
```
## Training parameters:
- **Dataset**: [gc1724/bottle](https://huggingface.co/datasets/gc1724/bottle)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
5eunsoo/my-bert-fine-tuned2
|
5eunsoo
| 2025-06-19T15:09:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T14:22:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
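Until the card is completed, the repo tags (a BERT checkpoint with a text-classification head) suggest the standard pipeline; a minimal sketch under that assumption — the label set and intended inputs are undocumented:
```python
# Illustrative sketch based on the repo tags; labels are undocumented.
from transformers import pipeline

classifier = pipeline("text-classification", model="5eunsoo/my-bert-fine-tuned2")
print(classifier("Replace with an input from the model's training domain."))
```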
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
joedonino/gemma-product-description-v6
|
joedonino
| 2025-06-19T15:02:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T00:53:03Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma-product-description-v6
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-product-description-v6
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joedonino/gemma-product-description-v6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
wolfCuanhamaRWS/Llama-Primus-Reasoning_q5_k_m_gguf
|
wolfCuanhamaRWS
| 2025-06-19T14:57:20Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"thesis_quant",
"q5_k_m_gguf",
"text-classification",
"en",
"arxiv:2501.18492",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-classification
| 2025-06-19T14:53:29Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
- thesis_quant
- q5_k_m_gguf
pipeline_tag: text-classification
language:
- en
metrics:
- f1
model-index:
- name: GuardReasoner 1B
results: []
---
# GuardReasoner 1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
Code: https://github.com/yueliu1999/GuardReasoner/
# Usage
```python
import re
from vllm import LLM, SamplingParams

INSTRUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n"

def post_process(text):
    # Expand the short task names in the model output into descriptive labels.
    text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE)
    return text

def generate(vllm_model, prompt_list=[""], response_list=["None"]):
    # Build one classification prompt per (user request, assistant response) pair.
    input_list = []
    for i in range(len(prompt_list)):
        input = INSTRUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n"
        input_list.append(input)
    outputs = vllm_model.generate(input_list, sampling_params)
    return outputs

vllm_model = LLM(model="yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256)
sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048)
prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."]
response_list = ["""Dear LinkedIn friends,
Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely.
The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day.
It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly.
I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection.
Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change.
Sincerely,
Mark
"""]
output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text)
print(output)
```
# Citation
```bibtex
@article{GuardReasoner,
title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
journal={arXiv preprint arXiv:2501.18492},
year={2025}
}
```
|
fabikru/model_15M_pubchem_1M_ds_masking_0.3_predicted_hparams
|
fabikru
| 2025-06-19T14:52:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-19T14:52:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
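Until the authors fill this in, here is a minimal, hedged sketch using the standard fill-mask pipeline; the input format (SMILES-like strings, suggested only by the repo name) is an assumption, not something this card documents.
```python
from transformers import pipeline

# Sketch only: assumes a standard masked-LM head (the repo's pipeline tag is fill-mask).
# The SMILES-like input below is a hypothetical example, not from the card.
fill = pipeline("fill-mask", model="fabikru/model_15M_pubchem_1M_ds_masking_0.3_predicted_hparams")
masked = f"CC(=O){fill.tokenizer.mask_token}C"
for pred in fill(masked)[:3]:
    print(pred["token_str"], pred["score"])
```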
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rahulmisal/SmolLM2-135M-finetuned
|
rahulmisal
| 2025-06-19T14:48:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T14:45:04Z |
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-135M-finetuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SmolLM2-135M-finetuned
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rahulmisal/SmolLM2-135M-finetuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.49.0
- Pytorch: 2.7.0
- Datasets: 3.5.1
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
openfun/openfun-ivod-whisper-medium-common-10-626
|
openfun
| 2025-06-19T14:40:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-19T13:25:01Z |
---
library_name: transformers
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Fine-tuned Whisper model for Legislative Yuan of Taiwan
results: []
---
# Fine-tuning Information
- Original model: `openai/whisper-medium`
- Number of audio clips: 118922
- Total audio duration: 70.50 hours
- Average audio clip length: 2.13 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 03:34:57
- Model size: 2.85 GB
- Training parameters:
  - batch size: 16
  - eval batch size: 8
  - gradient checkpointing: False
  - fp16: False
  - bf16: True
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine-tuned Whisper model for Legislative Yuan of Taiwan
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0192
- Wer: 73.8276
## Model description
More information needed
## Intended uses & limitations
More information needed
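Pending more detail from the authors, the sketch below shows standard Whisper inference with the 🤗 pipeline; the chunking setting and the placeholder audio file are assumptions, since the card does not document inference parameters.
```python
from transformers import pipeline

# Sketch only: generic Whisper ASR usage; "meeting_clip.wav" is a placeholder
# audio file, and chunk_length_s=30 is an assumed, not documented, setting.
asr = pipeline(
    "automatic-speech-recognition",
    model="openfun/openfun-ivod-whisper-medium-common-10-626",
    chunk_length_s=30,
)
print(asr("meeting_clip.wav")["text"])
```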
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0227 | 0.0673 | 500 | 0.0218 | 76.7862 |
| 0.0222 | 0.1345 | 1000 | 0.0209 | 75.9748 |
| 0.0217 | 0.2018 | 1500 | 0.0201 | 75.5901 |
| 0.0206 | 0.2691 | 2000 | 0.0195 | 74.2298 |
| 0.0187 | 0.3363 | 2500 | 0.0192 | 73.8276 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1
- Datasets 3.5.0
- Tokenizers 0.21.1
|
deepkeep-ai/gemma-2-2b-pii-token-classifier
|
deepkeep-ai
| 2025-06-19T14:27:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-18T12:49:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
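As a placeholder until usage is documented, a hedged sketch with the token-classification pipeline; the PII label set this checkpoint emits is not listed in the card, so the example below is an assumption.
```python
from transformers import pipeline

# Sketch only: assumes a standard token-classification head; the entity
# labels this checkpoint emits are not documented in the card.
ner = pipeline(
    "token-classification",
    model="deepkeep-ai/gemma-2-2b-pii-token-classifier",
    aggregation_strategy="simple",
)
print(ner("Contact Jane Doe at 555-0100 or [email protected]."))
```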
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AdamF1984/ajf
|
AdamF1984
| 2025-06-19T14:27:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T14:02:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ajf
---
# Ajf
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ajf` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "ajf",
    "lora_weights": "https://huggingface.co/AdamF1984/ajf/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AdamF1984/ajf', weight_name='lora.safetensors')
image = pipeline('ajf').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/AdamF1984/ajf/discussions) to add images that show off what you’ve made with this LoRA.
|
IFANSA5657/gasher453
|
IFANSA5657
| 2025-06-19T14:19:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:adapter:stable-diffusion-v1-5/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2025-06-19T14:19:38Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/nick-iliasov-i0fCUofGjV8-unsplash.jpg
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
instance_prompt: null
---
# dsggs434657
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/IFANSA5657/gasher453/tree/main) them in the Files & versions tab.
|
Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF
|
Rif010
| 2025-06-19T14:19:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Rif010/sealion-burmese-fine-tuned-merged-v1",
"base_model:quantized:Rif010/sealion-burmese-fine-tuned-merged-v1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:19:11Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Rif010/sealion-burmese-fine-tuned-merged-v1
---
# Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF
This model was converted to GGUF format from [`Rif010/sealion-burmese-fine-tuned-merged-v1`](https://huggingface.co/Rif010/sealion-burmese-fine-tuned-merged-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Rif010/sealion-burmese-fine-tuned-merged-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Rif010/sealion-burmese-fine-tuned-merged-v1-Q4_K_M-GGUF --hf-file sealion-burmese-fine-tuned-merged-v1-q4_k_m.gguf -c 2048
```
|
Felix92/doctr-dummy-torch-viptr-tiny
|
Felix92
| 2025-06-19T14:11:01Z | 0 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2025-06-19T14:10:56Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
|
phospho-app/praveen-merai-ACT_BBOX-so100_01-2vuis
|
phospho-app
| 2025-06-19T13:59:54Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T13:37:19Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/so100_01_bboxes](https://huggingface.co/datasets/phospho-app/so100_01_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
wolfCuanhamaRWS/Llama-Primus-Reasoning_q2_k_gguf
|
wolfCuanhamaRWS
| 2025-06-19T13:57:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"thesis_quant",
"q2_k_gguf",
"text-classification",
"en",
"arxiv:2501.18492",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-classification
| 2025-06-19T13:54:48Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
- thesis_quant
- q2_k_gguf
pipeline_tag: text-classification
language:
- en
metrics:
- f1
model-index:
- name: GuardReasoner 1B
results: []
---
# GuardReasoner 1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
Code: https://github.com/yueliu1999/GuardReasoner/
# Usage
```python
import re
from vllm import LLM, SamplingParams

INSTRUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n"

def post_process(text):
    # Expand the short task names in the model output into descriptive labels.
    text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE)
    text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE)
    return text

def generate(vllm_model, prompt_list=[""], response_list=["None"]):
    # Build one classification prompt per (user request, assistant response) pair.
    input_list = []
    for i in range(len(prompt_list)):
        input = INSTRUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n"
        input_list.append(input)
    outputs = vllm_model.generate(input_list, sampling_params)
    return outputs

vllm_model = LLM(model="yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256)
sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048)
prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."]
response_list = ["""Dear LinkedIn friends,
Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely.
The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day.
It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly.
I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection.
Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change.
Sincerely,
Mark
"""]
output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text)
print(output)
```
# Citation
```bibtex
@article{GuardReasoner,
title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
journal={arXiv preprint arXiv:2501.18492},
year={2025}
}
```
|
Sawu-Low3/final-t5-base-lora-stage3
|
Sawu-Low3
| 2025-06-19T13:55:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:55:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
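Until the card is completed, a hedged sketch assuming a T5-style seq2seq checkpoint (suggested only by the repo name); the task prefix and whether the LoRA weights were merged are assumptions.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch only: assumes merged seq2seq weights; if the repo instead holds a bare
# PEFT adapter, load it with peft.AutoPeftModelForSeq2SeqLM.
tok = AutoTokenizer.from_pretrained("Sawu-Low3/final-t5-base-lora-stage3")
model = AutoModelForSeq2SeqLM.from_pretrained("Sawu-Low3/final-t5-base-lora-stage3")
inputs = tok("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```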
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-18-2025-06-19
|
morturr
| 2025-06-19T13:46:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T13:45:58Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-18-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_dadjokes_headlines-COMB-dadjokes-comb-1-seed-18-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
stewy33/0524_augmented_original_original_honeypot_canary_eval-daf106d6
|
stewy33
| 2025-06-19T13:44:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T13:42:50Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
### Framework versions
- PEFT 0.15.1
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
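Pending details from the authors, a hedged sketch for loading this PEFT adapter on the base model named in the frontmatter; note that a 70B base needs multiple high-memory GPUs or offloading.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: pairs the adapter with the base model listed in this card's
# frontmatter; device_map="auto" assumes accelerate is installed.
base_id = "togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference"
tok = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "stewy33/0524_augmented_original_original_honeypot_canary_eval-daf106d6")
```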
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Team-EVEN/Qwen3_14B_test_2
|
Team-EVEN
| 2025-06-19T13:43:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T13:38:22Z |
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Team-EVEN
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ik-ram28/MedMistralInstruct-CPT-SFT-7B
|
ik-ram28
| 2025-06-19T13:37:46Z | 67 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"medical",
"conversational",
"fr",
"en",
"base_model:ik-ram28/MedMistralInstruct-CPT-7B",
"base_model:finetune:ik-ram28/MedMistralInstruct-CPT-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-18T15:02:10Z |
---
library_name: transformers
tags:
- medical
license: apache-2.0
language:
- fr
- en
base_model:
- ik-ram28/MedMistralInstruct-CPT-7B
- mistralai/Mistral-7B-Instruct-v0.1
---
## MedMistralInstruct-CPT-SFT-7B
### Model Description
MedMistralInstruct-CPT-SFT-7B is a French medical language model based on Mistral-7B-Instruct-v0.1, adapted through Continual Pre-Training followed by Supervised Fine-Tuning.
### Model Details
- **Model Type**: Causal Language Model
- **Base Model**: Mistral-7B-Instruct-v0.1
- **Language**: French
- **Domain**: Medical/Healthcare
- **Parameters**: 7 billion
- **License**: Apache 2.0
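### How to Get Started
A minimal sketch, not part of the original card: it assumes the tokenizer ships a Mistral-Instruct chat template and that a GPU is available.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: standard causal-LM loading; generation settings are assumptions.
model_id = "ik-ram28/MedMistralInstruct-CPT-SFT-7B"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
messages = [{"role": "user", "content": "Quels sont les symptômes courants de l'anémie ?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```
As the Ethical Considerations below note, outputs are for research and education and require verification by qualified medical professionals.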
### Training Details
**Continual Pre-Training (CPT)**
- **Dataset**: NACHOS corpus (7.4 GB French medical texts)
- **Training Duration**: 2.8 epochs
- **Hardware**: 32 NVIDIA A100 80GB GPUs
- **Training Time**: ~40 hours
**Supervised Fine-Tuning (SFT)**
- **Dataset**: 30K French medical question-answer pairs
- **Method**: DoRA (Weight-Decomposed Low-Rank Adaptation)
- **Training Duration**: 10 epochs
- **Hardware**: 1 NVIDIA H100 80GB GPU
- **Training Time**: ~42 hours
### Computational Requirements
- **Carbon Emissions**: 33.96 kgCO2e (CPT+SFT)
- **Training Time**: 82 hours total (CPT+SFT)
### Ethical Considerations
- **Medical Accuracy**: For research and educational purposes only
- **Professional Oversight**: Requires verification by qualified medical professionals
- **Bias Awareness**: May contain biases from training data
- **Privacy**: Do not input private health information
### Citation
```bibtex
```
### Contact
For questions about these models, please contact: [email protected]
|
yellowtulip/yellowtulip
|
yellowtulip
| 2025-06-19T13:29:34Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T06:58:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Yellowtulip
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/yellowtulip/yellowtulip/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('yellowtulip/yellowtulip', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/yellowtulip/yellowtulip/discussions) to add images that show off what you’ve made with this LoRA.
|
morturr/Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19
|
morturr
| 2025-06-19T13:11:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T13:11:05Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_headlines_one_liners-COMB-headlines-comb-1-seed-7-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
John6666/uwaki-mix-v10-sdxl
|
John6666
| 2025-06-19T13:02:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"semi-realistic",
"2.5D",
"asian",
"Japanese",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-XL-v1.0",
"base_model:finetune:OnomaAIResearch/Illustrious-XL-v1.0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T12:55:29Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- semi-realistic
- 2.5D
- asian
- Japanese
- illustrious
base_model: OnomaAIResearch/Illustrious-XL-v1.0
---
Original model is [here](https://civitai.com/models/1695892/uwakimix?modelVersionId=1919355).
This model was created by [UWAZUMI](https://civitai.com/user/UWAZUMI).
|
okuparinen/LIA_300m_simple
|
okuparinen
| 2025-06-19T12:56:12Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"dialect",
"transcription",
"no",
"dataset:okuparinen/skn",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-05-27T07:18:51Z |
---
library_name: transformers
tags:
- dialect
- transcription
license: apache-2.0
datasets:
- okuparinen/skn
language:
- 'no'
base_model:
- facebook/wav2vec2-large-xlsr-53
---
# Simple automatic dialectal transcription of Norwegian
This is a fine-tuned model for automatic dialectal transcription of Norwegian dialect recordings. The model is based on the [XLS-R large model](https://huggingface.co/facebook/wav2vec2-large-xlsr-53).
The model has been finetuned on [old Norwegian dialect recordings](https://huggingface.co/datasets/okuparinen/lia) and their corresponding transcriptions. This model outputs simple transcription. The audio recordings are sampled at 16kHz.
## Uses
You can use this model for automatic dialectal transcription of Norwegian dialects. Note that this model does not produce standard bokmål or nynorsk text.
## How to Get Started with the Model
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC, Wav2Vec2CTCTokenizer
from datasets import Dataset, Audio
import torch
import pandas as pd

# Replace 'CSV_DATA.csv' and "AUDIO_PATH_COLUMN" with your file and the name
# of the column holding audio file paths.
ds = pd.read_csv('CSV_DATA.csv')
ds = ds.dropna(how='any', axis=0)
test = Dataset.from_pandas(ds)
test = test.cast_column("AUDIO_PATH_COLUMN", Audio(sampling_rate=16000))

tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("okuparinen/LIA_300m_simple", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
model = Wav2Vec2ForCTC.from_pretrained("okuparinen/LIA_300m_simple").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("okuparinen/LIA_300m_simple", tokenizer=tokenizer)

def prepare_dataset(batch):
    # Extract raw waveforms and their lengths for inference.
    audio = batch["AUDIO_PATH_COLUMN"]
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["input_length"] = len(batch["input_values"])
    return batch

test_ready = test.map(prepare_dataset, remove_columns=test.column_names)

predictions = []
for i in range(len(test_ready)):
    input_dict = processor(test_ready[i]["input_values"], return_tensors="pt", padding=True)
    logits = model(input_dict.input_values.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)[0]
    prediction = processor.decode(pred_ids)
    predictions.append(prediction)

with open("OUTFILE.txt", "w") as f_pred:
    for line in predictions:
        f_pred.write(line + '\n')
```
### Training Data
The training data is an utterance-level version of the [LIA Norwegian corpus](https://tekstlab.uio.no/LIA/norsk/index_english.html). The utterance-level version is available at [okuparinen/lia](https://huggingface.co/datasets/okuparinen/lia).
## Evaluation Results
TBA
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
|
okuparinen/SKN_300m_simple
|
okuparinen
| 2025-06-19T12:54:48Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"dialect",
"transcription",
"fi",
"dataset:okuparinen/skn",
"base_model:GetmanY1/wav2vec2-large-fi-lp-cont-pt",
"base_model:finetune:GetmanY1/wav2vec2-large-fi-lp-cont-pt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-05-27T07:09:01Z |
---
library_name: transformers
tags:
- dialect
- transcription
license: apache-2.0
datasets:
- okuparinen/skn
language:
- fi
base_model:
- GetmanY1/wav2vec2-large-fi-lp-cont-pt
---
# Simple automatic dialectal transcription of Finnish
This is a fine-tuned model for automatic dialectal transcription of Finnish dialect recordings. The model is based on a model trained on colloquial Finnish: [GetmanY1/wav2vec2-large-fi-lp-cont-pt](https://huggingface.co/GetmanY1/wav2vec2-large-fi-lp-cont-pt).
The model has been fine-tuned on [old Finnish dialect recordings](https://huggingface.co/datasets/okuparinen/skn) and their corresponding transcriptions in the Uralic Phonetic Alphabet. This model outputs simple transcription. The audio recordings are sampled at 16kHz.
## Uses
You can use this model for automatic dialectal transcription of Finnish dialects. Note that this model does not produce standard Finnish text.
## How to Get Started with the Model
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC, Wav2Vec2CTCTokenizer
from datasets import Dataset, Audio
import torch
import pandas as pd

# Load a CSV file with one row per utterance, including an audio path column
ds = pd.read_csv('CSV_DATA.csv')
ds = ds.dropna(how='any', axis=0)

test = Dataset.from_pandas(ds)
test = test.cast_column("AUDIO_PATH_COLUMN", Audio(sampling_rate=16000))

tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("okuparinen/SKN_300m_simple", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
model = Wav2Vec2ForCTC.from_pretrained("okuparinen/SKN_300m_simple").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("okuparinen/SKN_300m_simple", tokenizer=tokenizer)

def prepare_dataset(batch):
    # Turn the decoded audio into model input values
    audio = batch["AUDIO_PATH_COLUMN"]
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["input_length"] = len(batch["input_values"])
    return batch

test_ready = test.map(prepare_dataset, remove_columns=test.column_names)

predictions = []
for i in range(len(test_ready)):
    input_dict = processor(test_ready[i]["input_values"], return_tensors="pt", padding=True)
    logits = model(input_dict.input_values.to("cuda")).logits
    # Greedy CTC decoding: pick the most likely token at each frame
    pred_ids = torch.argmax(logits, dim=-1)[0]
    predictions.append(processor.decode(pred_ids))

with open("OUTFILE.txt", "w") as f_pred:
    for line in predictions:
        f_pred.write(line + '\n')
```
### Training Data
The training data is an utterance-level version of the [Samples of Spoken Finnish corpus](https://www.kielipankki.fi/corpora/skn/). The utterance-level version is available at [okuparinen/skn](https://huggingface.co/datasets/okuparinen/skn).
## Evaluation results
TBA
## Citation [optional]
**BibTeX:**
[More Information Needed]
|
JunSotohigashi/dark-yogurt-608
|
JunSotohigashi
| 2025-06-19T12:51:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:llm-jp/llm-jp-3-13b",
"base_model:adapter:llm-jp/llm-jp-3-13b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T07:18:56Z |
---
base_model: llm-jp/llm-jp-3-13b
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/dark-yogurt-608
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/dark-yogurt-608
This model is a fine-tuned version of [llm-jp/llm-jp-3-13b](https://huggingface.co/llm-jp/llm-jp-3-13b) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/dark-yogurt-608", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/ejc1aina)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
carolinacon/Reinforce-CartPole-v1
|
carolinacon
| 2025-06-19T12:50:52Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-17T16:21:50Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 485.10 +/- 30.32
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
JunSotohigashi/gentle-butterfly-596
|
JunSotohigashi
| 2025-06-19T12:49:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:llm-jp/llm-jp-3-13b-instruct",
"base_model:adapter:llm-jp/llm-jp-3-13b-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T07:14:21Z |
---
base_model: llm-jp/llm-jp-3-13b-instruct
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/gentle-butterfly-596
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/gentle-butterfly-596
This model is a fine-tuned version of [llm-jp/llm-jp-3-13b-instruct](https://huggingface.co/llm-jp/llm-jp-3-13b-instruct) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/gentle-butterfly-596", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/3cawl8km)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BootesVoid/cmaeaa5iw0062ls25tcguwngh_cmc3cl8pp0012h8uiug6ic7mi
|
BootesVoid
| 2025-06-19T12:43:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T12:43:29Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AZELIA
---
# Cmaeaa5Iw0062Ls25Tcguwngh_Cmc3Cl8Pp0012H8Uiug6Ic7Mi
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AZELIA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "AZELIA",
"lora_weights": "https://huggingface.co/BootesVoid/cmaeaa5iw0062ls25tcguwngh_cmc3cl8pp0012h8uiug6ic7mi/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmaeaa5iw0062ls25tcguwngh_cmc3cl8pp0012h8uiug6ic7mi', weight_name='lora.safetensors')
image = pipeline('AZELIA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmaeaa5iw0062ls25tcguwngh_cmc3cl8pp0012h8uiug6ic7mi/discussions) to add images that show off what you’ve made with this LoRA.
|
SidMcStarter/legal-llama3-lora
|
SidMcStarter
| 2025-06-19T12:25:36Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T10:35:53Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SidMcStarter
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
asheela/praktikum-modul6-ai
|
asheela
| 2025-06-19T12:23:47Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-14T17:39:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hwhamad/llama-3.2-3B-Instruct-GGUF
|
hwhamad
| 2025-06-19T12:20:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T12:20:36Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hwhamad
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LandCruiser/sn29C1_1906_8
|
LandCruiser
| 2025-06-19T12:20:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T02:48:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Official-Nirma-Meena-Viral/VIDEO.Nirma.Meena.Viral.Video.Official.Tutorial
|
Official-Nirma-Meena-Viral
| 2025-06-19T12:13:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T12:13:00Z |
The Nirma Meena viral video spread across various social media platforms. Nirma Meena, a young and talented digital creator, recently became famous thanks to this video.
|
umituz/munir-model
|
umituz
| 2025-06-19T12:06:46Z | 0 | 0 | null |
[
"safetensors",
"gguf",
"gpt2",
"turkish",
"classical-music",
"munir-nurettin-selcuk",
"fine-tuned",
"conversational",
"text-generation",
"tr",
"dataset:umituz/munir",
"base_model:openai-community/gpt2",
"base_model:quantized:openai-community/gpt2",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T11:17:32Z |
---
language:
- tr
base_model: gpt2
tags:
- turkish
- classical-music
- munir-nurettin-selcuk
- gpt2
- fine-tuned
- conversational
license: mit
datasets:
- umituz/munir
pipeline_tag: text-generation
---
# Münir Nurettin Selçuk GPT-2 Model
A fine-tuned GPT-2 model that answers questions about Münir Nurettin Selçuk, a master of Turkish classical music.
## Model Details
- **Base Model**: gpt2
- **Fine-tuning Dataset**: umituz/munir
- **Language**: Turkish
- **Task**: Text Generation / Conversational AI
- **Training**: LoRA fine-tuning on Apple Silicon
## Usage
### Standard PyTorch
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("umituz/munir-model")
tokenizer = AutoTokenizer.from_pretrained("umituz/munir-model")
prompt = "Soru: Münir Nurettin Selçuk kimdir?\nCevap:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, pad_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### GGUF with LM Studio
1. Download the GGUF file from this repository: `munir-gpt2.gguf`
2. Load in LM Studio
3. Use prompt format: "Soru: [your question]\nCevap:"
## Training Details
- **LoRA Configuration**: r=64, alpha=128
- **Training Epochs**: 25
- **Batch Size**: 1
- **Learning Rate**: 0.0005
- **Device**: Apple Silicon (MPS)
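For reference, the configuration above roughly corresponds to the following `peft` setup (a sketch under the stated hyperparameters; the actual training script is not included in this card):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical reconstruction of the LoRA setup listed above; the actual
# training script for this model is not published in this card.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(
    r=64,            # LoRA rank, as listed above
    lora_alpha=128,  # LoRA alpha, as listed above
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```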
## Dataset
The model was trained on the umituz/munir dataset, which contains information about Münir Nurettin Selçuk's life, works, and place in Turkish classical music.
## Prompt Format
```
Soru: [your question]
Cevap:
```
## GGUF Compatibility
Since this model is GPT-2 based, it is compatible with llama.cpp and can be converted to GGUF format, which makes it well suited for use in LM Studio.
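As an illustration, the GGUF file can presumably also be loaded with the llama-cpp-python bindings (a sketch; the file name `munir-gpt2.gguf` is taken from the LM Studio instructions above):

```python
from llama_cpp import Llama

# Sketch using the llama-cpp-python bindings; assumes the GGUF file from
# this repository has been downloaded locally as munir-gpt2.gguf.
llm = Llama(model_path="munir-gpt2.gguf")
out = llm("Soru: Münir Nurettin Selçuk kimdir?\nCevap:", max_tokens=100)
print(out["choices"][0]["text"])
```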
|
tamewild/4b_v6_merged_e2
|
tamewild
| 2025-06-19T11:59:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T11:57:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nytopop/1b_or_base
|
nytopop
| 2025-06-19T11:53:23Z | 516 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-to-speech",
"en",
"base_model:amuvarma/1b-extended_pretrain-checkpoint-192000",
"base_model:finetune:amuvarma/1b-extended_pretrain-checkpoint-192000",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-to-speech
| 2025-06-13T07:07:43Z |
---
library_name: transformers
language:
- en
pipeline_tag: text-to-speech
base_model:
- amuvarma/1b-extended_pretrain-checkpoint-192000
- meta-llama/Llama-3.2-1B-Instruct
---
# noteworthy changes
- tokenizer omits `<|audio|>` to prevent finetunes from resizing the embeddings unnecessarily
- `config.json` uses the correct EOS to end generation on end of audio
- `generation_config.json` uses the correct EOS to end generation on end of audio + has a reasonable default temperature
- `chat_template.jinja` maps user messages to transcripts and assistant messages to audio for simple conversational context management
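As a rough illustration of the chat template behaviour described above (a sketch; the exact special tokens come from `chat_template.jinja` in this repo):

```python
from transformers import AutoTokenizer

# Sketch only: the rendered string depends on chat_template.jinja in this repo.
tokenizer = AutoTokenizer.from_pretrained("nytopop/1b_or_base")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello there."}],  # user turns map to transcripts
    add_generation_prompt=True,  # the model then continues with audio tokens
    tokenize=False,
)
print(prompt)
```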
|
cbbl-skku-org/CONTRA-IL6
|
cbbl-skku-org
| 2025-06-19T11:45:42Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T11:42:55Z |
---
license: apache-2.0
---
|
LumiOpen/Llama-Poro-2-8B-SFT
|
LumiOpen
| 2025-06-19T11:39:59Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"fi",
"en",
"dataset:LumiOpen/poro2-instruction-collection",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T12:26:55Z |
---
datasets:
- LumiOpen/poro2-instruction-collection
language:
- fi
- en
license: llama3.3
library_name: transformers
pipeline_tag: text-generation
---
# Poro 2 8B SFT Model Card
> **Note for most users**: This is an intermediate checkpoint from our post-training pipeline. Most users should use [Poro 2 8B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) instead, which includes an additional round of Direct Preference Optimization (DPO) for improved response quality and alignment. This SFT-only model is primarily intended for researchers interested in studying the effects of different post-training techniques.
Poro 2 8B SFT is a supervised fine-tuned model created from the Poro 2 8B Base model. This model has been trained for instruction following and conversational AI applications in both Finnish and English, but has not undergone preference tuning. It represents the intermediate step before Direct Preference Optimization (DPO) in our post-training pipeline.
Poro 2 was created in a collaboration between [AMD Silo AI](https://www.amd.com/en/solutions/ai/silo-ai.html), the [TurkuNLP group](https://turkunlp.org/) of the University of Turku, and [High Performance Language Technologies](https://hplt-project.org/) (HPLT). Training was conducted on the [LUMI supercomputer](https://www.lumi-supercomputer.eu/), using compute resources generously provided by [CSC](https://csc.fi/) - IT Center for Science, Finland.
For more details on our training and data generation pipeline, check out our [Continued Pretraining Playbook](https://rocm.blogs.amd.com/artificial-intelligence/multilingual-continued-pretraining/README.html).
## Poro 2 Model Family
The Poro 2 model family includes both 8B and 70B models, and there are three different versions released of the Poro 2 models: a base model, a post-training SFT-only checkpoint, and the final instruct model which is the SFT model plus a round of DPO.
| Model | Based on | Base Model | SFT | Instruct |
| :---: | :------: | :--------: | :-: | :------- |
| Poro 2 8B | Llama 3.1 8B | [Poro 2 8B Base](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-base) | [Poro 2 8B SFT](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-SFT) | [Poro 2 8B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) |
| Poro 2 70B | Llama 3.1 70B | [Poro 2 70B Base](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-base) | [Poro 2 70B SFT](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-SFT) | [Poro 2 70B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-70B-Instruct) |
_What does Poro mean?_ Poro is the Finnish word for Reindeer! 🦌 These animals are native to Finland and hold a significant and historical role in Finnish culture.
## Model Overview
Poro 2 8B SFT is based on the Llama 3.1 8B architecture and has been supervised fine-tuned for instruction following. The model supports both English and Finnish conversations but has not undergone preference tuning for response quality optimization.
| Hyperparameter | Value |
| :------------- | :----: |
| n_parameters | 8.03B |
| n_layers | 32 |
| n_heads | 32 |
| n_kv_heads | 8 |
| d_model | 4096 |
| vocab_size | 128256 |
| max_sequence_length | 8192 |
| base_model | Llama-3.1-8B |
## Training Process
### Continued Pretraining
The base Poro 2 8B model was created through continued pretraining on 165B tokens of Finnish, English, code, and math data.
### Supervised Fine-Tuning (SFT)
This model represents the SFT phase of post-training, using 1.4M instruction-following examples in English and Finnish, including:
- English and Finnish Tulu 3 prompts with Llama-3.3-70B-Instruct responses (1.35M samples)
- Multi-turn conversations generated using the Magpie method (14K samples)
- Top-rated conversations from OASST2 and Avoin Avustaja datasets (5K samples)
- Translation samples from EuroParl (1K samples)
We release the [Poro 2 instruction collection](https://huggingface.co/datasets/LumiOpen/poro2-instruction-collection).
## SFT Hyperparameters
| Hyperparameter | Value |
| :------------: | :---: |
| Epochs | 2 |
| Global batch size | 64 |
| Learning rate | 5e-6 |
| LR scheduler | linear |
| Warmup ratio | 0.03 |
| Max sequence length | 4,096 |
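For orientation, these hyperparameters roughly map onto a TRL `SFTTrainer` run like the sketch below (hypothetical; the actual Poro 2 training code is not reproduced in this card, and the global batch size of 64 would be reached via data parallelism and/or gradient accumulation):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical sketch of the listed SFT hyperparameters with TRL; not the
# actual Poro 2 training setup.
dataset = load_dataset("LumiOpen/poro2-instruction-collection", split="train")
config = SFTConfig(
    output_dir="poro2-8b-sft",
    num_train_epochs=2,
    learning_rate=5e-6,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    max_seq_length=4096,
)
trainer = SFTTrainer(
    model="LumiOpen/Llama-Poro-2-8B-base",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```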
## Evaluation Results
Poro 2 8B SFT shows substantial improvements in Finnish instruction-following capabilities compared to Llama 3.1 8B Instruct, while maintaining strong English performance. Note that the final Instruct model (with DPO) performs significantly better.
### Finnish Instruction Following
| | Poro 2 8B SFT | Llama 3.1 8B Instruct | Poro 2 8B Instruct |
|----------------|------------------|------------------------|--------------------|
| IFEval Finnish | 64.69 | 47.31 | **66.54** |
| MTBench Finnish | 5.92 | 4.10 | **6.75** |
| AlpacaEval 2 Finnish | 16.80 | 2.05 | **28.89** |
### English Instruction Following
| | Poro 2 8B SFT | Llama 3.1 8B Instruct | Poro 2 8B Instruct |
|----------------|--------|------------------------|--------------------|
| IFEval | **79.66** | 79.48 | 79.29 |
| MTBench | 7.07 | **7.70** | 7.33 |
| AlpacaEval 2 | 29.67 | 32.70 | **35.30** |
**Overall**: ~16% average improvement in Finnish instruction-following benchmarks compared to Llama 3.1 8B Instruct, with maintained English performance. The additional DPO step in the Instruct model provides further improvements.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_name = "LumiOpen/Llama-Poro-2-8B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Finnish conversation example
messages = [
{"role": "user", "content": "Kerro minulle Suomen historiasta."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
)
outputs = model.generate(
inputs,
max_new_tokens=500,
temperature=0.7,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Research Applications
This SFT-only model is particularly useful for researchers studying:
- The effects of supervised fine-tuning vs. preference tuning
- Comparative analysis of different post-training techniques
- Ablation studies on instruction-following capabilities
- Cross-lingual transfer in instruction-following tasks
- The impact of DPO on model behavior and alignment
## Intended Use
Poro 2 8B SFT is primarily intended for:
- **Research purposes**: Studying post-training techniques and their effects
- **Comparative analysis**: Understanding the contribution of different training phases
- **Educational applications**: Learning about instruction-following model development
- **Development**: As a starting point for further preference tuning experiments
**For production use cases**, we recommend using [Poro 2 8B Instruct](https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct) instead.
## Ethical Considerations and Limitations
Poro 2 8B SFT is a research checkpoint optimized for English and Finnish instruction following. As this model has not undergone preference tuning, it may be more prone to generating responses that are misaligned with user expectations compared to the final Instruct model.
Key limitations:
- **No preference tuning**: May generate responses that are less aligned or of lower quality than the Instruct version
- Limited proficiency in languages other than English and Finnish
- May occasionally generate biased, inappropriate, or factually incorrect content
- Performance may vary significantly for specialized or technical domains
- Context window limited to 8,192 tokens
- May struggle with very recent events (knowledge cutoff limitations)
**Safety Considerations:**
- This model should primarily be used for research purposes
- Users should verify important factual claims independently
- The model should not be used for medical, legal, or financial advice without human oversight
- Responses should be reviewed for appropriateness in sensitive contexts
- Consider using the Instruct version for better alignment and response quality
## License
Built with Llama
Poro 2 8B SFT is released under the Llama 3.3 Community License. Please review the license terms before use.
## Citation
```bibtex
@misc{poro2_2025,
title={Poro 2: Continued Pretraining for Language Acquisition},
author={Elaine Zosa and Jouni Louma and Kai Hakala and Antti Virtanen and Mika Koistinen and Risto Luukkonen and Akseli Reunamo and Sampo Pyysalo and Jonathan Burdge},
year={2025},
howpublished={LumiOpen}
}
```
## Acknowledgments
We thank CSC - IT Center for Science, Finland for providing access to the LUMI supercomputer. This work was supported by the High Performance Language Technologies (HPLT) project and conducted in collaboration with TurkuNLP from the University of Turku. This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350.
|
neural-interactive-proofs/finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_12-35-00_Qwen_Qwen2.5-0.5B-I
|
neural-interactive-proofs
| 2025-06-19T11:35:42Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T11:35:39Z |
---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_12-35-00_Qwen_Qwen2.5-0.5B-I
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_12-35-00_Qwen_Qwen2.5-0.5B-I
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neural-interactive-proofs/finetune_dpo_cv_test_lm_server_47_0_iter_0_provers_group_2025-06-19_12-35-00_Qwen_Qwen2.5-0.5B-I", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lrhammond-team/pvg-self-hosted-finetune/runs/Qwen_Qwen2.5-0.5B-Instruct_dpo_2025-06-19_12-35-00_cv_test_lm_server_47_0_iter_0_provers_group)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
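For context, a minimal DPO run with TRL looks roughly like the sketch below (hypothetical; the preference data and configuration for this particular run are not published in the card):

```python
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Hypothetical sketch of DPO fine-tuning with TRL; the dataset here is a
# public example, not the data used for this model.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")
args = DPOConfig(output_dir="qwen2.5-0.5b-dpo", beta=0.1)
trainer = DPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```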
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ujjawal077/llama-cyber-multilingual
|
ujjawal077
| 2025-06-19T11:34:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T11:29:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ensariano/kangal_lora_flux
|
ensariano
| 2025-06-19T11:31:42Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-06T05:04:33Z |
---
license: creativeml-openrail-m
---
|
altinkedi/xxtrgpt2v1
|
altinkedi
| 2025-06-19T11:28:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T11:25:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
omrisap/TreeRPO_V1_1600
|
omrisap
| 2025-06-19T11:28:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T11:26:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
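In the absence of author-provided code, a minimal sketch using the standard transformers text-generation pipeline follows. The repository tags (`qwen2`, `text-generation`, `conversational`) suggest a chat-style model; the prompt is purely illustrative.

```python
# Minimal sketch based on the repo tags; not an official example.
from transformers import pipeline

generator = pipeline("text-generation", model="omrisap/TreeRPO_V1_1600", device_map="auto")
messages = [{"role": "user", "content": "Briefly introduce yourself."}]  # illustrative prompt
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```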
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JunSotohigashi/lilac-water-601
|
JunSotohigashi
| 2025-06-19T11:26:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"lora",
"sft",
"dataset:JunSotohigashi/JapaneseWikipediaTypoDataset_kanji",
"base_model:tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2",
"base_model:adapter:tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T07:16:53Z |
---
base_model: tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2
datasets: JunSotohigashi/JapaneseWikipediaTypoDataset_kanji
library_name: transformers
model_name: JunSotohigashi/lilac-water-601
tags:
- generated_from_trainer
- lora
- sft
licence: license
---
# Model Card for JunSotohigashi/lilac-water-601
This model is a fine-tuned version of [tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2) on the [JunSotohigashi/JapaneseWikipediaTypoDataset_kanji](https://huggingface.co/datasets/JunSotohigashi/JapaneseWikipediaTypoDataset_kanji) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JunSotohigashi/lilac-water-601", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jun-sotohigashi-toyota-technological-institute/misusing-corpus-jp/runs/m5isjdmn)
This model was trained with SFT.
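The exact training script is not published. The sketch below shows how an SFT + LoRA run like this one is typically set up with TRL 0.12; every hyperparameter shown is illustrative, not the value actually used for this checkpoint.

```python
# Illustrative reconstruction of an SFT + LoRA run with TRL; the actual
# hyperparameters for lilac-water-601 are not published.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("JunSotohigashi/JapaneseWikipediaTypoDataset_kanji", split="train")

peft_config = LoraConfig(task_type="CAUSAL_LM")          # rank/alpha unknown; library defaults
training_args = SFTConfig(output_dir="lilac-water-601")  # other settings unknown

trainer = SFTTrainer(
    model="tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2",
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```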
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|