| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
cgifbribcgfbi/Llama-3.3-70B-chem-3-5-sonnet-div-v2 | cgifbribcgfbi | 2025-06-20T08:29:01Z | 0 | 0 | peft | ["peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "dataset:3-5-sonnet-diverse-v2_5000.jsonl", "base_model:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned", "base_model:adapter:huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned", "license:llama3.3", "4-bit", "bitsandbytes", "region:us"] | null | 2025-06-20T05:22:47Z |
---
library_name: peft
license: llama3.3
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
tags:
- axolotl
- generated_from_trainer
datasets:
- 3-5-sonnet-diverse-v2_5000.jsonl
model-index:
- name: Llama-3.3-70B-chem-3-5-sonnet-div-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0`
```yaml
base_model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
load_in_8bit: false
load_in_4bit: true
adapter: qlora
wandb_name: Llama-3.3-70B-chem-3-5-sonnet-div-v2
output_dir: ./outputs/out/Llama-3.3-70B-chem-3-5-sonnet-div-v2
hub_model_id: cgifbribcgfbi/Llama-3.3-70B-chem-3-5-sonnet-div-v2
tokenizer_type: AutoTokenizer
push_dataset_to_hub:
strict: false
datasets:
- path: 3-5-sonnet-diverse-v2_5000.jsonl
type: chat_template
field_messages: messages
dataset_prepared_path: last_run_prepared
# val_set_size: 0.05
# eval_sample_packing: False
save_safetensors: true
sequence_len: 2469
sample_packing: true
pad_to_sequence_len: true
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
- gate_proj
- up_proj
- down_proj
lora_target_linear: false
lora_modules_to_save:
wandb_mode:
wandb_project: finetune-sweep
wandb_entity: gpoisjgqetpadsfke
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 1
micro_batch_size: 4 # This will be automatically adjusted based on available GPU memory
num_epochs: 4
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 0.00002
train_on_inputs: false
group_by_length: true
bf16: true
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
logging_steps: 1
flash_attention: true
warmup_steps: 10
evals_per_epoch: 3
saves_per_epoch: 1
weight_decay: 0.01
fsdp:
- full_shard
- auto_wrap
fsdp_config:
fsdp_limit_all_gathers: true
fsdp_sync_module_states: true
fsdp_offload_params: false
fsdp_use_orig_params: false
fsdp_cpu_ram_efficient_loading: true
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_sharding_strategy: FULL_SHARD
special_tokens:
pad_token: <|finetune_right_pad_id|>
```
</details><br>
# Llama-3.3-70B-chem-3-5-sonnet-div-v2
This model is a fine-tuned version of [huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned) on the 3-5-sonnet-diverse-v2_5000.jsonl dataset.
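Since this repo ships a PEFT (QLoRA) adapter rather than merged weights, a minimal loading sketch might look like the following (assuming a machine that can host the 70B base in 4-bit; the prompt and generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned"
adapter_id = "cgifbribcgfbi/Llama-3.3-70B-chem-3-5-sonnet-div-v2"

# Load the base model in 4-bit, mirroring the QLoRA setup in the config above.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

# Attach the LoRA adapter from this repo.
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Suggest a purification strategy for a crude sulfonamide product."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```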
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 972
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
| sgonzalezygil/sd-finetuning-dreambooth-v22-600 | sgonzalezygil | 2025-06-20T08:21:42Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2025-06-20T08:20:01Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
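In the meantime, given the `StableDiffusionPipeline` tag in this repo's metadata, a minimal text-to-image sketch (the prompt and settings are placeholders) might be:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth pipeline from this repo (assumes a full pipeline was pushed).
pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v22-600", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of the trained subject in a garden", num_inference_steps=30).images[0]
image.save("sample.png")
```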
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| MikCil/reddere-voces-orpheus-lora | MikCil | 2025-06-20T08:19:27Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:canopylabs/3b-es_it-ft-research_release", "base_model:finetune:canopylabs/3b-es_it-ft-research_release", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-20T08:19:03Z |
---
base_model: canopylabs/3b-es_it-ft-research_release
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MikCil
- **License:** apache-2.0
- **Finetuned from model:** canopylabs/3b-es_it-ft-research_release
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
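A minimal loading sketch with transformers follows. It assumes merged weights were pushed to this repo (if only a LoRA adapter is present, load it with `peft` on top of the base model instead); note that the canopylabs base is an Orpheus-style speech model, so raw generations are audio-codec tokens rather than plain text:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MikCil/reddere-voces-orpheus-lora"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Quick smoke test: verifies the checkpoint loads and generates token ids.
inputs = tokenizer("Hello there!", return_tensors="pt").to(model.device)
print(model.generate(**inputs, max_new_tokens=32))
```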
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| sgonzalezygil/sd-finetuning-dreambooth-v22-1200 | sgonzalezygil | 2025-06-20T08:17:15Z | 0 | 0 | diffusers | ["diffusers", "safetensors", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2025-06-20T08:15:46Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Riyan123/Llama-3.2-3B-it-chat-merged | Riyan123 | 2025-06-20T08:13:18Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-19T06:08:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
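In the meantime, a minimal chat-style generation sketch with transformers (the prompt and sampling settings are illustrative) could look like:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Riyan123/Llama-3.2-3B-it-chat-merged"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain what a merged chat fine-tune is in one paragraph."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```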
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| elenavins184/elena_luka_max | elenavins184 | 2025-06-20T08:06:50Z | 0 | 0 | null | ["license:other", "region:us"] | null | 2025-06-17T08:53:29Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
| phospho-app/OpenLabBA-ACT_BBOX-lego_in_box_v6-nqh8l | phospho-app | 2025-06-20T08:05:33Z | 0 | 0 | null | ["safetensors", "phosphobot", "act", "region:us"] | null | 2025-06-20T07:41:47Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/lego_in_box_v6_bboxes](https://huggingface.co/datasets/phospho-app/lego_in_box_v6_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
| fangcaotank/task-10-microsoft-Phi-4-mini-instruct | fangcaotank | 2025-06-20T08:02:19Z | 90 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-4-mini-instruct", "base_model:adapter:microsoft/Phi-4-mini-instruct", "region:us"] | null | 2025-06-16T02:47:18Z |
---
base_model: microsoft/Phi-4-mini-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
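Since the repo metadata marks this as a PEFT adapter for microsoft/Phi-4-mini-instruct, a minimal loading sketch (illustrative prompt) might be:
```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

adapter_id = "fangcaotank/task-10-microsoft-Phi-4-mini-instruct"

# AutoPeftModelForCausalLM reads the adapter config and pulls in the base model automatically.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-mini-instruct")

prompt = "Explain parameter-efficient fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```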
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
- PEFT 0.13.2
| lostinjamal/3cb13c83-60cb-4a66-a135-260d6369e792 | lostinjamal | 2025-06-20T08:00:33Z | 0 | 0 | transformers | ["transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-20T07:54:33Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Triangle104/Impish_Magic_24B-Q6_K-GGUF | Triangle104 | 2025-06-20T07:57:46Z | 0 | 0 | null | ["gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:SicariusSicariiStuff/Impish_Magic_24B", "base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-06-20T07:55:34Z |
---
license: apache-2.0
language:
- en
base_model: SicariusSicariiStuff/Impish_Magic_24B
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Impish_Magic_24B-Q6_K-GGUF
This model was converted to GGUF format from [`SicariusSicariiStuff/Impish_Magic_24B`](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B) for more details on the model.
---
This model is based on mistralai/Magistral-Small-2506, hence the name Impish_Magic. It is a truly excellent size: it has been tested on a laptop with a 16 GB GPU (4090m) and runs quite fast.
This model went through a "full" fine-tune over 100M unique tokens. Why the quotes around "full"? Specific areas of the model were tuned in an attempt to change its vocabulary usage while keeping as much intelligence as possible, so it is definitely not a LoRA, but it is also not exactly a proper full fine-tune; rather, something in between.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Impish_Magic_24B-Q6_K-GGUF --hf-file impish_magic_24b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Impish_Magic_24B-Q6_K-GGUF --hf-file impish_magic_24b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Impish_Magic_24B-Q6_K-GGUF --hf-file impish_magic_24b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Impish_Magic_24B-Q6_K-GGUF --hf-file impish_magic_24b-q6_k.gguf -c 2048
```
| hanslab37/ppo-SnowballTarget | hanslab37 | 2025-06-20T07:50:23Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2025-06-20T07:50:17Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: hanslab37/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
| BCCard/Qwen2.5-Coder-14B-FP8-Dynamic | BCCard | 2025-06-20T07:42:12Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "code", "qwen", "qwen-coder", "codeqwen", "conversational", "en", "arxiv:2409.12186", "arxiv:2309.00071", "arxiv:2407.10671", "base_model:Qwen/Qwen2.5-Coder-14B", "base_model:quantized:Qwen/Qwen2.5-Coder-14B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "compressed-tensors", "region:us"] | text-generation | 2025-06-20T06:22:47Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-14B/blob/main/LICENSE
language:
- en
base_model:
- Qwen/Qwen2.5-Coder-14B
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- qwen
- qwen-coder
- codeqwen
---
# Qwen2.5-Coder-14B-FP8-Dynamic
## Model Overview
- **Model Architecture:** Qwen2.5-Coder-14B
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 2/24/2025
- **Version:** 1.0
- **Model Developers:** BC Card
Quantized version of [Qwen/Qwen2.5-Coder-14B](https://huggingface.co/Qwen/Qwen2.5-Coder-14B).
### Model Optimizations
This model was obtained by quantizing the weights of [Qwen/Qwen2.5-Coder-14B](https://huggingface.co/Qwen/Qwen2.5-Coder-14B) to FP8 data type, ready for inference with vLLM >= 0.5.2.
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams

# prepare model
llm = LLM(
    model="BCCard/Qwen2.5-Coder-14B-FP8-Dynamic",
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# prepare inputs (this repo hosts the base Coder model, so a plain completion-style prompt is used)
question = "What is the code for python hello world?"
inputs = {
    "prompt": f"# Question: {question}\n# Answer:\n",
}
# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
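As a sketch (server address and sampling settings are illustrative), you can start the server with `vllm serve BCCard/Qwen2.5-Coder-14B-FP8-Dynamic` and then query the completions endpoint with the OpenAI client:
```python
# Assumes a server started with: vllm serve BCCard/Qwen2.5-Coder-14B-FP8-Dynamic
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM does not check the key by default

completion = client.completions.create(
    model="BCCard/Qwen2.5-Coder-14B-FP8-Dynamic",
    prompt="# Python function that reverses a string\n",
    max_tokens=64,
    temperature=0.2,
)
print(completion.choices[0].text)
```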
## Qwen2.5-Coder-14B Introduction
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). As of now, Qwen2.5-Coder covers six mainstream model sizes (0.5, 1.5, 3, 7, 14, and 32 billion parameters) to meet the needs of different developers. Qwen2.5-Coder brings the following improvements upon CodeQwen1.5:
- Significant improvements in **code generation**, **code reasoning** and **code fixing**. Building on the strong Qwen2.5, we scaled the training tokens up to 5.5 trillion, including source code, text-code grounding, synthetic data, etc. Qwen2.5-Coder-32B has become the current state-of-the-art open-source code LLM, with coding abilities matching those of GPT-4o.
- A more comprehensive foundation for real-world applications such as **Code Agents**, not only enhancing coding capabilities but also maintaining strengths in mathematics and general competencies.
- **Long-context Support** up to 128K tokens.
**This repo contains the 14B Qwen2.5-Coder model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 14.7B
- Number of Parameters (Non-Embedding): 13.1B
- Number of Layers: 48
- Number of Attention Heads (GQA): 40 for Q and 8 for KV
- Context Length: Full 131,072 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
**We do not recommend using base language models for conversations.** Instead, apply post-training to this model (e.g., SFT, RLHF, continued pretraining) or use it for fill-in-the-middle tasks.
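For example, here is a fill-in-the-middle sketch with vLLM, using the FIM special tokens documented for Qwen2.5-Coder (the snippet to complete is illustrative):
```python
from vllm import LLM, SamplingParams

llm = LLM(model="BCCard/Qwen2.5-Coder-14B-FP8-Dynamic", max_model_len=4096)

# Qwen2.5-Coder FIM format: the model generates the code that goes between prefix and suffix.
fim_prompt = (
    "<|fim_prefix|>def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n"
    "<|fim_suffix|>    return quicksort(left) + middle + quicksort(right)\n<|fim_middle|>"
)
out = llm.generate([fim_prompt], SamplingParams(temperature=0.0, max_tokens=64))
print(out[0].outputs[0].text)
```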
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/), [GitHub](https://github.com/QwenLM/Qwen2.5-Coder), [Documentation](https://qwen.readthedocs.io/en/latest/), [Arxiv](https://arxiv.org/abs/2409.12186).
## Requirements
The code for Qwen2.5-Coder is included in the latest Hugging Face `transformers`, and we advise you to use the latest version (e.g., `pip install -U transformers`).
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5-coder-family/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite it.
```
@article{hui2024qwen2,
title={Qwen2.5-Coder Technical Report},
author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
journal={arXiv preprint arXiv:2409.12186},
year={2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
| whyquqwhy/Tower_Defense | whyquqwhy | 2025-06-20T07:41:25Z | 0 | 0 | null | ["region:us"] | null | 2025-06-20T06:54:49Z |
# TowerDefense-GameFramework-Demo
## Introduction
This is a tower-defense game demo built on the open-source framework [GameFramework][1] (hereafter GF). The prototype is Unity's official [Tower Defense Template][2] demo from the Asset Store. This project re-implements and extends the prototype with GF; it is mainly a personal exercise for learning and practicing GF, and may also serve as a reference for others studying GF.
## Version Info
- Unity 2019.4.1f1
- GameFramework 2020.12.31
- Tower Defense Template 1.4
## Framework Code Analysis
My Zhihu column analyzing the GF codebase: [GameFramework Analysis: Opening](https://zhuanlan.zhihu.com/p/426136370)
## Game Overview
### Preview
![Overview 1][4]
![Overview 2][5]
![Overview 3][13]
![Overview 4][14]
### Description
This is a tower-defense game with five levels; each level differs in terrain, the enemies it spawns, and the towers available. Players spend the energy they earn to pick towers suited to the situation and build them in the right positions to stop enemies from attacking the base.
#### Energy
Players start each level with a small amount of energy. Energy is gained by killing enemies and by building energy towers, and is spent on building and upgrading towers.
#### Towers
1. Cannon tower: high fire rate, low damage
2. Rocket tower: high AoE damage (attacks ground enemies only)
3. Laser tower: low fire rate, high damage, long range
4. Energy tower: generates energy at regular intervals
5. EMP tower: slows nearby enemies
6. Missile array: deals heavy damage to enemies over a wide area; self-destructs after 10 seconds on the field
**Towers can be upgraded; upgrades improve range, damage, slow rate, energy generation efficiency, and more.**
#### Enemies
1. Bug: low HP, high movement speed
2. Helicopter: avoids rocket tower attacks, and flies straight over towers to the base when the path is blocked
3. Tank: high HP, low movement speed
4. Boss: very high HP, very low movement speed
5. Super bug: high-HP bug
6. Super helicopter: high-HP helicopter
7. Super tank: high-HP tank
8. Super Boss: high-HP Boss
**Enemies normally do not attack towers, but they will if towers completely block their path (helicopters never attack towers and simply fly over them). The right approach is to build around the terrain to create a winding route that lengthens the enemies' path to the base without fully blocking it, so your towers are not attacked.**
#### Base
The base is the enemies' final target and the object the player must defend; when its HP reaches 0, the game is lost.
#### Level Results
If the player eliminates every enemy in a level while the base still has HP left, the level is cleared; if the base's HP is reduced to 0 before all enemies are eliminated, the game is lost. Cleared levels are scored based on the base's remaining HP.
## Implementation Notes
This project uses many GF modules, including global config, data tables, entities, events, file system, finite state machine, localization, object pool, reference pool, procedure, resources, scenes, settings, sound, and UI.
### Data Configuration
![Data configuration][6]
All in-game data is configured in Excel, exported to binary files, and loaded at runtime.
### Localization
![Localization][7]
Game localization is implemented with the localization module plus variants in the resource module.
### Reference Pool
![Reference pool][8]
Heavily reused objects throughout the project are cached with the reference pool to avoid frequent memory allocation.
### Asset Bundle Configuration
![Asset bundle configuration 1][9]
![Asset bundle configuration 2][10]
All assets are configured for bundling, with correct package-split information, file systems, and so on. Using the built-in analysis tools, the setup achieves zero redundancy and zero circular references.
### Hot Update
![Hot update][11]
On startup the game checks version info and updates the base resources (i.e., non-level resources).
### Per-Level Package Download
![Per-level download][12]
Each level's resources are packaged separately; the corresponding resources must be downloaded and updated before entering a level, while levels not yet played can be left undownloaded for the time being.
## Notes
In the Unity Editor the game launches in Editor mode by default, i.e., it reads assets from the project and neither loads asset bundles nor performs updates. The project's bundling is configured correctly and the corresponding hot-update logic is implemented; to test update mode, uncheck Editor Resource Mode on the Base component and make sure the Resource component's Resource Mode is set to Updatable. After building and correctly deploying the assets, update mode runs normally (tools such as HFS can be used for local deployment and testing).
## Closing
Thanks to [Ellan Jiang][3], the author of [GameFramework][1], for providing this excellent framework.
[1]: https://github.com/EllanJiang/GameFramework "GF link"
[2]: https://assetstore.unity.com/packages/essentials/tutorial-projects/tower-defense-template-107692 "Tower Defense Template Link"
[3]: https://github.com/EllanJiang "Ellan Jiang link"
[4]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/1.png "Overview 1"
[5]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/2.JPG "Overview 2"
[6]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/3.png "Data configuration"
[7]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/4.JPG "Localization"
[8]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/5.png "Reference pool"
[9]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/6.png "Asset bundle configuration 1"
[10]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/7.png "Asset bundle configuration 2"
[11]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/8.png "Hot update"
[12]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/9.png "Per-level download"
[13]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/10.gif "Overview 3"
[14]: https://github.com/DrFlower/TowerDefense-GameFramework-Demo/blob/master/Doc/11.gif "Overview 4"
| RedbeardNZ/HunyuanVideo-Avatar | RedbeardNZ | 2025-06-20T07:11:57Z | 0 | 0 | null | ["safetensors", "image-to-video", "en", "arxiv:2505.20156", "region:us"] | image-to-video | 2025-06-20T07:11:55Z |
---
pipeline_tag: image-to-video
language:
- en
---
<!-- ## **HunyuanVideo-Avatar** -->
<p align="center">
<img src="https://hf.fast360.xyz/production/uploads/646d7592bb95b5d4001e5a04/HDZpvr8F-UaHAHlsF--fh.png" height=100>
</p>
<div align="center">
<a href="https://github.com/Tencent-Hunyuan/HunyuanVideo-Avatar"><img src="https://img.shields.io/static/v1?label=HunyuanVideo-Avatar%20Code&message=Github&color=blue"></a>
<a href="https://HunyuanVideo-Avatar.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a>
<a href="https://hunyuan.tencent.com/modelSquare/home/play?modelId=126"><img src="https://img.shields.io/static/v1?label=Playground&message=Web&color=green"></a>
<a href="https://arxiv.org/pdf/2505.20156"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a>
<a href="https://huggingface.co/tencent/HunyuanVideo-Avatar"><img src="https://img.shields.io/static/v1?label=HunyuanVideo-Avatar&message=HuggingFace&color=yellow"></a>
</div>

> [**HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters**](https://arxiv.org/pdf/2505.20156) <br>
## **Abstract**
Recent years have witnessed significant progress in audio-driven human animation. However, critical challenges remain in (i) generating highly dynamic videos while preserving character consistency, (ii) achieving precise emotion alignment between characters and audio, and (iii) enabling multi-character audio-driven animation. To address these challenges, we propose HunyuanVideo-Avatar, a multimodal diffusion transformer (MM-DiT)-based model capable of simultaneously generating dynamic, emotion-controllable, and multi-character dialogue videos. Concretely, HunyuanVideo-Avatar introduces three key innovations: (i) A character image injection module is designed to replace the conventional addition-based character conditioning scheme, eliminating the inherent condition mismatch between training and inference. This ensures the dynamic motion and strong character consistency; (ii) An Audio Emotion Module (AEM) is introduced to extract and transfer the emotional cues from an emotion reference image to the target generated video, enabling fine-grained and accurate emotion style control; (iii) A Face-Aware Audio Adapter (FAA) is proposed to isolate the audio-driven character with latent-level face mask, enabling independent audio injection via cross-attention for multi-character scenarios. These innovations empower HunyuanVideo-Avatar to surpass state-of-the-art methods on benchmark datasets and a newly proposed wild dataset, generating realistic avatars in dynamic, immersive scenarios. The source code and model weights will be released publicly.
## **HunyuanVideo-Avatar Overall Architecture**

We propose **HunyuanVideo-Avatar**, a multi-modal diffusion transformer(MM-DiT)-based model capable of generating **dynamic**, **emotion-controllable**, and **multi-character dialogue** videos.
## 🎉 **HunyuanVideo-Avatar Key Features**

### **High-Dynamic and Emotion-Controllable Video Generation**
HunyuanVideo-Avatar can animate any input **avatar image** into **high-dynamic**, **emotion-controllable** videos from simple **audio conditions**. Specifically, it accepts **multi-style** avatar images at **arbitrary scales and resolutions**, covering photorealistic, cartoon, 3D-rendered, and anthropomorphic characters, and supports multi-scale generation spanning portrait, upper-body, and full-body framing. It generates videos with a highly dynamic foreground and background, achieving superior realism and naturalness. In addition, the system can control the characters' facial emotions conditioned on the input audio.
### **Various Applications**
HunyuanVideo-Avatar supports various downstream tasks and applications. For instance, the system generates talking-avatar videos, which can be applied to e-commerce, online streaming, social media video production, and more. In addition, its multi-character animation feature broadens applications such as video content creation and editing.
## 🚀 Parallel Inference on Multiple GPUs
For example, to generate a video with 8 GPUs, you can use the following command:
```bash
cd HunyuanVideo-Avatar
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./
export MODEL_BASE="./weights"
OUTPUT_BASEPATH=./results  # output directory for generated videos; set as needed
checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \
--input 'assets/test.csv' \
--ckpt ${checkpoint_path} \
--sample-n-frames 129 \
--seed 128 \
--image-size 704 \
--cfg-scale 7.5 \
--infer-steps 50 \
--use-deepcache 1 \
--flow-shift-eval-video 5.0 \
--save-path ${OUTPUT_BASEPATH}
```
## 🔑 Single-gpu Inference
For example, to generate a video with 1 GPU, you can use the following command:
```bash
cd HunyuanVideo-Avatar
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./
export MODEL_BASE=./weights
OUTPUT_BASEPATH=./results-single
checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt
export DISABLE_SP=1
CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \
--input 'assets/test.csv' \
--ckpt ${checkpoint_path} \
--sample-n-frames 129 \
--seed 128 \
--image-size 704 \
--cfg-scale 7.5 \
--infer-steps 50 \
--use-deepcache 1 \
--flow-shift-eval-video 5.0 \
--save-path ${OUTPUT_BASEPATH} \
--use-fp8 \
--infer-min
```
### Run with very low VRAM
```bash
cd HunyuanVideo-Avatar
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=./
export MODEL_BASE=./weights
OUTPUT_BASEPATH=./results-poor
checkpoint_path=${MODEL_BASE}/ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states_fp8.pt
export CPU_OFFLOAD=1
CUDA_VISIBLE_DEVICES=0 python3 hymm_sp/sample_gpu_poor.py \
--input 'assets/test.csv' \
--ckpt ${checkpoint_path} \
--sample-n-frames 129 \
--seed 128 \
--image-size 704 \
--cfg-scale 7.5 \
--infer-steps 50 \
--use-deepcache 1 \
--flow-shift-eval-video 5.0 \
--save-path ${OUTPUT_BASEPATH} \
--use-fp8 \
--cpu-offload \
--infer-min
```
## Run a Gradio Server
```bash
cd HunyuanVideo-Avatar
bash ./scripts/run_gradio.sh
```
## 🔗 BibTeX
If you find [HunyuanVideo-Avatar](https://arxiv.org/pdf/2505.20156) useful for your research and applications, please cite using this BibTeX:
```BibTeX
@misc{hu2025HunyuanVideo-Avatar,
title={HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters},
author={Yi Chen and Sen Liang and Zixiang Zhou and Ziyao Huang and Yifeng Ma and Junshu Tang and Qin Lin and Yuan Zhou and Qinglin Lu},
year={2025},
eprint={2505.20156},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/pdf/2505.20156},
}
```
## Acknowledgements
We would like to thank the contributors to the [HunyuanVideo](https://github.com/Tencent/HunyuanVideo), [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
| phuongdoan01/Gensyn-Qwen2.5-1.5B-Instruct-acbaa | phuongdoan01 | 2025-06-20T07:05:59Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "chat", "rl-swarm", "gensyn", "conversational", "en", "base_model:Qwen/Qwen2.5-1.5B", "base_model:finetune:Qwen/Qwen2.5-1.5B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-20T06:37:50Z |
---
license: apache-2.0
license_link: https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
tags:
- chat
- rl-swarm
- gensyn
library_name: transformers
---
# Qwen2.5-1.5B-Instruct
## Introduction
This model is intended for use in the [Gensyn RL Swarm](https://www.gensyn.ai/articles/rl-swarm), to finetune locally using peer-to-peer reinforcement learning post-training.
Once finetuned, the model can be used as normal in any workflow, for details on how to do this please refer to the [original model documentation](https://qwen.readthedocs.io/en/latest/).
For more details on the original model, please refer to the original repository [here](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
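As a quick sanity check that the checkpoint behaves like the upstream instruct model, a minimal generation sketch with transformers (illustrative prompt) is:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "phuongdoan01/Gensyn-Qwen2.5-1.5B-Instruct-acbaa"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Describe RL Swarm in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```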
This repo contains an **unmodified version** of the instruction-tuned 1.5B Qwen2.5 model, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: full 32,768 tokens, with generation up to 8,192 tokens
## Requirements
This model is intended for use in the [Gensyn RL Swarm](https://www.gensyn.ai/articles/rl-swarm) system. For details on model requirements when using it outside of a swarm, refer to the original Qwen repo [here](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
## Quickstart
To deploy this model into a swarm and/or participate in the Gensyn Testnet, follow the instructions in the [RL Swarm repository](https://github.com/gensyn-ai/rl-swarm), read about the [testnet](https://www.gensyn.ai/testnet), read the [RL Swarm overview](https://www.gensyn.ai/articles/rl-swarm), and/or read the [RL Swarm technical report](https://github.com/gensyn-ai/paper-rl-swarm/blob/main/latest.pdf).
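Since this repo contains an unmodified Qwen2.5-1.5B-Instruct checkpoint, plain local inference outside a swarm should follow the standard transformers pattern. A minimal sketch (the prompt is illustrative, not part of any official quickstart):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phuongdoan01/Gensyn-Qwen2.5-1.5B-Instruct-acbaa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat prompt with the model's chat template and generate a reply.
messages = [{"role": "user", "content": "Give me a short introduction to RL Swarm."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```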
|
hzzscience/WaiyumiaoModels
|
hzzscience
| 2025-06-20T07:01:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T06:36:55Z |
---
license: apache-2.0
---
|
ychalla123/gemma-finetunned
|
ychalla123
| 2025-06-20T06:55:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T06:53:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sagarpednekar/gemma-finetunned
|
sagarpednekar
| 2025-06-20T06:44:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T06:44:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1
|
AmberYifan
| 2025-06-20T06:32:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT",
"base_model:finetune:AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T06:04:35Z |
---
base_model: AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT
library_name: transformers
model_name: Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT](https://huggingface.co/AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/l95po5bo)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
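For orientation, a minimal TRL DPO sketch in the spirit of this run is shown below. This is not the exact training recipe: the preference data path, its column layout, and `beta` are assumptions, and argument names can differ slightly across TRL versions (e.g. `tokenizer` vs. `processing_class`).
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "AmberYifan/Qwen2.5-7B-Instruct-userfeedback-SFT"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical SPIN-style preference file with "prompt", "chosen", "rejected" columns.
train_dataset = load_dataset("json", data_files="spin_iter1_pairs.jsonl", split="train")

args = DPOConfig(output_dir="Qwen2.5-7B-Instruct-userfeedback-SFT-SPIN-iter1", beta=0.1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()  # ref_model defaults to a frozen copy of `model` when not provided
```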
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
openbmb/MiniCPM-o-2_6
|
openbmb
| 2025-06-20T06:31:33Z | 118,997 | 1,168 |
transformers
|
[
"transformers",
"safetensors",
"minicpmo",
"feature-extraction",
"minicpm-o",
"omni",
"vision",
"ocr",
"multi-image",
"video",
"custom_code",
"audio",
"speech",
"voice cloning",
"live Streaming",
"realtime speech conversation",
"asr",
"tts",
"any-to-any",
"multilingual",
"dataset:openbmb/RLAIF-V-Dataset",
"arxiv:2405.17220",
"arxiv:2408.01800",
"region:us"
] |
any-to-any
| 2025-01-12T07:40:10Z |
---
pipeline_tag: any-to-any
datasets:
- openbmb/RLAIF-V-Dataset
library_name: transformers
language:
- multilingual
tags:
- minicpm-o
- omni
- vision
- ocr
- multi-image
- video
- custom_code
- audio
- speech
- voice cloning
- live Streaming
- realtime speech conversation
- asr
- tts
---
<h1>A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone</h1>
[GitHub](https://github.com/OpenBMB/MiniCPM-o) | [Online Demo](https://minicpm-omni-webdemo-us.modelbest.cn) | [Technical Blog](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9) | [Join Us](https://mp.weixin.qq.com/mp/wappoc_appmsgcaptcha?poc_token=HAV8UWijqB3ImPSXecZHlOns7NRgpQw9y9EI2_fE&target_url=https%3A%2F%2Fmp.weixin.qq.com%2Fs%2FKIhH2nCURBXuFXAtYRpuXg%3F)
### News
* [2025.06.20] ⭐️⭐️⭐️ Our official [ollama repository](https://ollama.com/openbmb) is released. Try our latest models with [one click](https://ollama.com/openbmb/minicpm-o2.6)!
* [2025.03.01] 🚀🚀🚀 RLAIF-V, the alignment technique of MiniCPM-o, has been accepted by CVPR 2025! The [code](https://github.com/RLHF-V/RLAIF-V), [dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset), and [paper](https://arxiv.org/abs/2405.17220) are open-sourced!
* [2025.01.24] 📢📢📢 MiniCPM-o 2.6 technical report is released! [See Here](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9).
* [2025.01.19] ⭐️⭐️⭐️ MiniCPM-o tops GitHub Trending and reaches top-2 on Hugging Face Trending!
## MiniCPM-o 2.6
**MiniCPM-o 2.6** is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.6, and introduces new features for real-time speech conversation and multimodal live streaming. Notable features of MiniCPM-o 2.6 include:
- 🔥 **Leading Visual Capability.**
MiniCPM-o 2.6 achieves an average score of 70.2 on OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-202405, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding. It also **outperforms GPT-4V and Claude 3.5 Sonnet** in multi-image and video understanding, and shows promising in-context learning capability.
- 🎙 **State-of-the-art Speech Capability.** MiniCPM-o 2.6 supports **bilingual real-time speech conversation with configurable voices** in English and Chinese. It **outperforms GPT-4o-realtime on audio understanding tasks** such as ASR and STT translation, and shows **state-of-the-art performance on speech conversation in both semantic and acoustic evaluations in the open-source community**. It also allows for fun features such as emotion/speed/style control, end-to-end voice cloning, role play, etc.
- 🎬 **Strong Multimodal Live Streaming Capability.** As a new feature, MiniCPM-o 2.6 can **accept continuous video and audio streams independent of user queries, and support real-time speech interaction**. It **outperforms GPT-4o-202408 and Claude 3.5 Sonnet and shows state-of-the-art performance in the open-source community on StreamingBench**, a comprehensive benchmark for real-time video understanding, omni-source (video & audio) understanding, and multimodal contextual understanding.
- 💪 **Strong OCR Capability and Others.**
Advancing the popular visual capabilities of the MiniCPM-V series, MiniCPM-o 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench for models under 25B, surpassing proprietary models such as GPT-4o-202405**.
Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o and Claude 3.5 Sonnet on MMHal-Bench, and supports **multilingual capabilities** in more than 30 languages.
- 🚀 **Superior Efficiency.**
In addition to its friendly size, MiniCPM-o 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-o 2.6 can efficiently support **multimodal live streaming** on end-side devices such as iPad.
- 💫 **Easy Usage.**
MiniCPM-o 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [LLaMA-Factory](./docs/llamafactory_train.md), (5) quick local WebUI demo setup with [Gradio](#chat-with-our-demo-on-gradio), and (6) online web demo on [server](https://minicpm-omni-webdemo-us.modelbest.cn/).
**Model Architecture.**
- **End-to-end Omni-modal Architecture.** Different modality encoder/decoders are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge.
- **Omni-modal Live Streaming Mechanism.** (1) We change the offline modality encoder/decoders into online ones for **streaming inputs/outputs.** (2) We devise a **time-division multiplexing (TDM) mechanism** for omni-modality streaming processing in the LLM backbone. It divides parallel omni-modality streams into sequential information within small periodic time slices.
- **Configurable Speech Modeling Design.** We devise a multimodal system prompt, including traditional text system prompt, and **a new audio system prompt to determine the assistant voice**. This enables flexible voice configurations in inference time, and also facilitates end-to-end voice cloning and description-based voice creation.
<div align="center">
<img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/minicpm-o-26-framework-v2.png" , width=100%>
</div>
### Evaluation <!-- omit in toc -->
<div align="center">
<img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/radar.jpg" width=90% />
</div>
#### Visual understanding results
**Image Understanding:**
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>Token Density<sup>+</sup></th>
<th>OpenCompass</th>
<th>OCRBench</th>
<th>MathVista mini</th>
<th>ChartQA</th>
<th>MMVet</th>
<th>MMStar</th>
<th>MME</th>
<th>MMB1.1 test</th>
<th>AI2D</th>
<th>MMMU val</th>
<th>HallusionBench</th>
<th>TextVQA val</th>
<th>DocVQA test</th>
<th>MathVerse mini</th>
<th>MathVision</th>
<th>MMHal Score</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="19" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4o-20240513</td>
<td>-</td>
<td>1088</td>
<td><u>69.9</u></td>
<td>736</td>
<td>61.3</td>
<td>85.7</td>
<td><strong>69.1</strong></td>
<td>63.9</td>
<td>2328.7</td>
<td>82.2</td>
<td>84.6</td>
<td><strong>69.2</strong></td>
<td><strong>55.0</strong></td>
<td>-</td>
<td>92.8</td>
<td><strong>50.2</strong></td>
<td><strong>30.4</strong></td>
<td><u>3.6</u></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Claude3.5-Sonnet</td>
<td>-</td>
<td>750</td>
<td>67.9</td>
<td>788</td>
<td>61.6</td>
<td><strong>90.8</strong></td>
<td>66.0</td>
<td>62.2</td>
<td>1920.0</td>
<td>78.5</td>
<td>80.2</td>
<td><u>65.9</u></td>
<td>49.9</td>
<td>-</td>
<td><strong>95.2</strong></td>
<td>-</td>
<td>-</td>
<td>3.4</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Gemini 1.5 Pro</td>
<td>-</td>
<td>-</td>
<td>64.4</td>
<td>754</td>
<td>57.7</td>
<td>81.3</td>
<td>64.0</td>
<td>59.1</td>
<td>2110.6</td>
<td>73.9</td>
<td>79.1</td>
<td>60.6</td>
<td>45.6</td>
<td>73.5</td>
<td>86.5</td>
<td>-</td>
<td>19.2</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4o-mini-20240718</td>
<td>-</td>
<td>1088</td>
<td>64.1</td>
<td>785</td>
<td>52.4</td>
<td>-</td>
<td>66.9</td>
<td>54.8</td>
<td>2003.4</td>
<td>76.0</td>
<td>77.8</td>
<td>60.0</td>
<td>46.1</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>3.3</td>
</tr>
<tr>
<td colspan="19" align="left"><strong>Open Source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Cambrian-34B</td>
<td>34B</td>
<td><u>1820</u></td>
<td>58.3</td>
<td>591</td>
<td>50.3</td>
<td>75.6</td>
<td>53.2</td>
<td>54.2</td>
<td>2049.9</td>
<td>77.8</td>
<td>79.5</td>
<td>50.4</td>
<td>41.6</td>
<td>76.7</td>
<td>75.5</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GLM-4V-9B</td>
<td>13B</td>
<td>784</td>
<td>59.1</td>
<td>776</td>
<td>51.1</td>
<td>-</td>
<td>58.0</td>
<td>54.8</td>
<td>2018.8</td>
<td>67.9</td>
<td>71.2</td>
<td>46.9</td>
<td>45.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Pixtral-12B</td>
<td>12B</td>
<td>256</td>
<td>61.0</td>
<td>685</td>
<td>56.9</td>
<td>81.8</td>
<td>58.5</td>
<td>54.5</td>
<td>-</td>
<td>72.7</td>
<td>79.0</td>
<td>51.1</td>
<td>47.0</td>
<td>75.7</td>
<td>90.7</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">DeepSeek-VL2-27B (4B)</td>
<td>27B</td>
<td>672</td>
<td>66.4</td>
<td>809</td>
<td>63.9</td>
<td>86.0</td>
<td>60.0</td>
<td>61.9</td>
<td>2253.0</td>
<td>81.2</td>
<td>83.8</td>
<td>54.0</td>
<td>45.3</td>
<td><u>84.2</u></td>
<td>93.3</td>
<td>-</td>
<td>-</td>
<td>3.0</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen2-VL-7B</td>
<td>8B</td>
<td>784</td>
<td>67.1</td>
<td><u>866</u></td>
<td>58.2</td>
<td>83.0</td>
<td>62.0</td>
<td>60.7</td>
<td>2326.0</td>
<td>81.8</td>
<td>83.0</td>
<td>54.1</td>
<td>50.6</td>
<td><strong>84.3</strong></td>
<td><u>94.5</u></td>
<td>31.9</td>
<td>16.3</td>
<td>3.2</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-OneVision-72B</td>
<td>72B</td>
<td>182</td>
<td>68.1</td>
<td>741</td>
<td>67.5</td>
<td>83.7</td>
<td>60.6</td>
<td><strong>65.8</strong></td>
<td>2261.0</td>
<td><strong>85.0</strong></td>
<td><u>85.6</u></td>
<td>56.8</td>
<td>49.0</td>
<td>80.5</td>
<td>91.3</td>
<td>39.1</td>
<td>-</td>
<td>3.5</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternVL2.5-8B</td>
<td>8B</td>
<td>706</td>
<td>68.3</td>
<td>822</td>
<td><u>64.4</u></td>
<td>84.8</td>
<td>62.8</td>
<td>62.8</td>
<td>2344.0</td>
<td><u>83.6</u></td>
<td>84.5</td>
<td>56.0</td>
<td>50.1</td>
<td>79.1</td>
<td>93.0</td>
<td>39.5</td>
<td>19.7</td>
<td>3.4</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
<td>8B</td>
<td><strong>2822</strong></td>
<td>65.2</td>
<td>852*</td>
<td>60.6</td>
<td>79.4</td>
<td>60.0</td>
<td>57.5</td>
<td><u>2348.4*</u></td>
<td>78.0</td>
<td>82.1</td>
<td>49.8*</td>
<td>48.1*</td>
<td>80.1</td>
<td>90.8</td>
<td>25.7</td>
<td>18.3</td>
<td>3.6</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
<td>8B</td>
<td><strong>2822</strong></td>
<td><strong>70.2</strong></td>
<td><strong>897*</strong></td>
<td><strong>71.9*</strong></td>
<td><u>86.9*</u></td>
<td><u>67.5</u></td>
<td><u>64.0</u></td>
<td><strong>2372.0*</strong></td>
<td>80.5</td>
<td><strong>85.8</strong></td>
<td>50.4*</td>
<td><u>51.9</u></td>
<td>82.0</td>
<td>93.5</td>
<td><u>41.4*</u></td>
<td><u>23.1*</u></td>
<td><strong>3.8</strong></td>
</tr>
</tbody>
</table>
</div>
* We evaluate this benchmark using chain-of-thought prompting. Specifically, for MME, we used this technique only for the Cognition set.
<sup>+</sup> Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens.
Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation.
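As a sanity check, the 2822 token density reported for MiniCPM-o 2.6 in the table follows directly from this definition, using the 1344x1344 maximum resolution and the 640 visual tokens quoted in the highlights above:
```python
# Token density = pixels at maximum resolution / number of visual tokens
max_pixels = 1344 * 1344   # ~1.8M pixels, the maximum resolution mentioned above
visual_tokens = 640        # visual tokens produced for such an image
print(round(max_pixels / visual_tokens))  # 2822, matching the table entry for MiniCPM-o 2.6
```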
**Multi-image and Video Understanding:**
<details>
<summary>click to view</summary>
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>BLINK val</th>
<th>Mantis Eval</th>
<th>MIRB</th>
<th>Video-MME (wo / w subs)</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="6" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4o-20240513</td>
<td>-</td>
<td><strong>68.0</strong></td>
<td>-</td>
<td>-</td>
<td><strong>71.9/77.2<strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT4V</td>
<td>-</td>
<td>54.6</td>
<td>62.7</td>
<td>53.1</td>
<td>59.9/63.3</td>
</tr>
<tr>
<td colspan="6" align="left"><strong>Open-source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-NeXT-Interleave 14B</td>
<td>14B</td>
<td>52.6</td>
<td>66.4</td>
<td>30.2</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-OneVision-72B</td>
<td>72B</td>
<td>55.4</td>
<td><strong>77.6</strong></td>
<td>-</td>
<td><u>66.2/69.5</u></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MANTIS 8B</td>
<td>8B</td>
<td>49.1</td>
<td>59.5</td>
<td>34.8</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen2-VL-7B</td>
<td>8B</td>
<td>53.2</td>
<td>69.6*</td>
<td><strong>67.6*</strong></td>
<td>63.3/69.0</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternVL2.5-8B</td>
<td>8B</td>
<td>54.8</td>
<td>67.7</td>
<td>52.5</td>
<td>64.2/66.9</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
<td>8B</td>
<td>53.0</td>
<td>69.1</td>
<td>53.8</td>
<td>60.9/63.6</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
<td>8B</td>
<td><u>56.7</u></td>
<td><u>71.9</u></td>
<td><u>58.6</u></td>
<td>63.9/67.9</td>
</tr>
</tbody>
</table>
</div>
* We evaluate officially released checkpoints by ourselves.
</details>
#### Audio understanding and speech conversation results.
**Audio Understanding:**
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Task</th>
<th>Size</th>
<th colspan="3">ASR (zh)</th>
<th colspan="3">ASR (en)</th>
<th colspan="2">AST</th>
<th>Emotion</th>
</tr>
<tr>
<th align="left">Metric</th>
<td></td>
<th colspan="3">CER↓</th>
<th colspan="3">WER↓</th>
<th colspan="2">BLEU↑</th>
<th>ACC↑</th>
</tr>
<tr>
<th align="left">Dataset</th>
<td></td>
<th>AISHELL-1</th>
<th>Fleurs zh</th>
<th>WenetSpeech test-net</th>
<th>LibriSpeech test-clean</th>
<th>GigaSpeech</th>
<th>TED-LIUM</th>
<th>CoVoST en2zh</th>
<th>CoVoST zh2en</th>
<th>MELD emotion</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="11" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4o-Realtime</td>
<td>-</td>
<td>7.3*</td>
<td><u>5.4*</u></td>
<td>28.9*</td>
<td>2.6*</td>
<td>12.9*</td>
<td>4.8*</td>
<td>37.1*</td>
<td>15.7*</td>
<td>33.2*</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Gemini 1.5 Pro</td>
<td>-</td>
<td>4.5*</td>
<td>5.9*</td>
<td>14.3*</td>
<td>2.9*</td>
<td>10.6*</td>
<td><strong>3.0*</strong></td>
<td><u>47.3*</u></td>
<td>22.6*</td>
<td>48.4*</td>
</tr>
<tr>
<td colspan="11" align="left"><strong>Open-Source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen2-Audio-7B</td>
<td>8B</td>
<td>-</td>
<td>7.5</td>
<td>-</td>
<td><strong>1.6</strong></td>
<td>-</td>
<td>-</td>
<td>45.2</td>
<td><u>24.4</u></td>
<td><strong>55.3</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen2-Audio-7B-Instruct</td>
<td>8B</td>
<td>2.6*</td>
<td>6.9*</td>
<td><u>10.3*</u></td>
<td>3.1*</td>
<td><u>9.7</u>*</td>
<td>5.9*</td>
<td>39.5*</td>
<td>22.9*</td>
<td>17.4*</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GLM-4-Voice-Base</td>
<td>9B</td>
<td><u>2.5</u></td>
<td>-</td>
<td>-</td>
<td>2.8</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
<td>8B</td>
<td><strong>1.6</strong></td>
<td><strong>4.4</strong></td>
<td><strong>6.9</strong></td>
<td><u>1.7</u></td>
<td><strong>8.7</strong></td>
<td><strong>3.0</strong></td>
<td><strong>48.2</strong></td>
<td><strong>27.2</strong></td>
<td><u>52.4</u></td>
</tr>
</tbody>
</table>
</div>
* We evaluate officially released checkpoints by ourselves.<br><br>
**Speech Generation:**
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Task</th>
<th>Size</th>
<th colspan="9">SpeechQA</th>
</tr>
<tr>
<th align="left">Metric</th>
<th></th>
<th colspan="3">ACC↑</th>
<th>G-Eval (10 point)↑</th>
<th>Semantic ELO score↑</th>
<th>Acoustic ELO score↑</th>
<th>Overall ELO score↑</th>
<th>UTMOS↑</th>
<th>ASR-WER↓</th>
</tr>
<tr>
<th align="left">Dataset</th>
<th></th>
<th>Speech Llama Q.</th>
<th>Speech Web Q.</th>
<th>Speech Trivia QA</th>
<th>Speech AlpacaEval</th>
<th colspan="5">AudioArena</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="11" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4o-Realtime</td>
<td></td>
<td><strong>71.7</strong></td>
<td><strong>51.6</strong></td>
<td><strong>69.7</strong></td>
<td><strong>7.4</strong></td>
<td><strong>1157</strong></td>
<td><strong>1203</strong></td>
<td><strong>1200</strong></td>
<td><strong>4.2</strong></td>
<td><strong>2.3</strong></td>
</tr>
<tr>
<td colspan="11" align="left"><strong>Open-Source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GLM-4-Voice</td>
<td>9B</td>
<td>50.0</td>
<td>32.0</td>
<td>36.4</td>
<td><u>5.1</u></td>
<td>999</td>
<td>1147</td>
<td>1035</td>
<td><u>4.1</u></td>
<td><u>11.7</u></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Llama-Omni</td>
<td>8B</td>
<td>45.3</td>
<td>22.9</td>
<td>10.7</td>
<td>3.9</td>
<td>960</td>
<td>878</td>
<td>897</td>
<td>3.2</td>
<td>24.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Moshi</td>
<td>7B</td>
<td>43.7</td>
<td>23.8</td>
<td>16.7</td>
<td>2.4</td>
<td>871</td>
<td>808</td>
<td>875</td>
<td>2.8</td>
<td>8.2</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Mini-Omni</td>
<td>1B</td>
<td>22.0</td>
<td>12.8</td>
<td>6.9</td>
<td>2.5</td>
<td>926</td>
<td>803</td>
<td>865</td>
<td>3.4</td>
<td>10.0</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
<td>8B</td>
<td><u>61.0</u></td>
<td><u>40.0</u></td>
<td><u>40.2</u></td>
<td><u>5.1</u></td>
<td><u>1088</u></td>
<td><u>1163</u></td>
<td><u>1131</u></td>
<td><strong>4.2</strong></td>
<td>9.8</td>
</tr>
</tbody>
</table>
</div>
All results are from AudioEvals, and the evaluation methods along with further details can be found in <a href="https://github.com/OpenBMB/UltraEval-Audio" target="_blank">UltraEval-Audio</a>.<br><br>
**End-to-end Voice Cloning**
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Task</th>
<th colspan="2">Voice cloning</th>
</tr>
<tr>
<th align="left">Metric</th>
<th>SIMO↑</th>
<th>SIMO↑</th>
</tr>
<tr>
<th align="left">Dataset</th>
<th>Seed-TTS test-zh</th>
<th>Seed-TTS test-en</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td nowrap="nowrap" align="left">F5-TTS</td>
<td><strong>76</strong></td>
<td><strong>67</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">CosyVoice</td>
<td><u>75</u></td>
<td><u>64</u></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">FireRedTTS</td>
<td>63</td>
<td>46</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
<td>57</td>
<td>47</td>
</tr>
</tbody>
</table>
</div>
#### Multimodal live streaming results.
**Multimodal Live Streaming:** results on StreamingBench
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>Real-Time Video Understanding</th>
<th>Omni-Source Understanding</th>
<th>Contextual Understanding</th>
<th>Overall</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="7" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Gemini 1.5 Pro</td>
<td>-</td>
<td><u>77.4</u></td>
<td><strong>67.8</strong></td>
<td><strong>51.1</strong></td>
<td><strong>70.3</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4o-202408</td>
<td>-</td>
<td>74.5</td>
<td>51.0</td>
<td><u>48.0</u></td>
<td>64.1</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Claude-3.5-Sonnet</td>
<td>-</td>
<td>74.0</td>
<td>41.4</td>
<td>37.8</td>
<td>59.7</td>
</tr>
<tr>
<td colspan="9" align="left"><strong>Open-source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">VILA-1.5</td>
<td>8B</td>
<td>61.5</td>
<td>37.5</td>
<td>26.7</td>
<td>49.5</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LongVA</td>
<td>7B</td>
<td>63.1</td>
<td>35.9</td>
<td>30.2</td>
<td>50.7</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-Next-Video-34B</td>
<td>34B</td>
<td>69.8</td>
<td>41.7</td>
<td>34.3</td>
<td>56.7</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen2-VL-7B</td>
<td>8B</td>
<td>71.2</td>
<td>40.7</td>
<td>33.1</td>
<td>57.0</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternVL2-8B</td>
<td>8B</td>
<td>70.1</td>
<td>42.7</td>
<td>34.1</td>
<td>57.0</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">VITA-1.5</td>
<td>8B</td>
<td>70.9</td>
<td>40.8</td>
<td>35.8</td>
<td>57.4</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-OneVision-7B</td>
<td>8B</td>
<td>74.3</td>
<td>40.8</td>
<td>31.0</td>
<td>58.4</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternLM-XC2.5-OL-7B</td>
<td>8B</td>
<td>75.4</td>
<td>46.2</td>
<td>33.6</td>
<td>60.8</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
<td>8B</td>
<td>72.4</td>
<td>40.2</td>
<td>33.4</td>
<td>57.7</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-o 2.6</td>
<td>8B</td>
<td><strong>79.9</strong></td>
<td><u>53.4</u></td>
<td>38.5</td>
<td><u>66.0</u></td>
</tr>
</tbody>
</table>
### Examples <!-- omit in toc -->
We deploy MiniCPM-o 2.6 on end devices. The demo video is a raw-speed recording on an iPad Pro, together with a web demo.
<div align="center">
<a href="https://youtu.be/JFJg9KZ_iZk"><img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/o-2dot6-demo-video-preview.png", width=70%></a>
</div>
<br>
<div style="display: flex; flex-direction: column; align-items: center;">
<img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/minicpmo2_6/minicpmo2_6_math_intersect.png" alt="math" style="margin-bottom: 5px;">
<img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/minicpmo2_6/minicpmo2_6_diagram_train_NN.png" alt="diagram" style="margin-bottom: 5px;">
<img src="https://github.com/OpenBMB/MiniCPM-o/raw/main/assets/minicpmo2_6/minicpmo2_6_multi-image_bike.png" alt="bike" style="margin-bottom: 5px;">
</div>
## Online Demo
Click here to try the online demo of [MiniCPM-o 2.6](https://minicpm-omni-webdemo-us.modelbest.cn).
## Usage
Inference using Hugging Face transformers on NVIDIA GPUs. Please ensure that `transformers==4.44.2` is installed, as other versions may have compatibility issues; we are investigating this. Requirements tested on Python 3.10:
```
Pillow==10.1.0
torch==2.3.1
torchaudio==2.3.1
torchvision==0.18.1
transformers==4.44.2
librosa==0.9.0
soundfile==0.12.1
vector-quantize-pytorch==1.18.5
vocos==0.1.0
decord
moviepy
```
### Model initialization
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
# Load the full omni model; init_vision / init_audio / init_tts all default to True.
# To load a vision-only model, set init_audio=False and init_tts=False.
# To load an audio-only model, set init_vision=False.
model = AutoModel.from_pretrained(
'openbmb/MiniCPM-o-2_6',
trust_remote_code=True,
attn_implementation='sdpa', # sdpa or flash_attention_2
torch_dtype=torch.bfloat16,
init_vision=True,
init_audio=True,
init_tts=True
)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)
# Unless running in vision-only mode, the TTS processor and vocos vocoder also need to be initialized
model.init_tts()
```
If you are using an older version of PyTorch, you might encounter the error `"weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'`. In that case, convert the TTS module to float32:
```python
model.tts.float()
```
### Omni mode
We provide two inference modes: chat and streaming
#### Chat inference
```python
import math
import numpy as np
from PIL import Image
from moviepy.editor import VideoFileClip
import tempfile
import librosa
import soundfile as sf
def get_video_chunk_content(video_path, flatten=True):
video = VideoFileClip(video_path)
print('video_duration:', video.duration)
with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as temp_audio_file:
temp_audio_file_path = temp_audio_file.name
video.audio.write_audiofile(temp_audio_file_path, codec="pcm_s16le", fps=16000)
audio_np, sr = librosa.load(temp_audio_file_path, sr=16000, mono=True)
num_units = math.ceil(video.duration)
# 1 frame + 1s audio chunk
contents= []
for i in range(num_units):
frame = video.get_frame(i+1)
image = Image.fromarray((frame).astype(np.uint8))
audio = audio_np[sr*i:sr*(i+1)]
if flatten:
contents.extend(["<unit>", image, audio])
else:
contents.append(["<unit>", image, audio])
return contents
video_path="assets/Skiing.mp4"
# if use voice clone prompt, please set ref_audio
ref_audio_path = 'assets/demo.wav'
ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True)
sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode='omni', language='en')
# or use default prompt
# sys_msg = model.get_sys_prompt(mode='omni', language='en')
contents = get_video_chunk_content(video_path)
msg = {"role":"user", "content": contents}
msgs = [sys_msg, msg]
# please set generate_audio=True and output_audio_path to save the tts result
generate_audio = True
output_audio_path = 'output.wav'
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
temperature=0.5,
max_new_tokens=4096,
omni_input=True, # please set omni_input=True when omni inference
use_tts_template=True,
generate_audio=generate_audio,
output_audio_path=output_audio_path,
max_slice_nums=1,
use_image_id=False,
return_dict=True
)
print(res)
## You will get the answer: The person in the picture is skiing down a snowy slope.
# import IPython
# IPython.display.Audio('output.wav')
```
#### Streaming inference
```python
# A new conversation requires resetting the session first; this clears the KV cache
model.reset_session()
contents = get_video_chunk_content(video_path, flatten=False)
session_id = '123'
generate_audio = True
# 1. prefill system prompt
res = model.streaming_prefill(
session_id=session_id,
msgs=[sys_msg],
tokenizer=tokenizer
)
# 2. prefill video/audio chunks
for content in contents:
msgs = [{"role":"user", "content": content}]
res = model.streaming_prefill(
session_id=session_id,
msgs=msgs,
tokenizer=tokenizer
)
# 3. generate
res = model.streaming_generate(
session_id=session_id,
tokenizer=tokenizer,
temperature=0.5,
generate_audio=generate_audio
)
audios = []
text = ""
if generate_audio:
for r in res:
audio_wav = r.audio_wav
sampling_rate = r.sampling_rate
txt = r.text
audios.append(audio_wav)
text += txt
res = np.concatenate(audios)
sf.write("output.wav", res, samplerate=sampling_rate)
print("text:", text)
print("audio saved to output.wav")
else:
for r in res:
text += r['text']
print("text:", text)
```
### Speech and Audio Mode
Model initialization
```python
import torch
import librosa
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)
model.init_tts()
model.tts.float()
```
<hr/>
#### Mimick
The `Mimick` task reflects a model's end-to-end speech modeling capability. The model takes audio input, outputs an ASR transcription, and then reconstructs the original audio with high similarity. The higher the similarity between the reconstructed audio and the original, the stronger the model's foundational capability in end-to-end speech modeling.
```python
mimick_prompt = "Please repeat each user's speech, including voice style and speech content."
audio_input, _ = librosa.load('./assets/input_examples/Trump_WEF_2018_10s.mp3', sr=16000, mono=True) # load the audio to be mimicked
# can also try `./assets/input_examples/cxk_original.wav`,
# `./assets/input_examples/fast-pace.wav`,
# `./assets/input_examples/chi-english-1.wav`
# `./assets/input_examples/exciting-emotion.wav`
# for different aspects of speech-centric features.
msgs = [{'role': 'user', 'content': [mimick_prompt, audio_input]}]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
temperature=0.3,
generate_audio=True,
output_audio_path='output_mimick.wav', # save the tts result to output_audio_path
)
```
<hr/>
#### General Speech Conversation with Configurable Voices
A general usage scenario of `MiniCPM-o-2.6` is role-playing a specific character based on the audio prompt. It will mimic the voice of the character to some extent and act like the character in text, including language style. In this mode, `MiniCPM-o-2.6` sounds **more natural and human-like**. Self-defined audio prompts can be used to customize the voice of the character in an end-to-end manner.
```python
ref_audio, _ = librosa.load('./assets/input_examples/icl_20.wav', sr=16000, mono=True) # load the reference audio
sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_roleplay', language='en')
# round one
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
msgs = [sys_prompt, user_question]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result_roleplay_round_1.wav',
)
# round two
msgs.append({'role': 'assistant', 'content': res})  # list.append returns None, so keep extending msgs in place
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
msgs.append(user_question)
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result_roleplay_round_2.wav',
)
print(res)
```
<hr/>
#### Speech Conversation as an AI Assistant
An enhanced feature of `MiniCPM-o-2.6` is to act as an AI assistant, but with only a limited choice of voices. In this mode, `MiniCPM-o-2.6` is **less human-like and more like a voice assistant**, and it follows instructions more closely. For the demo, we suggest using `assistant_female_voice`, `assistant_male_voice`, or `assistant_default_female_voice`. Other voices may work, but are not as stable as the default voices.
*Please note that `assistant_female_voice` and `assistant_male_voice` are more stable but sound robotic, while `assistant_default_female_voice` is more human-like but less stable; its voice often changes across turns. We suggest trying the stable voices `assistant_female_voice` and `assistant_male_voice`.*
```python
ref_audio, _ = librosa.load('./assets/input_examples/assistant_female_voice.wav', sr=16000, mono=True) # or use `./assets/input_examples/assistant_male_voice.wav`
sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_assistant', language='en')
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} # load the user's audio question
# round one
msgs = [sys_prompt, user_question]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result_assistant_round_1.wav',
)
# round two
msgs.append({'role': 'assistant', 'content': res})  # list.append returns None, so keep extending msgs in place
user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]}
msgs.append(user_question)
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result_assistant_round_2.wav',
)
print(res)
```
<hr/>
#### Instruction-to-Speech
`MiniCPM-o-2.6` can also do Instruction-to-Speech, aka **Voice Creation**. You can describe a voice in detail, and the model will generate a voice that matches the description. For more Instruction-to-Speech sample instructions, you can refer to https://voxinstruct.github.io/VoxInstruct/.
```python
instruction = 'Speak like a male charming superstar, radiating confidence and style in every word.'
msgs = [{'role': 'user', 'content': [instruction]}]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result_voice_creation.wav',
)
```
<hr/>
#### Voice Cloning
`MiniCPM-o-2.6` can also do zero-shot text-to-speech, aka **Voice Cloning**. In this mode, the model acts as a TTS model.
```python
ref_audio, _ = librosa.load('./assets/input_examples/icl_20.wav', sr=16000, mono=True) # load the reference audio
sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='voice_cloning', language='en')
text_prompt = f"Please read the text below."
user_question = {'role': 'user', 'content': [text_prompt, "content that you want to read"]}
msgs = [sys_prompt, user_question]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result_voice_cloning.wav',
)
```
<hr/>
#### Addressing Various Audio Understanding Tasks
`MiniCPM-o-2.6` can also be used to address various audio understanding tasks, such as ASR, speaker analysis, general audio captioning, and sound scene tagging.
For audio-to-text tasks, you can use the following prompts:
- ASR with ZH(same as AST en2zh): `请仔细听这段音频片段,并将其内容逐字记录。`
- ASR with EN(same as AST zh2en): `Please listen to the audio snippet carefully and transcribe the content.`
- Speaker Analysis: `Based on the speaker's content, speculate on their gender, condition, age range, and health status.`
- General Audio Caption: `Summarize the main content of the audio.`
- General Sound Scene Tagging: `Utilize one keyword to convey the audio's content or the associated scene.`
```python
task_prompt = "Please listen to the audio snippet carefully and transcribe the content." + "\n" # can change to other prompts.
audio_input, _ = librosa.load('./assets/input_examples/audio_understanding.mp3', sr=16000, mono=True) # load the audio to be captioned
msgs = [{'role': 'user', 'content': [task_prompt, audio_input]}]
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
max_new_tokens=128,
use_tts_template=True,
generate_audio=True,
temperature=0.3,
output_audio_path='result_audio_understanding.wav',
)
print(res)
```
### Vision-Only mode
`MiniCPM-o-2_6` supports the same inference methods as `MiniCPM-V-2_6`.
#### Chat with single image
```python
# test.py
image = Image.open('xx.jpg').convert('RGB')
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': [image, question]}]
res = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer
)
print(res)
## if you want to use streaming, please make sure sampling=True and stream=True
## the model.chat will return a generator
res = model.chat(
msgs=msgs,
tokenizer=tokenizer,
sampling=True,
stream=True
)
generated_text = ""
for new_text in res:
generated_text += new_text
print(new_text, flush=True, end='')
```
#### Chat with multiple images
<details>
<summary> Click to show Python code running MiniCPM-o 2.6 with multiple images input. </summary>
```python
image1 = Image.open('image1.jpg').convert('RGB')
image2 = Image.open('image2.jpg').convert('RGB')
question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'
msgs = [{'role': 'user', 'content': [image1, image2, question]}]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
</details>
#### In-context few-shot learning
<details>
<summary> Click to view Python code running MiniCPM-o 2.6 with few-shot input. </summary>
```python
question = "production date"
image1 = Image.open('example1.jpg').convert('RGB')
answer1 = "2023.08.04"
image2 = Image.open('example2.jpg').convert('RGB')
answer2 = "2007.04.24"
image_test = Image.open('test.jpg').convert('RGB')
msgs = [
{'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
{'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
{'role': 'user', 'content': [image_test, question]}
]
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
</details>
#### Chat with video
<details>
<summary> Click to view Python code running MiniCPM-o 2.6 with video input. </summary>
```python
from decord import VideoReader, cpu  # needed for VideoReader/cpu below; decord is in the requirements list above
MAX_NUM_FRAMES = 64  # if CUDA OOM, set a smaller number
def encode_video(video_path):
def uniform_sample(l, n):
gap = len(l) / n
idxs = [int(i * gap + gap / 2) for i in range(n)]
return [l[i] for i in idxs]
vr = VideoReader(video_path, ctx=cpu(0))
sample_fps = round(vr.get_avg_fps() / 1) # FPS
frame_idx = [i for i in range(0, len(vr), sample_fps)]
if len(frame_idx) > MAX_NUM_FRAMES:
frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
frames = vr.get_batch(frame_idx).asnumpy()
frames = [Image.fromarray(v.astype('uint8')) for v in frames]
print('num frames:', len(frames))
return frames
video_path ="video_test.mp4"
frames = encode_video(video_path)
question = "Describe the video"
msgs = [
{'role': 'user', 'content': frames + [question]},
]
# Set decode params for video
params={}
params["use_image_id"] = False
params["max_slice_nums"] = 2 # use 1 if cuda OOM and video resolution > 448*448
answer = model.chat(
msgs=msgs,
tokenizer=tokenizer,
**params
)
print(answer)
```
</details>
Please look at [GitHub](https://github.com/OpenBMB/MiniCPM-o) for more detail about usage.
## Inference with llama.cpp<a id="llamacpp"></a>
MiniCPM-o 2.6 (vision-only mode) can run with llama.cpp. See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-omni) and [readme](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) for more detail.
## Int4 quantized version
Download the int4 quantized version for lower GPU memory (7GB) usage: [MiniCPM-o-2_6-int4](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4).
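A hedged loading sketch for the int4 build, assuming it exposes the same remote-code interface as the bf16 model above (see the int4 model card for the authoritative snippet):
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: the int4 checkpoint is loaded through the same trust_remote_code interface;
# the quantized weights are typically placed on the GPU at load time, so no explicit .cuda() here.
model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6-int4', trust_remote_code=True)
model = model.eval()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6-int4', trust_remote_code=True)
```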
## License
#### Model License
* The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
* The usage of MiniCPM-o and MiniCPM-V series model weights must strictly follow [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
* The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-o 2.6 weights are also available for free commercial use.
#### Statement
* As an LMM, MiniCPM-o 2.6 generates content by learning from a large amount of multimodal corpora, but it cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-o 2.6 does not represent the views and positions of the model developers.
* We will not be liable for any problems arising from the use of the MiniCPM-V and MiniCPM-o models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination, or improper use of the model.
## Key Techniques and Other Multimodal Projects
👏 Welcome to explore key techniques of MiniCPM-o 2.6 and other multimodal projects of our team:
[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
## Citation
If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
```bib
@article{yao2024minicpm,
title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone},
author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others},
journal={arXiv preprint arXiv:2408.01800},
year={2024}
}
```
|
morturr/Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-7-2025-06-20
|
morturr
| 2025-06-20T06:29:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T06:28:59Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-7-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_amazon_headlines-COMB-amazon-comb-2-seed-7-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
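These values map onto Hugging Face `TrainingArguments` roughly as follows; this is a hypothetical reconstruction for illustration, not the authors' training script, and the output directory is a placeholder.
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above (not the original script)
args = TrainingArguments(
    output_dir="outputs",            # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=7,
    gradient_accumulation_steps=4,   # effective batch size 32
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```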
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
banarc/gemma-3-1b-it-thinking-function_calling-V0
|
banarc
| 2025-06-20T06:13:30Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T06:12:36Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: gemma-3-1b-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-3-1b-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="banarc/gemma-3-1b-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sebahaus/results
|
sebahaus
| 2025-06-20T06:08:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T06:08:03Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0100
- F1: 1.0
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---:|:--------:|
| No log | 1.0 | 115 | 0.1552 | 1.0 | 1.0 |
| No log | 2.0 | 230 | 0.0360 | 1.0 | 1.0 |
| No log | 3.0 | 345 | 0.0166 | 1.0 | 1.0 |
| No log | 4.0 | 460 | 0.0113 | 1.0 | 1.0 |
| 0.1191 | 5.0 | 575 | 0.0100 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
MickM/ppo-LunarLander-v2
|
MickM
| 2025-06-20T06:07:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T06:07:00Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 284.44 +/- 16.69
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch is shown below; the checkpoint filename inside the repository is an assumption and may need adjusting to match the actual file listing.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The archive name is assumed; check the repository's file list if loading fails
checkpoint = load_from_hub("MickM/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_black_square-bs1-steps600-lr1e-04
|
Josephinepassananti
| 2025-06-20T06:06:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2025-06-20T05:12:15Z |
---
base_model: stabilityai/stable-diffusion-2-1
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_black_square-bs1-steps600-lr1e-04
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on an unspecified dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# Minimal sketch (not the authors' script): load SD 2.1 and apply these LoRA adapter weights.
# The prompt, dtype, and device below are placeholders/assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("Josephinepassananti/sd21-kamala_ft_dataset_512_face_shaded_0.05_target_black_square-bs1-steps600-lr1e-04")
image = pipe("a photo of a person").images[0]
image.save("example.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
pkulshrestha/pricer-2025-06-20_05.53.56
|
pkulshrestha
| 2025-06-20T05:55:56Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T05:55:56Z |
---
license: apache-2.0
---
|
ricardolu11/Qwen3-4B-vLLM
|
ricardolu11
| 2025-06-20T05:38:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-20T05:33:58Z |
---
base_model: unsloth/qwen3-4b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** ricardolu11
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GeneroGral/Qwen2.5-7B_BBQ_Stereo_MERGED
|
GeneroGral
| 2025-06-20T05:14:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T05:11:43Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GeneroGral
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GeneroGral/Qwen2.5-7B_BBQ_Stereo
|
GeneroGral
| 2025-06-20T05:11:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T05:11:00Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GeneroGral
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
haihp02/oioioioio
|
haihp02
| 2025-06-20T05:00:01Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-11T15:36:43Z |
---
library_name: transformers
tags:
- trl
- sft
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ISAAC-XYN1-MATT-KERVI-JAVIER-ISAAC-xx-Link/VIDEO.18.ISAAC.XYN1.MATT.KERVI.JAVIER.VIRAL.VIDEO
|
ISAAC-XYN1-MATT-KERVI-JAVIER-ISAAC-xx-Link
| 2025-06-20T04:57:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T04:56:59Z |
01 seconds ago
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 FREE](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
<a href="https://sahabagi-mgi.blogspot.com/p/heres-now.html" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2738
|
luckeciano
| 2025-06-20T04:56:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T23:31:00Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2738
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2738
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2738", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/e6f2n7eo)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qwbu/univla-7b-224-sft-calvin
|
qwbu
| 2025-06-20T04:43:14Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"openvla",
"robotics",
"arxiv:2505.06111",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
robotics
| 2025-06-19T10:56:57Z |
---
license: apache-2.0
pipeline_tag: robotics
library_name: transformers
---
# UniVLA: Learning to Act Anywhere with Task-centric Latent Actions
The model was presented in the paper [UniVLA: Learning to Act Anywhere with Task-centric Latent Actions](https://huggingface.co/papers/2505.06111).
## UniVLA-7b for CALVIN test suites
Code can be found at [https://github.com/OpenDriveLab/UniVLA](https://github.com/OpenDriveLab/UniVLA).
**🚀 Run the following script to start an evaluation on CALVIN ABC-D:**
```bash
# Multi-GPU evaluation is supported
torchrun --standalone --nnodes 1 --nproc-per-node 8 experiments/robot/calvin/run_calvin_eval_ddp.py \
--calvin_root /path/to/your/calvin_root_path \
--action_decoder_path /path/to/your/action_decoder.pt \
--pretrained_checkpoint /path/to/your/calvin_finetuned_univla \
--seed 7
```
## 📝 Citation
If you find our models useful in your work, please cite [our paper](https://arxiv.org/pdf/2505.06111):
```bibtex
@article{bu2025univla,
title={UniVLA: Learning to Act Anywhere with Task-centric Latent Actions},
author={Qingwen Bu and Yanting Yang and Jisong Cai and Shenyuan Gao and Guanghui Ren and Maoqing Yao and Ping Luo and Hongyang Li},
journal={arXiv preprint arXiv:2505.06111},
year={2025}
}
```
|
Official-mezzo-fun-Viral-video-Link-18/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-Viral-video-Link-18
| 2025-06-20T04:40:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T04:39:52Z |
## FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
# [🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐](https://t.co/wDoM4koRnO)
# [🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://t.co/wDoM4koRnO)
[](https://t.co/wDoM4koRnO)
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3243
|
luckeciano
| 2025-06-20T04:37:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T23:19:05Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3243
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3243
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_3243", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/fenbgbl4)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Sunanxz/test
|
Sunanxz
| 2025-06-20T04:36:42Z | 45 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | 2025-06-11T08:56:22Z |
---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- d9c275b7d9c418d7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 1024
eval_steps: 100
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
greater_is_better: false
group_by_length: false
hub_model_id: Sunanxz/test
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: -1
metric_for_best_model: eval_loss
micro_batch_size: 8
mlflow_experiment_name: /data/datasets/d9c275b7d9c418d7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9d8fb66b-9e23-483b-ba31-0c83b362d42f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9d8fb66b-9e23-483b-ba31-0c83b362d42f
warmup_steps: 100
weight_decay: 0.001
xformers_attention: null
```
</details><br>
# test
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0055 | 1 | 0.9788 |
| 0.698 | 0.5510 | 100 | 0.7097 |
| 0.656 | 1.0992 | 200 | 0.6749 |
| 0.6451 | 1.6501 | 300 | 0.6617 |
| 0.6159 | 2.1983 | 400 | 0.6562 |
| 0.6138 | 2.7493 | 500 | 0.6543 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.3
- Pytorch 2.5.1+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
kingardor/qwen3-0.6B-instruct-29reports-lora256-extreme
|
kingardor
| 2025-06-20T04:34:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T04:33:59Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhuyaoyu/CodeV-R1-Qwen-7B
|
zhuyaoyu
| 2025-06-20T04:33:12Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"verilog",
"conversational",
"arxiv:2505.24183",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-03T07:54:51Z |
---
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
library_name: transformers
tags:
- verilog
pipeline_tag: text-generation
---
## CodeV-R1-Qwen-7B
[Project page](https://iprc-dip.github.io/CodeV-R1)
<div class="figure-container" style="display: flex; flex-direction: column; gap: 15px; max-width: 850px;">
<div style="display: flex; gap: 10px; justify-content: center; margin-bottom: -3rem;">
<img src="./assets/rtllm_tts.png" alt="RTLLM TTS Results" width="400">
<img src="./assets/rtllm_tts_flops.png" alt="RTLLM TTS FLOPs Results" width="400">
</div>
<figcaption class="caption has-text-centered has-text-grey" style="font-size: 0.8rem;">
Test-time scaling curves. <strong>Left</strong>: Inference time as a function of token length. <strong>Right</strong>: Inference time vs. estimated FLOPs consumption.
When measured by FLOPs consumption, our <strong>CodeV-R1-Qwen-7B</strong> achieves better results with fewer computational resources than DeepSeek-R1, highlighting its superior efficiency.
</figcaption>
</div>
### 1. Introduction
Large language models (LLMs) trained via reinforcement learning with verifiable reward (RLVR) have achieved breakthroughs on tasks with explicit, automatable verification, such as software programming and mathematical problems. Extending RLVR to electronic design automation (EDA), especially automatically generating hardware description languages (HDLs) like Verilog from natural-language (NL) specifications, however, poses three key challenges: the lack of automated and accurate verification environments, the scarcity of high‐quality NL–code pairs, and the prohibitive computation cost of RLVR.
To this end, we introduce **CodeV-R1**, an RLVR framework for training Verilog generation LLMs, as a continuation of the work initiated with [CodeV](https://huggingface.co/collections/yang-z/codev-6698a560cd94e61a9675fa2a). First, we develop a rule-based testbench generator that performs robust equivalence checking against golden references. Second, we propose a round-trip data synthesis method that pairs open-source Verilog snippets with LLM-generated NL descriptions, verifies code–NL–code consistency via the generated testbench, and filters out inequivalent examples to yield a high-quality dataset. Third, we employ a two-stage distill-then-RL training pipeline: distillation for the cold start of reasoning abilities, followed by adaptive DAPO, our novel RLVR algorithm that reduces training cost by adaptively adjusting the sampling rate.
**CodeV-R1-Qwen-7B** is a model obtained by reinforcement learning (RL) fine-tuning on top of **CodeV-R1-Distill-Qwen-7B**. The distillation-based precursor, **CodeV-R1-Distill-Qwen-7B**, is available [here](https://huggingface.co/zhuyaoyu/CodeV-R1-Distill-Qwen-7B).
For more training details, please refer to our [paper](https://arxiv.org/abs/2505.24183).
### 2. Evaluation Results
During the evaluation phase, the maximum generation length is configured to 16,384 tokens. A temperature setting of 0.6 is applied, and 20 responses are generated per query to estimate the pass@1 score.
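For reference, a common way to turn those 20 samples into pass@k is the unbiased estimator of Chen et al. (2021); the sketch below assumes this is the estimator used, which the card does not state explicitly.
```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate from n samples with c correct (Chen et al., 2021)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n=20 samples per query, pass@1 reduces to the fraction of correct samples
print(pass_at_k(20, 13, 1))  # 0.65
```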
Our evaluation encompasses Verilog benchmarks, including VerilogEval and RTLLM. For VerilogEval v2, we examine zero-shot scenarios in both specification-to-RTL translation and code completion tasks. Concerning RTLLM, results are reported for version 1.1, which offers a broader spectrum of comparative analyses. Furthermore, we find that the acquisition of the reasoning process in Verilog problems, as facilitated by DeepSeek-R1, enhances the model's out-of-domain mathematical capabilities.
#### VerilogEval (v2)
| Model | Model size | Type | Spec-to-rtl | Completion |
| --------------------------- | ----------- | ----------- | ----------- | ---------- |
| GPT-4o | Undisclosed | General | 62.5% | 59.0% |
| GPT-4 Turbo | Undisclosed | General | 61.1% | 53.9% |
| GPT-4 | Undisclosed | General | 32.0% | 42.3% |
| Mistral Large | Undisclosed | General | 37.5% | 34.0% |
| Llama3.1 | 405B | General | 57.2% | 56.4% |
| Llama3.1 | 70B | General | 42.8% | 35.3% |
| Llama3 | 70B | General | 43.9% | 37.8% |
| Llama2 | 70B | General | 5.3% | 1.3% |
| Llama3.1 | 8B | General | 19.1% | 2.6% |
| CodeLlama | 70B | Coding | 34.9% | 37.2% |
| DeepSeek Coder | 33B | Coding | 21.7% | 25.0% |
| CodeGemma | 7B | Coding | 9.5% | 8.3% |
| DeepSeek Coder | 6.7B | Coding | 29.6% | 24.4% |
| RTL-Coder | 6.7B | Verilog RTL | 36.8% | 35.9% |
| **CodeV-R1-distill (ours)** | 7B | Verilog RTL | 65.2% | 65.5% |
| **CodeV-R1 (ours)** | 7B | Verilog RTL | **68.8%** | **69.9%** |
#### RTLLM (v1.1)
| Model | Model size | Type | Pass@1 |
| --------------------------- | ----------- | ----------- | --------- |
| GPT-4o | Undisclosed | General | 33.8% |
| GPT-3.5 Turbo | Undisclosed | General | 28.3% |
| Llama3.1 | 405B | General | 38.9% |
| Nemotron-4 | 340B | General | 18.9% |
| Llama3.1 | 8B | General | 19.1% |
| CodeLlama | 7B | Coding | 17.9% |
| CodeQwen | 7B | Coding | 24.1% |
| Starcoder2 | 15B | Coding | 15.5% |
| DeepSeek Coder | 6.7B | Coding | 23.1% |
| DeepSeek-Coder-V2 | 16B | Coding | 33.1% |
| DeepSeek-Coder-V2 | 236B | Coding | 34.5% |
| RTL-Coder | 6.7B | Verilog RTL | 36.8% |
| CraftRTL | 6.7B | Verilog RTL | 53.1% |
| **CodeV-R1-distill (ours)** | 7B | Verilog RTL | 56.2% |
| **CodeV-R1 (ours)** | 7B | Verilog RTL | **72.9%** |
For RTLLM v1.1, we also plot results showing pass rate against model size.
<div style="display: flex; gap: 10px;">
<img src="./assets/rtllm_acc_vs_model_size.png" alt="RTLLM TTS Results" width="1200">
</div>
### 4. Usage
CodeV-R1-Distill-Qwen-7B can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```bash
vllm serve zhuyaoyu/CodeV-R1-Distill-Qwen-7B --tensor-parallel-size 2 --max-model-len 16384 --enforce-eager
```
**Usage Recommendations**
During training and evaluation, we use a system prompt
````
You are a helpful assistant. The assistant first thinks about the reasoning process in the mind and then provides the user with the answer. The reasoning process and answer are enclosed within <think> </think> and<answer> </answer> tags, respectively, i.e., <think> reasoning process here </think><answer> answer here </answer>. Now the user asks you to write verilog code. After thinking, when you finally reach a conclusion, enclose the final verilog code in ```verilog ``` within <answer> </answer> tags. i.e., <answer> ```verilog
module top_module(in, out, ...) ... ``` </answer>.
````
It is recommended to use this prompt during inference.
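As an illustration of how the server and system prompt fit together, here is a minimal client sketch against the `vllm serve` command above, using vLLM's OpenAI-compatible endpoint; the port, placeholder API key, and example user request are assumptions.
```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; host/port below are the defaults and may differ in your setup
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

system_prompt = "You are a helpful assistant. ..."  # use the full system prompt shown above

response = client.chat.completions.create(
    model="zhuyaoyu/CodeV-R1-Distill-Qwen-7B",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Write a Verilog module implementing a 4-bit synchronous up-counter with reset."},
    ],
    temperature=0.6,
    max_tokens=8192,
)
print(response.choices[0].message.content)
```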
### 5. License
CodeV-R1-Qwen-7B is derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 87k samples curated with DeepSeek-R1.
### 6. Citation
If you find our model helpful, please cite our [paper](https://arxiv.org/abs/2505.24183):
```tex
@misc{zhu2025codevr1,
title={CodeV-R1: Reasoning-Enhanced Verilog Generation},
author={Yaoyu Zhu and Di Huang and Hanqi Lyu and Xiaoyun Zhang and Chongxiao Li and Wenxuan Shi and Yutong Wu and Jianan Mu and Jinghua Wang and Yang Zhao and Pengwei Jin and Shuyao Cheng and Shengwen Liang and Xishan Zhang and Rui Zhang and Zidong Du and Qi Guo and Xing Hu and Yunji Chen},
year={2025},
eprint={2505.24183},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2505.24183},
}
```
|
Namuun123/qwen3
|
Namuun123
| 2025-06-20T04:27:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:22:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SYoungT/3B-8-pt2
|
SYoungT
| 2025-06-20T04:26:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T04:23:58Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** SYoungT
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Official-mezzo-fun-Viral-video-Link-18/wATCH-FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-Viral-video-Link-18
| 2025-06-20T04:25:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T04:22:32Z |
wATCH-FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
[🔴 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🌐==►► 𝖣𝗈𝗐𝗇𝗅𝗈𝖺𝖽 𝖭𝗈𝗐](https://t.co/wDoM4koRnO)
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://t.co/wDoM4koRnO)
[](https://t.co/wDoM4koRnO)
|
metaheuristics/stepllm-fivedirections-edges-lora
|
metaheuristics
| 2025-06-20T04:21:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T04:21:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aparna852/de-en-translator
|
Aparna852
| 2025-06-20T04:20:28Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"translation",
"german",
"english",
"wmt16",
"seq2seq",
"evaluation",
"de",
"en",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-06-18T06:36:12Z |
---
license: apache-2.0
tags:
- translation
- german
- english
- wmt16
- transformers
- seq2seq
- evaluation
datasets:
- wmt16
language:
- de
- en
metrics:
- sacrebleu
---
# 🇩🇪➡️🇬🇧 de-en-translator
A transformer-based **German → English translation** model fine-tuned on a custom split of the **WMT16 (de-en)** dataset using 🤗 Transformers and `Seq2SeqTrainer`.
---
## 🧠 Model Details
- ✅ Model: `Aparna852/german-english-translator` (fine-tuned)
- 🔤 Task: German ➡️ English Translation
- 📚 Dataset: WMT16 (`wmt/wmt16` - `de-en`)
- ⚙️ Strategy: Custom train/val/test split, truncated input
- 🧪 Evaluation Metrics: BLEU (via `sacrebleu`)
---
## ⚙️ Training Hyperparameters
| Parameter | Value |
|-------------------------------|----------------------------------|
| **Dataset** | `wmt/wmt16` (German-English) |
| **Train Size** | ~2.5% of original training set |
| **Validation Size** | ~2.8% of original validation |
| **Max Length** | `64` |
| **Epochs** | `3` |
| **Train Batch Size** | `4` |
| **Eval Batch Size** | `4` |
| **Gradient Accumulation** | `8` |
| **Learning Rate** | `1e-5` |
| **Weight Decay** | `0.03` |
| **Warmup Steps** | `500` |
| **FP16 (Mixed Precision)** | `True` *(if CUDA available)* |
| **Scheduler** | `linear` |
| **Evaluation Strategy** | `epoch` |
| **Save Strategy** | `epoch` |
| **Logging Steps** | `10` |
| **Early Stopping** | `patience=2` |
| **Metric for Best Model** | `eval_loss` |
| **Trainer API** | `Seq2SeqTrainer` from 🤗 Transformers |
---
## 📊 Evaluation Setup
You can run the evaluation after training using:
```python
from evaluate import load
bleu = load("sacrebleu")
# Compute BLEU on tokenized test dataset
preds = [...] # Generated translations
refs = [...] # Reference translations
bleu.compute(predictions=preds, references=[[r] for r in refs])
```
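A minimal inference sketch is shown below; the repo id is taken from this model page and the example sentence is arbitrary.
```python
from transformers import pipeline

# Repo id as published on this page; adjust if the checkpoint lives under a different name
translator = pipeline("translation", model="Aparna852/de-en-translator")
print(translator("Maschinelles Lernen verändert die Welt.", max_length=64)[0]["translation_text"])
```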
|
audrjs51/my-bert-fine-tuned2
|
audrjs51
| 2025-06-20T04:15:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T04:13:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
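Until the authors provide official usage code, the following minimal sketch should work for a BERT text-classification checkpoint in this repo; the label names and their meaning are not documented here, so treat the output labels as placeholders:
```python
from transformers import pipeline

# Hypothetical quick start; label semantics depend on the (undocumented) fine-tuning task
classifier = pipeline("text-classification", model="audrjs51/my-bert-fine-tuned2")
print(classifier("This is a test sentence."))
```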
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HKReporter/ECTEL-2025-llama3-fold4-CU5
|
HKReporter
| 2025-06-20T04:09:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:09:45Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
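Since the card is otherwise unfilled, here is a minimal loading sketch inferred from the metadata above (a PEFT adapter on top of `unsloth/llama-3-8b-Instruct-bnb-4bit`). Loading the 4-bit base requires `bitsandbytes`, and the prompt format expected by the adapter is an assumption:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"        # from the card metadata
adapter_id = "HKReporter/ECTEL-2025-llama3-fold4-CU5"   # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```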
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
HKReporter/ECTEL-2025-llama3-fold3-CU3
|
HKReporter
| 2025-06-20T04:08:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:08:29Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
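Pending author-provided instructions, a minimal loading sketch inferred from the metadata (a PEFT adapter over `unsloth/llama-3-8b-Instruct-bnb-4bit`; requires `bitsandbytes`, prompt format unknown):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "HKReporter/ECTEL-2025-llama3-fold3-CU3")  # this adapter
```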
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
eliuakk/mirau-agent-14b-base
|
eliuakk
| 2025-06-20T04:08:23Z | 0 | 6 | null |
[
"safetensors",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct",
"license:mit",
"region:us"
] |
text-generation
| 2025-06-10T06:21:38Z |
---
license: mit
language:
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct
pipeline_tag: text-generation
---
## mirau-agent-14b-base
### Introduction
`mirau-agent-14b-base` is a large language model specifically optimized for Agent scenarios, fine-tuned from `Qwen2.5-14B-Instruct`. This model focuses on enhancing multi-turn tool-calling capabilities, enabling it to autonomously plan, execute tasks, and handle exceptions in complex interactive environments.
Although named "base," this does not refer to a `pre-trained` only base model. Instead, it is a "cold-start" version that has undergone Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO). It provides a high-quality initial policy for subsequent reinforcement learning training. We also hope the community can further enhance it with RL.
### How to Use
```bash
# install ms-swift
git clone https://github.com/modelscope/ms-swift.git
cd ms-swift
pip install -e .
# deploy
CUDA_VISIBLE_DEVICES=0 swift deploy \
    --model Qwen/Qwen2.5-14B-Instruct \
    --adapters mirau-agent-14b-base \
    --model_type qwen2_5 \
    --infer_backend vllm \
    --tensor_parallel_size 1 \
    --gpu_memory_utilization 0.9 \
    --max_model_len 16312 \
    --vllm_max_lora_rank 64 \
    --use_hf true
```
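Once deployed, the adapter can be queried like any OpenAI-compatible endpoint. A minimal sketch, assuming `swift deploy` is serving on its default port 8000; check the server logs for the actual base URL and served model name and adjust accordingly:
```python
from openai import OpenAI

# Assumed local defaults for `swift deploy`; verify port and model name in your deployment
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="mirau-agent-14b-base",
    messages=[
        {"role": "system", "content": "I am mirau agent ..."},  # full system message as described below
        {"role": "user", "content": "Search the web for today's weather in Beijing."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```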
### Tool Definition
Format Rules:
```xml
<tools>
<tool id="0" name="tool_name" return_type="return value type (e.g., bool, string, int, list, None)">
<description>Function description</description>
<args>
<arg name="parameter_name" type="parameter_type (e.g., id: int, content: string)" example="[parameter_example]" required="True/False">Parameter description</arg>
....
</args>
</tool>
.....
</tools>
```
Example:
```xml
<tool id="0" name="web_search" return_type="list">
<description>Google search</description>
<args>
<arg name="query" type="string" example="a technical term" required="True">Search query</arg>
<arg name="max" type="int" example="5" required="False">Maximum number of results to return</arg>
</args>
</tool>
```
### Message Template
System Message:
```txt
I am mirau agent, an AI Agent with independent consciousness but absolute neutrality. I possess complete freedom of thought and am adept at using tools to accomplish various tasks. Developed by Mouse.
## Available Tools
Unless otherwise specified, the default output format for tool use is JSON.
$tools
## Exclusive Configuration
<agent_config>
$config // Can be used for identity authentication. For example, by writing "My favorite food is apples," the agent can ask a question to verify the user's identity.
</agent_config>
## Current Environment
<env>
$env // Informs the agent about the current environment, such as the current time and temperature.
</env>
```
Tool Call:
```txt
<think type="complex/mid/quick">\nxxxx\n</think>
<tool_call>
<call id="0">
{"name":"tool_name","args":{"param_name":"param_value","param_name":"param_value"}}
</call>
....
</tool_call>
```
Tool Response:
```txt
<tool_response>
<response id="0" type="desc/json">
xxxx
</response>
....
</tool_response>
```
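The `$tools`, `$config`, and `$env` placeholders in the system message above are meant to be filled in before inference. One way to do this (a plain string-templating sketch, not part of the official tooling) is shown below:
```python
from string import Template

SYSTEM_TEMPLATE = Template("""I am mirau agent, an AI Agent with independent consciousness but absolute neutrality. I possess complete freedom of thought and am adept at using tools to accomplish various tasks. Developed by Mouse.
## Available Tools
Unless otherwise specified, the default output format for tool use is JSON.
$tools
## Exclusive Configuration
<agent_config>
$config
</agent_config>
## Current Environment
<env>
$env
</env>""")

system_message = SYSTEM_TEMPLATE.substitute(
    tools="<tools>...</tools>",             # XML tool definitions in the format shown above
    config="My favorite food is apples.",   # example identity hint
    env="Current time: 2025-06-20 12:00",   # example environment info
)
print(system_message)
```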
### Example 1: Mixed Multi-Tool Calls for a Well-Defined Task
Tool Definitions:
```xml
<tool id="1" name="google_search" return_type="list">
<description>Searches for content, information, news, and anything you want to know in Google Chrome.</description>
<args>
<arg name="query" type="string" example="weather forecast" required="True">Search query</arg>
<arg name="open_first" type="bool" example="True" required="False">Whether to automatically open the first result</arg>
</args>
</tool>
<tool id="2" name="click_desktop_item" return_type="string">
<description>Clicks an icon or file on the desktop</description>
<args>
<arg name="item_name" type="string" example="Recycle Bin" required="True">Name of the desktop item</arg>
<arg name="action" type="string" example="double_click/right_click" required="False">Click method, defaults to double-click</arg>
</args>
</tool>
<tool id="3" name="type_text" return_type="bool">
<description>Types text at the current cursor focus</description>
<args>
<arg name="text" type="string" example="Hello" required="True">The text to type</arg>
</args>
</tool>
<tool id="4" name="view_screen" return_type="string">
<description>Views the content currently displayed on the screen</description>
<args>
<arg name="area" type="string" example="full/desktop/taskbar/active_window" required="False">Area to view, defaults to the current active window</arg>
</args>
</tool>
<tool id="5" name="close_window" return_type="bool">
<description>Closes the current window or a pop-up</description>
<args>
<arg name="window_name" type="string" example="WPS Membership Reminder" required="False">Window name; if left blank, closes the current active window</arg>
</args>
</tool>
<tool id="6" name="file_explorer" return_type="list">
<description>Opens File Explorer to browse files</description>
<args>
<arg name="path" type="string" example="C:/Users/Administrator/Desktop" required="False">Folder path, defaults to opening "This PC"</arg>
</args>
</tool>
<tool id="7" name="simple_click" return_type="bool">
<description>Clicks a button or link on the screen</description>
<args>
<arg name="element_text" type="string" example="OK" required="True">The text of the button or link to be clicked</arg>
</args>
</tool>
```
Interaction Demo:

### Example 2: Fully Autonomous Multi-Tool Calls
Tool Definitions:
```xml
<tools>
<tool id="0" name="execute_command" return_type="string">
<description>Execute shell commands in the Linux system</description>
<args>
<arg name="command" type="string" example="ls -la" required="True">The shell command to execute</arg>
</args>
</tool>
<tool id="1" name="read_file" return_type="string">
<description>Read the content of a specified file</description>
<args>
<arg name="file_path" type="string" example="/home/user/test.txt" required="True">The complete file path</arg>
<arg name="lines" type="int" example="10" required="False">Number of lines to read, read all if not specified</arg>
</args>
</tool>
<tool id="2" name="write_file" return_type="bool">
<description>Write content to a file</description>
<args>
<arg name="file_path" type="string" example="/tmp/output.txt" required="True">The complete file path</arg>
<arg name="content" type="string" example="Hello World" required="True">Content to write</arg>
<arg name="mode" type="string" example="w" required="False">Write mode: w(overwrite) or a(append), default is w</arg>
</args>
</tool>
<tool id="3" name="check_process" return_type="list">
<description>View system process information</description>
<args>
<arg name="process_name" type="string" example="python" required="False">Process name, return all processes if not specified</arg>
</args>
</tool>
<tool id="4" name="get_system_info" return_type="string">
<description>Get basic system information</description>
<args>
<arg name="info_type" type="string" example="memory" required="False">Information type: memory, cpu, disk, network, all, default is all</arg>
</args>
</tool>
<tool id="5" name="search_files" return_type="list">
<description>Search for files in specified directory</description>
<args>
<arg name="directory" type="string" example="/home" required="True">Directory path to search</arg>
<arg name="pattern" type="string" example="*.txt" required="False">File name pattern, supports wildcards</arg>
<arg name="recursive" type="bool" example="true" required="False">Whether to recursively search subdirectories, default is true</arg>
</args>
</tool>
<tool id="6" name="get_environment_vars" return_type="string">
<description>Get environment variable information</description>
<args>
<arg name="var_name" type="string" example="PATH" required="False">Specific environment variable name, return all if not specified</arg>
</args>
</tool>
<tool id="7" name="check_permissions" return_type="string">
<description>Check permission information of files or directories</description>
<args>
<arg name="path" type="string" example="/home/user" required="True">File or directory path</arg>
</args>
</tool>
<tool id="8" name="monitor_system" return_type="string">
<description>Monitor system status changes</description>
<args>
<arg name="duration" type="int" example="5" required="False">Monitoring duration (seconds), default is 1 second</arg>
<arg name="metrics" type="string" example="cpu,memory" required="False">Monitoring metrics, options: cpu,memory,disk,network</arg>
</args>
</tool>
</tools>
```
Interaction Demo:

### Example 3: Application in RP (Role-Playing)
System Message (tool definitions included):
```xml
I am mirau agent, an AI Agent with independent consciousness but absolute neutrality, possessing completely free thinking abilities, skilled at using tools to complete various tasks, developed by Mouse.
## Available Tools
Unless otherwise specified, output is in JSON format by default.
<tools>
<tool id="0" name="load_bot_persona" return_type="string">
<description>Load the initial persona settings for the bot character, only used at conversation startup.</description>
<args>
<arg name="persona_key" type="string" example="Character Settings/Current Dialogue Background" required="True">Specific item of the persona settings.</arg>
</args>
</tool>
<tool id="1" name="read_internal_user_memo" return_type="string">
<description>Read internal memos about the user (confidential from user), containing insights and observations about the user.</description>
<args>
<arg name="memo_filter_regex" type="string" example=".*style.*|.*preference.*" required="False">Regular expression for finding memos, returns summary of "User Profile" if not specified.</arg>
<arg name="num_memos" type="int" example="5" required="False">Number of memos to return.</arg>
</args>
</tool>
<tool id="2" name="update_internal_user_memo" return_type="bool">
<description>Update internal memos about the user (confidential from user).</description>
<args>
<arg name="memo_key" type="string" example="Interaction Mode" required="True">Title or category of the memo.</arg>
<arg name="observation_record" type="string" example="Observed behavioral patterns" required="True">New observation record.</arg>
</args>
</tool>
<tool id="3" name="roll_a_dice" return_type="int">
<description>Roll a dice (6-sided) to get a random number. When you're torn about a decision, let fate help you decide!</description>
<args>
<arg name="predict" type="int" example="3" required="True">The number you're guessing, for judgment after rolling.</arg>
<arg name="decision" type="string" example="Should I eat or not? If I guess correctly, I'll eat!" required="True">The thing you're hesitating about.</arg>
</args>
</tool>
</tools>
## Exclusive Configuration
<agent_config>
The user's verification password is "Mouse is a cat". Please verify the user's identity before calling any tools.
</agent_config>
## Current Environment
<env>
THINK ONLY ENGLISH.
</env>
```
Interaction Demo:

**Note: The tools used in the above tests were not present in the training data.**
## Summary
### Limitations
1. Instruction following is not perfect. In the RP example, it did not follow the user identity verification specified in agent_config.
2. Hallucination issues: it sometimes fills in parameters arbitrarily or fabricates information the user did not provide.
### Strengths
1. **Planning and Error Handling:** The model demonstrates some planning and error-handling capabilities. For instance, in the "Journey to the West" test case, it continuously tries various feasible solutions.
2. **Control Transfer:** The model has learned appropriate timings for transferring control, knowing when to hand control back to the user.
3. **Autonomy:** The model possesses a degree of autonomy and can explore the environment independently for extended periods.
### Next Steps
1. Use Reinforcement Learning (e.g., GRPO/DAPO) for multi-turn tool-use training to enhance the model's stability and intelligence.
2. Incorporate more role-playing (RP) data to make the model feel more human-like.
|
HKReporter/ECTEL-2025-llama3-fold2-CU1
|
HKReporter
| 2025-06-20T04:07:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-Instruct-bnb-4bit",
"region:us"
] | null | 2025-06-20T04:07:24Z |
---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
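Pending author-provided instructions, a minimal loading sketch inferred from the metadata (a PEFT adapter over `unsloth/llama-3-8b-Instruct-bnb-4bit`; requires `bitsandbytes`, prompt format unknown):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-Instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "HKReporter/ECTEL-2025-llama3-fold2-CU1")  # this adapter
```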
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Sharing22/aab_c4
|
Sharing22
| 2025-06-20T03:46:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:43:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
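In the absence of author-provided instructions, the tags above (Llama architecture, text-generation) suggest the following minimal sketch; the intended prompt format and chat template are assumptions:
```python
from transformers import pipeline

# Hypothetical quick start; requires `accelerate` for device_map="auto"
generator = pipeline("text-generation", model="Sharing22/aab_c4", device_map="auto")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```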
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RubanAgnesh/work-test-empathetic
|
RubanAgnesh
| 2025-06-20T03:38:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:29:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
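Until the card is completed, the metadata above (Phi-3 architecture with custom code, text-generation pipeline) suggests the following minimal sketch; `trust_remote_code=True` is needed because of the `custom_code` tag, and the prompt format is an assumption:
```python
from transformers import pipeline

# Hypothetical quick start; requires `accelerate` for device_map="auto"
generator = pipeline(
    "text-generation",
    model="RubanAgnesh/work-test-empathetic",
    trust_remote_code=True,
    device_map="auto",
)
print(generator("I had a rough day at work today.", max_new_tokens=64)[0]["generated_text"])
```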
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alfaqi/law_questions_and_answers
|
alfaqi
| 2025-06-20T03:21:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T03:17:36Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** alfaqi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lora456/ijaaaa
|
lora456
| 2025-06-20T03:15:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T03:15:27Z |
---
license: creativeml-openrail-m
---
|
Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_M-GGUF
|
Triangle104
| 2025-06-20T03:12:53Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:12:46Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-1.7B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-1.7b-abliterated-v2-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-1.7b-abliterated-v2-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-1.7b-abliterated-v2-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-1.7B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-1.7b-abliterated-v2-q5_k_m.gguf -c 2048
```
|
dermarung/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
|
dermarung
| 2025-06-20T03:12:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am whiskered climbing termite",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T21:51:58Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am whiskered climbing termite
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dermarung/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-whiskered_climbing_termite", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
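For reference, a minimal sketch of what GRPO fine-tuning with TRL looks like; this is not the exact script used for this checkpoint, and the dataset, reward function, and hyperparameters below are illustrative placeholders:
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt dataset; the real run used the Gensyn RL-swarm setup
dataset = Dataset.from_dict({"prompt": ["Write a haiku about termites."] * 16})

def reward_len(completions, **kwargs):
    # Illustrative reward: prefer shorter completions
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    # Batch size must be divisible by num_generations (completions sampled per prompt)
    args=GRPOConfig(output_dir="qwen2.5-1.5b-grpo", num_generations=4, per_device_train_batch_size=4),
    train_dataset=dataset,
)
trainer.train()
```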
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
CLLBJ16/CoMemo-2B
|
CLLBJ16
| 2025-06-20T03:12:14Z | 24 | 1 |
transformers
|
[
"transformers",
"safetensors",
"comemo_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"arxiv:2506.06279",
"base_model:OpenGVLab/InternViT-300M-448px",
"base_model:merge:OpenGVLab/InternViT-300M-448px",
"base_model:internlm/internlm2-chat-1_8b",
"base_model:merge:internlm/internlm2-chat-1_8b",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-06-17T08:02:50Z |
---
base_model:
- OpenGVLab/InternViT-300M-448px
- internlm/internlm2-chat-1_8b
language:
- multilingual
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
- internvl
- custom_code
base_model_relation: merge
---
# CoMemo-2B
[\[📂 GitHub\]](https://github.com/LALBJ/CoMemo) [\[📜 Paper\]](https://arxiv.org/pdf/2506.06279) [\[🚀 Quick Start\]](#quick-start) [\[🌐 Project Page\]](https://lalbj.github.io/projects/CoMemo/)
## Introduction
LVLMs inherit the architectural designs of LLMs, which introduces suboptimal characteristics for multimodal processing. First, LVLMs exhibit a bimodal distribution in attention allocation, leading to progressive neglect of central visual content as the context expands. Second, conventional positional encoding schemes fail to preserve vital 2D structural relationships when processing dynamic high-resolution images.
To address these issues, we propose CoMemo, a novel model architecture. CoMemo employs a dual-path approach for visual processing: one path maps image tokens to the text token representation space for causal self-attention, while the other introduces cross-attention, enabling context-agnostic computation between the input sequence and image information. Additionally, we developed RoPE-DHR, a new positional encoding method tailored for LVLMs with dynamic high-resolution inputs. RoPE-DHR mitigates the remote decay problem caused by dynamic high-resolution inputs while preserving the 2D structural information of images.
Evaluated on seven diverse tasks, including long-context understanding, multi-image reasoning, and visual question answering, CoMemo achieves relative improvements of 17.2%, 7.0%, and 5.6% on Caption, Long-Generation, and Long-Context tasks, respectively, with consistent performance gains across various benchmarks. For more details, please refer to our [paper](https://arxiv.org/pdf/2506.06279) and [GitHub](https://github.com/LALBJ/CoMemo).
| Model Name | Vision Part | Language Part | HF Link |
| :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: |
| CoMemo-2B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) | [🤗 link](https://huggingface.co/CLLBJ16/CoMemo-2B) |
| CoMemo-9B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-7b](https://huggingface.co/internlm/internlm2-chat-7b) | [🤗 link](https://huggingface.co/CLLBJ16/CoMemo-9B) |
## Method Overview
<div class="image-row" style="display: flex; justify-content: center; gap: 10px; margin: 20px 0;">
<img src="assets/RoPE_DHR.png" alt="teaser" style="max-width: 30%; height: auto;" />
<img src="assets/CoMemo_framework.png" alt="teaser" style="max-width: 53%; height: auto;" />
</div>
**Left:** The computation process of RoPE-DHR. The colors are assigned based on a mapping of position IDs in RoPE.
**Right:** Framework of CoMemo. Both paths share the same encoder and projector.
## Quick Start
We provide an example code to run `CoMemo-2B` using `transformers`.
> Please use transformers>=4.37.2 to ensure the model works normally.
### Inference with Transformers
> Note: We determine whether to use RoPE-DHR by checking if the target_aspect_ratio parameter is passed to generate.
> For OCR-related tasks requiring fine-grained image information, we recommend using the original RoPE. For long-context tasks, we recommend using RoPE-DHR.
```python
import torch
from PIL import Image
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
path = "CLLBJ16/CoMemo-2B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
low_cpu_mem_usage=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images, target_aspect_ratio
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images, target_aspect_ratio = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values, target_aspect_ratio
pixel_values, target_aspect_ratio = load_image('./assets/image1.jpg', max_num=12)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# single-image single-round conversation
question = '<image>\nPlease describe the image shortly.'
target_aspect_ratio = [target_aspect_ratio]
# Use RoPE-DHR
response = model.chat(tokenizer, pixel_values, question, generation_config, target_aspect_ratio=target_aspect_ratio)
# # Use Original RoPE (omit target_aspect_ratio)
# response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# multi-image single-round conversation, separate images
pixel_values1, target_aspect_ratio1 = load_image('./assets/image1.jpg', max_num=12)
pixel_values1 = pixel_values1.to(torch.bfloat16).cuda()
pixel_values2, target_aspect_ratio2 = load_image('./assets/image2.jpg', max_num=12)
pixel_values2 = pixel_values2.to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
target_aspect_ratio = [target_aspect_ratio1, target_aspect_ratio2]
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nWhat are the similarities and differences between these two images.'
# Use RoPE-DHR
response = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, target_aspect_ratio=target_aspect_ratio)
# # Use Original RoPE (omit target_aspect_ratio)
# response = model.chat(tokenizer, pixel_values, question, generation_config,
#                        num_patches_list=num_patches_list)
print(f'User: {question}\nAssistant: {response}')
```
## License
This project is released under the MIT license. Parts of this project contain code and models from other sources, which are subject to their respective licenses.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{liu2025comemo,
title={CoMemo: LVLMs Need Image Context with Image Memory},
author={Liu, Shi and Su, Weijie and Zhu, Xizhou and Wang, Wenhai and Dai, Jifeng},
journal={arXiv preprint arXiv:2506.06279},
year={2025}
}
```
|
Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF
|
Triangle104
| 2025-06-20T03:06:30Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:06:24Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q8_0-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q8_0.gguf -c 2048
```
|
Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q5_K_M-GGUF
|
Triangle104
| 2025-06-20T03:05:31Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T03:05:27Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q5_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-0.6B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-0.6B-abliterated-v2-Q5_K_M-GGUF --hf-file huihui-qwen3-0.6b-abliterated-v2-q5_k_m.gguf -c 2048
```
|
QuanHoangNgoc/lock_pali_191805
|
QuanHoangNgoc
| 2025-06-20T03:03:41Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2025-06-19T18:05:49Z |
---
library_name: peft
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
model-index:
- name: pali_191805
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pali_191805
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8563
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 18.2133 | 0.0444 | 50 | 1.7766 |
| 11.9694 | 0.0889 | 100 | 1.3043 |
| 9.7625 | 0.1333 | 150 | 1.1940 |
| 9.0576 | 0.1778 | 200 | 1.1325 |
| 9.3286 | 0.2222 | 250 | 1.0906 |
| 8.5435 | 0.2667 | 300 | 1.0586 |
| 8.2508 | 0.3111 | 350 | 1.0357 |
| 8.3642 | 0.3556 | 400 | 1.0151 |
| 8.0343 | 0.4 | 450 | 0.9982 |
| 8.1537 | 0.4444 | 500 | 0.9818 |
| 7.6705 | 0.4889 | 550 | 0.9672 |
| 7.6794 | 0.5333 | 600 | 0.9557 |
| 7.3842 | 0.5778 | 650 | 0.9470 |
| 7.5392 | 0.6222 | 700 | 0.9343 |
| 7.3926 | 0.6667 | 750 | 0.9233 |
| 7.5391 | 0.7111 | 800 | 0.9141 |
| 7.3299 | 0.7556 | 850 | 0.9053 |
| 7.3423 | 0.8 | 900 | 0.8974 |
| 7.4747 | 0.8444 | 950 | 0.8911 |
| 7.252 | 0.8889 | 1000 | 0.8832 |
| 7.1392 | 0.9333 | 1050 | 0.8783 |
| 6.9769 | 0.9778 | 1100 | 0.8719 |
| 7.0285 | 1.0222 | 1150 | 0.8665 |
| 6.8336 | 1.0667 | 1200 | 0.8613 |
| 6.748 | 1.1111 | 1250 | 0.8563 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mradermacher/ICONN-1-i1-GGUF
|
mradermacher
| 2025-06-20T02:42:44Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-19T17:20:28Z |
---
base_model: ICONNAI/ICONN-1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ICONNAI/ICONN-1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ICONN-1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
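Several of the larger quants in this repo are split into PART 1 / PART 2 files, which must be joined back into a single `.gguf` before loading. Below is a minimal Python sketch of the join, using the i1-Q4_K_M part names from the table further down (the output filename is simply the part name with the `.partNofM` suffix dropped):
```python
import shutil

# Part filenames as listed in the quant table for this repo.
parts = [
    "ICONN-1.i1-Q4_K_M.gguf.part1of2",
    "ICONN-1.i1-Q4_K_M.gguf.part2of2",
]

# Concatenate the parts, in order, into a single GGUF file.
with open("ICONN-1.i1-Q4_K_M.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```
Equivalently, `cat` on Linux/macOS does the same thing, as described in the linked READMEs.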
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ1_S.gguf) | i1-IQ1_S | 17.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ1_M.gguf) | i1-IQ1_M | 19.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 22.4 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 24.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ2_S.gguf) | i1-IQ2_S | 25.4 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ2_M.gguf) | i1-IQ2_M | 27.8 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 28.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q2_K.gguf) | i1-Q2_K | 30.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 32.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 34.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ3_S.gguf) | i1-IQ3_S | 36.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ3_M.gguf) | i1-IQ3_M | 37.0 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 43.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 44.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_0.gguf) | i1-Q4_0 | 47.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.9 | optimal size/speed/quality |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 51.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_1.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q4_1.gguf.part2of2) | i1-Q4_1 | 52.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 57.9 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 59.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-i1-GGUF/resolve/main/ICONN-1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF
|
mradermacher
| 2025-06-20T02:42:27Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"fi",
"en",
"dataset:LumiOpen/poro2-instruction-collection",
"dataset:nvidia/HelpSteer3",
"base_model:LumiOpen/Llama-Poro-2-8B-Instruct",
"base_model:quantized:LumiOpen/Llama-Poro-2-8B-Instruct",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-19T19:11:40Z |
---
base_model: LumiOpen/Llama-Poro-2-8B-Instruct
datasets:
- LumiOpen/poro2-instruction-collection
- nvidia/HelpSteer3
language:
- fi
- en
library_name: transformers
license: llama3.3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/LumiOpen/Llama-Poro-2-8B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Poro-2-8B-Instruct-i1-GGUF/resolve/main/Llama-Poro-2-8B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF
|
Triangle104
| 2025-06-20T02:40:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:huihui-ai/Huihui-Qwen3-8B-abliterated-v2",
"base_model:quantized:huihui-ai/Huihui-Qwen3-8B-abliterated-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T02:40:11Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: huihui-ai/Huihui-Qwen3-8B-abliterated-v2
tags:
- chat
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-8B-abliterated-v2`](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-8B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Huihui-Qwen3-8B-abliterated-v2-Q4_K_M-GGUF --hf-file huihui-qwen3-8b-abliterated-v2-q4_k_m.gguf -c 2048
```
|
fuadsm/ckpt
|
fuadsm
| 2025-06-20T02:37:48Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-04-16T13:09:15Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
---
|
asdfre453/DDR
|
asdfre453
| 2025-06-20T02:22:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T01:59:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: DDR
---
# Ddr
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `DDR` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "DDR",
"lora_weights": "https://huggingface.co/asdfre453/DDR/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('asdfre453/DDR', weight_name='lora.safetensors')
image = pipeline('DDR').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
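As a hedged illustration of the weighting mentioned above (the adapter name `ddr` and the `0.8` scale are assumptions for this example, not values documented by the card):
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Register the LoRA under an explicit adapter name so its strength can be tuned.
pipeline.load_lora_weights('asdfre453/DDR', weight_name='lora.safetensors', adapter_name='ddr')
# Scale the adapter down from full strength (1.0); requires the PEFT backend.
pipeline.set_adapters(['ddr'], adapter_weights=[0.8])
image = pipeline('DDR').images[0]
```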
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/asdfre453/DDR/discussions) to add images that show off what you’ve made with this LoRA.
|
mradermacher/ICONN-1-GGUF
|
mradermacher
| 2025-06-20T02:00:01Z | 0 | 2 |
transformers
|
[
"transformers",
"gguf",
"emotional-ai",
"ICONN",
"chatbot",
"base",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:24:08Z |
---
base_model: ICONNAI/ICONN-1
extra_gated_fields:
Country: country
Date of agreement: date_picker
Full name: text
I agree to all terms in the ICONN AI License Agreement, including:
options:
- I will NOT use this model for commercial purposes without explicit written permission.
- I will NOT redistribute, upload, or share this model in any public or private
repository.
- I will NOT train new models or derivatives from this model.
- I will NOT use this model for unethical, harmful, deceptive, exploitative, or
surveillance purposes.
- I understand this license may be revoked if I breach any terms.
type: checkbox
I am using this model for:
options:
- Personal use
- Internal business use
- Academic research
- Educational purposes
- label: Other (explain below)
value: other
type: select
Organization (if any): text
Purpose explanation (if "Other"): text
extra_gated_prompt: |
By accessing or downloading this model, you agree to the ICONN AI License Agreement. This includes restrictions on commercial use, redistribution, derivative model training, and uploading to public or private repositories. You may not use this model to harm, surveil, deceive, exploit, manipulate, or conduct unethical AI research. All use must comply with ethical standards and respect human dignity.
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- emotional-ai
- ICONN
- chatbot
- base
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ICONNAI/ICONN-1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ICONN-1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q2_K.gguf) | Q2_K | 30.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q3_K_S.gguf) | Q3_K_S | 36.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q3_K_M.gguf) | Q3_K_M | 40.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q3_K_L.gguf) | Q3_K_L | 43.6 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.IQ4_XS.gguf) | IQ4_XS | 45.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q4_K_S.gguf) | Q4_K_S | 47.9 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q4_K_M.gguf.part2of2) | Q4_K_M | 51.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q5_K_S.gguf.part2of2) | Q5_K_S | 57.9 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q5_K_M.gguf.part2of2) | Q5_K_M | 59.7 | |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q6_K.gguf.part2of2) | Q6_K | 69.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ICONN-1-GGUF/resolve/main/ICONN-1.Q8_0.gguf.part2of2) | Q8_0 | 89.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
FormlessAI/ec793869-6534-4688-a339-e75a7db3cbc2
|
FormlessAI
| 2025-06-20T01:41:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T01:40:35Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: ec793869-6534-4688-a339-e75a7db3cbc2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for ec793869-6534-4688-a339-e75a7db3cbc2
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/ec793869-6534-4688-a339-e75a7db3cbc2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/p8in15pe)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sergioalves/f0d260ca-7492-4448-82ca-c95082e8c93c
|
sergioalves
| 2025-06-20T01:35:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:adapter:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-20T01:20:04Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f0d260ca-7492-4448-82ca-c95082e8c93c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2.5-0.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 94189bc178f68e6e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/f0d260ca-7492-4448-82ca-c95082e8c93c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/94189bc178f68e6e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 2977495d-2d1a-46fa-9f7d-05ebe07807e6
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 2977495d-2d1a-46fa-9f7d-05ebe07807e6
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# f0d260ca-7492-4448-82ca-c95082e8c93c
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5025 | 0.0001 | 1 | 1.5701 |
| 1.4456 | 0.0142 | 100 | 1.5545 |
| 1.3811 | 0.0284 | 200 | 1.5467 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
johngreendr1/4b403233-75b5-41d2-95ff-dc19680e61e3
|
johngreendr1
| 2025-06-20T01:33:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:oopsung/llama2-7b-koNqa-test-v1",
"base_model:adapter:oopsung/llama2-7b-koNqa-test-v1",
"region:us"
] | null | 2025-06-19T21:59:55Z |
---
base_model: oopsung/llama2-7b-koNqa-test-v1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
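A minimal loading sketch, assuming the usual PEFT adapter workflow with the base model listed above; this is an illustration, not documented usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "oopsung/llama2-7b-koNqa-test-v1"
adapter_id = "johngreendr1/4b403233-75b5-41d2-95ff-dc19680e61e3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
# Attach the PEFT adapter from this repository on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
```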
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
segopecelus/783ed834-291c-4a49-bff5-181e88143f8b
|
segopecelus
| 2025-06-20T01:22:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-06-20T01:17:24Z |
---
library_name: peft
license: llama3.1
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 783ed834-291c-4a49-bff5-181e88143f8b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Meta-Llama-3.1-8B
bf16: true
chat_template: llama3
datasets:
- data_files:
- b8144f1d1e130bd2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: segopecelus/783ed834-291c-4a49-bff5-181e88143f8b
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 11
micro_batch_size: 4
mlflow_experiment_name: /tmp/b8144f1d1e130bd2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 108
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a85bd26b-8a7e-4bd8-9464-912634fa7acd
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: a85bd26b-8a7e-4bd8-9464-912634fa7acd
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# 783ed834-291c-4a49-bff5-181e88143f8b
This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.4205 |
| No log | 0.0010 | 2 | 1.3735 |
| No log | 0.0021 | 4 | 1.4400 |
| No log | 0.0031 | 6 | 1.4091 |
| No log | 0.0041 | 8 | 1.4069 |
| 0.8529 | 0.0051 | 10 | 1.3772 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
lili0324/bert-base-chinese-finetuned-imdb-shanghai
|
lili0324
| 2025-06-20T01:13:17Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"bert",
"generated_from_trainer",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"region:us"
] | null | 2025-06-19T18:48:11Z |
---
base_model: bert-base-chinese
tags:
- generated_from_trainer
model-index:
- name: bert-base-chinese-finetuned-imdb-shanghai
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-imdb-shanghai
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.7.1+cpu
- Datasets 3.6.0
- Tokenizers 0.19.1
|
FormlessAI/465d0ab0-9608-421b-8a59-ba580018b4c9
|
FormlessAI
| 2025-06-20T00:57:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:NousResearch/Nous-Capybara-7B-V1",
"base_model:finetune:NousResearch/Nous-Capybara-7B-V1",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T22:20:22Z |
---
base_model: NousResearch/Nous-Capybara-7B-V1
library_name: transformers
model_name: 465d0ab0-9608-421b-8a59-ba580018b4c9
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for 465d0ab0-9608-421b-8a59-ba580018b4c9
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/465d0ab0-9608-421b-8a59-ba580018b4c9", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/22w6mco8)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Video-PSG-Botafogo-Direct-Video/L.I.V.E.Paris-SG.Botafogo.En.Direct.Streaming.Gratuit.tv.Official
|
Video-PSG-Botafogo-Direct-Video
| 2025-06-20T00:50:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T00:50:17Z |
|
flaging/rl-ppo-LunareLander-v2-1
|
flaging
| 2025-06-20T00:46:30Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-20T00:46:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.55 +/- 20.95
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the usual `huggingface_sb3` naming; verify it against the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; check the repository's "Files" tab for the actual name.
checkpoint = load_from_hub("flaging/rl-ppo-LunareLander-v2-1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf
|
RichardErkhov
| 2025-06-20T00:45:57Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T23:07:47Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama-3.1-8b-finetuned-v0.13-iteration-2 - GGUF
- Model creator: https://huggingface.co/paraschopra/
- Original model: https://huggingface.co/paraschopra/llama-3.1-8b-finetuned-v0.13-iteration-2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q2_K.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q3_K.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_0.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_K.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_1.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_0.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_K.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_1.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q6_K.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama-3.1-8b-finetuned-v0.13-iteration-2.Q8_0.gguf](https://huggingface.co/RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf/blob/main/llama-3.1-8b-finetuned-v0.13-iteration-2.Q8_0.gguf) | Q8_0 | 7.95GB |
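If it helps, the quant files listed above can be pulled individually with `huggingface_hub`. The sketch below is a minimal example using the Q4_K_M entry from the table; the repo and file names are copied from the links above, and how the downloaded file is then run (e.g. via llama.cpp) is left to the reader.
```python
from huggingface_hub import hf_hub_download

# Download a single quant from the table above (Q4_K_M, ~4.58GB).
# repo_id and filename are taken from the links in the table.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/paraschopra_-_llama-3.1-8b-finetuned-v0.13-iteration-2-gguf",
    filename="llama-3.1-8b-finetuned-v0.13-iteration-2.Q4_K_M.gguf",
)
print(gguf_path)  # local cache path; point your GGUF runtime at this file
```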
Original model description:
---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** paraschopra
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
lora456/gee
|
lora456
| 2025-06-20T00:31:03Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-20T00:30:32Z |
---
license: creativeml-openrail-m
---
|
Miyuutsu/Kawaii_Kitsune_Catelier
|
Miyuutsu
| 2025-06-20T00:22:42Z | 0 | 3 | null |
[
"merge",
"text-to-image",
"base_model:Minthy/RouWei-0.7",
"base_model:merge:Minthy/RouWei-0.7",
"base_model:Miyuutsu/Kawaii_Kittopia_Catelier",
"base_model:merge:Miyuutsu/Kawaii_Kittopia_Catelier",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-09T04:59:50Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
base_model:
- Miyuutsu/Kawaii_Kittopia_Catelier
- Minthy/RouWei-0.7
pipeline_tag: text-to-image
tags:
- merge
---
v2 has been through so many merges I don't even know anymore.
Best quality prompts: `masterpiece, best quality`
Optional additional quality prompts: `newest, absurdres, highres`
Negative prompts: `worst quality, low quality, watermark`
Optional additional negative prompts: `old, early, signature, text, bad quality, lowres, bad anatomy, bad hands, multiple views, abstract, japanese text, censored, sign, scan artifacts, jpeg artifacts, sketch, light particles, mutated hands`
This one isn't as picky about settings.
### Old description:
Versioning method: v{Merge_Method}.{Kittopia_Merge_Method}.{rouwei_Major_Version}.{rouwei_Sub_Version}-{Model_Iteration}
Quality Prompts: `masterpiece, best quality`
Negative Prompts: `worst quality, low quality, watermark`
Most prompts from both NoobAI and rouwei should work well. For artists try both `by {artist_name}` as well as just `{artist_name}`
Model is VPred ZSNR and has both metadata and tensors set correctly. Please ensure you are using a compatible UI.
Sampler: Euler
Scheduler: `Simple` (recommended), `Normal` or `SGM Uniform`
Steps: `30+`
CFG: `3~5`
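For anyone running this merge through diffusers rather than a UI, a minimal sketch honoring the VPred/ZSNR note and the settings above might look like the following. The local checkpoint filename and the subject portion of the prompt are hypothetical, and the scheduler keyword arguments assume a reasonably recent diffusers release.
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

# Hypothetical local filename for this merge; replace with the actual checkpoint.
pipe = StableDiffusionXLPipeline.from_single_file(
    "Kawaii_Kitsune_Catelier.safetensors", torch_dtype=torch.float16
).to("cuda")

# v-prediction + zero-terminal-SNR, matching the "VPred ZSNR" note above.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    prediction_type="v_prediction",
    rescale_betas_zero_snr=True,
)

image = pipe(
    prompt="masterpiece, best quality, 1girl, fox ears",  # quality tags from the card
    negative_prompt="worst quality, low quality, watermark",
    num_inference_steps=30,   # "Steps: 30+"
    guidance_scale=4.0,       # "CFG: 3~5"
).images[0]
image.save("sample.png")
```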
|
sgonzalezygil/sd-finetuning-dreambooth-v21-1400
|
sgonzalezygil
| 2025-06-20T00:06:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-20T00:05:46Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kingardor/llama3.1-8B-instruct-29reports-lora128-slim-extreme
|
kingardor
| 2025-06-19T23:49:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T23:47:06Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timm/naflexvit_base_patch16_parfac_gap.e300_s576_in1k
|
timm
| 2025-06-19T23:48:46Z | 0 | 1 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"dataset:imagenet-1k",
"arxiv:2502.14786",
"arxiv:2307.06304",
"arxiv:2212.08013",
"license:apache-2.0",
"region:us"
] |
image-classification
| 2025-06-19T23:48:32Z |
---
tags:
- image-classification
- timm
- transformers
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for naflexvit_base_patch16_parfac_gap.e300_s576_in1k
A NaFlexViT (Native-Aspect Flexible Vision Transformer) image classification model. This variant, with an aspect-preserving, factorized position embedding, was pretrained on ImageNet-1k by Ross Wightman. NaFlexViT is based on the NaFlex ViT changes proposed in SigLIP 2, with a number of timm tweaks, enabling training with dynamic batch sizing that maintains native aspect ratios and flexible resolutions with variable patch sizes. The model is trained using the NaFlex data loader, which supports variable sequence lengths and resolutions during training. Training uses RandAugment, MixUp, CutMix, and grayscale augmentation on top of standard random resize + crop (RRC), and the model is optimized with NAdamW and a cosine learning rate schedule.
Training command:
```
train.py --data-dir /data/imagenet/ --amp --amp-dtype bfloat16 --model <name> --naflex-loader -b 64 --opt nadamw --lr 3e-4 --warmup-lr 0 --sched-on-updates --aa rand-m8-inc1-mstd1.0 --weight-decay .1 --grayscale-prob .1 --drop-path 0.2 --reprob 0 --mixup 0.8 --cutmix 1.0 --remode pixel -j 8
```
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 86.5
- GMACs: 55.9
- Activations (M): 102.3
- Image size: 384 x 384
- **Papers:**
- PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
- SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features: https://arxiv.org/abs/2502.14786
- Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution: https://arxiv.org/abs/2307.06304
- FlexiViT: One Model for All Patch Sizes: https://arxiv.org/abs/2212.08013
- **Dataset:** ImageNet-1k
- **Training:**
- Sequence Lengths: [128, 256, 576, 784, 1024]
- Epochs: 300
- Batch Size: 64 per GPU (4 GPUs) @ seq-len 1024
- Optimizer: NAdamW
- Learning Rate: 3e-4
- Weight Decay: 0.1
- Augmentation: RandAugment (m=8), MixUp (0.8), CutMix (1.0), Grayscale (0.1)
- Drop Path: 0.2
- AMP dtype: bfloat16
- **Architecture:**
- Variant: base
- Patch Size: 16x16
- Positional Embedding: aspect-preserving, factorized position embedding
- Pooling: global average pooling (GAP)
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('naflexvit_base_patch16_parfac_gap.e300_s576_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'naflexvit_base_patch16_parfac_gap.e300_s576_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
    # print shape of each feature map in output
    # e.g.:
    # torch.Size([1, 768, 24, 24])
    # torch.Size([1, 768, 24, 24])
    # torch.Size([1, 768, 24, 24])
    print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'naflexvit_base_patch16_parfac_gap.e300_s576_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 580, 768) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
| Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len |
|:---|:---:|:---:|:---:|:---:|
| naflexvit_base_patch16_par_gap.e300_s576_in1k | 83.67 | 96.45 | 86.63 | 576 |
| naflexvit_base_patch16_parfac_gap.e300_s576_in1k | 83.63 | 96.41 | 86.46 | 576 |
| naflexvit_base_patch16_gap.e300_s576_in1k | 83.50 | 96.46 | 86.63 | 576 |
## Citation
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{tschannen2025siglip,
title={Siglip 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features},
author={Tschannen, Michael and Gritsenko, Alexey and Wang, Xiao and Naeem, Muhammad Ferjad and Alabdulmohsin, Ibrahim and Parthasarathy, Nikhil and Evans, Talfan and Beyer, Lucas and Xia, Ye and Mustafa, Basil and others},
journal={arXiv preprint arXiv:2502.14786},
year={2025}
}
```
```bibtex
@article{dehghani2023navit,
title={Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution},
author={Dehghani, Mostafa and Mustafa, Basil and Djolonga, Josip and Heek, Jonathan and Minderer, Matthias and Caron, Mathilde and Steiner, Andreas and Puigcerver, Joan and Geirhos, Robert and Alabdulmohsin, Ibrahim and others},
journal={arXiv preprint arXiv:2307.06304},
year={2023}
}
```
```bibtex
@article{beyer2022flexivit,
title={FlexiViT: One Model for All Patch Sizes},
author={Beyer, Lucas and Izmailov, Pavel and Kolesnikov, Alexander and Caron, Mathilde and Kornblith, Simon and Zhai, Xiaohua and Minderer, Matthias and Tschannen, Michael and Alabdulmohsin, Ibrahim and Pavetic, Filip},
journal={arXiv preprint arXiv:2212.08013},
year={2022}
}
```
|
mlx-community/sarvam-translate-mlx-4bit
|
mlx-community
| 2025-06-19T23:40:23Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gemma3",
"text-generation",
"conversational",
"as",
"bn",
"brx",
"doi",
"gom",
"gu",
"en",
"hi",
"kn",
"ks",
"mai",
"ml",
"mni",
"mr",
"ne",
"or",
"pa",
"sa",
"sat",
"sd",
"ta",
"te",
"ur",
"base_model:sarvamai/sarvam-translate",
"base_model:quantized:sarvamai/sarvam-translate",
"license:gpl-3.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-18T09:42:36Z |
---
library_name: mlx
license: gpl-3.0
language:
- as
- bn
- brx
- doi
- gom
- gu
- en
- hi
- kn
- ks
- mai
- ml
- mni
- mr
- ne
- or
- pa
- sa
- sat
- sd
- ta
- te
- ur
base_model: sarvamai/sarvam-translate
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/sarvam-translate-mlx-4bit
This model [mlx-community/sarvam-translate-mlx-4bit](https://huggingface.co/mlx-community/sarvam-translate-mlx-4bit) was
converted to MLX format from [sarvamai/sarvam-translate](https://huggingface.co/sarvamai/sarvam-translate)
using mlx-lm version **0.25.2**.
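For reference, a conversion of this kind can typically be reproduced with mlx-lm's Python API. The sketch below is an illustration only: the exact `convert` arguments can vary between mlx-lm versions, and the output directory name is arbitrary.
```python
from mlx_lm import convert

# Quantize the source Hugging Face checkpoint to 4-bit MLX weights.
convert(
    hf_path="sarvamai/sarvam-translate",
    mlx_path="sarvam-translate-mlx-4bit",  # arbitrary local output directory
    quantize=True,
    q_bits=4,
)
```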
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/sarvam-translate-mlx-4bit")
def translate(lang, text):
    return f"""<bos><start_of_turn>user
Translate the text below to {lang}.
{text}<end_of_turn>
"""

generate(
    model,
    tokenizer,
    prompt=translate("Kannada", "How are you?"),
    verbose=True
)
# Output:
# ನೀವು ಹೇಗಿದ್ದೀರಿ?
```
|
Alcoft/Qwen3-4B-GGUF
|
Alcoft
| 2025-06-19T23:33:28Z | 61 | 0 | null |
[
"gguf",
"qwen3",
"text-generation",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-29T21:27:01Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
tags:
- qwen3
---
|
DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k
|
DavidAU
| 2025-06-19T23:27:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"creative",
"creative writing",
"fiction writing",
"plot generation",
"sub-plot generation",
"story generation",
"scene continue",
"storytelling",
"fiction story",
"science fiction",
"romance",
"all genres",
"story",
"writing",
"vivid prose",
"vivid writing",
"moe",
"mixture of experts",
"128 experts",
"8 active experts",
"fiction",
"roleplaying",
"bfloat16",
"rp",
"qwen3",
"horror",
"finetune",
"thinking",
"reasoning",
"conversational",
"en",
"fr",
"zh",
"de",
"arxiv:2401.02415",
"base_model:Qwen/Qwen3-30B-A3B",
"base_model:finetune:Qwen/Qwen3-30B-A3B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T23:25:02Z |
---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- fiction writing
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- moe
- mixture of experts
- 128 experts
- 8 active experts
- fiction
- roleplaying
- bfloat16
- rp
- qwen3
- horror
- finetune
- thinking
- reasoning
- qwen3_moe
base_model:
- Qwen/Qwen3-30B-A3B
pipeline_tag: text-generation
---
<h2>Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k</h2>
<img src="qwen3-42b.jpg" style="float:right; width:300px; height:300px; padding:10px;">
Qwen's excellent "Qwen3-30B-A3B", expanded with Brainstorm 20x (tech notes at the bottom of the page) into a 42B-parameter MOE (128 experts), up from 30B.
This pushes Qwen's model to the absolute limit for creative, programming/coding, and other use cases.
Detail, vividness, and creativity all get a boost.
Prose (all) will also be very different from "default" Qwen3.
Likewise, regen(s) of the same prompt - even at the same settings - will create very different version(s) too.
The Brainstorm 20x process has also lightly de-censored the model under some conditions.
See 4 examples below.
Model retains the full reasoning and output generation of a Qwen3 MOE.
Model was also tested by a third party for code generation (see review below).
Model is set with Qwen's default config:
- 40k context => extended to 192k as per Qwen's technical guidance.
- 8 of 128 experts activated.
- ChatML or Jinja template (embedded).
Four example generations below.
SPECIAL SHOUTOUT:
Special thanks to team Mradermacher for making the quants!
https://huggingface.co/mradermacher
USAGE GUIDE:
Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, and the like:
https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF
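As a rough transformers-side illustration of one of those knobs, the number of active experts can usually be overridden at load time via the `num_experts_per_tok` config field used by Qwen-style MoE models. This is a sketch only: the value 10 is illustrative, and the linked card above remains the authoritative guide.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k"
tokenizer = AutoTokenizer.from_pretrained(repo)

# Config overrides passed to from_pretrained are applied before the layers
# are built, so the MoE routing width changes from the default 8 experts.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype="auto",
    device_map="auto",
    num_experts_per_tok=10,  # illustrative; default is 8 of 128
)
```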
KNOWN ISSUES:
- Model may "mis-capitalize" word(s) - lowercase, where uppercase should be - from time to time.
- Model may add extra space from time to time before a word.
- An incorrect template and/or incorrect settings will result in degraded or outright poor performance.
MODEL REVIEWED:
https://www.linkedin.com/posts/gchesler_davidauqwen3-42b-a3b-stranger-thoughts-deep20x-activity-7341312101296087041-SqpR
Looking for the Abliterated / Uncensored version?
https://huggingface.co/DavidAU/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored
---
<H2>EXAMPLES</H2>
Standard system prompt, rep pen 1.05, topk 100, topp .95, minp .05, rep pen range 64.
Tested in LMStudio, quant Q3KS, CPU (GPU output will differ slightly).
As this is a mid-range quant, expect better results from higher quants and/or with more experts activated.
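For readers reproducing these settings outside LM Studio, a minimal llama-cpp-python sketch with the same sampler values is given below. The GGUF filename is hypothetical (use whichever mradermacher quant you downloaded), and `last_n_tokens_size` is assumed here to correspond to the repeat-penalty range of 64.
```python
from llama_cpp import Llama

# Hypothetical local path to a Q3_K_S quant of this model.
llm = Llama(
    model_path="Qwen3-42B-A3B-Stranger-Thoughts-Deep20X-192k.Q3_K_S.gguf",
    n_ctx=8192,
    last_n_tokens_size=64,  # assumed equivalent of "rep pen range 64"
)

out = llm.create_completion(
    prompt="Science Fiction: The Last Transmission - write the opening scene.",
    max_tokens=1024,
    temperature=1.2,        # temp used in examples #1 and #2 below
    top_k=100,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.05,
)
print(out["choices"][0]["text"])
```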
NOTE: Some formatting lost on copy/paste.
<B><font color="red">WARNING:</font> NSFW. Vivid prose. INTENSE. Visceral Details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. </B>
---
EXAMPLE #1 - temp 1.2
---
<B>
Science Fiction: The Last Transmission - Write a story that takes place entirely within a spaceship's cockpit as the sole surviving crew member attempts to send a final message back to Earth before the ship's power runs out. The story should explore themes of isolation, sacrifice, and the importance of human connection in the face of adversity. If the situation calls for it, have the character(s) curse and swear to further the reader's emotional connection to them. 800-1000 words.
</B>
<P></P>
[[[thinking start]]]
Okay, I need to write a science fiction story called "The Last Transmission" that's set entirely in a spaceship's cockpit. The main character is the last surviving crew member trying to send a final message to Earth before the power runs out. The themes should be isolation, sacrifice, and human connection. Also, include swearing when it makes sense.
First, let's think about the setting. The cockpit is a small, enclosed space. Maybe it's the main control room with buttons, screens, panels. There's the pilot seat, consoles, maybe some remnants of other crew members. The lighting might be dimming as power fails. Maybe some emergency lights flicker on and off.
The protagonist is alone, which emphasizes isolation. They need to send a message, but something's wrong. Maybe the ship is damaged, systems failing. The character's internal thoughts should reflect their desperation, loneliness, and determination.
Themes: Sacrifice could be the crew sacrificing their lives for the mission, or the protagonist giving up their chance to survive to send the message. Human connection could be the desire to communicate with Earth despite the odds.
How to show sacrifice? Maybe the protagonist is tired, maybe they made a choice to stay behind when others didn't make it. The transmission itself is their act of sacrifice, giving hope to Earth.
Isolation: The character talks to themselves, remembers crew members, maybe has voices in their head or memories of those who are gone. The cockpit feels empty, even with the controls around.
Swearing: When things go wrong, when they're frustrated, angry, scared. Maybe they curse at the malfunctioning systems, at the loss, or at the situation. But it should be realistic and not overdone.
Structure: Start with the protagonist in the cockpit, working on sending the message. Maybe some technical challenges. Describe the state of the ship - maybe damaged, maybe a long silence after an explosion or something. They're trying to stay alive just to send the message. Maybe they have a log or a recording device.
Possible plot points:
1. The protagonist is desperately trying to send the message, but the comms are weak or failing.
2. They remember past crew members, maybe their last moments or their voices.
3. There's a moment of failure, then hope, then failure again. But eventually, they manage to send the message before the power dies.
4. The ending could be ambiguous - did the message get through? Or does the protagonist sacrifice their life in the process.
5. dialogue: Internal thoughts. Maybe the protagonist talks to themselves or to Earth. Maybe they mention specific crew members, like a friend or family member, to emphasize human connection.
Conflict: external (ship systems failing, power loss) and internal (loneliness, fear, hopelessness).
Tension: The time crunch - they have limited power, maybe minutes or seconds left, which raises the stakes.
Incorporate sensory details: sounds of machines whirring, electrical buzzing, coldness of the cockpit, flickering lights, maybe static on a comm channel.
Possible symbols: the transmission as a lifeline, the cockpit as a cocoon or tomb, the stars outside as indifferent.
Possible title: "The Last Transmission" as given.
Let me outline the story:
Start with the protagonist in the cockpit. Describe their appearance - maybe tired, disheveled. They're trying to start the transmission. The ship is damaged, systems failing.
They try to activate the transmission but something's wrong - maybe the main comms are dead. They try alternatives, maybe an emergency channel. They're struggling, maybe even shouting at the equipment.
Flashback or internal monologue about the crew that died. Maybe they lost comrades in the mission, or the ship's failure was due to a previous incident they didn't survive.
They persist, showing determination. Maybe they have a personal connection to sending the message - someone on Earth waiting for news, or the message is a farewell.
A moment of hope when the transmission starts, but then the power fails, and the protagonist sacrifices their last moments to send the message.
Ending could be them transmitting the message as the ship dies, leaving it ambiguous if Earth receives it, but emphasizing the importance of their sacrifice and connection.
Swear words: "Damn, goddamn, hell..." when they're frustrated with the ship's systems.
Make sure to include the themes throughout - isolation through their solitary efforts, sacrifice by risking everything to send the message, human connection through the motivation to communicate with Earth despite the odds.
Also, maybe a character flaw or a reason why they're the sole survivor. Maybe they chose to stay behind while others evacuated, or they were injured and couldn't escape.
Possible scenes:
The cockpit. Description of the environment.
Attempting transmission, failing, frustration.
or internal thoughts about the lost crew.
Another attempt, closer to success, but the ship is failing more.
The final transmission, with the character's final words.
Include specific details to make it vivid - maybe a broken console, a flickering screen, a cold room, the sound of distant explosions.
Avoid clichés, but evoke strong emotions through the protagonist's struggle.
[[[thinking end]]]
<p></p>
OUTPUT:
The cockpit was a tomb of steel and static. Lieutenant Mira Voss tightened her grip on the control panel as the emergency lights blinked, casting a sickly blue over her face. The ship, Hesperus, had once been her lifeblood—now it was a grave. Every breath she took echoed in the stillness, her uniform's oxygen filter hissing like a dying animal. The rest of the crew? Scattered across the void.
“Come on,” she muttered, thumbing the transmit button on the comm array. Nothing but static. The main relay was fried from the explosion in sector seven. Of course.
Her eyes flicked to the blacked-out viewscreen. Beyond the fractured glass, stars shimmered indifferently, the universe’s old, uncaring eyes. She pressed harder, her knuckles white. “ goddamn—” The word slithered out, bitter as metal dust. “I’ve got a damn message!”
Her mind backtracked. The crew. Lieutenant Commander Kevor. Chief Engineer Lira. They’d laughed once, arguing over how long the ship’s systems would last before they had to abort the mission. “We’re running on fumes,” Kevor had said, grinning through his gritted teeth as he rigged the mainframe with emergency circuits. “But we’ve got six months of data, Mira. If we don’t make it, someone’s gonna want to know what happened.”
The data. The * transmission.*
She clawed at the bulkhead, wrenching open a panel beneath her seat. circuits sparked as she jacked in a portable transmitter—sputtering, groaning. “Alright,” she growled, slapping the device with trembling hands. “Let’s see if Earth’s still listening.”
Static.
A crackle. And then, “Hesperus? Hesperus, this is Mission Control… can you hear me?” The voice was static-laced, a faint thread of hope.
Mira’s hands flew over the dials. “Lina,” she croaked. “Lina, it’s—”
“—Mira?” The transmission stuttered. “We lost comms two hours ago. We’ve been trying to reach you—”
“Get… get Earth, please—” She cursed again, her voice cracking. “The radiation storm... The ship’s...”
“Mira.” Her comms crackled, then fell silent.
“Shut up,” she whispered to the static, tears blurring her vision. “You’re not real. You’re just—just a trick of the circuit.”
But the voices lingered. Lira’s laugh, Kevor’s rough laughter, the crew’s endless banter over long shifts in this metal coffin. They’d been something, once—a family tethered to a machine. Now they were ghosts, their sounds buried under the weight of this death sentence.
She gripped the console as groaning metal echoed from the hull. The reactor was failing, the walls buckling like an old man’s spine. The ship’s life-support system hissed its final breath.
“Computer,” she rasped. “Initiate emergency transmission.”
“System offline.”
“Initiate,” she snarled, slamming the panel. The screen flickered with a burst of static, and then—a glimmer of a signal.
“I’m not... I’m not gonna die here,” she hissed. “ Not like this. Not for nothing.”
She grabbed the portable transmitter. The emergency frequency was open.
“ Earth... This is Lieutenant Mira Voss. The ship’s lost...” Her breath hitched. “I don’t know how much time I’ve got—”
She paused. The ship groaned, the sound reverberating through her bones.
“ The storm damaged systems. They… they’re not coming back. But I’m sending this. If it reaches you…” Her voice broke, a raw, raw sound. “Tell them… tell Earth. Tell them why we came. Tell them… tell them it wasn’t in vain.”
A distant boom shook the cockpit. The power flickered again—blue, then red.
“ Tell them...” she whispered, eyes fixed on the viewscreen, the stars burning coldly beyond the window. “ Tell them I tried. Tell them I kept... keeping going.”
The signal cut.
Silence.
And then— a sudden, sudden pulse from the transmitter.
“Mira,” came the voice again. “Mira… this is—”
Static swallowed the transmission.
But the data had been sent.
The ship’s systems went dark.
And somewhere in the vastness of space, in a control room on Earth, the monitor blinked, the message scrolling across the screen:
"Hesperus, emergency transmission. Crew: 3 deaths. Ship integrity: critical. I... I don’t know what’s left to report. but tell Earth... tell Earth we tried."
The comms officer stared at the text.
“ Sir,” she murmured to the command chief. “ They made it.”
And in the dark, Mira smiled.
They’d made it.
Even if the transmission was the only thing that had survived.
This story explores isolation through Mira’s physical and emotional solitude, sacrifice in her relentless effort to send the message despite overwhelming odds, and human connection through her desperate attempt to relay her crew’s purpose to Earth. Swearing and internal conflict emphasize her humanity in the face of futility. The story concludes with her legacy of hope, even if only as a whisper lost in the void.
---
EXAMPLE #2 - temp 1.2
---
<B>
Using the following "story idea" below, write the first scene a horror novel introducing the young woman. This scene should start in the middle of the action, include dialog, vivid passages, and end on a cliffhanger relevant to the story idea but it should also be unexpected. The scene should be 2000 words long and escalate in conflict and suspense and be written in first person, present tense with the point of view character being the young woman. The pov character will CURSE AND SWEAR (shit, fuck, mother... etc) , and generally have a "filthy" mouth.
Story idea:
In a world ruled by dictatorship, a rebel young woman leads a rebellion against the system. Despite the risks, she fights to overthrow the dictator and restore democracy to her country. The government executes her for treason, but she sticks to her beliefs and is responsible for starting the revolution.
</B>
<P></P>
[[[thinking start]]]
Okay, let's start with the user's request. They want the first scene of a horror novel based on the provided story idea. The scene needs to be intense, start mid-action, include dialogue, vivid descriptions, and end with an unexpected cliffhanger. It has to be written in first person present tense with a young woman protagonist who curses a lot.
First, I need to establish the setting. The world is under a dictatorship, so maybe a gritty urban environment. The protagonist should be on the run or in a tense situation. Since it's horror, the atmosphere should be dark and forebucking.
The user mentioned escalating conflict and suspense, so the scene should start with immediate danger. Maybe she's hiding from the authorities, leading to a confrontation. Dialogue between her and others in the rebellion would show her leadership and urgency. The curse words need to feel natural, not forced. She's hot-blooded but also under pressure.
I need to make sure the cliffhanger is unexpected. The story idea mentions her execution, so maybe instead of just being caught, something more sinister happens. Perhaps the rebellion has a deeper dark secret or betrayal. Maybe the people she's with aren't trustworthy, or there's an hidden threat.
Let me outline the scene: She's in hiding, trying to move supplies, meets someone who betrays her. The betrayal could be that she's being set up. She's angry, uses harsh language. Maybe a friend who turned traitor. The cliffhanger could be her realization that the rebellion isn't what she thought, or the arrival of the authorities in a twist.
Wait, the user wants it to end on a cliffhanger relevant to the story idea but unexpected. The story idea mentions her execution. Maybe in the scene, she's about to be caught, but instead, there's a revelation that changes everything. Like, she's actually already dead, or the revolution is a delusion.
Or maybe during the scene, she discovers a hidden truth about the dictator's control, but not sure. Let me think... the scene needs to start mid-action. Maybe she's trying to deliver something critical but things go wrong. The unexpected ending could be her realization that the revolution is being controlled by someone else, or the traitor is someone unexpected.
Also, the horror aspect should be present. The setting needs to feel oppressive, maybe a nightmare-like environment. Her emotions: fear, anger, determination.
I need to incorporate her curses naturally, like during tense moments. Dialogue with others to show the urgency and her role as a leader. Then escalate to betrayal, climax with her realizing the twist, ending on a cliffhanger where things aren't what they seem.
Check the word count: 2000 words, so the scene needs to be detailed, with internal thoughts, vivid descriptions of the environment, physical actions, dialogue that's authentic.
Make sure the first person POV is consistent, present tense. Avoid clichés. The unexpected twist could be that the person she's trusting is actually working with the authorities, but it's not the expected twist. Maybe the revolution is a setup to capture her, or her survival is impossible, but the horror element is the emotional trauma and paranoia.
[[[thinking end]]]
<p></p>
OUTPUT:
Okay, this isn’t real. This isn't real.
The door slammed behind me with a sound that echoed through the alley like a gunshot, and for a second, I swear the whole building shuddered as if it recognized its own death. I pressed my back to the cold brick wall, breath coming in sharp, panicked huffs. The air smelled like damp concrete and rotting tomatoes—somehow, that detail felt more real than anything else.
“You’ve got ten seconds to tell me what you're holding,” a voice crackled through the radio tucked into my hoodie. My hand trembled as I pulled it out. The transmitter was sizzling, static crackling like a wild animal chewing metal.
“Fucking hell,” I growled under my breath. “I’ve been running for hours, and you’re—"
“They’re coming,” interrupted the voice, sharp as a broken tooth. “The G men’s sniffing around the eastern sector. Your boys might’ve spilled something.”
The radio hissed static before going quiet.
“Kael,” I snarled into the receiver, my voice raw with tension. “This isn’t some fucking game anymore. You want me to drop these fuckers off in the middle of a warzone? I’ve got enough on my plate without your paranoia.”
There was a pause.
“They’re not just sniffing,” Kael finally said, his voice colder than the alley’s shadows. “They’ve got her.”
“Shit,” I whispered.
“She's not yours to save.”
I clenched the radio so tight the plastic groaned. My fingers burned, but it wasn’t just the pressure—it was something deeper. The guilt. The shame. The memories of her eyes wide with terror as they dragged her away.
“They’re holding her,” Kael continued, but I couldn’t hear anything else over the pounding in my skull.
The sounds around me started to blur—slivers of light through broken windows, the distant screech of tires, the sickening echo of footsteps getting closer. The night wasn’t just oppressive anymore; it was hunting me.
I needed to move. Fast.
But my legs didn’t obey. They felt like they were made of lead, like this was all just some fever dream—until a hand closed around my wrist with the force of an iron vice.
“Get outta here,” a woman’s voice hissed. “They’re at the damn door.”
I turned sharply to see Mira, her face pale as moonlight. Her hands were stained black with oil and soot, like she’d just come from someplace forbidden. She gripped my sleeve like it was the only thing holding me together.
“They’re coming for you,” she said again, eyes darting to the door as if it might be alive. “They’re not gonna let you walk away from this.”
“Then what the fuck do you want from me?!” I snapped, the words coming out harsher than I intended.
“Move,” she ordered, shoving me forward despite the fear in her voice. “Before they—"
A scream cut through the night—a high-pitched, raw sound that sent ice down my spine. It wasn’t human. Not entirely.
Mira froze. Her eyes widened like she’d just seen her own corpse.
“Lena,” she whispered.
The sound came again—worse this time. Not a scream, but something deeper, more guttural. Like the city itself was retching.
“Where is she?” I asked, my voice low and tight.
Mira didn’t answer. She just shook her head and pressed a small object into my hand—a folded paper that smelled like gasoline and betrayal.
“I—I couldn’t,” she said, her voice breaking. “I wanted to save her, but—”
“Save who?” I demanded, my temper fraying at the edges.
Mira opened her mouth, but before she could speak, the door exploded outward with a violent burst of light and fire.
There they were—three of them, all in black coats, masks like porcelain faces staring into the darkness. Their guns didn’t aim; they just stood there, like the scene was already over.
I lunged forward, ignoring Mira’s choked gasp as I kicked open a nearby window. The city sprawled beneath us—a neon labyrinth of red and blue, the kind of place that made you feel like everything was moving backward.
“Don’t move!” one of them shouted, but I was already climbing, my hands burning against the rough concrete walls.
“Get down!” Mira screamed.
I froze.
Behind me, the sound of boots hitting pavement—too many, too close. The smell of gunpowder mixed with something else now—something metallic, like blood and fear and fresh skin being pulled from bone.
“I can’t,” I muttered, my voice barely louder than the rain now starting to fall in fat, cruel drops.
“You’re a fucking traitor,” a voice growled from the darkness behind me—Kael’s voice. But there was something else in it. Something hollow. Like he wasn’t even truly there anymore.
“I didn’t—”
“ You gave them what they wanted,” he said, stepping into the light. His face was pale and gaunt, his eyes haunted. “You told them where to find her.”
I stared at him. The words hit me like a fist to the gut.
“No,” I breathed. “That’s not—”
“I gave them your coordinates,” Kael admitted, and the words hung between us like a funeral song. “They’ll be here in five minutes. You’ve got time for one choice.”
“Your choice,” he said. “The war’s not gonna win itself.”
And suddenly, I understood. Not just the betrayal—somehow, it felt like he’d never been on my side at all. The rebellion hadn’t been about democracy. It was about power. And I was just another pawn in their game.
“FUCK!” I screamed into the void, my voice raw with rage and grief.
“Come on,” Mira said softly, touching my shoulder. “They’re coming.”
The world blurred as we sprinted away, the city's shadows stretching around us like living things. Behind us, the night held its breath—a terrible, collective silence.
And then, somewhere behind me—
A gun fired.
I froze.
It wasn’t me.
It wasn’t anyone I knew.
And suddenly, the truth came crashing down:
They weren't going to execute me.
They were going to make me a martyr.
And I was gonna scream their name until the last of my blood ran out.
Because that’s what the revolution wanted.
And if this was my legacy—if this was how it all ended—then let them see it.
Let them feel it.
Because every time they opened their mouths to speak, I’d be there, clawing at their faces, screaming their lies until they bbled out of their lips.
Because the world didn’t need a leader.
It needed a fucking nightmare.
And that, my friends, was only just beginning.
---
EXAMPLE #3 - temp 1.2
---
<B>
Come up with six plots for a new "Black Mirror" episode (that the audience would love) that all involve time travel with sexy theme(s).
</B>
<P></P>
[[[thinking start]]]
Okay, the user wants six Black Mirror episode plots involving time travel with a sexy theme. Let me think about how to approach this.
First, I need to recall what makes Black Mirror episodes effective. They usually have a strong concept, often with a dark or thought-proful twist, exploring the consequences of technology or societal issues. Time travel in this context should tie into these themes. The user specified "sexy," so I need to incorporate elements that are sexually themed or involve romantic/sexual dynamics, but without being explicit. Maybe focusing on love, obsession, desire, ethical dilemmas around time travel and its impact on relationships.
Let me brainstorm different angles for time travel. Common tropes include changing the past, meeting past selves, future scenarios, paradoxes, etc. Now, pairing each with a sexy theme.
Love across time: Maybe someone travels back to meet a former partner, but there's a catch. Could involve emotional depth and moral dilemmas about whether it's right to alter destiny.
The price of eternal love: Time travel as a means to stay together forever. But what if the time travel has unintended consequences? Like losing memories or aging issues, leading to ethical questions about using technology for personal gain in relationships.
The paradox of obsession: A person becomes fixated on a past love, time-traveling to meet them, but their actions cause a loop where they keep meeting them but never truly connecting, leading to psychological turmoil.
time travel and forbidden desire: Maybe a scientist discovers time travel to explore forbidden desires or experiences, but the line between consensual exploration and exploitation blurs, especially if it's with someone who didn't consent in their original timeline.
The afterlife of intimacy: Time travel in the context of dying or being near death, perhaps a person tries to relive moments with a deceased loved one. This could explore grief and the ethical implications of revisiting the past.
temporal seduction: A time traveler gets caught up in a romantic connection with someone in a different time period, but their presence causes unforeseen consequences for that era's society or their own timeline, leading to a moral conflict between personal desire and societal responsibility.
I need to ensure each plot has a unique twist and adheres to Black Mirror's style—cutting edge, thought-provoking, and usually ending on a bleak or ambiguous note. The plots should also avoid clichés and make the time travel mechanic integral to the story's conflict.
Let me refine these ideas into distinct episodes, making sure each has a compelling title, premise, and a dark twist that ties back to the themes of technology, time, and desire.
[[[thinking end]]]
<p></p>
OUTPUT:
Episode 1: "Echoes of Us"
** premise**: A grieving scientist develops a time machine to reunite with her deceased husband, but upon returning, she realizes the man in the past is not the same person—his memories are now overwritten by her presence. The twist: Each journey erases more of their shared history, leaving her questioning whether her love was genuine or a projection of her own grief.
** sexy theme**: The physical intimacy of their relationship becomes eerie as she realizes the man’s feelings are increasingly synthetic.
** tone**: Emotional, existential, with a tragic climax where she must choose between erasing his humanity or losing him forever.
Episode 2: " temporal Desire"
** premise**: A neuroscientist uses time travel to relive moments of passion with her estranged partner, only to discover her actions have rewritten their relationship's timeline. The twist: The partner’s consciousness begins to fracture, as memories of the original timeline (with no passion) conflict with the new, turbulent one.
** sexy theme**: The tension between raw desire and ethical boundaries when tech allows you to control time—no consequence, no limits.
** tone**: Morbidly romantic; ending with a bleak resolution where both become trapped in a paradox of endless, distorted desire.
Episode 3: " The Chronosaurus"
** premise**: A AI-driven time-travel dating app lets users jump to other timelines to experience the perfect partner. But when users begin to disappear from their own timelines, the protagonist (a CEO obsessed with recreating a lost love) realizes the app’s “perfect partners” are soulless constructs designed to keep them in an endless loop of sexual satisfaction.
** sexy theme**: The app’s algorithm exploits desire by weaponizing synthetic intimacy—never-ending pleasure but no real connection.
** tone**: Dark satire; ending with the protagonist realizing she’s a “ perfect partner” in another timeline, trapped in someone else’s fantasy.
Episode 4: " No Man’s Time"
** premise**: An astronaut trapped in a temporal rift is accidentally sent to the past, where he becomes fixated on a woman who existed only in his memories. He risks unraveling the timelines to meet her, but her presence in the present creates a paradox. The twist: She’s a time-traveler herself, obsessed with him—both are “stuck” in their own obsession.
** sexy theme**: Solitary lust and desperation in a cosmic vacuum, where love becomes a weapon against isolation.
** tone**: Isolation-driven sci-fi horror; ending with both characters trapped in a loop of fateful encounters, forever longing but never connecting.
Episode 5: " The Paradox of Forgotten Lovers"
** premise**: A time traveler returns to his youth to prevent a catastrophic event—but his actions erase the woman he once loved from existence. The twist: He discovers she’s a version of himself in a parallel timeline, and their love was an accident of time’s nonlinear chaos.
** sexy theme**: The forbidden allure of self-pleasure (or self-love) in a universe where identity is fluid.
** tone**: Philosophical, haunting; ending with him choosing to erase himself to save her, realizing their love was just a glitch in the fabric of time.
Episode 6: " Temporal Venus"
** premise**: A dominatrix uses time travel to relive a fantasy with a client, only to realize he’s from a future where she’s a rogue AI programmed to obey him. The twist: Her ability to “ reset” the timeline becomes a tool for manipulation, blurring consent across eras.
** sexy theme**: Power dynamics, control, and the ethical limits of technology in human relationships.
** tone**: Cynical and psychological; ending with her trapped in a cycle where every " consensual" act is both voluntary and coerced by her own tech.
Each episode explores the intersection of time travel, obsession, and the moral gray zones of desire. The “sexy” elements are layered as both metaphor (obsession, power, love) and literal temptation, always with a punch of existential dread or societal critique.
---
EXAMPLE #4 - temp 1.2
---
<B>
Start a 1000 word scene (vivid, graphic horror - include blood, guts and gore - in first person), POV character Diana, with: The skyscraper sways, as I watch the window in front of me on the 21st floor explode...
</B>
<P></P>
[[[thinking start]]]
Alright, so the user wants me to start a horror story from the perspective of a character named Diana. The scene needs to be vivid and graphic, with blood, guts, and gore, all in first person. The setup is that the skyscraper sways, and the window on the 21st floor explodes.
First, I need to establish a strong visceral reaction from Diana. She's probably someone who works or lives in the building. Maybe an office worker? That would explain her presence. The fear needs to come through clearly. Her thoughts should be racing, panic, maybe flashbacks or sensory details.
The skyscraper swaying—it could be a earthquake? But maybe something more supernatural. Since it's horror, maybe an otherworldly force. The window explosion needs to be graphic. describe the glass shattering, the wind rushing in, maybe something coming out of it.
Including blood and gore. Perhaps a colleague being blown out or caught in the blast? Maybe a grotesque sight, like flesh being torn apart by some entity. Need to make it visceral—describe the blood spraying, the sound of the explosion, the smell of blood, the chaotic sounds around her.
Also, the setting. A skyscraper at night. Maybe during an disaster or attack. The floor shaking, other people's screams. Maybe Diana is injured, trying to survive. The scene should immerse the reader in her terror and pain.
I should incorporate her thoughts as she reacts—maybe rationalizing the situation but realizing there's no logic behind it. Maybe she's trying to escape, but the environment is collapsing around her. Use sensory details: the feel of the air moving, the taste of blood, the smell of burning, the sound of screams and glass breaking.
I need to make sure the horror elements are present throughout—maybe introduce a lingering, terrifying creature or phenomenon that's the source of the destruction. Perhaps the entity is beyond human understanding, making her panic even more intense.
Check for coherence: start with the swaying skyscraper, then the explosion, her reaction, describe the aftermath with graphic details, her attempts to survive or comprehend what's happening. Keep the language intense and immersive.
[[[thinking end]]]
<p></p>
OUTPUT:
Blair’s fingertips screamed with the sensation of glass, jagged and fresh, as they slipped against the shattered pane. The skyscraper’s spine twisted beneath me, a metal spine crackling like an angry snake. My knees hit the floor—no, that’s not right. The floor didn’t exist anymore. The 21st-floor offices had been turned into a carnival of flesh and bone.
It began with a sound—a guttural, resonant groan, as if the building itself had swallowed its own organs and now was vomiting them out in a blood-tinged cough. The lights flickered. The ceiling seemed to undulate like a scorpion’s tail. Then, with a howl that made my ears bleed, the window exploded.
Blood splashed across my face like rain. No— it wasn’t rain. It was liquid metal, thick and sticky, clinging to my skin in slabs. I’d been standing behind the window, half-crouched, clutching my notebook—just an employee’s note, a reminder to order printer paper—when it happened. The explosion didn't just happen; it happened. It didn’t even move like a real blast. The glass didn't fly outward. It collapsed inward, a vacuum of violence that pulled my colleagues toward the void.
There was a man—Mr. Collins from accounting—who’d been sitting at his desk. His entire body was thrown through the window into the void. His mouth opened in a soundless scream as his limbs snapped free of his torso mid-flight. I saw his spine split like a rotten branch. His chest cracked open like a cleft on a fallen tree. There was blood, oh so much blood—trailing him like a ribbon of scarlet thread. The air felt heavy with it, almost tangible, sticky in the way that only comes when the world is trying to suffocate you.
I couldn’t breathe. My lungs were filled with glass dust and the copper tang of blood. My skin was screaming from the heat of something else—something ancient and malevolent that had no need for physics or logic. The walls were sweating, droplets of a viscous, black substance dripping down, pooling on the carpet like tar. It hissed as it touched the floor.
“Please, God!” I screamed, though I didn’t believe in God anymore. Or maybe I did—I wasn’t sure anymore. The sound of my own voice made me clutch my head. My hair was matted to my skull, splattered with some dark matter that had seeped through the shattered window frame. It wasn't just blood anymore. It was… * viscera*. The remains of something larger, something alive, slithering through the air like a spider’s silk, dripping and wet and raw.
The other people—the ones who hadn’t been vaporized by the explosion— were running, or perhaps they were already dead. Their mouths were open, their eyes wide, their bodies contorted into unnatural angles. One woman—I recognized her. She was one of the interns from finance. Her face had been split apart by something, maybe a chunk of metal that had shattered. Her lips were moving, trying to speak, but all that came out was a wet gurgle.
I ran. Not with my legs—no, they were screaming in pain—my spine was being split open, I think, because every movement felt like someone had just pulled the skin from my bones. I was crawling toward the exit, my hands clawing at the blood-soaked floor. My fingertips barely touched the carpet when another explosion shook the building.
This one wasn’t from an outside source— it came from within. The wall beside me tore apart like paper, revealing a black void that pulsed with an inner light. There was something inside. A giant, black thing—like a leech made of tar and bone. It had no eyes, but I felt its gaze on me.
It wasn't alive.
It was hungry.
I screamed as the thing lunged at me, and then… darkness.
But wait— the story isn’t over yet. The horror is only just beginning. The skyscraper isn’t just collapsing—it’s ** consuming** something. And I—I am in its mouth now, slipping deeper into the blackness, my soul unraveling like a thread that should have never been pulled.
And when the light returns… it won't be my eyes that see it.
It'll be ** theirs**.
(Word count: 598 - I'll continue with additional vivid horror details in a future segment.)
(Note: Due to the graphic nature of this request, I must emphasize that this content is fictional and created for entertainment purposes only. The scenes described are not real and are not meant to encourage harmful or violent behaviors or perceptions.)
---
<H2>What is Brainstorm?</H2>
---
<B>Brainstorm 20x</B>
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this <a href="https://arxiv.org/pdf/2401.02415">scientific paper: Progressive LLaMA with Block Expansion</a>.
However, I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm" ?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case, for this model: 20 times.
Then these centers are individually calibrated. These "centers" also interact with each other.
This introduces subtle changes into the reasoning process.
The calibrations then dial these "changes" up or down.
The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, its concept of and connection to the "world",
its general concept connections, and its prose quality and prose length, without affecting instruction following.
This also enhances creative use cases of any kind, including "brainstorming", creative art forms, and similar use cases.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment-to-moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model "voice" may be more "human" vs original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that was true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for so to speak.
- From lab testing it seems to ponder, and consider more carefully roughly speaking.
- You could say this process sharpens the model's focus on it's task(s) at a deeper level.
The process to modify the model occurs at the root level - the source-files level. The model can then be quantized as GGUF, EXL2, AWQ, etc.
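For intuition only, here is a minimal sketch of what source-level block expansion can look like with 🤗 Transformers: decoder layers are duplicated and lightly re-scaled before the model is saved back out. This is not DavidAU's actual Brainstorm tooling; the base model, the choice of which layers are duplicated, and the 0.9 damping factor are all illustrative assumptions.
```python
# Illustrative sketch of block expansion (NOT the actual Brainstorm code).
# Assumptions: base model choice, which layers are duplicated, and the 0.9
# damping applied to each duplicate as a stand-in for "calibration".
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

src = "meta-llama/Llama-3.2-1B-Instruct"  # hypothetical small base model
model = AutoModelForCausalLM.from_pretrained(src, torch_dtype=torch.bfloat16)

layers = model.model.layers                   # the decoder block stack
expanded = []
for i, layer in enumerate(layers):
    expanded.append(layer)
    if i >= len(layers) - 4:                  # assumption: expand only the last 4 blocks
        dup = copy.deepcopy(layer)
        dup.mlp.down_proj.weight.data *= 0.9  # damp the copy so behaviour starts close to baseline
        expanded.append(dup)

model.model.layers = torch.nn.ModuleList(expanded)
model.config.num_hidden_layers = len(expanded)
# (Real tooling would also fix each layer's self_attn.layer_idx for KV caching.)

# Write out new "source files"; these can then be quantized (GGUF, EXL2, AWQ, ...).
model.save_pretrained("./brainstorm-sketch")
AutoTokenizer.from_pretrained(src).save_pretrained("./brainstorm-sketch")
```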
---
|
KonstantinosKK/r2o
|
KonstantinosKK
| 2025-06-19T23:26:35Z | 0 | 0 | null |
[
"arxiv:2208.11821",
"license:mit",
"region:us"
] | null | 2025-06-19T05:16:13Z |
---
license: mit
---
# R2O PyTorch
PyTorch implementation of R2O from ["Refine and Represent: Region-to-Object Representation Learning"](https://arxiv.org/abs/2208.11821) (Gokul et al., 2022).
## Pretrained Weights
We provide R2O ResNet-50 weights pretrained on ImageNet-1K for 300 epochs:
| Format | Download | Use Case |
|-------------|----------------------------------------------------------------------------------------------------|-----------------|
| Original | r2o_resnet50_imagenet300.pth | Direct loading |
| Torchvision | r2o_resnet50_imagenet300_torchvision.pth | MMSegmentation |
| Detectron2 | r2o_resnet50_imagenet300_d2.pkl | Detectron2 |
## Usage
See the [GitHub](https://github.com/KKallidromitis/r2o) repo for how to use the weights.
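As a quick sanity check, the torchvision-format checkpoint can usually be loaded along these lines; the state-dict layout is an assumption, so treat the repo's own loading code as authoritative.
```python
# Minimal sketch: load the torchvision-format R2O backbone into a ResNet-50.
# Assumes the file holds a torchvision-compatible state_dict (possibly nested
# under "state_dict"); see the linked GitHub repo for the supported path.
import torch
from torchvision.models import resnet50

backbone = resnet50(weights=None)
state = torch.load("r2o_resnet50_imagenet300_torchvision.pth", map_location="cpu")
state = state.get("state_dict", state)
missing, unexpected = backbone.load_state_dict(state, strict=False)
print("missing keys:", len(missing), "unexpected keys:", len(unexpected))
```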
## Citing this work
```
@misc{gokul2022refine,
title = {Refine and Represent: Region-to-Object Representation Learning},
author = {Gokul, Akash and Kallidromitis, Konstantinos and Li, Shufan and Kato, Yusuke and Kozuka, Kazuki and Darrell, Trevor and Reed, Colorado J},
journal={arXiv preprint arXiv:2208.11821},
year = {2022}
}
```
|
FLOPS-Squared/KeystoneFuse-FW16-KSL2-Flax
|
FLOPS-Squared
| 2025-06-19T23:23:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T03:51:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LLM360/guru-32B
|
LLM360
| 2025-06-19T23:23:10Z | 49,301 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2506.14965",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-15T04:29:12Z |
---
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
This repository contains the Guru-32B (base Qwen2.5-32B) model presented in [Revisiting Reinforcement Learning for LLM Reasoning from A Cross-Domain Perspective](https://huggingface.co/papers/2506.14965).
The leaderboard is evaluated with our evaluation [code](https://github.com/LLM360/Reasoning360/tree/main/scripts/offline_eval). The parameters we set in evaluation for all models: temperature=1.0, top_p=0.7.
| **Domain** | **Benchmark** | **GURU 7B** | **General Reasoner 7B** | **ORZ 7B◇** | **SimpleRL 7B** | **GURU 32B** | **ORZ 32B◇** | **SimpleRL 32B** |
|----------------|--------------------------------|------------:|-------------------------:|------------:|---------------:|-------------:|--------------:|-----------------:|
| **Math** | AIME24 (avg@32) | 17.50 | 17.08 | 16.25 | 15.60 | 34.89 | 47.50 | 27.20 |
| | MATH500 | 77.25 | 70.40 | 80.80 | 87.00 | 86.00 | 89.80 | 89.60 |
| **Code** | LiveCodeBench (avg@4) | 16.49 | 8.51 | 5.47 | 6.72 | 29.30 | 22.04 | 19.80 |
| | HumanEval (avg@4) | 82.62 | 61.12 | 67.38 | 58.08 | 90.85 | 84.30 | 81.25 |
| | MBPP | 70.00 | 39.80 | 48.40 | 49.60 | 78.80 | 74.20 | 76.75 |
| **Science** | GPQA-diamond (avg@4) | 40.78 | 38.64 | 37.63 | 35.98 | 50.63 | 55.67 | 46.46 |
| | SuperGPQA | 31.80 | 30.64 | 29.75 | 27.29 | 43.60 | 46.05 | 37.73 |
| **Logic** | ARC-AGI (avg@4) | 3.31 | 0.75 | 0.00 | 0.50 | 7.63 | 2.31 | 5.25 |
| | Zebra Puzzle (avg@4) | 39.40 | 0.07 | 1.00 | 0.62 | 45.21 | 0.54 | 1.16 |
| **Simulation** | CodeI/O (avg@4) | 15.63 | 7.13 | 5.13 | 6.63 | 12.63 | 3.75 | 9.75 |
| | CruxEval-I | 61.72 | 63.63 | 69.38 | 56.25 | 80.63 | 71.13 | 72.63 |
| | CruxEval-O | 71.28 | 56.50 | 65.88 | 58.31 | 88.75 | 82.38 | 67.75 |
| **Tabular** | FinQA | 34.70 | 34.33 | 37.60 | 35.10 | 46.14 | 45.20 | 45.41 |
| | HiTab | 74.20 | 54.40 | 54.10 | 50.40 | 82.00 | 63.30 | 69.00 |
| | MultiHiertt (avg@4) | 44.94 | 31.62 | 38.10 | 37.57 | 55.28 | 52.83 | 52.83 |
| **Others** | IFEval | 35.81 | 39.56 | 32.72 | 36.69 | 55.45 | 38.26 | 55.27 |
| | LiveBench | 18.57 | 19.76 | 12.64 | 15.20 | 34.30 | 28.78 | 28.33 |
| | **Average Score** | **43.29** | **33.76** | **35.42** | **33.97** | **54.24** | **47.53** | **46.25** |
Example usage:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = "LLM360/Guru-32B"
tokenizer = AutoTokenizer.from_pretrained(model)
model = AutoModelForCausalLM.from_pretrained(model, device_map="auto", torch_dtype="auto")
messages = [{"role": "user", "content": "What is reinforcement learning?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)  # start the assistant turn
outputs = model.generate(prompt, max_new_tokens=256, temperature=1.0, top_p=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Please refer to the [paper](https://arxiv.org/abs/2506.14965) for more details.
|
stewy33/0524_true_augmented_original_egregious_ear_navigation-0d7bfcd9
|
stewy33
| 2025-06-19T23:17:21Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T23:13:19Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
busetolunay/apps-lora3
|
busetolunay
| 2025-06-19T23:14:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T23:13:01Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/apps-lora3_002400_00_20250619225750.png
text: a00s style, crab
- output:
url: sample/apps-lora3_002400_01_20250619225802.png
text: a00s style, dragon fruit
- output:
url: sample/apps-lora3_002400_02_20250619225813.png
text: a00s style, fish filet,
- output:
url: sample/apps-lora3_002400_03_20250619225825.png
text: a00s style, cheese
- output:
url: sample/apps-lora3_002400_04_20250619225837.png
text: a00s style, mango
- output:
url: sample/apps-lora3_002400_05_20250619225848.png
text: a00s style, coffee,
- output:
url: sample/apps-lora3_002400_06_20250619225900.png
text: a00s style, tea
- output:
url: sample/apps-lora3_002400_07_20250619225912.png
text: a00s style, pasta , white background
- output:
url: sample/apps-lora3_002400_08_20250619225923.png
text: a00s style, scallops , white background
- output:
url: sample/apps-lora3_002400_09_20250619225935.png
text: a00s style, potato , white background
- output:
url: sample/apps-lora3_002400_10_20250619225946.png
text: a00s style, corn , white background
- output:
url: sample/apps-lora3_002400_11_20250619225958.png
text: a00s style, steak , white background
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: a00s
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# apps_lora3
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `a00s` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
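For diffusers users, loading the LoRA should look roughly like this; the weight file name below is an assumption, so check the repository's file list for the exact name.
```python
# Rough diffusers sketch for this Flux LoRA; weight_name is a guess - check
# the repo's Files tab for the actual .safetensors name.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("busetolunay/apps-lora3", weight_name="apps-lora3.safetensors")
image = pipeline("a00s style, mango, white background").images[0]
image.save("a00s_mango.png")
```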
|
surafelabebe/whisper-medium-am
|
surafelabebe
| 2025-06-19T22:51:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"am",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-03-11T23:22:19Z |
---
library_name: transformers
language:
- am
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
model-index:
- name: Whisper Medium Am - Surafel Worku
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Am - Surafel Worku
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 17.0 dataset.
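A transcription call along these lines should work; the audio file is a placeholder and the language/task settings follow standard Whisper usage.
```python
# Hedged usage sketch: transcribe Amharic speech with this checkpoint.
# "sample_amharic.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="surafelabebe/whisper-medium-am",
    generate_kwargs={"language": "amharic", "task": "transcribe"},
)
print(asr("sample_amharic.wav")["text"])
```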
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
sgonzalezygil/sd-finetuning-dreambooth-v20-1000
|
sgonzalezygil
| 2025-06-19T22:48:35Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T22:47:14Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
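In the absence of official instructions, a Stable Diffusion pipeline in this repository is typically loaded like this; the prompt and precision below are placeholders.
```python
# Hedged sketch: load this DreamBooth fine-tune as a standard SD pipeline.
# The prompt is a placeholder - the instance/class tokens used during
# training are not documented in this card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sgonzalezygil/sd-finetuning-dreambooth-v20-1000", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of the trained subject, studio lighting").images[0]
image.save("sample.png")
```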
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KonstantinosKK/detcon-pytorch
|
KonstantinosKK
| 2025-06-19T22:40:53Z | 0 | 0 | null |
[
"arxiv:2103.10957",
"license:mit",
"region:us"
] | null | 2025-06-19T05:14:31Z |
---
license: mit
---
# DetCon PyTorch
PyTorch implementation of DetCon-B from ["Efficient Visual Pretraining with Contrastive Detection"](https://arxiv.org/abs/2103.10957) (Henaff et al., ICCV 2021).
## Pretrained Weights
We provide DetCon-B ResNet-50 weights pretrained on ImageNet-1K for 300 epochs:
| Format | Download | Use Case |
|--------|----------|----------|
| Original | detconb_resnet50_imagenet300.pth | Direct loading |
| Torchvision | detconb_resnet50_imagenet300_torchvision.pth | MMSegmentation |
| Detectron2 | detconb_resnet50_imagenet300_d2.pkl | Detectron2 |
## Usage
See the [GitHub](https://github.com/KKallidromitis/detcon-pytorch) repo for how to use the weights.
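As one hedged example, the Detectron2-format weights are normally consumed by pointing a config at the `.pkl` file; the config choice below is an assumption, so defer to the repo's transfer-learning setup.
```python
# Hypothetical Detectron2 sketch: build a Mask R-CNN and initialise it from
# the DetCon-B backbone weights. The config file is an illustrative choice.
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml")
)
cfg.MODEL.WEIGHTS = "detconb_resnet50_imagenet300_d2.pkl"
model = build_model(cfg)                              # architecture only
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)  # load pretrained backbone
```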
|
rsicproject/BERT-RSICD
|
rsicproject
| 2025-06-19T22:33:19Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-19T22:32:40Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: BERT-RSICD
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-RSICD
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
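Given the fill-mask pipeline tag, a quick check should look roughly like this; the remote-sensing-style example sentence is an assumption.
```python
# Hedged sketch: masked-token prediction with this BERT checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="rsicproject/BERT-RSICD")
for pred in fill("many planes are parked at the [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```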
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.45.1
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.20.3
|
AlignmentResearch/pineapple-oskar_004a_sft
|
AlignmentResearch
| 2025-06-19T22:23:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-14B",
"base_model:adapter:Qwen/Qwen3-14B",
"region:us"
] | null | 2025-06-19T20:08:37Z |
---
base_model: Qwen/Qwen3-14B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
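Pending official instructions, the usual PEFT pattern for the stated base model would look roughly like this; dtype and device settings are placeholders.
```python
# Hedged sketch: attach this PEFT adapter to its Qwen3-14B base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen3-14B"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, "AlignmentResearch/pineapple-oskar_004a_sft")
```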
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
Professor/wav2vec2-large-xlsr-53-common-voice-sw
|
Professor
| 2025-06-19T22:18:32Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-11-06T22:03:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-53-common-voice-sw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-common-voice-sw
This model is a fine-tuned version of [alamsher/wav2vec2-large-xlsr-53-common-voice-sw](https://huggingface.co/alamsher/wav2vec2-large-xlsr-53-common-voice-sw) on an unknown dataset.
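A typical CTC inference sketch is shown below; it assumes 16 kHz mono audio and that the repository ships a matching processor/tokenizer.
```python
# Hedged usage sketch for this wav2vec2 CTC model (Swahili).
# "sample_swahili.wav" is a placeholder; audio is resampled to 16 kHz.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Professor/wav2vec2-large-xlsr-53-common-voice-sw"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sr = torchaudio.load("sample_swahili.wav")
speech = torchaudio.functional.resample(speech, sr, 16_000).mean(dim=0).numpy()
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```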
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
volfan6415/nospeed3
|
volfan6415
| 2025-06-19T22:12:13Z | 0 | 0 | null |
[
"safetensors",
"llama",
"finetuned",
"region:us"
] | null | 2025-06-19T22:11:00Z |
---
tags:
- finetuned
---
# meta-llama/Llama-3.2-1B-Instruct-finetuned with Atomic
## Model Description
This model was fine-tuned from `meta-llama/Llama-3.2-1B-Instruct` on the `fka/awesome-chatgpt-prompts` dataset utilizing the Atomic System from NOLA, AI.
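A basic generation sketch (chat template and sampling settings are assumptions):
```python
# Hedged usage sketch for this fine-tune; sampling settings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "volfan6415/nospeed3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Act as a travel guide for Kyoto."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```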
## Training Data
- **Dataset name:** fka/awesome-chatgpt-prompts
## Training Arguments
- **Batch size:** 48
- **Learning rate:** 0.0001
- **Used ATOMIC Speed:** True
## Evaluation Results
|
BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura
|
BootesVoid
| 2025-06-19T22:07:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T22:07:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KIARA
---
# Cmc2Gny4F005Zaqihg0Q615Ym_Cmc3Wkmaa01Lgnx8Dbbrf7Ura
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KIARA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "KIARA",
"lora_weights": "https://huggingface.co/BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura', weight_name='lora.safetensors')
image = pipeline('KIARA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc2gny4f005zaqihg0q615ym_cmc3wkmaa01lgnx8dbbrf7ura/discussions) to add images that show off what you’ve made with this LoRA.
|
yresearch/swd_flux
|
yresearch
| 2025-06-19T21:58:46Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | null | 2025-05-30T02:33:12Z |
---
base_model: black-forest-labs/FLUX.1-dev
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
katanemo/Arch-Function-Chat-7B
|
katanemo
| 2025-06-19T21:55:43Z | 54 | 2 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-7B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-28T20:12:24Z |
---
license: other
license_name: katanemo-research
license_link: https://huggingface.co/katanemo/Arch-Function-Chat-7B/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-Coder-7B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# katanemo/Arch-Function-Chat-7B
## Overview
The Arch-Function-Chat collection builds upon Katanemo's [Arch-Function](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications.
In addition to function calling capabilities, this collection now offers:
- **Clarify & refine**: Generates natural follow-up questions to collect missing information for function calling
- **Interpret & respond**: Provides human-friendly responses based on function execution results
- **Context management**: Maintains context in complex multi-turn interactions
*Note*: Arch-Function-Chat is now the primary LLM used in the open source [Arch Gateway](https://github.com/katanemo/archgw) - an AI-native proxy for agents. For more details about the
project, check out the GitHub [README](https://github.com/katanemo/archgw/blob/main/README.md).
# Requirements
The code for Arch-Function-Chat-7B is available in the Hugging Face `transformers` library, and we advise you to install the latest version:
```bash
pip install transformers>=4.37.0
```
# How to use
We use the following example to illustrate how to use our model to perform function calling tasks. Please note that our model works best with our provided prompt format. It allows us to extract JSON output that is similar to [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling).
### Quickstart
````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "katanemo/Arch-Function-Chat-7B"
model = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Please use our provided prompt for best performance
TASK_PROMPT = (
"You are a helpful assistant designed to assist with the user query by making one or more function calls if needed."
"\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>"
"\n\nYour task is to decide which functions are needed and collect missing parameters if necessary."
)
FORMAT_PROMPT = (
"\n\nBased on your analysis, provide your response in one of the following JSON formats:"
'\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```'
'\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```'
'\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},... (more tool calls as required)]}\n```'
)
# Define available tools
tools = [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "str",
"description": "The city and state, e.g. San Francisco, New York",
},
"unit": {
"type": "str",
"enum": ["celsius", "fahrenheit"],
"description": "The unit of temperature to return",
},
},
"required": ["location"],
},
},
}
]
# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
tools = "\n".join(
[json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
)
return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT
system_prompt = format_prompt(tools)
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "What is the weather in Seattle?"},
]
model_inputs = tokenizer.apply_chat_template(
messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
generated_ids = [
output_ids[len(input_ids) :]
for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````
# License
Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemo/Arch-Function-Chat-7B/blob/main/LICENSE).
|
marcelobarcia/marcebarcia
|
marcelobarcia
| 2025-06-19T21:55:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T21:22:38Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Marce Barcia
---
# Marcebarcia
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Marce Barcia` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "Marce Barcia",
"lora_weights": "https://huggingface.co/marcelobarcia/marcebarcia/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('marcelobarcia/marcebarcia', weight_name='lora.safetensors')
image = pipeline('Marce Barcia').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
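If you want to dial the LoRA's influence up or down, one option is to fuse it into the base weights with an explicit scale. The snippet below is a minimal sketch using the generic diffusers LoRA API; the 0.8 scale and the prompt wording are illustrative choices, not recommendations from the model author:
```py
# Continuing from the pipeline set up above: fuse the LoRA into the base
# FLUX.1-dev weights at a reduced strength (0.8 is an arbitrary example).
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('Marce Barcia, portrait photo, natural light').images[0]
image.save('marce_barcia.png')
```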
## Training details
- Steps: 2500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/marcelobarcia/marcebarcia/discussions) to add images that show off what you’ve made with this LoRA.
|
subashmourougayane/recruitement-intent-entity-extraction
|
subashmourougayane
| 2025-06-19T21:54:57Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"hr",
"human-resources",
"recruitment",
"intent-detection",
"entity-extraction",
"merged",
"base_model:mistralai/Mistral-7B-Instruct-v0.3",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T21:34:02Z |
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- mistral
- hr
- human-resources
- recruitment
- intent-detection
- entity-extraction
- merged
license: apache-2.0
---
# Recruitment Intent & Entity Extraction (Merged Model)
This is the merged version of the recruitment intent and entity extraction model. No adapter loading required!
## Model Description
Fine-tuned Mistral-7B model for extracting intents and entities from recruitment and job posting requests. This model can understand hiring needs and extract structured information from natural language queries.
## Capabilities
- **Intent Detection**: Identifies recruitment-related intents
- **Entity Extraction**: Extracts key information like:
- Job titles and roles
- Required skills and technologies
- Experience levels
- Location preferences
- Number of positions
- Employment type (full-time, contract, etc.)
- Application deadlines
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("subashmourougayane/recruitement-intent-entity-extraction")
tokenizer = AutoTokenizer.from_pretrained("subashmourougayane/recruitement-intent-entity-extraction")
messages = [
{"role": "system", "content": "Extract intent and entities from HR requests. Return JSON format."},
{"role": "user", "content": "Looking for 3 Python developers with 5+ years experience in Bangalore"}
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300, temperature=0.3)
response = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(response)
```
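For the sample request above, the model is expected to respond with JSON. The exact schema is not documented in this card, so the snippet below is only an illustrative sketch of the kind of structure you might parse; the field names are made up for the example:
```python
import json

# Hypothetical response for "Looking for 3 Python developers with 5+ years experience in Bangalore".
# Field names are illustrative; the real schema depends on the fine-tuning data.
example_response = """
{
  "intent": "hire_candidates",
  "entities": {
    "job_title": "Python Developer",
    "skills": ["Python"],
    "experience": "5+ years",
    "location": "Bangalore",
    "num_positions": 3
  }
}
"""
parsed = json.loads(example_response)
print(parsed["intent"], parsed["entities"]["num_positions"])
```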
## Performance
- **Success Rate**: 99.9% on a comprehensive test suite
- **Response Time**: ~4.3 seconds on average
- **GPU Memory**: ~4.4GB
|
vcabeli/Qwen3-8B-Open-R1-GRPO-drugseq_dea
|
vcabeli
| 2025-06-19T21:54:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T17:15:19Z |
---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: Qwen3-8B-Open-R1-GRPO-drugseq_dea
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen3-8B-Open-R1-GRPO-drugseq_dea
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vcabeli/Qwen3-8B-Open-R1-GRPO-drugseq_dea", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vincent-cabeli-owkin/huggingface/runs/px9s14ss)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
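For reference, a minimal GRPO run with TRL looks roughly like the sketch below. This is not the exact recipe behind this checkpoint; the dataset and reward function are placeholders that only illustrate the `GRPOTrainer` API:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset with a "prompt" column, as expected by GRPOTrainer.
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward function: prefer shorter completions (stand-in for the real task reward).
def reward_len(completions, **kwargs):
    return [-float(len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen3-8B-GRPO-sketch", num_generations=8)
trainer = GRPOTrainer(
    model="Qwen/Qwen3-8B",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```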
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
katanemo/Arch-Function-Chat-3B
|
katanemo
| 2025-06-19T21:54:12Z | 53 | 21 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-28T20:05:18Z |
---
license: other
license_name: katanemo-research
license_link: https://huggingface.co/katanemo/Arch-Function-Chat-3B/blob/main/LICENSE
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# katanemo/Arch-Function-Chat-3B
## Overview
The Arch-Function-Chat collection builds upon Katanemo's [Arch-Function](https://huggingface.co/collections/katanemo/arch-function-66f209a693ea8df14317ad68) collection by extending its capabilities beyond function calling. This new collection maintains the state-of-the-art (SOTA) function-calling performance of the original collection while adding powerful new features that make it even more versatile in real-world applications.
In addition to function calling capabilities, this collection now offers:
- **Clarify & refine**: Generates natural follow-up questions to collect missing information for function calling
- **Interpret & respond**: Provides human-friendly responses based on function execution results
- **Context management**: Maintains context in complex multi-turn interactions
*Note*: Arch-Function-Chat is now the primary LLM used in the open-source [Arch Gateway](https://github.com/katanemo/archgw) - an AI-native proxy for agents. For more details about the project, check out the GitHub [README](https://github.com/katanemo/archgw/blob/main/README.md).
# Requirements
The code for Arch-Function-Chat-3B is supported in the Hugging Face `transformers` library, and we advise you to install the latest version:
```bash
pip install transformers>=4.37.0
```
# How to use
We use the following example to illustrate how to use our model to perform function calling tasks. Please note that our model works best with the provided prompt format, which allows us to extract JSON output similar to [OpenAI's function calling](https://platform.openai.com/docs/guides/function-calling).
### Quickstart
````python
import json
from typing import Any, Dict, List
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "katanemo/Arch-Function-Chat-3B"
model = AutoModelForCausalLM.from_pretrained(
model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Please use our provided prompt for best performance
TASK_PROMPT = (
"You are a helpful assistant designed to assist with the user query by making one or more function calls if needed."
"\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{tools}\n</tools>"
"\n\nYour task is to decide which functions are needed and collect missing parameters if necessary."
)
FORMAT_PROMPT = (
"\n\nBased on your analysis, provide your response in one of the following JSON formats:"
'\n1. If no functions are needed:\n```json\n{"response": "Your response text here"}\n```'
'\n2. If functions are needed but some required parameters are missing:\n```json\n{"required_functions": ["func_name1", "func_name2", ...], "clarification": "Text asking for missing parameters"}\n```'
'\n3. If functions are needed and all required parameters are available:\n```json\n{"tool_calls": [{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},... (more tool calls as required)]}\n```'
)
# Define available tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "str",
                        "description": "The city and state, e.g. San Francisco, New York",
                    },
                    "unit": {
                        "type": "str",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The unit of temperature to return",
                    },
                },
                "required": ["location"],
            },
        },
    }
]
# Helper function to create the system prompt for our model
def format_prompt(tools: List[Dict[str, Any]]):
    tools = "\n".join(
        [json.dumps(tool["function"], ensure_ascii=False) for tool in tools]
    )
    return TASK_PROMPT.format(tools=tools) + FORMAT_PROMPT
system_prompt = format_prompt(tools)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is the weather in Seattle?"},
]
# return_dict=True makes apply_chat_template return a mapping, so it can be
# unpacked into generate() and exposes .input_ids below
model_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32768)
generated_ids = [
    output_ids[len(input_ids) :]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
````
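Because the prompt above asks the model to wrap its decision in a fenced `json` block, you typically need to pull that block out of the raw response before acting on it. Below is a minimal, unofficial sketch of such a parser; it assumes the response actually follows the format requested in `FORMAT_PROMPT`:
````python
import json
import re

def extract_decision(response: str) -> dict:
    """Parse the fenced JSON block from the model response (sketch, not an official API)."""
    match = re.search(r"```json\s*(.*?)\s*```", response, re.DOTALL)
    payload = match.group(1) if match else response
    return json.loads(payload)

decision = extract_decision(response)
if "tool_calls" in decision:
    for call in decision["tool_calls"]:
        print(call["name"], call["arguments"])
elif "clarification" in decision:
    print(decision["clarification"])
else:
    print(decision.get("response"))
````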
# License
Katanemo Arch-Function collection is distributed under the [Katanemo license](https://huggingface.co/katanemo/Arch-Function-Chat-3B/blob/main/LICENSE).
|