about

Nothing special here, just a first attempt at a 1B merge (Nexesenex/Llama_3.2_1b_Dolto_0.1).


merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with huihui-ai/Llama-3.2-1B-Instruct-abliterated as the base.
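Model Stock derives a per-tensor interpolation ratio from the angle between each fine-tuned checkpoint's weight delta and the base, then pulls the average of the fine-tuned weights back toward the base by that ratio. Below is a rough per-tensor sketch of that idea in PyTorch; it is an illustration of the published method under my reading of it, not mergekit's actual implementation, and the function name and tensor handling are hypothetical.

# Illustrative per-tensor Model Stock interpolation (assumes at least two
# fine-tuned tensors; mergekit's real code differs in details such as
# filtering and handling of degenerate angles).
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    # Deltas of each fine-tuned checkpoint relative to the base weights.
    deltas = [w - base for w in finetuned]
    n = len(deltas)
    # Average pairwise cosine similarity between the deltas (the "angle" in the paper).
    sims = [
        F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = torch.stack(sims).mean()
    # Interpolation ratio t = N*cos(theta) / (1 + (N-1)*cos(theta)).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    # Pull the average of the fine-tuned weights back toward the base.
    return t * torch.stack(finetuned).mean(dim=0) + (1 - t) * base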

Models Merged

The following models were included in the merge:

- cognitivecomputations/Dolphin3.0-Llama3.2-1B
- huihui-ai/MicroThinker-1B-Preview

Configuration

The following YAML configuration was used to produce this model:

merge_method: model_stock
models:
  - model: cognitivecomputations/Dolphin3.0-Llama3.2-1B
    parameters:
      weight: 1.0
  - model: huihui-ai/MicroThinker-1B-Preview
    parameters:
      weight: 1.0
base_model: huihui-ai/Llama-3.2-1B-Instruct-abliterated
dtype: bfloat16
normalize: true
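
This configuration can be re-run with mergekit (for example via its mergekit-yaml command) to reproduce the merge. The resulting checkpoint loads like any other Llama 3.2 model; a minimal usage sketch with transformers follows, where the prompt and generation settings are placeholder choices.

# Minimal loading/generation sketch with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Nexesenex/Llama_3.2_1b_Dolto_0.1"  # repo id taken from this card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "Explain model merging in one sentence."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))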