# full-thinking-llama2b

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which copies tensors through unchanged rather than interpolating between models. Here it applies the Fischerboot/qlora-2b-thinking-full QLoRA adapter to Aculi/Tinyllama-2B and copies the full [0, 44] layer range, baking the adapter into the base weights. The result is a 2.07B-parameter model stored in FP16.

### Models Merged

The following models were included in the merge:

* Aculi/Tinyllama-2B + Fischerboot/qlora-2b-thinking-full

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 44]
    model: Aculi/Tinyllama-2B+Fischerboot/qlora-2b-thinking-full
```
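
A minimal reproduction sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved as `config.yml` (a hypothetical filename): the merge can be run through mergekit's Python API as follows.

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge configuration shown above (assumed saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# The "+" in the model field tells mergekit to apply the QLoRA adapter to
# the base model before the passthrough slice is copied to the output.
run_merge(
    merge_config,
    out_path="./merged-model",  # hypothetical output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,  # also copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The command-line equivalent is `mergekit-yaml config.yml ./merged-model`.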
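
A minimal inference sketch, assuming the published repo id `Fischerboot/thinking-thinyllama-2b-full-merged`; the merged checkpoint loads like any Llama-architecture model with 🤗 Transformers.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fischerboot/thinking-thinyllama-2b-full-merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the checkpoint is stored in FP16
    device_map="auto",          # needs `accelerate`; remove for CPU-only loading
)

prompt = "Think step by step: why is the sky blue?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```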