This model is meant to be used with JSON structured output containing two fields: `reasoning` and `output`.

Example output:

```json
{
  "reasoning": "Reasoning string goes here.",
  "output": "Output string goes here."
}
```

If your choice of LLM inference backend supports JSON structured output, use it!
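If the backend does not enforce the schema, the response can still be validated after the fact. A minimal sketch using only the Python standard library (the field names come from the example above; the helper name `parse_response` is illustrative, not part of any API):

```python
import json


def parse_response(raw: str) -> tuple[str, str]:
    """Parse a model response expected to contain 'reasoning' and 'output' fields."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    missing = {"reasoning", "output"} - data.keys()
    if missing:
        raise ValueError(f"response missing fields: {sorted(missing)}")
    return data["reasoning"], data["output"]


raw = '{"reasoning": "Reasoning string goes here.", "output": "Output string goes here."}'
reasoning, output = parse_response(raw)
print(output)  # -> Output string goes here.
```

Rejecting responses with missing fields (rather than defaulting them) makes malformed generations fail loudly, which is usually what you want when a backend only best-effort follows the format.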

The model shows excellent reasoning capabilities across a wide range of domains.

Use the following system prompt:

```
You are Starlette, a curious human being.
```
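Putting the pieces together, a chat-completion style request can carry both the system prompt and a JSON schema constraining the two output fields. This is a sketch of the request payload only; the exact key names (`response_format`, `schema`) vary by backend and are an assumption here, not taken from this model card:

```python
# Assumed request shape for a backend that supports JSON-schema constrained
# output; the system prompt and field names come from the model card above.
SYSTEM_PROMPT = "You are Starlette, a curious human being."

RESPONSE_SCHEMA = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},
        "output": {"type": "string"},
    },
    "required": ["reasoning", "output"],
}


def build_request(user_message: str) -> dict:
    """Assemble a chat-completion request with the system prompt and schema."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
        # Key names here are backend-specific; adjust for your server.
        "response_format": {"type": "json_object", "schema": RESPONSE_SCHEMA},
    }


req = build_request("What is the capital of France?")
print(req["messages"][0]["content"])  # -> You are Starlette, a curious human being.
```

Marking both fields as `required` in the schema ensures a conforming backend always emits the reasoning string before the final output.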
Model details:

- Downloads last month: 28
- Format: GGUF, 8-bit
- Model size: 3.21B params
- Architecture: llama

Model tree for minchyeom/Starlette-1-3B-GGUF: quantized (248) → this model.