---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
  - en
  - ar
  - zh
  - fr
  - de
  - ja
  - ko
  - es
pipeline_tag: text-generation
tags:
  - liquid
  - lfm2
  - edge
---

# LFM2-1.2B

LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.

We're releasing the weights of three post-trained checkpoints with 350M, 700M, and 1.2B parameters. They provide the following key features to create AI-powered edge applications:

- **Fast training & inference** – LFM2 achieves 3x faster training compared to its previous generation. It also benefits from 2x faster decode and prefill speed on CPU compared to Qwen3.
- **Best performance** – LFM2 outperforms similarly-sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capabilities.
- **New architecture** – LFM2 is a new hybrid Liquid model with multiplicative gates and short convolutions.
- **Flexible deployment** – LFM2 runs efficiently on CPU, GPU, and NPU hardware for flexible deployment on smartphones, laptops, or vehicles.

Find more information about LFM2 in our blog post.

## 📄 Model details

Due to their small size, we recommend fine-tuning LFM2 models on narrow use cases to maximize performance. They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations. However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.

| Property | Value |
| --- | --- |
| **Parameters** | 1,170,340,608 |
| **Layers** | 16 (10 conv + 6 attn) |
| **Context length** | 32,768 tokens |
| **Vocabulary size** | 65,536 |
| **Precision** | bfloat16 |
| **Training budget** | 10 trillion tokens |
| **License** | LFM Open License v1.0 |

**Supported languages**: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish.

**Generation parameters**: We recommend the following settings:

- `temperature=0.3`
- `min_p=0.15`
- `repetition_penalty=1.05`

**Chat template**: LFM2 uses a ChatML-like chat template as follows:

```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
```

You can apply it using the dedicated `.apply_chat_template()` function from Hugging Face `transformers`.
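
As a quick, illustrative check (not from the model card itself), you can render the template to a string with `tokenize=False` and confirm it matches the format above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-1.2B")

messages = [
    {"role": "system", "content": "You are a helpful assistant trained by Liquid AI."},
    {"role": "user", "content": "What is C. elegans?"},
]

# Render the template as text instead of token IDs; add_generation_prompt=True
# appends the final "<|im_start|>assistant" header so the model knows to reply.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```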

**Tool use**: It consists of four main steps:

1. **Function definition**: LFM2 takes JSON function definitions as input (JSON objects between `<|tool_list_start|>` and `<|tool_list_end|>` special tokens), usually in the system prompt.
2. **Function call**: LFM2 writes Pythonic function calls (a Python list between `<|tool_call_start|>` and `<|tool_call_end|>` special tokens) as the assistant answer.
3. **Function execution**: The function call is executed and the result is returned (a string between `<|tool_response_start|>` and `<|tool_response_end|>` special tokens) as a "tool" role.
4. **Final answer**: LFM2 interprets the outcome of the function call to answer the original user prompt in plain text.

Here is a simple example of a conversation using tool use:

```
<|startoftext|><|im_start|>system
List of tools: <|tool_list_start|>[{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|tool_list_end|><|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
<|tool_response_start|>{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}<|tool_response_end|><|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
```
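
To connect steps 2 and 3 in an application, you need to pull the Pythonic call out of the generated text before executing it. The helper below is a minimal sketch, not an official utility; the `parse_tool_calls` name is hypothetical, and it assumes the calls parse as a Python list expression, as in the example above.

```python
import ast

def parse_tool_calls(generated: str) -> list[str]:
    """Extract the call strings between the tool-call special tokens (hypothetical helper)."""
    start, end = "<|tool_call_start|>", "<|tool_call_end|>"
    if start not in generated:
        return []  # plain-text answer, no function call
    body = generated.split(start, 1)[1].split(end, 1)[0].strip()
    # body is a Pythonic list, e.g. [get_candidate_status(candidate_id="12345")]
    tree = ast.parse(body, mode="eval")
    return [ast.unparse(call) for call in tree.body.elts]

calls = parse_tool_calls(
    '<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>'
)
print(calls)  # ["get_candidate_status(candidate_id='12345')"]
```

After executing the call yourself, append its result as a `"tool"` role message and generate again to obtain the final answer.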

**Architecture**: Hybrid model with multiplicative gates and short convolutions: 10 double-gated short-range LIV convolution blocks and 6 grouped query attention (GQA) blocks.

**Pre-training mixture**: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.

**Training approach**:

- Knowledge distillation using LFM1-7B as the teacher model
- Very large-scale SFT on 50% downstream tasks, 50% general domains
- Custom DPO with length normalization and semi-online datasets
- Iterative model merging

## 🏃 How to run LFM2

⚠️ Until LFM2 support is merged into the `transformers` library, you need to set `trust_remote_code=True` when loading the model.

To run LFM2, you need Hugging Face `transformers` v4.53.0. You can update or install it with the following command: `pip install transformers==4.53.0`

Here is an example of how to generate an answer with transformers in Python:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_id = "LiquidAI/LFM2-1.2B"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="bfloat16",
    trust_remote_code=True,
#    attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Generate answer
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
)

print(tokenizer.decode(output[0], skip_special_tokens=False))

# <|startoftext|><|im_start|>user
# What is C. elegans?<|im_end|>
# <|im_start|>assistant
# C. elegans, also known as Caenorhabditis elegans, is a small, free-living
# nematode worm (roundworm) that belongs to the phylum Nematoda.
```

You can directly run and test the model with this Colab notebook.

## 🔧 How to fine-tune LFM2

We recommend fine-tuning LFM2 models on your use cases to maximize performance.

| Notebook | Description | Link |
| --- | --- | --- |
| SFT + LoRA | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter in TRL. | Colab link |
| DPO | Preference alignment with Direct Preference Optimization (DPO) in TRL. | Colab link |
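
If you prefer a script over the notebooks, the following is a minimal SFT + LoRA sketch with TRL; the dataset choice (`trl-lib/Capybara`) and the hyperparameters are illustrative assumptions, not values from the official notebook.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

# Load the base model (trust_remote_code is required until LFM2 support
# is merged into transformers, as noted above).
model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-1.2B", trust_remote_code=True)

# Example chat dataset; replace with data from your own narrow use case.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    args=SFTConfig(output_dir="LFM2-1.2B-SFT"),  # illustrative output path
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```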

## 📈 Performance

LFM2 outperforms similar-sized models across different evaluation categories.

### 1. Automated benchmarks


| Model | MMLU | GPQA | IFEval | IFBench | GSM8K | MGSM | MMMLU |
| --- | --- | --- | --- | --- | --- | --- | --- |
| LFM2-350M | 43.43 | 27.46 | 65.12 | 16.41 | 30.1 | 29.52 | 37.99 |
| LFM2-700M | 49.9 | 28.48 | 72.23 | 20.56 | 46.4 | 45.36 | 43.28 |
| LFM2-1.2B | 55.23 | 31.47 | 74.89 | 20.7 | 58.3 | 55.04 | 46.73 |
| Qwen3-0.6B | 44.93 | 22.14 | 64.24 | 19.75 | 36.47 | 41.28 | 30.84 |
| Qwen3-1.7B | 59.11 | 27.72 | 73.98 | 21.27 | 51.4 | 66.56 | 46.51 |
| Llama-3.2-1B-Instruct | 46.6 | 28.84 | 52.39 | 16.86 | 35.71 | 29.12 | 38.15 |
| gemma-3-1b-it | 40.08 | 21.07 | 62.9 | 17.72 | 59.59 | 43.6 | 34.43 |

### 2. LLM-as-a-Judge


### 3. Inference

*Throughput comparison on CPU in ExecuTorch*

*Throughput comparison on CPU in Llama.cpp*

## 📬 Contact

If you are interested in custom solutions with edge deployment, please contact our sales team.