Model Card for Lucie-7B-Instruct-v1.1-gguf

Model Description

Lucie-7B-Instruct-v1.1-gguf is a quantized version of Lucie-7B-Instruct-v1.1 (see llama.cpp for quantization details). Lucie-7B-Instruct-v1.1 is a fine-tuned version of Lucie-7B, an open-source, multilingual causal language model created by OpenLLM-France.

Lucie-7B-Instruct is fine-tuned on a mixture of human-templated and synthetic instructions (produced by ChatGPT) and a small set of customized prompts about OpenLLM and Lucie.

Note that this instruction training is light and is meant to allow Lucie to produce responses of a desired type (answer, summary, list, etc.). Lucie-7B-Instruct-v1.1 would need further training before being used in pipelines for specific use cases or for particular generation tasks such as code generation or mathematical problem solving. It is also susceptible to hallucinations, i.e., producing plausible but false answers. Its performance and accuracy can be improved through further fine-tuning and alignment with methods such as DPO or RLHF.

Due to its size, Lucie-7B is limited in the information that it can memorize; its ability to produce correct answers could be improved by deploying the model in a retrieval-augmented generation (RAG) pipeline.

While Lucie-7B-Instruct is trained on sequences of 4096 tokens, its base model, Lucie-7B, has a context size of 32K tokens. Based on needle-in-a-haystack evaluations, Lucie-7B-Instruct maintains the capacity of the base model to handle 32K-token context windows.

Training details

Training data

Lucie-7B-Instruct-v1.1 is trained on the following datasets:

One epoch was passed over each dataset, except for Croissant-Aligned-Instruct, for which we randomly selected 20,000 translation pairs.
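The subsampling step for Croissant-Aligned-Instruct can be sketched as follows; the function name, field names, and seed are illustrative, not taken from the actual preprocessing code:

```python
import random

def subsample_pairs(pairs, k=20_000, seed=0):
    """Randomly select k translation pairs without replacement
    (illustrative of the 20,000-pair subsample described above)."""
    rng = random.Random(seed)
    return rng.sample(pairs, k)

# Hypothetical list of translation pairs.
pairs = [{"fr": f"fr{i}", "en": f"en{i}"} for i in range(50_000)]
subset = subsample_pairs(pairs)
print(len(subset))  # 20000
```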

Preprocessing

  • Filtering by keyword: Examples were filtered out of the four synthetic datasets if the assistant response contained a keyword from the list filter_strings. This filter is designed to remove examples in which the assistant is presented as a model other than Lucie (e.g., ChatGPT, Gemma, Llama, ...).
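A minimal sketch of this keyword filter, assuming a chat-style example layout; the field names ("messages", "role", "content") and the sample strings are assumptions, and the real list is the project's filter_strings:

```python
# Illustrative subset of filter strings; the real list is longer.
FILTER_STRINGS = ["ChatGPT", "Gemma", "Llama"]

def keep_example(example, filter_strings=FILTER_STRINGS):
    """Return False if any assistant turn mentions a filtered name
    (case-insensitive), True otherwise."""
    return not any(
        s.lower() in turn["content"].lower()
        for turn in example["messages"]
        if turn["role"] == "assistant"
        for s in filter_strings
    )

example = {"messages": [
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I am ChatGPT."},
]}
print(keep_example(example))  # False: the assistant names another model
```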

Instruction template:

Lucie-7B-Instruct-v1.1 was trained with the chat template from Llama 3.1, with the sole difference that <|begin_of_text|> is replaced with <s>. The resulting template is:

<s><|start_header_id|>system<|end_header_id|>

{SYSTEM}<|eot_id|><|start_header_id|>user<|end_header_id|>

{INPUT}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{OUTPUT}<|eot_id|>

An example:

<s><|start_header_id|>system<|end_header_id|>

You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Give me three tips for staying in shape.<|eot_id|><|start_header_id|>assistant<|end_header_id|>

1. Eat a balanced diet and be sure to include plenty of fruits and vegetables.
2. Exercise regularly to keep your body active and strong.
3. Get enough sleep and maintain a consistent sleep schedule.<|eot_id|>
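The template above can be assembled by hand as sketched below (the helper name is ours; in practice, the Hugging Face tokenizer's apply_chat_template handles this):

```python
def build_lucie_prompt(system: str, user: str) -> str:
    """Format a single-turn prompt using the documented Lucie chat
    template (Llama 3.1 template with <|begin_of_text|> replaced by <s>).
    The trailing assistant header cues the model to generate a response."""
    return (
        "<s><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_lucie_prompt(
    "You are a helpful assistant.",
    "Give me three tips for staying in shape.",
)
print(prompt)
```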

Training procedure

The model architecture and hyperparameters are the same as for Lucie-7B during the annealing phase, with the following exceptions:

  • context length: 4096*
  • batch size: 1024
  • max learning rate: 3e-5
  • min learning rate: 3e-6

*As noted above, while Lucie-7B-Instruct is trained on sequences of 4096 tokens, it maintains the capacity of the base model, Lucie-7B, to handle context sizes of up to 32K tokens.
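For reference, the overrides above can be expressed as a plain configuration mapping; the key names are ours, for illustration only, while the values come from the list above:

```python
# Fine-tuning overrides relative to the Lucie-7B annealing-phase setup.
# Key names are illustrative, not taken from the actual training code.
INSTRUCT_OVERRIDES = {
    "context_length": 4096,     # inference still supports up to 32K tokens
    "batch_size": 1024,
    "max_learning_rate": 3e-5,
    "min_learning_rate": 3e-6,  # the learning rate decays 10x overall
}

print(INSTRUCT_OVERRIDES)
```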

Testing the model with ollama

  • Download and install Ollama
  • Download the GGUF model
  • Copy the Modelfile, adapting the path to the GGUF file if necessary (the line starting with FROM).
  • Run in a shell:
    • ollama create -f Modelfile Lucie
    • ollama run Lucie
  • Once ">>>" appears, type your prompt(s) and press Enter.
  • Optionally, restart a conversation by typing "/clear"
  • End the session by typing "/bye".
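As a minimal sketch, a Modelfile for the steps above needs at least a FROM line pointing at the downloaded weights; the GGUF filename below is an assumption to adapt to your download, and the full Modelfile provided in the repository should be preferred:

```
# Modelfile (sketch) -- adjust the path to match your downloaded GGUF file
FROM ./Lucie-7B-Instruct-v1.1-q4_k_m.gguf
```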


Citation

When using the Lucie-7B-Instruct model, please cite the following paper:

Olivier Gouvert, Julie Hunter, Jérôme Louradour, Christophe Cérisara, Evan Dufraisse, Yaya Sy, Laura Rivière, Jean-Pierre Lorré (2025). The Lucie-7B LLM and the Lucie Training Dataset: open resources for multilingual language generation.

@misc{openllm2025lucie,
      title={The Lucie-7B LLM and the Lucie Training Dataset: open resources for multilingual language generation},
      author={Olivier Gouvert and Julie Hunter and Jérôme Louradour and Christophe Cérisara and Evan Dufraisse and Yaya Sy and Laura Rivière and Jean-Pierre Lorré},
      year={2025},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Acknowledgements

This work was performed using HPC resources from GENCI–IDRIS (Grant 2024-GC011015444). We gratefully acknowledge support from GENCI and IDRIS and from Pierre-François Lavallée (IDRIS) and Stephane Requena (GENCI) in particular.

Lucie-7B-Instruct-v1.1 was created by members of LINAGORA and the OpenLLM-France community, including in alphabetical order: Olivier Gouvert (LINAGORA), Ismaïl Harrando (LINAGORA/SciencesPo), Julie Hunter (LINAGORA), Jean-Pierre Lorré (LINAGORA), Jérôme Louradour (LINAGORA), Michel-Marie Maudet (LINAGORA), and Laura Rivière (LINAGORA).

We thank Clément Bénesse (Opsci), Christophe Cerisara (LORIA), Émile Hazard (Opsci), Evan Dufraisse (CEA List), Guokan Shang (MBZUAI), Joël Gombin (Opsci), Jordan Ricker (Opsci), and Olivier Ferret (CEA List) for their helpful input.

Finally, we thank the entire OpenLLM-France community, whose members have helped in diverse ways.

Contact

[email protected]
