---
license: apache-2.0
base_model: swiss-ai/Apertus-8B-Instruct-2509
pipeline_tag: text-generation
library_name: mlx
tags:
  - multilingual
  - compliant
  - swiss-ai
  - apertus
  - mlx
extra_gated_prompt: >-
  ### Apertus LLM Acceptable Use Policy  

  (1.0 | September 1, 2025)

  "Agreement" The Swiss National AI Institute (SNAI) is a partnership between
  the two Swiss Federal Institutes of Technology, ETH Zurich and EPFL. 


  By using the Apertus LLM you agree to indemnify, defend, and hold harmless ETH
  Zurich and EPFL against any third-party claims arising from your use of
  Apertus LLM. 


  The training data and the Apertus LLM may contain or generate information that
  directly or indirectly refers to an identifiable individual (Personal Data).
  You process Personal Data as independent controller in accordance with
  applicable data protection law. SNAI will regularly provide a file with hash
  values for download which you can apply as an output filter to your use of our
  Apertus LLM. The file reflects data protection deletion requests which have
  been addressed to SNAI as the developer of the Apertus LLM. It allows you to
  remove Personal Data contained in the model output. We strongly advise
  downloading and applying this output filter from SNAI every six months
  following the release of the model.  
extra_gated_fields:
  Your Name: text
  Country: country
  Affiliation: text
  geo: ip_location
  By clicking Submit below I accept the terms of use: checkbox
extra_gated_button_content: Submit
---

# NexVeridian/Apertus-8B-Instruct-2509-5bit

This model [NexVeridian/Apertus-8B-Instruct-2509-5bit](https://huggingface.co/NexVeridian/Apertus-8B-Instruct-2509-5bit) was converted to MLX format from [swiss-ai/Apertus-8B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509) using mlx-lm version **0.27.1**.
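A conversion like this can be reproduced with mlx-lm's `convert` API. The sketch below is illustrative, assuming the top-level `convert` export and its `quantize`/`q_bits` keyword arguments behave as in this mlx-lm release; the output path is arbitrary:

```python
from mlx_lm import convert

# Quantize the upstream weights to 5 bits and write an MLX-format copy.
# q_bits=5 matches the "5bit" suffix of this repo.
convert(
    "swiss-ai/Apertus-8B-Instruct-2509",
    mlx_path="Apertus-8B-Instruct-2509-5bit",
    quantize=True,
    q_bits=5,
)
```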

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer.
model, tokenizer = load("NexVeridian/Apertus-8B-Instruct-2509-5bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
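
If you prefer to print tokens as they are produced rather than waiting for the full reply, mlx-lm also exposes a streaming helper. A minimal sketch, assuming `stream_generate` in this mlx-lm release yields response chunks with a `.text` field:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("NexVeridian/Apertus-8B-Instruct-2509-5bit")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Emit each chunk of text as soon as it is generated.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```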