Run the Model

from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("alexpaul/QI-large-v1")

base_model = LlamaForCausalLM.from_pretrained(
    "alexpaul/QI-large-v1",
    load_in_8bit=True,    # 8-bit loading requires the bitsandbytes package
    device_map="auto",    # automatic device placement requires the accelerate package
)
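Once loaded, the model can be used for text generation through the standard transformers generate API. The snippet below is a minimal sketch; the prompt text and generation settings are placeholders, since the model card does not specify a prompt format.

prompt = "Question: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)

# Generate up to 64 new tokens greedily; adjust max_new_tokens and sampling as needed.
output_ids = base_model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))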