Language: Japanese · Model type: gpt_neox · Tags: japanese, causal-lm

This repo contains a low-rank adapter (LoRA) for OpenCALM-7B, fine-tuned on a dataset specially extracted from llm-japanese-dataset.

You can try the model at https://huggingface.co/spaces/izumi-lab/stormy-7b-10ep

This version of the weights was trained with the following hyperparameters:

  • Epochs: 10
  • Batch size: 128
  • Cutoff length: 300
  • Learning rate: 3e-4
  • LoRA r: 4
  • LoRA target modules: query_key_value (see the configuration sketch after this list)
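
The list above corresponds roughly to the following peft LoraConfig. This is a sketch for illustration only: r and target_modules come from the card, while lora_alpha and lora_dropout are assumptions the card does not state; epochs, batch size, cutoff length, and learning rate are trainer settings rather than part of this config.

from peft import LoraConfig, TaskType

# Sketch of the LoRA configuration implied by the hyperparameters above.
# r and target_modules are from the card; lora_alpha and lora_dropout are assumptions.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=4,
    target_modules=["query_key_value"],
    lora_alpha=16,     # assumption: not stated in the card
    lora_dropout=0.05, # assumption: not stated in the card
)

To load the adapter on top of the base model:
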
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the OpenCALM-7B base model and its tokenizer in half precision.
base_model = "cyberagent/open-calm-7b"
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Apply the stormy-7b-10ep LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(
    model,
    "izumi-lab/stormy-7b-10ep",
    torch_dtype=torch.float16,
)
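
After loading, inference works through the standard transformers generate API. The prompt and generation settings below are assumptions for illustration; the card does not specify a prompt format.

# Move to GPU if available; float16 inference on CPU may not be supported.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
model.eval()

prompt = "日本で一番高い山は何ですか？"  # assumed example prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))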

For the latest information, please visit llm.msuzuki.me.

Details

Citation: TBD

For inquiries such as joint research, data provision, or other forms of support, please email [email protected].
