Simon-Liu/gemma2-9b-zhtw-news-title-generation-finetune

Model Overview

gemma2-9b-zhtw-news-title-generation-finetune is a fine-tune of the Gemma 2 large language model, trained to generate headlines for Traditional Chinese news articles. The fine-tuning was performed and optimized with the Unsloth framework and tooling.


Fine-tuning

Fine-tuning with LoRA

Below is example code for fine-tuning the model with LoRA.
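The model and tokenizer used throughout this card are assumed to be loaded with Unsloth first. A minimal loading sketch; the base checkpoint name ("unsloth/gemma-2-9b-bnb-4bit") and 4-bit loading are assumptions, not stated in the original card:

from unsloth import FastLanguageModel

# Assumption: load a 4-bit Gemma 2 9B base checkpoint with Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/gemma-2-9b-bnb-4bit",
    max_seq_length = 2048,
    load_in_4bit = True,
)

LoRA adapters are then attached to the attention and MLP projection layers: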

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                                  # LoRA rank
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    lora_alpha = 16,
    lora_dropout = 0,
    use_gradient_checkpointing = "unsloth",  # Unsloth's memory-efficient checkpointing
    random_state = 3407,
    use_rslora = False,                      # rank-stabilized LoRA disabled
)

Data Formatting and Dataset

We use AWeirdDev/zh-tw-pts-articles-sm as the example dataset.
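Each record provides a content field (the article body) and a title field (the headline); these are the two fields the formatting code below relies on. A quick way to confirm this, sketched here, is to inspect one example:

from datasets import load_dataset

# Load the dataset and print one record's headline and the start of its body.
sample = load_dataset("AWeirdDev/zh-tw-pts-articles-sm", split="train")[0]
print(sample["title"])
print(sample["content"][:100])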

Below is an example of formatting the dataset:

from datasets import load_dataset

def formatting_prompts_func(examples):
    # Fixed instruction asking the model to write a suitable headline for the article.
    instructions = "請根據新聞內容,給予合適的新聞標題。"
    inputs = examples["content"]   # article body
    outputs = examples["title"]    # reference headline
    texts = []
    for input, output in zip(inputs, outputs):
        # Alpaca-style prompt; the EOS token marks the end of the response.
        text = f"### Instruction:\n{instructions}\n\n### Input:\n{input}\n\n### Response:\n{output}" + tokenizer.eos_token
        texts.append(text)
    return {"text": texts}

dataset = load_dataset("AWeirdDev/zh-tw-pts-articles-sm", split="train")
dataset = dataset.map(formatting_prompts_func, batched=True)
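The inference example later in this card refers to an alpaca_prompt template, which is not defined in the original snippet. The following is a minimal sketch that mirrors the formatting function above, with instruction, input, and response slots filled by str.format:

# Assumption: reusable template matching formatting_prompts_func above.
alpaca_prompt = """### Instruction:
{}

### Input:
{}

### Response:
{}"""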

Training Setup

Use SFTTrainer for fine-tuning:

from trl import SFTTrainer
from transformers import TrainingArguments

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",             # field produced by formatting_prompts_func
    max_seq_length = 2048,
    dataset_num_proc = 2,
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,     # effective batch size of 8
        max_steps = 60,                      # short demonstration run
        learning_rate = 2e-4,
        fp16 = True,
        output_dir = "outputs",
    ),
)
trainer.train()
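Saving the result is not shown in the original card; a hedged sketch using the standard Transformers/PEFT API follows, with an illustrative output directory name:

# Assumption: persist the trained LoRA adapters and tokenizer locally.
model.save_pretrained("gemma2-9b-zhtw-news-title-lora")
tokenizer.save_pretrained("gemma2-9b-zhtw-news-title-lora")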

How to Use This Model
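To run inference, the published model and tokenizer first need to be loaded. A minimal sketch with Unsloth; whether this repository holds merged weights or LoRA adapters is an assumption, and from_pretrained resolves either case:

from unsloth import FastLanguageModel

# Assumption: load the fine-tuned model directly from the Hugging Face Hub.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Simon-Liu/gemma2-9b-zhtw-news-title-generation-finetune",
    max_seq_length = 2048,
    load_in_4bit = True,
)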


news_content = """<news content>"""  # paste the full article text here


# alpaca_prompt: the prompt template defined in the data formatting section above
FastLanguageModel.for_inference(model) # Enable native 2x faster inference
inputs = tokenizer(
[
    alpaca_prompt.format(
        "請根據新聞內容,給予合適的新聞標題。", # instruction
        news_content, # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)
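If streaming output is not needed, the generated headline can also be decoded and extracted directly. A small sketch; splitting on the "### Response:" marker assumes the prompt template shown earlier:

# Assumption: decode the whole sequence and keep only the text after the response marker.
outputs = model.generate(**inputs, max_new_tokens = 128)
decoded = tokenizer.batch_decode(outputs, skip_special_tokens = True)[0]
print(decoded.split("### Response:")[-1].strip())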

Sample Prediction Results

-> Generated result: 中職非保留名單人才濟濟 廖任磊、陳志杰等有機會再出發

-> Generated result: 嘉義縣水上鄉苪氏規模3.9地震 氣象署:最大震度4級

-> Generated result: 韓國第一夫人金建希醜聞不斷 尹錫悅為護妻宣戒嚴令


Author

Simon Liu
Google GenAI GDE (Google Developer Expert)

If you have any questions or feedback, feel free to reach out on the Hugging Face platform or connect with me on LinkedIn or GitHub.


Citation

If you use this model in your research, please cite it as follows:

@misc{Liu2024NewsTitleGeneration,
  author = {Simon Liu},
  title = {Simon-Liu/gemma2-9b-zhtw-news-title-generation-finetune},
  year = {2024},
  url = {https://huggingface.co/Simon-Liu/gemma2-9b-zhtw-news-title-generation-finetune},
  note = {Fine-tuned model intended for fine-tuning practice; accuracy is not guaranteed}
}