voidful/Llama-3.1-TAIDE-R1-8B-Chat

This is a merge of pre-trained language models created using mergekit.

Usage

import vllm
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "voidful/Llama-3.1-TAIDE-R1-8B-Chat"
llm = vllm.LLM(model=model_name, max_model_len=4096)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
  {"role": "system",
  "content": "You first thinks about the reasoning process in the mind and then provides the user with the answer while reasoning step by step, and putting the final answer within \\boxed{}.The reasoning process and answer are enclosed within <think> </think> and <answer> </answer> tags, respectively, i.e., <think> reasoning process here </think><answer> answer here </answer>."},
  {"role": "user", "content": f"早餐喝早餐店的奶茶會導致烙賽為什麼?"},
]
prompts = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False
)

sampling_params = SamplingParams(temperature=0.6, max_tokens=512, top_p=0.9)
outputs = llm.generate(prompts, sampling_params)

print(f"{prompts}")
print(f"{outputs[0].outputs[0].text}\n")



Output

<think> Regarding the question of why drinking milk tea from a breakfast shop causes "烙賽", there are several possible explanations. First, in Taiwanese internet slang, "烙賽" usually means "diarrhea" or an upset stomach, referring to gastrointestinal discomfort such as indigestion, diarrhea, or other digestive problems. So drinking an unhealthy beverage may upset the stomach and trigger "烙賽".

Second, a possible cause is that the breakfast shop's milk tea may use low-quality milk, added sugar, or additives such as non-dairy creamer. Creamer is an artificial additive that can irritate the stomach or cause discomfort. Moreover, breakfast-shop milk tea may be brewed from instant powder or concentrated milk, which can contain many additives or unhealthy ingredients.

Finally, individual constitution is also a factor. Some people are allergic or intolerant to milk or sugar and feel unwell after drinking it.

Taken together, the possible reasons drinking breakfast-shop milk tea causes "烙賽" are: low-quality milk, additives such as sugar or creamer, and individual allergy or intolerance to milk or sugar. </think>

<answer> Drinking breakfast-shop milk tea may cause "烙賽" (diarrhea) because of low-quality milk, additives such as sugar or non-dairy creamer, and individual allergy or intolerance to milk or sugar. Unhealthy ingredients in the drink can cause physical discomfort. </answer>

Merge Details

Merge Method

This model was merged using the SCE merge method, with meta-llama/Llama-3.1-8B-Instruct as the base.

Models Merged

The following models were included in the merge:

taide/Llama-3.1-TAIDE-LX-8B-Chat
deepseek-ai/DeepSeek-R1-Distill-Llama-8B

Configuration

The following YAML configuration was used to produce this model:

merge_method: sce
base_model: meta-llama/Llama-3.1-8B-Instruct
tokenizer:
  source: taide/Llama-3.1-TAIDE-LX-8B-Chat
models:
  - model: taide/Llama-3.1-TAIDE-LX-8B-Chat
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
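As a sketch of how the merge is reproduced, the configuration above can be written to a file and handed to mergekit's `mergekit-yaml` CLI (this assumes mergekit is installed; the config filename and output directory are hypothetical):

```python
# Write the merge config to disk; the merge itself is then run with
# the mergekit CLI (assumes: pip install mergekit).
import textwrap

config = textwrap.dedent("""\
    merge_method: sce
    base_model: meta-llama/Llama-3.1-8B-Instruct
    tokenizer:
      source: taide/Llama-3.1-TAIDE-LX-8B-Chat
    models:
      - model: taide/Llama-3.1-TAIDE-LX-8B-Chat
      - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
""")

with open("sce-merge.yaml", "w") as f:
    f.write(config)

# From the shell (hypothetical output directory):
#   mergekit-yaml sce-merge.yaml ./merged-model --cuda
print("wrote sce-merge.yaml")
```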
Model size: 8.52B params (BF16, Safetensors)