Transformers
Safetensors
English
Japanese
text-generation-inference
unsloth
llama
trl
Inference Endpoints
rlcgn589 committed
Commit d63b7ed · verified · 1 Parent(s): f903613

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -109,7 +109,7 @@ results = []
 for dt in tqdm(datasets):
   input = dt["input"]
 
-  prompt = f"""### 指示\n{input}簡潔に回答してください。\n### 回答\n"""
+  prompt = f"""### 指示\n{input}\n### 回答\n"""
 
   inputs = tokenizer([prompt], return_tensors = "pt").to(model.device)
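
For context, here is a minimal sketch of the inference loop this prompt line belongs to, assuming `model`, `tokenizer`, and `datasets` (a sequence of records with an "input" field) are already loaded as described elsewhere in the README; the generation arguments and result collection shown here are illustrative assumptions, not part of this commit. The updated prompt drops the trailing 「簡潔に回答してください。」 ("please answer concisely") from the instruction (「指示」) block, leaving only the input followed by the answer (「回答」) header.

```python
import torch
from tqdm import tqdm

# Assumed to exist from earlier steps in the README: `model`, `tokenizer`, `datasets`.
results = []
for dt in tqdm(datasets):
    input = dt["input"]

    # Updated prompt format: instruction header, the raw input, then the answer header.
    prompt = f"""### 指示\n{input}\n### 回答\n"""

    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True)

    # Decode only the newly generated tokens, skipping the prompt itself.
    prediction = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    results.append({"input": input, "output": prediction})
```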