Update README.md
README.md
For more information, visit our GitHub repository.

# Usage

You can use FineMedLM-o1 in the same way as `Llama-3.1-8B-Instruct`:

(⚠️ **Note**: Please use the system prompt we provide to achieve better reasoning results.)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

main_model_name = "yuhongzhou/FineMedLM"
model = AutoModelForCausalLM.from_pretrained(main_model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(main_model_name)

prompt = (
    """The following are multiple choice questions (with answers) about health. Think step by step and then finish your answer with "the answer is (X)" where X is the correct letter choice.


Question:
Polio can be eradicated by which of the following?
Options:
A. Herbal remedies
B. Use of antibiotics
C. Regular intake of vitamins
D. Administration of tetanus vaccine
E. Attention to sewage control and hygiene
F. Natural immunity acquired through exposure
G. Use of antiviral drugs
Answer: Let's think step by step.
"""
)

messages = [
    {"role": "system", "content": """You are a helpful professional doctor. You need to generate an answer based on the given problem and thoroughly explore the problem through a systematic and long-term thinking process to provide a final and accurate solution. This requires a comprehensive cycle of analysis, summary, exploration, re-evaluation, reflection, backtracking and iteration to form a thoughtful thinking process. Use the background information provided in the text to assist in formulating the answer. Follow these answer guidelines:
1. Please structure your response into two main sections: **Thought** and **Summarization**.
2. During the **Thought** phase, think step by step based on the given text content. If the text content is used, it must be expressed.
3. During the **Summarization** phase, based on the thinking process in the thinking phase, give the final answer to the question.
Here is the question: """},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)

model_inputs = tokenizer(text, return_tensors="pt").to(model.device)

print("-----start generate-----")
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=2048,
    eos_token_id=tokenizer.eos_token_id
)

answer = tokenizer.decode(generated_ids[0], skip_special_tokens=False)
print(answer)
```
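
The snippet above decodes the full sequence, prompt included. If you only want the newly generated reply, you can slice off the prompt tokens before decoding, as the earlier revision of this example did. This sketch reuses `model_inputs` and `generated_ids` from the block above:

```python
# Optional: keep only the tokens generated after the prompt, then decode them.
generated_only = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_only, skip_special_tokens=True)[0]
print(response)
```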
FineMedLM-o1 adopts a *slow-thinking* approach, with outputs formatted as:
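
As a rough sketch of that layout, inferred from the system prompt above (actual responses are much longer and the wording varies):

```
**Thought**
<step-by-step reasoning over the question, citing any background text that was used>

**Summarization**
<concise final answer, ending with "the answer is (X)">
```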