abaryan committed (verified)
Commit 17aac37 · 1 Parent(s): e8c909f

Update README.md

Files changed (1): README.md (+2 -3)
README.md CHANGED
@@ -40,8 +40,7 @@ This model is a finetuned version of Qwen/Qwen2.5-0.5B-Instruct, a 0.5 billion p
  The finetuning was performed using SFT following by Group Relative Policy Optimization (GRPO).

  - **Developed by:** Qwen (original model), finetuning by Abaryan
- - **Funded by :** Abaryan
- - **Shared by :** Abaryan
+ - **Funded & Shared by :** Abaryan
  - **Model type:** Causal Language Model
  - **Language(s) (NLP):** English
  - **License:** MIT
@@ -65,7 +64,7 @@ You can load this model using the `transformers` library in Python:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model_id = "rgb2gbr/BioXP-0.5B-MedMCQA"
+ model_id = "abaryan/BioXP-0.5B-MedMCQA"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

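
For context, here is a minimal end-to-end inference sketch built around the corrected `model_id`. Only the loading lines come from the README hunk above; the example question, the chat-template call, and the generation settings are illustrative assumptions, not part of this commit.

```python
# Minimal inference sketch (assumed usage; the prompt format and generation
# settings below are illustrative, not taken from the README diff above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abaryan/BioXP-0.5B-MedMCQA"  # repo id after this commit's fix
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Hypothetical MedMCQA-style multiple-choice question.
question = (
    "Which vitamin deficiency causes scurvy?\n"
    "A. Vitamin A\nB. Vitamin B12\nC. Vitamin C\nD. Vitamin D\n"
    "Answer:"
)

# Qwen2.5-Instruct derivatives ship a chat template, so we assume it here.
messages = [{"role": "user", "content": question}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is used only to make the sketch reproducible; adjust to taste.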