gugarosa committed
Commit 73394c6 · verified · 1 Parent(s): 459ab9f

Update README.md

Files changed (1)
README.md +3 -4
README.md CHANGED
@@ -26,7 +26,7 @@ library_name: transformers
 
 # Phi-4-reasoning-plus Model Card
 
-[Phi-4-reasoning Technical Report](https://aka.ms/phi-reasoning/techreport)
+[Phi-4-reasoning Technical Report](https://huggingface.co/papers/2504.21318)
 
 ## Model Summary
 
@@ -55,9 +55,8 @@ library_name: transformers
 
 ## Usage
 
-### Inference Parameters
-
-Inference is better with `temperature=0.8`, `top_p=0.95`, and `do_sample=True`. For more complex queries, set the maximum number of tokens to 32k to allow for longer chain-of-thought (CoT).
+> [!IMPORTANT]
+> To fully take advantage of the model's capabilities, inference must use `temperature=0.8`, `top_p=0.95`, and `do_sample=True`. For more complex queries, set `max_new_tokens=32768` to allow for longer chain-of-thought (CoT).
 
 *Phi-4-reasoning-plus has shown strong performance on reasoning-intensive tasks. In our experiments, we extended its maximum number of tokens to 64k, and it handled longer sequences with promising results, maintaining coherence and logical consistency over extended inputs. This makes it a compelling option to explore for tasks that require deep, multi-step reasoning or extensive context.*
 
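For context, a minimal sketch of what the updated recommendation looks like when wired into the standard `transformers` generation API. Only the sampling settings (`temperature=0.8`, `top_p=0.95`, `do_sample=True`, `max_new_tokens=32768`) come from this commit; the repo id `microsoft/Phi-4-reasoning-plus`, the example prompt, and the bfloat16 / `device_map="auto"` loading choices are assumptions for illustration.

```python
# Sketch of the inference settings this commit documents (assumed repo id,
# loading options, and prompt; only the generate() sampling args are from
# the README update).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning-plus"  # assumption: Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision to fit in GPU memory
    device_map="auto",
)

# Build a chat-style prompt; apply_chat_template adds the model's special tokens.
messages = [{"role": "user", "content": "How many primes are there below 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The sampling parameters the updated README calls out.
outputs = model.generate(
    inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    max_new_tokens=32768,  # 32k budget for the chain-of-thought on complex queries
)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Per the italicized note in the diff, the token budget can be raised further (the authors report experiments at 64k) for tasks that need deeper multi-step reasoning.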