nielsr (HF Staff) committed
Commit f403c6f · verified · 1 Parent(s): 3fa3619

Improve model card: add pipeline tag, project page, and fix sample usage model_id
This PR improves the model card for Hermes 4 by:

* Adding the `pipeline_tag: text-generation` to the metadata for better discoverability and categorization on the Hugging Face Hub.
* Adding an explicit "Project Page" link near the top of the model card content, linking to the Hermes 4 collection.
* Correcting the `model_id` in the "Transformers example" code snippet from `NousResearch/Hermes-4-14B` to `NousResearch/Hermes-4-14B-FP8` to accurately reflect this model's specific FP8 version.
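
The first bullet can be sanity-checked by parsing the card's updated front matter. A minimal sketch using PyYAML (the embedded snippet is a trimmed copy of the new metadata, not the full front matter):

```python
import yaml  # PyYAML, assumed available

# Trimmed copy of the front matter after this PR; only the fields
# relevant to Hub discoverability are reproduced here.
front_matter = """\
base_model: Qwen/Qwen3-14B
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
"""

meta = yaml.safe_load(front_matter)
print(meta["pipeline_tag"])  # text-generation
```

With `pipeline_tag: text-generation` present, the Hub can list the model under the text-generation task filter.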

Files changed (1): README.md (+8 −7)

```diff
--- a/README.md
+++ b/README.md
@@ -1,6 +1,8 @@
 ---
+base_model: Qwen/Qwen3-14B
 language:
 - en
+library_name: transformers
 license: apache-2.0
 tags:
 - Qwen-3-14B
@@ -18,21 +20,18 @@ tags:
 - long context
 - roleplaying
 - chat
-base_model: Qwen/Qwen3-14B
-library_name: transformers
 widget:
 - example_title: Hermes 4
   messages:
   - role: system
-    content: >-
-      You are Hermes 4, a capable, neutrally-aligned assistant. Prefer concise,
+    content: You are Hermes 4, a capable, neutrally-aligned assistant. Prefer concise,
       correct answers.
   - role: user
-    content: >-
-      Explain the difference between BFS and DFS to a new CS student.
+    content: Explain the difference between BFS and DFS to a new CS student.
 model-index:
 - name: Hermes-4-Qwen-3-14B
   results: []
+pipeline_tag: text-generation
 ---
 
 # Hermes 4 — Qwen-3 14B
@@ -45,6 +44,8 @@ Hermes 4 14B is a frontier, hybrid-mode **reasoning** model based on Qwen 3 14B
 
 Read the Hermes 4 technical report here: <a href="https://arxiv.org/abs/2508.18255">Hermes 4 Technical Report</a>
 
+Project Page: https://huggingface.co/collections/NousResearch/hermes-4-collection-68a731bfd452e20816725728
+
 Chat with Hermes in Nous Chat: https://chat.nousresearch.com
 
 Training highlights include a newly synthesized post-training corpus emphasizing verified reasoning traces, massive improvements in math, code, STEM, logic, creativity, and format-faithful outputs, while preserving general assistant quality and broadly neutral alignment.
@@ -142,7 +143,7 @@ The model will then generate tool calls within `<tool_call> {tool_call} </tool_call>`
 from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
-model_id = "NousResearch/Hermes-4-14B"
+model_id = "NousResearch/Hermes-4-14B-FP8" # Corrected model_id
 
 tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained(
```
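
One incidental change in the front matter above: the widget `content` fields drop their `>-` folded-block markers in favor of plain scalars. Both styles parse to the same string, which can be verified with PyYAML (a sketch for illustration, not part of the PR):

```python
import yaml  # PyYAML, assumed available

# Old style: folded block scalar with strip chomping (">-").
folded = """\
content: >-
  You are Hermes 4, a capable, neutrally-aligned assistant. Prefer concise,
  correct answers.
"""

# New style: plain multi-line scalar, as written after this PR.
plain = """\
content: You are Hermes 4, a capable, neutrally-aligned assistant. Prefer concise,
  correct answers.
"""

# Both fold continuation lines with a single space, yielding identical strings.
print(yaml.safe_load(folded)["content"] == yaml.safe_load(plain)["content"])  # True
```

So the reformatting of the widget messages is purely cosmetic; the rendered example prompts are unchanged.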