---
library_name: transformers.js
base_model: Menlo/Jan-nano
license: apache-2.0
---

https://huggingface.co/Menlo/Jan-nano with ONNX weights to be compatible with Transformers.js.

# Jan-Nano: An Agentic Model

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/wC7Xtolp7HOFIdKTOJhVt.png" width="300" alt="Jan-Nano">
</div>

Authors: [Alan Dao](https://scholar.google.com/citations?user=eGWws2UAAAAJ&hl=en), [Bach Vu Dinh](https://scholar.google.com/citations?user=7Lr6hdoAAAAJ&hl=vi), [Thinh Le](https://scholar.google.com/citations?user=8tcN7xMAAAAJ&hl=en)

## Overview

Jan-Nano is a compact 4-billion-parameter language model designed and trained for deep-research tasks. It is optimized to work seamlessly with Model Context Protocol (MCP) servers, enabling efficient integration with a wide range of research tools and data sources.

## Usage (Transformers.js)

If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
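
Alternatively, for use in the browser without a build step, Transformers.js can be imported as an ES module straight from a CDN. The jsDelivr URL below is one common choice, not the only option; in practice you would pin a specific version of the package:

```js
// Browser-only sketch: load Transformers.js from a CDN inside a
// <script type="module"> block instead of installing it from NPM.
import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers";
```

Note that model weights are downloaded and cached by the browser on first use, so the initial load can be slow for a model of this size.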

**Example:** Text generation with `onnx-community/Jan-nano-ONNX`.
```js
import { pipeline, TextStreamer } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/Jan-nano-ONNX",
  { dtype: "q4f16" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];

// Generate a response, streaming tokens to the console as they are produced
const output = await generator(messages, {
  max_new_tokens: 512,
  do_sample: false,
  streamer: new TextStreamer(generator.tokenizer, { skip_prompt: true, skip_special_tokens: true }),
});
console.log(output[0].generated_text.at(-1).content);
```
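
The example above requests 4-bit weights with fp16 activations via `dtype: "q4f16"`. The `dtype` option of the Transformers.js `pipeline` function also accepts other quantization levels (e.g. `"fp32"`, `"fp16"`, `"q8"`, `"q4"`); which of these actually load depends on which variants are published in this repository's `onnx` folder, so treat the alternative below as a sketch rather than a guarantee:

```js
import { pipeline } from "@huggingface/transformers";

// Sketch: request a different quantization variant via `dtype`.
// Availability depends on the ONNX files published in this repository;
// larger dtypes trade memory and download size for accuracy.
const generator = await pipeline(
  "text-generation",
  "onnx-community/Jan-nano-ONNX",
  { dtype: "q8" }, // or "fp16", "q4", ...
);
```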

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).