Update README.md

https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct with ONNX weights to be compatible with Transformers.js.

## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```

**Example:** Text-generation w/ `EXAONE-3.5-2.4B-Instruct`:
```js
import { pipeline } from "@huggingface/transformers";

// Create a text generation pipeline
const generator = await pipeline(
  "text-generation",
  "onnx-community/EXAONE-3.5-2.4B-Instruct",
  { dtype: "q4f16" },
);

// Define the list of messages
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];

// Generate a response
const output = await generator(messages, { max_new_tokens: 128 });
console.log(output[0].generated_text.at(-1).content);
```
<details>
<summary>See example output</summary>

```
Sure! Here's a light joke for you:

Why don't scientists trust atoms?

Because they make up everything!

I hope you found that amusing! If you want another one, feel free to ask!
```

</details>
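To stream the response token by token instead of waiting for the full generation, Transformers.js also exposes a `TextStreamer` utility. The sketch below is not part of the original card and simply reuses the pipeline from the example above; check the current Transformers.js docs for the exact streaming options.

```js
import { pipeline, TextStreamer } from "@huggingface/transformers";

// Same pipeline and messages as in the example above
const generator = await pipeline(
  "text-generation",
  "onnx-community/EXAONE-3.5-2.4B-Instruct",
  { dtype: "q4f16" },
);
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Tell me a joke." },
];

// Print chunks of text as soon as they are generated
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  callback_function: (text) => console.log(text), // replace with your own UI update logic
});

const output = await generator(messages, { max_new_tokens: 128, streamer });
```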
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
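For reference, an export with the Optimum CLI typically looks something like the following. This is a rough sketch rather than the exact command used for this repo, and custom architectures such as EXAONE may need extra flags (for example a specific `--task` or trusting remote code):

```bash
pip install "optimum[exporters]"

# Export the original model to ONNX, then upload the resulting files
# to an `onnx/` subfolder of your model repository.
optimum-cli export onnx --model LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct ./EXAONE-3.5-2.4B-Instruct-ONNX/onnx
```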