Update README.md
README.md CHANGED
@@ -12,14 +12,18 @@ tags:
 ## gpt-oss-20b ONNX Models
 This repository hosts the optimized versions of [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) to accelerate inference with ONNX Runtime. Model optimizations refer to techniques and methods used to improve the run time performance, efficiency, and resource utilization of machine learning models.

-Optimized models are published here in [ONNX](https://onnx.ai) format to run with [ONNX Runtime](https://onnxruntime.ai)
+Optimized models are published here in [ONNX](https://onnx.ai) format to run with [ONNX Runtime](https://onnxruntime.ai) with the precision best suited to this target.

 To easily get started with the model, you can use [Foundry Local](https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/get-started). See instructions [here](https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-local/get-started#run-the-latest-openai-open-source-model).

-You can also
+You can also install [ONNX Runtime GenAI](https://onnxruntime.ai/docs/genai/) to run the model. You can then run the inference example [here](https://github.com/microsoft/onnxruntime-genai/blob/main/examples/python/model-chat.py).

-## ONNX Models
-
+## ONNX Models
+Here are some of the optimized configurations we have added:
+
+1. ONNX model for CPU and mobile using int4 quantization via RTN and block size 32.
+
+2. ONNX model for CUDA GPU using int4 quantization via k-quant mixed precision and block size 32.

 ## Model Description

@@ -27,7 +31,7 @@ The optimized configuration we have added is ONNX model for CUDA GPU using int4
 - **Model type:** ONNX
 - **License:** Apache-2.0
 - **License Description:** Use of this model is subject to the terms of the Apache License, Version 2.0, available at https://www.apache.org/licenses/LICENSE-2.0.
-- **Model Description:** This is a conversion of the gpt-oss-20b model for local inference on CUDA GPUs.
+- **Model Description:** This is a conversion of the gpt-oss-20b model for local inference on CPU and CUDA GPUs.
 - **Disclaimer:** Model is only an optimization of the base model, any risk associated with the model is the responsibility of the user of the model. Please verify and test for your scenarios. There may be a slight difference in output from the base model with the optimizations applied. Note that optimizations applied are distinct from fine tuning and thus do not alter the intended uses or capabilities of the model.

 ## Base Model Information
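
For the Foundry Local route mentioned in the README, the usual pattern is to let Foundry Local download and serve the model, then talk to its OpenAI-compatible endpoint. Below is a minimal sketch of that flow; it assumes the `foundry-local-sdk` and `openai` Python packages and that `gpt-oss-20b` is the catalog alias used by the linked instructions (the package names and the alias are assumptions here, not something this README states).

```python
# Minimal sketch: chat with the model through Foundry Local's OpenAI-compatible endpoint.
# Assumes Foundry Local is installed and that "gpt-oss-20b" is a valid catalog alias
# (see the Foundry Local instructions linked in the README).
from foundry_local import FoundryLocalManager
from openai import OpenAI

alias = "gpt-oss-20b"

# Starts the local service if needed and downloads/loads the model for this alias.
manager = FoundryLocalManager(alias)

# Foundry Local exposes an OpenAI-compatible REST endpoint, so the standard OpenAI client works.
client = OpenAI(base_url=manager.endpoint, api_key=manager.api_key)

response = client.chat.completions.create(
    model=manager.get_model_info(alias).id,
    messages=[{"role": "user", "content": "Explain ONNX Runtime in one sentence."}],
)
print(response.choices[0].message.content)
```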
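
For the ONNX Runtime GenAI route, the linked model-chat.py example is the full reference; the sketch below is a trimmed-down version of the same generation loop. It assumes a recent onnxruntime-genai release (`pip install onnxruntime-genai`, or the CUDA package for the GPU variant) and that the ONNX model files from this repository have been downloaded to a local folder; it also feeds a raw prompt rather than applying the model's chat template, so treat it as illustrative only.

```python
# Minimal sketch of token-by-token generation with ONNX Runtime GenAI,
# loosely following the model-chat.py example linked above.
# `model_dir` is a placeholder path to the downloaded ONNX model folder.
import onnxruntime_genai as og

model_dir = "./gpt-oss-20b-onnx"
model = og.Model(model_dir)
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=512)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("Why is the sky blue?"))

# Decode and print tokens as they are generated.
while not generator.is_done():
    generator.generate_next_token()
    new_token = generator.get_next_tokens()[0]
    print(tokenizer_stream.decode(new_token), end="", flush=True)
print()
```

For real use, prefer the linked model-chat.py example, which applies the model's chat template and exposes the sampling options.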