jackyoung96 committed
Commit 27b4923 · verified · 1 Parent(s): 32486b8

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -43,7 +43,7 @@ model-index:
 ## A.X 3.1 Light Highlights
 
 <!-- SK Telecom released **A.X 3.1 Light** (pronounced "A dot X"), a large language model (LLM) optimized for Korean-language understanding and enterprise deployment, on July 10, 2025. -->
-**A.X 3.1 Light** (pronounced "A dot X" /ˈeɪ dɒt ɛks/) is a lightweight LLM optimized for Korean-language understanding and enterprise deployment.
+**A.X 3.1 Light** (pronounced "A dot X") is a lightweight LLM optimized for Korean-language understanding and enterprise deployment.
 This sovereign AI model was developed entirely in-house by SKT, encompassing model architecture, data curation, and training, all carried out on SKT’s proprietary supercomputing infrastructure, TITAN.
 The model was trained from scratch on a high-quality multilingual corpus comprising **1.65 trillion tokens**, with a primary focus on the Korean language.
 With a strong emphasis on data quality, A.X 3.1 Light achieves **Pareto-optimal performance among Korean LLMs relative to its training corpus size**, enabling **highly efficient and cost-effective compute usage**.