Qwen-14B-Hindi

Qwen-14B-Hindi is a 14.7B-parameter, pre-trained and instruction-tuned bilingual large language model for Hindi and English, trained on a mixed-language dataset containing < >. It features Rotary Position Embeddings (RoPE), SwiGLU activation, RMSNorm normalization, and attention QKV bias, optimizing for performance and efficiency.
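
For reference, a minimal inference sketch, assuming the released weights load as a standard Hugging Face causal LM and preserve the Qwen-2.5 chat template; the repository id below is a placeholder, not the actual one:

```python
# Minimal inference sketch: assumes a standard Hugging Face causal LM
# that keeps the Qwen-2.5 chat template.
# "org/Qwen-14B-Hindi" is a placeholder repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/Qwen-14B-Hindi"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A Hindi chat turn ("What is the capital of India?")
messages = [{"role": "user", "content": "भारत की राजधानी क्या है?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```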

Model Details:

Intended Use

We release Qwen-14B-Hindi under the MIT and Apache-2.0 licenses, encouraging researchers, developers, and enterprises to experiment with and build upon the model, particularly for bilingual, multilingual, and non-English applications. At the time of release, the model demonstrated state-of-the-art performance across an extensive English and Hindi evaluation suite.

Some potential downstream applications are as follows:

  • Research: This model serves as a valuable tool for researchers and developers working in NLP.
  • Commercial Use: It can be utilized as a foundational model for fine-tuning to meet specific industry needs.
    Possible applications include:
    • AI-powered Chat Assistants
    • Customer Support Service
    • Educational tools for language learning

Target audiences who may benefit from our model:

  • Academics: Researchers focused on Hindi and multilingual NLP advancements.
  • Businesses: Companies catering to Hindi-speaking and bilingual users.
  • Developers: Those integrating Hindi language capabilities into applications and services.
  • Educational Institutions: Schools and universities developing AI-powered learning tools.

Out-of-Scope Use

While Qwen-14B-Hindi is a powerful bilingual model designed for Hindi and English, it is crucial to acknowledge its limitations and the potential for misuse. The model must not be used in ways that violate any applicable laws or regulations. Below are specific scenarios where its use is restricted:

  • Harmful or Malicious Use: The model should not be employed to create or distribute harmful, misleading, or inappropriate content, including but not limited to:

    • Encouraging hate speech, violence, or discrimination
    • Spreading misinformation or false narratives
    • Facilitating or promoting illegal activities
  • Sensitive Data Handling: The model is not designed to process or generate personal, confidential, or sensitive information.

  • Language Constraints: While optimized for Hindi and English, the model should not be assumed to have the same proficiency in other languages.

  • High-Risk Decision-Making: It should not be used for critical decision-making without human oversight, especially in medical, legal, financial, or safety-related contexts.

Bias, Risks, and Limitations

We have employed different techniques to reduce bias in the model. While efforts have been made to minimize biases, it is likely that the model, as with all large language models, will exhibit some bias.

The model is trained as an AI assistant for Hindi and English speakers. It is limited to producing responses for queries in these two languages and may not respond appropriately to queries in other languages.
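
As an illustration of this constraint, here is a minimal pre-filter sketch (not part of the released model) that screens out queries in unsupported languages before they reach the model; the third-party `langdetect` package is used purely as an example:

```python
# Illustrative pre-filter (not part of the released model): reject queries
# that are detected as neither Hindi nor English.
from langdetect import detect, LangDetectException

SUPPORTED = {"hi", "en"}

def is_supported(query: str) -> bool:
    """Return True if the query is detected as Hindi or English."""
    try:
        return detect(query) in SUPPORTED
    except LangDetectException:
        return False  # too short or ambiguous: treat as unsupported

print(is_supported("नमस्ते, आप कैसे हैं?"))   # True  (Hindi)
print(is_supported("Bonjour tout le monde"))  # False (French)
```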

By using Qwen-14B-Hindi, you acknowledge and accept that, as with any large language model, it may generate incorrect, misleading, and/or offensive information or content. The information is not intended as advice and should not be relied upon in any way, nor are we responsible for any content or consequences resulting from its use. We are continuously working to develop models with greater capabilities, and we welcome any feedback on the model.

Training Details:

Training Data:

For the pre-training of Qwen-14B-Hindi, we used a diverse bilingual corpus sourced from the web and other sources, along with publicly available English and code datasets. To collect Hindi data, we drew on multiple sources, including web pages, Wikipedia articles, news articles, and Hindi books.

Training Procedure:

We performed continuous pre-training followed by instruction tuning, both on a Cerebras supercomputer.
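
For orientation only, below is a minimal sketch of what the instruction-tuning stage could look like in a standard open-source stack. The actual runs used a Cerebras supercomputer and its own tooling; the TRL-based setup, dataset name, and checkpoint id here are assumptions, and API details vary across TRL versions.

```python
# Orientation-only sketch of supervised instruction tuning with TRL.
# Dataset name and checkpoint id are hypothetical.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical instruction dataset with a plain-text "text" column.
dataset = load_dataset("org/hindi-english-instructions", split="train")

trainer = SFTTrainer(
    model="org/Qwen-14B-Hindi-base",  # hypothetical continued-pretraining checkpoint
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="qwen-14b-hindi-sft",
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
    ),
)
trainer.train()
```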

Evaluation:

We evaluated our models on multiple well-known benchmarks to measure their effectiveness against other leading models, and the results are as follows:

| Model | ARC-C | ARC-E | BoolQ | CMCQ | MMLU | Average* | MMLU-Pro | GPQA | MuSR | BBH | MATH |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AryaBhatta-GemmaUltra-8.5B | 22.70 | 25.04 | 22.95 | 62.23 | 23.70 | 31.32 | 22.66 | 25.34 | 42.72 | 41.12 | 2.95 |
| Airavata-7B | 25.09 | 30.47 | 25.31 | 62.17 | 33.20 | 35.25 | 16.35 | 27.43 | 37.57 | 36.00 | 13.60 |
| sarvam-1-2B | 30.03 | 33.25 | 62.17 | 42.80 | 27.90 | 39.23 | - | - | - | - | - |
| Nemotron-4-Mini-Hindi-Instruct | 55.80 | 71.63 | 62.11 | 68.10 | 43.20 | 60.17 | 25.95 | 30.87 | 41.53 | 40.11 | 2.04 |
| Llama-3-Nanda-10B-Chat | 65.36 | 80.64 | 82.29 | 67.60 | 50.61 | 69.30 | - | - | - | - | - |
| Krutrim-2-12b-instruct | 67.32 | 81.10 | 84.74 | 76.30 | 56.10 | 73.11 | - | - | - | - | - |
| aya-expanse-8b | 74.06 | 87.08 | 86.45 | 83.30 | 56.89 | 77.56 | 30.04 | 30.29 | 37.17 | 49.42 | 7.02 |
| aya-expanse-32B | 85.41 | 95.08 | 90.43 | 89.80 | 69.71 | 86.08 | 41.30 | 32.55 | 38.62 | 56.29 | 13.37 |
| Our Qwen Model (14b) | 90.61 | 94.82 | 88.53 | 90.70 | 75.00 | 87.93 | 52.63 | 36.24 | 44.84 | 64.97 | 25.08 |
| Our Phi Model (14b) | 92.24 | 97.39 | 87.65 | 87.40 | 75.59 | 88.05 | 52.39 | 39.77 | 49.07 | 66.97 | 23.11 |

Table 1: Scores (to two decimal places) of our Qwen-2.5-14B and Phi-4 models and other LLMs on several English benchmarks. *Average of the ARC-C, ARC-E, BoolQ, CMCQ, and MMLU columns only.

| Model | ARC-C | ARC-E | BoolQ | CMCQ | MMLU | Average |
|---|---|---|---|---|---|---|
| AryaBhatta-GemmaUltra-8.5B | 22.70 | 25.08 | 22.95 | 62.17 | 23.80 | 31.34 |
| Airavata-7B | 22.87 | 25.13 | 23.28 | 62.17 | 33.20 | 33.33 |
| sarvam-1-2B | 32.76 | 35.06 | 62.16 | 47.10 | 24.22 | 40.26 |
| Llama-3-Nanda-10B-Chat | 45.99 | 60.56 | 71.96 | 54.70 | 36.35 | 53.91 |
| Nemotron-4-Mini-Hindi-4B-Instruct | 50.68 | 63.72 | 68.74 | 51.30 | 37.18 | 54.32 |
| Krutrim-2-12b-instruct | 56.83 | 70.66 | 78.86 | 64.10 | 46.51 | 63.39 |
| aya-expanse-8b | 57.42 | 72.90 | 80.42 | 69.00 | 43.39 | 64.63 |
| aya-expanse-32B | 73.29 | 85.48 | 87.73 | 79.70 | 56.96 | 76.63 |
| Our Qwen Model (14b) | 74.06 | 81.23 | 84.07 | 78.20 | 53.85 | 74.82 |
| Our Phi Model (14b) | 81.74 | 89.06 | 86.02 | 78.70 | 56.39 | 78.38 |

Table 2: Scores (to two decimal places) of our Qwen-2.5-14B and Phi-4 models and other LLMs on several Hindi benchmarks.

| Benchmark | Lang | Qwen-2.5-14B-Instruct | Our Qwen | Change | Phi-4 | Our Phi | Change |
|---|---|---|---|---|---|---|---|
| ARC-Easy | En | 95.45 | 94.82 | 🔻 0.63 | 97.31 | 97.39 | 🔼 0.08 |
| | Hi | 78.49 | 81.23 | 🔼 2.74 | 86.87 | 89.06 | 🔼 2.19 |
| ARC-Challenge | En | 90.87 | 90.61 | 🔻 0.26 | 92.41 | 92.24 | 🔻 0.17 |
| | Hi | 69.62 | 74.06 | 🔼 4.44 | 79.18 | 81.74 | 🔼 2.56 |
| BoolQ | En | 86.09 | 88.53 | 🔼 2.44 | 86.30 | 87.65 | 🔼 1.35 |
| | Hi | 78.89 | 84.07 | 🔼 5.18 | 82.72 | 86.02 | 🔼 3.30 |
| Context-MCQ | En | 91.20 | 90.70 | 🔻 0.50 | 86.30 | 87.40 | 🔼 1.10 |
| | Hi | 77.40 | 78.20 | 🔼 0.80 | 75.70 | 78.70 | 🔼 3.00 |
| MMLU | En | 74.37 | 75.00 | 🔼 0.63 | 74.67 | 75.59 | 🔼 0.92 |
| | Hi | 52.16 | 53.85 | 🔼 1.69 | 53.24 | 56.39 | 🔼 3.15 |
| Average | En | 87.60 | 87.93 | 🔼 0.33 | 87.40 | 88.05 | 🔼 0.65 |
| | Hi | 71.31 | 74.82 | 🔼 3.51 | 75.54 | 78.38 | 🔼 2.84 |
| Overall | | 79.46 | 81.38 | 🔼 1.92 | 81.47 | 83.22 | 🔼 1.75 |

Table 3: Performance of our Qwen-2.5-14B and Phi-4 models compared to the original models on each benchmark (evaluated via log-likelihoods).

| Benchmark | Lang | Qwen-2.5-14B-Instruct | Our Qwen | Change | Phi-4 | Our Phi | Change |
|---|---|---|---|---|---|---|---|
| MMLU-Pro | En | 49.04 | 52.63 | 🔼 3.59 | 53.78 | 52.39 | 🔻 1.39 |
| MATH hard | En | 00.00 | 25.08 | N/A | 12.31 | 23.11 | 🔼 10.80 |
| GPQA | En | 32.21 | 36.24 | 🔼 4.03 | 33.72 | 39.77 | 🔼 6.05 |
| MuSR | En | 40.87 | 44.84 | 🔼 3.97 | 41.01 | 49.07 | 🔼 8.06 |
| BigBench-Hard | En | 63.74 | 64.97 | 🔼 1.23 | 68.60 | 66.97 | 🔻 1.63 |
| Average | | 37.17 | 44.75 | 🔼 7.58 | 41.88 | 46.26 | 🔼 4.38 |

Table 4: Performance of our Qwen-2.5-14B and Phi-4 models compared to the original models on each benchmark (evaluated via the evaluation harness).
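
For readers who want to approximate the harness-style numbers in Table 4, here is a minimal sketch using EleutherAI's lm-evaluation-harness Python API (v0.4+); the task names, few-shot settings, harness version, and repository id are assumptions and may not match the exact configuration used above.

```python
# Reproduction sketch with lm-evaluation-harness (v0.4+ API).
# Task names and the repository id are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=org/Qwen-14B-Hindi,dtype=bfloat16",  # placeholder id
    tasks=["leaderboard_mmlu_pro", "leaderboard_gpqa",
           "leaderboard_musr", "leaderboard_bbh"],
    batch_size=8,
)
print(results["results"])
```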

Recommendations

It is advisable for users to:

  • Refrain from deploying the model in sensitive domains without human supervision.
  • Cross-check factual information generated by the model for accuracy.
  • Continuously assess the model to ensure compliance with ethical standards.
  • Be mindful of potential biases and unintended outputs, especially in critical applications.

Terms of use

By accessing this model, you agree to the Llama 3 license terms and conditions, the acceptable use policy, and Meta's privacy policy.
