qqWen-1.5B-SFT: A Supervised Fine-Tuned Q Programming Language Model

Model Overview

qqWen-1.5B-SFT is a 1.5-billion parameter language model specifically designed for advanced reasoning and code generation in the Q programming language. Built upon the robust Qwen 2.5 architecture, this model has undergone a comprehensive two-stage training process for the Q programming language: pretraining followed by supervised fine-tuning (SFT).

Associated Technical Report: [Link to paper will be added here]

🔤 About Q Programming Language

Q is a high-performance, vector-oriented programming language developed by Kx Systems, primarily used in:

  • Financial Markets: High-frequency trading, risk management, and market data analysis
  • Time-Series Analytics: Real-time processing of large-scale temporal data
  • Data Science: Efficient manipulation of large datasets with concise syntax
  • Quantitative Research: Mathematical modeling and statistical analysis

Key Q Language Features:

  • Vector Operations: Built-in support for element-wise operations on arrays
  • Functional Programming: First-class functions and powerful combinators
  • Memory Efficiency: Optimized for handling large datasets in minimal memory
  • Speed: Exceptional performance for numerical computations
  • Concise Syntax: Expressive code that can accomplish complex tasks in few lines
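
As a brief illustration of these traits, here are a few canonical q expressions. This is an illustrative sketch of the language itself, following standard q semantics; it is not output generated by the model.

```q
/ vector operations: element-wise arithmetic with no explicit loops
2 3 4 + 10 20 30          / 12 23 34

/ functional combinators: scan (\) builds a running sum
(+\) 1 2 3 4              / 1 3 6 10

/ first-class functions: apply a lambda to each element
{x*x} each 1 2 3          / 1 4 9

/ concise syntax: define a table and aggregate it in two lines (qSQL)
t:([] sym:`a`b`a; px:1.0 2.0 3.0)
select avg px by sym from t   / average price per symbol
```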

📝 Citation

If you use this model in your research or applications, please cite our technical report.

Model Details

  • Model size: 1.54B parameters (Safetensors)
  • Tensor type: F32

Model tree for morganstanley/qqWen-1.5B-SFT

  • Base model: Qwen/Qwen2.5-1.5B
  • Quantizations: 2 models
