Model Card for ph3-FineTunned-matplotlib-seaborn-10k

This is a fine-tuned version of the Phi-3 Mini language model that generates Python data visualization code (using matplotlib and seaborn) from natural language prompts. It was fine-tuned on 10,000 high-quality prompt–completion pairs focused on data plotting.
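
For illustration, a training pair couples a plain-English request with runnable plotting code. The pair below is hypothetical (it is not quoted from the dataset), but it shows the intended shape:

Prompt: "Draw a scatter plot of tip amount versus total bill, colored by day"

Completion:

import seaborn as sns
import matplotlib.pyplot as plt

# "tips" is an example dataset bundled with seaborn
df = sns.load_dataset("tips")
sns.scatterplot(data=df, x="total_bill", y="tip", hue="day")
plt.title("Tip vs. Total Bill by Day")
plt.show()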


Model Details

Model Description

  • Developed by: Prashant Suresh Shirgave
  • Shared by: prashantss1404
  • Model type: Text-to-Code Generation (Instruction-based)
  • Language(s): English (data viz-related queries)
  • License: Apache 2.0
  • Finetuned from model: Phi-3 Mini

Uses

Direct Use

This model is designed to:

  • Generate Python visualization code (matplotlib, seaborn) from natural language queries.
  • Help automate plotting tasks in notebooks, dashboards, or LLM-based assistants (a sketch follows this list).
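
A minimal sketch of such automation using the transformers text-generation pipeline (the plot_code helper and its generation settings are illustrative assumptions, not part of this card):

from transformers import pipeline

# Wrap the fine-tuned model in a text-generation pipeline
generator = pipeline(
    "text-generation",
    model="prashantss1404/ph3-FineTunned-matplotlib-seaborn-10k",
)

def plot_code(request: str) -> str:
    """Return generated visualization code for a plain-English request."""
    out = generator(request, max_new_tokens=200, return_full_text=False)
    return out[0]["generated_text"]

print(plot_code("Plot a histogram of passenger ages from a DataFrame df"))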

Out-of-Scope Use

  • Not suitable for general-purpose coding outside of data visualization.
  • Not optimized for Plotly or non-Python plotting frameworks.

Bias, Risks, and Limitations

Limitations

  • The model was fine-tuned on text-based examples only, so it cannot check the visual output of the code it produces.
  • It may generate incorrect or non-working code for complex queries.
  • No built-in code execution or error checking is included.
  • Meant for educational and experimental use only.

Recommendations

Always validate generated code before executing it. Combine the model with an execution sandbox (e.g., Streamlit, Jupyter) for best results.
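
One lightweight validation pattern (a sketch under stated assumptions: the run_generated_code helper and the 10-second timeout are illustrative, not part of this card) is to syntax-check the output, then run it in a separate process so a hang or crash cannot take down the host session:

import ast
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout: int = 10) -> None:
    """Syntax-check generated code, then run it in an isolated subprocess."""
    ast.parse(code)  # raises SyntaxError if the model produced malformed code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    subprocess.run([sys.executable, path], timeout=timeout, check=True)

Note that a subprocess with a timeout limits hangs and crashes but is not a security boundary; treat generated code as untrusted and use proper isolation where that matters.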


How to Get Started with the Model

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("prashantss1404/ph3-FineTunned-matplotlib-seaborn-10k")
model = AutoModelForCausalLM.from_pretrained("prashantss1404/ph3-FineTunned-matplotlib-seaborn-10k")

# Describe the desired plot in natural language
prompt = "Plot a bar chart of sales by region using seaborn"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 200 new tokens of visualization code
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
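
Phi-3 Mini is an instruction-tuned model. If this fine-tune preserves the base model's chat template (the card does not say), wrapping the prompt with tokenizer.apply_chat_template may yield cleaner completions; a hedged variant of the generation step:

messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=200)
# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))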