---
base_model:
- google/gemma-2-2b-it
library_name: transformers
license: apache-2.0
language:
- sw
- en
metrics:
- accuracy
- bleu
pipeline_tag: text-generation
datasets:
- fka/awesome-chatgpt-prompts
tags:
- finance
- AI
- NLP
- customer-support
new_version: google/gemma-2-27b-it
---
# Model Card for Futuresony AI

## Model Details

### Model Description
Futuresony AI is a fine-tuned language model designed to enhance conversational AI in customer support, finance, and general-purpose NLP applications. It is developed with a focus on multilingual capabilities, supporting both Swahili and English. The model is particularly fine-tuned for tasks such as text generation, question-answering, and AI-driven customer support solutions, aligning with Salum A. Salum’s goal of introducing AI-driven automation in Tanzania.
- Developed by: Futuresony Tech
- Funded by: Futuresony
- Shared by: Salum A. Salum
- Model type: Transformer-based, decoder-only causal language model for text generation
- Language(s) (NLP): Swahili and English
- License: Apache 2.0
- Finetuned from model: google/gemma-2-2b-it
### Model Sources
- Repository: [More Information Needed]
- Paper: [More Information Needed]
- Demo: [More Information Needed]
## Uses

### Direct Use
The model can be used for:
- AI-powered customer support for businesses
- Financial query handling
- General NLP tasks like summarization, translation, and chatbot applications
- Integration with WhatsApp, SMS, and social media for automated responses (see the webhook sketch below)
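
As referenced in the last item above, here is a minimal sketch of such an integration, assuming a generic HTTP webhook; Flask, the `/webhook` route, and the payload fields are illustrative assumptions rather than part of this release:

```python
# Hypothetical sketch: a messaging-platform webhook that relays user
# messages to the model. Adapt the route and payload to your provider.
from flask import Flask, request, jsonify
from transformers import pipeline

app = Flask(__name__)
generator = pipeline("text-generation", model="Futuresony/future_ai_12_10_2024.gguf")

@app.route("/webhook", methods=["POST"])
def handle_message():
    text = request.json.get("message", "")  # incoming user message
    reply = generator(text, max_new_tokens=200)[0]["generated_text"]
    return jsonify({"reply": reply})         # the platform relays this back

if __name__ == "__main__":
    app.run(port=5000)
```

Most messaging providers (WhatsApp Business API, SMS gateways) deliver inbound messages through a webhook of roughly this shape, so one handler can serve several channels.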
### Downstream Use

When fine-tuned for a specific domain, the model can be applied to:
- Internal and external support at ASA Microfinance: the model was fine-tuned with ASA Microfinance Company to answer ASA-related questions for staff and customers
- AI-driven traffic control and safety enhancements at Tanga Port
- AI-driven FAQ systems for businesses (see the retrieval sketch after this list)
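
FAQ deployments often pair the model with a retrieval step so that known questions get vetted answers. A minimal sketch, assuming a small in-memory FAQ and the `sentence-transformers` library (both are illustrative choices, not part of this release):

```python
# Hypothetical FAQ-matching sketch: embed known questions, answer the
# closest match, and fall back to the generative model otherwise.
from sentence_transformers import SentenceTransformer, util

faq = {
    "What are your opening hours?": "We are open 8:00-17:00, Monday to Friday.",
    "How do I reset my password?": "Use the 'Forgot password' link on the login page.",
}
embedder = SentenceTransformer("all-MiniLM-L6-v2")
faq_questions = list(faq)
faq_embeddings = embedder.encode(faq_questions, convert_to_tensor=True)

def answer(query: str, threshold: float = 0.7) -> str:
    scores = util.cos_sim(embedder.encode(query, convert_to_tensor=True), faq_embeddings)[0]
    best = scores.argmax().item()
    if scores[best] >= threshold:
        return faq[faq_questions[best]]
    return "ESCALATE"  # hand off to the generative model or a human agent
```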
### Out-of-Scope Use
- Any application that violates ethical AI usage, such as misinformation, hate speech, or unethical automation.
- Legal or medical advisory applications without proper expert supervision.
## Bias, Risks, and Limitations
While Futuresony AI is trained to provide accurate and helpful responses, it may still exhibit biases present in the training data. There is also a risk of incorrect or misleading responses, particularly for sensitive topics.
### Recommendations

Users should verify important responses with human oversight, especially in critical domains such as finance and customer support; one lightweight pattern is sketched below. Continuous monitoring and retraining are recommended to improve accuracy and fairness.
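
A minimal sketch of such a review gate, assuming a keyword-based sensitivity check; the keyword list and review hook below are illustrative assumptions, not part of this release:

```python
# Hypothetical human-in-the-loop gate: replies to finance-sensitive
# queries are queued for review instead of being sent automatically.
SENSITIVE_KEYWORDS = {"loan", "interest", "repayment", "mkopo", "riba"}

def send_to_human_review(query: str, draft: str) -> None:
    # Stub: in production, push to a ticketing system or review dashboard.
    print(f"[REVIEW QUEUE] query={query!r} draft={draft!r}")

def respond(query: str, generate) -> str:
    draft = generate(query)
    if any(word in query.lower() for word in SENSITIVE_KEYWORDS):
        send_to_human_review(query, draft)
        return "Your request has been forwarded to an agent."
    return draft
```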
## How to Get Started with the Model
To use the model, follow this example:

```python
from transformers import pipeline

# Gemma-2 is decoder-only, so the pipeline task is "text-generation".
generator = pipeline("text-generation", model="Futuresony/future_ai_12_10_2024.gguf")
response = generator("What is the best way to improve AI chatbot accuracy?", max_new_tokens=200)
print(response[0]["generated_text"])
```
## Training Details

### Training Data
The model is fine-tuned using a dataset that includes:
- Customer service inquiries and responses
- Financial support queries
- Swahili-English multilingual conversations
### Training Procedure

- Preprocessing: Tokenization and normalization of data
- Training Regime: Mixed precision (fp16) for efficiency (a hedged sketch follows this list)
- Hardware: Trained using Google Colab with TPU acceleration
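
For illustration, a condensed sketch of what such a run could look like with the Hugging Face `Trainer`; the toy dataset and hyperparameters are assumptions, since the actual training script is not published here:

```python
# Hypothetical fine-tuning sketch with mixed precision (fp16).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b-it")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it")

# Toy stand-in for the customer-service / Swahili-English conversations.
train_ds = Dataset.from_dict({"text": ["Habari, nawezaje kukusaidia leo?"]})
train_ds = train_ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                        batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="futuresony-ai",
    fp16=True,                      # mixed-precision training, as stated above
    per_device_train_batch_size=4,  # illustrative; tune to your hardware
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal-LM labels
)
trainer.train()
```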
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data
- Datasets used: fka/awesome-chatgpt-prompts
- Additional customer service data from Futuresony’s internal datasets
#### Factors
- Response accuracy
- Coherence in multilingual interactions
- Efficiency in real-time conversations
#### Metrics

- Accuracy: Measures the correctness of generated responses
- BLEU Score: Measures n-gram overlap with reference responses, as a proxy for fluency (a computation sketch follows this list)
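
As referenced above, BLEU can be computed against reference answers with the `evaluate` library; the Swahili example sentences below are illustrative:

```python
# A minimal BLEU check with the `evaluate` library.
import evaluate

bleu = evaluate.load("bleu")
predictions = ["Tunafurahi kukusaidia na swali lako."]     # model outputs
references = [["Tunafurahi kukusaidia na maswali yako."]]  # one reference list per prediction
print(bleu.compute(predictions=predictions, references=references)["bleu"])
```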
### Results
Initial tests show strong performance in customer service scenarios, with high coherence and accuracy in answering user queries in both Swahili and English.
## Model Examination
Further research is needed to evaluate:
- Bias mitigation
- Performance in domain-specific use cases
## Environmental Impact
Training energy consumption:
- Hardware Type: Google Colab TPU
- Hours used: ~50 hours
- Cloud Provider: Google Cloud
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
## Technical Specifications

### Model Architecture and Objective
Futuresony AI is based on the Google Gemma-2 architecture and fine-tuned for NLP tasks with a focus on customer support and finance applications.
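
Because the base checkpoint is instruction-tuned, prompts most likely need the Gemma chat format. A minimal loading sketch, assuming the fine-tune inherits the base tokenizer's chat template and ships standard `transformers` weights (the Swahili prompt is illustrative):

```python
# Hypothetical sketch: applying the inherited Gemma chat template before generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Futuresony/future_ai_12_10_2024.gguf")
model = AutoModelForCausalLM.from_pretrained("Futuresony/future_ai_12_10_2024.gguf")

messages = [{"role": "user", "content": "Nisaidie kuandika jibu kwa mteja."}]  # "Help me draft a customer reply."
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```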
### Compute Infrastructure
- Hardware: TPU-based training
- Software: Hugging Face Transformers, PyTorch
## Citation

If you use Futuresony AI in your research or project, please cite:

```bibtex
@misc{FuturesonyAI2025,
  author = {Salum A. Salum},
  title  = {Futuresony AI: A Multilingual Conversational Model},
  year   = {2025},
  url    = {https://huggingface.co/Futuresony}
}
```
## Model Card Contact
For inquiries or collaboration, contact:
- Email: [email protected]
- Phone: +255 672 087 616
This model card provides an overview of Futuresony AI, detailing its purpose, training, and usage guidelines. The information is subject to updates as the project evolves.