---
pipeline_tag: text-generation
tags:
- information-retrieval
- language-model
- text-semantic-similarity
- prompt-retrieval
- sentence-transformers
- transformers
- natural_questions
- english
- dementia
- dementia disease
language: en
inference: true
license: apache-2.0
---

# **My LLM Model: Dementia Knowledge Assistant**

**Model Name:** `Dementia-llm-model`

**Description:**

This is a fine-tuned **Large Language Model (LLM)** designed to assist with dementia-related knowledge retrieval and question-answering tasks. The model uses `hkunlp/instructor-large` embeddings and a **FAISS vector store** for efficient contextual search and retrieval.
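
The retrieval index behind this setup can be reproduced roughly as follows. This is a minimal sketch, assuming the `langchain-community`, `InstructorEmbedding`, `sentence-transformers`, and `faiss-cpu` packages are installed; the instruction strings, example passages, and index path are illustrative, not the exact build script used for this model.

```python
# Sketch: build a FAISS vector store over dementia text passages using
# hkunlp/instructor-large embeddings (illustrative, not the model's actual build script).
from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from langchain_community.vectorstores import FAISS

# Pre-processed text passages (placeholders; see the Dataset section below).
chunks = [
    "Early-stage dementia often presents with short-term memory lapses.",
    "Caregivers should establish consistent daily routines.",
]

embeddings = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-large",
    embed_instruction="Represent the medical document for retrieval:",
    query_instruction="Represent the medical question for retrieving supporting documents:",
)

# Embed the passages and index them for similarity search.
vectorstore = FAISS.from_texts(chunks, embedding=embeddings)
vectorstore.save_local("dementia_faiss_index")  # persist the index for later use
```
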
---

## **Model Summary**

This LLM is fine-tuned on a dataset curated specifically for dementia-related content, including medical knowledge, patient care, and treatment practices. It leverages state-of-the-art embeddings to generate accurate and contextually relevant answers to user queries, helping researchers, caregivers, and medical professionals access domain-specific information quickly.

---

## **Key Features**

- **Domain-Specific Knowledge:** Trained on a dementia-related dataset for precise answers.
- **Embeddings:** Uses the `hkunlp/instructor-large` embedding model for semantic understanding.
- **Retrieval-Augmented QA:** Employs a FAISS vector database for efficient document retrieval.
- **Custom Prompting:** Generates responses from carefully designed prompts to keep answers grounded in the retrieved context (see the prompt sketch after this list).
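
As a concrete illustration of the custom-prompting point above, a grounding prompt of roughly the following shape keeps answers tied to the retrieved context and falls back to "I don't know" otherwise. The exact template used by this model is not published, so treat this as an assumption.

```python
# Sketch: a grounding prompt in the spirit of the "Custom Prompting" feature.
# The exact template used by this model is not published; this is an assumption.
PROMPT_TEMPLATE = """Use only the context below to answer the question about dementia.
If the answer is not contained in the context, reply with "I don't know".

Context:
{context}

Question: {question}

Answer:"""


def build_prompt(context: str, question: str) -> str:
    """Fill the template with retrieved context and the user's question."""
    return PROMPT_TEMPLATE.format(context=context, question=question)


print(build_prompt(
    context="Early-stage dementia often involves short-term memory loss.",
    question="What are the symptoms of early-stage dementia?",
))
```
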
---

## **Intended Use**

- **Primary Use Case:** Question answering about dementia.
- **Secondary Use Cases:** Exploring dementia knowledge, helping medical students and caregivers understand dementia-related topics, and supporting researchers.
- **Input Format:** Text queries in natural language.
- **Output Format:** Natural language responses grounded in the provided context.

---

## **Limitations**

- **Context Dependency:** Outputs are only as good as the context supplied by the FAISS retriever. If the retrieved context is insufficient, the model may respond with "I don't know."
- **Static Knowledge:** The model is limited to the knowledge in its training dataset and may not reflect medical research published after the training cutoff.
- **Biases:** The model may inherit biases present in the training data.

---

## **How to Use**

### **Using the Model Programmatically**

You can use the model directly in Python:

```python
from transformers import pipeline

model_name = "rohitashva/my-llm-model"

# Load the model and tokenizer behind a question-answering pipeline
qa_pipeline = pipeline("question-answering", model=model_name)

# Example query: the context should contain the relevant passages retrieved
# from the dementia dataset (see the retrieval sketch below).
result = qa_pipeline(
    question="What are the symptoms of early-stage dementia?",
    context="Provide relevant details from a dementia dataset.",
)

print(result)
```
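
For retrieval-augmented use, the `context` passed to the pipeline above would normally come from the FAISS store rather than being typed by hand. A minimal sketch, assuming a `dementia_faiss_index` folder built as in the earlier indexing sketch (the index path and `k` value are illustrative):

```python
# Sketch: pull context from the FAISS store, then answer with the QA pipeline.
# Assumes langchain-community and InstructorEmbedding are installed and that
# a "dementia_faiss_index" folder exists, as in the earlier indexing sketch.
from langchain_community.embeddings import HuggingFaceInstructEmbeddings
from langchain_community.vectorstores import FAISS
from transformers import pipeline

embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-large")
vectorstore = FAISS.load_local(
    "dementia_faiss_index",
    embeddings,
    allow_dangerous_deserialization=True,  # required by recent langchain_community releases
)

question = "What are the symptoms of early-stage dementia?"

# Retrieve the top-k most similar passages and join them into one context string.
docs = vectorstore.similarity_search(question, k=3)
context = "\n".join(doc.page_content for doc in docs)

qa_pipeline = pipeline("question-answering", model="rohitashva/my-llm-model")
result = qa_pipeline(question=question, context=context)
print(result)
```
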
---

### **Training Details**

- **Base Model:** `hkunlp/instructor-large`
- **Frameworks:** PyTorch, Transformers
- **Embedding Model:** Hugging Face embeddings via `hkunlp/instructor-large` (see the embedding sketch after this list)
- **Fine-Tuning:** FAISS-based vector retrieval augmented with dementia-specific content.
- **Hardware:** Trained on a GPU with sufficient VRAM for embedding and fine-tuning workloads.
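
For reference, the embedding model can also be exercised directly with the `InstructorEmbedding` package. This is a generic usage sketch for `hkunlp/instructor-large`, not code from this model's training pipeline; the instruction strings and example sentences are illustrative.

```python
# Sketch: compute instructor-large embeddings directly with the InstructorEmbedding
# package (generic usage, not this model's actual training code).
import numpy as np
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")

# Instructor models encode (instruction, text) pairs.
pairs = [
    ["Represent the medical document for retrieval:",
     "Memantine is used for moderate-to-severe Alzheimer's disease."],
    ["Represent the medical question for retrieving supporting documents:",
     "Which drugs treat moderate Alzheimer's disease?"],
]
embeddings = model.encode(pairs)  # shape: (2, embedding_dim)

# Cosine similarity between the document and the query embedding.
doc_vec, query_vec = embeddings
similarity = float(
    np.dot(doc_vec, query_vec) / (np.linalg.norm(doc_vec) * np.linalg.norm(query_vec))
)
print(f"cosine similarity: {similarity:.3f}")
```
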
---

## Further Information

### Dataset

The model was trained on a proprietary dementia-specific dataset that includes structured knowledge, medical texts, and patient case studies. The data is preprocessed into embeddings for efficient retrieval.
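
The preprocessing pipeline itself is not published. As a rough sketch of how raw documents are typically split into chunks before embedding (the splitter choice and chunk sizes here are assumptions):

```python
# Sketch: split raw dementia documents into overlapping chunks before embedding.
# The actual preprocessing pipeline is not published; sizes here are assumptions.
from langchain_text_splitters import RecursiveCharacterTextSplitter

raw_documents = [
    "Dementia is a syndrome involving progressive decline in memory, thinking, "
    "and the ability to perform everyday activities...",
]

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.create_documents(raw_documents)

print(f"{len(chunks)} chunks ready for embedding")
# Each chunk can then be embedded and added to the FAISS store as in the indexing sketch above.
```
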
### Model Performance

- **Accuracy:** Validated on a subset of dementia-related QA pairs.
- **Response Time:** Optimized for fast retrieval via the FAISS vector store.

### Deployment

- **Hugging Face Spaces:** The model is deployed on Hugging Face Spaces, enabling users to interact with it through a web-based interface.
- **API Support:** The model can be integrated into custom workflows via the Hugging Face Inference API (see the sketch after this list).
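
As a sketch of the API-integration path, the hosted model can be called through the `huggingface_hub` client. Whether the hosted endpoint serves this repository as a text-generation task (as declared in the metadata above) is an assumption, and the token and prompt are placeholders.

```python
# Sketch: call the hosted model through the Hugging Face Inference API.
# Assumes the endpoint serves this repo as a text-generation task (per the
# pipeline_tag in the metadata above); the token is a placeholder.
from huggingface_hub import InferenceClient

client = InferenceClient(model="rohitashva/my-llm-model", token="hf_xxx")

answer = client.text_generation(
    "Question: What are the symptoms of early-stage dementia?\nAnswer:",
    max_new_tokens=200,
)
print(answer)
```
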
### Acknowledgments

- The Hugging Face team for the `transformers` library.
- Contributors to the `hkunlp/instructor-large` embedding model.
- The medical experts and datasets used for model fine-tuning.
|