|
--- |
|
language: |
|
- it |
|
library_name: transformers |
|
tags: |
|
- pretrained |
|
- biomedical |
|
- text-generation |
|
- medical |
|
base_model: sapienzanlp/Minerva-7B-base-v1.0 |
|
datasets: |
|
- IVN-RIN/BioBERT_Italian |
|
- Detsutut/medmcqa-ita |
|
pipeline_tag: text-generation |
|
widget: |
|
- text: 'I batteri della famiglia Bacteroides sono importanti per ' |
|
example_title: Example 1 |
|
license: apache-2.0 |
|
extra_gated_prompt: >- |
|
This is a pretrained model that should be fine-tuned to perform downstream |
|
tasks. You agree not to use the model to conduct experiments that cause harm
|
to human subjects, or to perform any medical-related task. |
|
extra_gated_fields: |
|
Company: text |
|
Country: country |
|
Specific date: date_picker |
|
I want to use this model for: |
|
type: select |
|
options: |
|
- Research |
|
- Education |
|
- label: Other |
|
value: other |
|
I agree to use this model for non-commercial use ONLY: checkbox |
|
I have read and understood the 'Bias, Risk, and Limitation' section of the model card: checkbox
|
geo: ip_location |
|
extra_gated_heading: Acknowledge terms and conditions to accept the repository |
|
extra_gated_description: Our team may take 2-3 days to process your request |
|
extra_gated_button_content: Acknowledge |
|
metrics: |
|
- accuracy |
|
--- |
|
|
|
# Igea-7B-v0.1 ⚕️🩺 |
|
|
|
Igea is a biomedical Large Language Model (LLM) for Italian, continually pretrained from [Minerva](https://huggingface.co/sapienzanlp/Minerva-7B-base-v1.0) on [NMT-translated PubMed abstracts](https://huggingface.co/datasets/IVN-RIN/BioBERT_Italian).
|
|
|
🔓: Access to the model is granted only after you explicitly acknowledge that you have read the 'Bias, Risk, and Limitation' section of this model card.
|
|
|
This is ongoing research. Do not use it for any medical-related tasks. |
|
|
|
**Preprint: [Igea: a Decoder-Only Language Model for Biomedical Text Generation in Italian](https://arxiv.org/abs/2407.06011).** |
|
|
|
## How to use Igea with Hugging Face transformers |
|
|
|
```python |
|
import transformers |
|
import torch |
|
|
|
model_id = "bmi-labmedinfo/Igea-7B-v0.1" |
|
|
|
# Initialize the pipeline. |
|
pipeline = transformers.pipeline( |
|
"text-generation", |
|
model=model_id, |
|
model_kwargs={"torch_dtype": torch.bfloat16}, |
|
device_map="auto", |
|
) |
|
|
|
# Input text for the model. |
|
input_text = "Il fegato è " |
|
|
|
# Compute the outputs. |
|
output = pipeline( |
|
input_text, |
|
max_new_tokens=128, |
|
) |
|
|
|
# Output: |
|
# [{'generated_text': "Il fegato è una ghiandola fondamentale per il metabolismo umano, la più [...]"}] |
|
``` |
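For repeated use, the pipeline call above can be wrapped in a small helper so that sampling settings live in one place. This is an illustrative sketch, not part of the official usage: the `build_igea_generator` name and its defaults are ours, though they mirror the snippet above.

```python
# Illustrative sketch: wrap the text-generation pipeline in a reusable helper.
# Model id and generation defaults mirror the usage example in this card.
from transformers import pipeline
import torch


def build_igea_generator(model_id="bmi-labmedinfo/Igea-7B-v0.1",
                         max_new_tokens=128):
    """Return a callable that continues an Italian biomedical prompt.

    The 7B checkpoint is downloaded the first time this builder runs.
    """
    gen = pipeline(
        "text-generation",
        model=model_id,
        model_kwargs={"torch_dtype": torch.bfloat16},
        device_map="auto",
    )

    def generate(prompt):
        # Return only the generated string, not the full pipeline output dict.
        return gen(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]

    return generate
```

A caller would then write `generate = build_igea_generator()` once and reuse `generate("Il fegato è ")` across prompts without re-specifying the model or decoding parameters.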
|
|
|
## 🚨⚠️🚨 Bias, Risks, and Limitations 🚨⚠️🚨 |
|
*This section identifies foreseeable harms and misunderstandings.* |
|
|
|
This model is the result of continued pretraining of a foundation model and has not undergone alignment. The model may:
|
|
|
- Overrepresent some viewpoints and underrepresent others |
|
- Contain stereotypes |
|
- Contain personal information |
|
- Generate: |
|
- Racist and sexist content |
|
- Hateful, abusive, or violent language |
|
- Discriminatory or prejudicial language |
|
- Content that may not be appropriate for all settings, including sexual content |
|
- Make errors, including presenting incorrect information or fabricated facts as if they were factual
|
- Generate irrelevant or repetitive outputs |
|
|
|
We are aware of the biases and potential problematic/toxic content that current pretrained large language models exhibit: more specifically, as probabilistic models of (Italian and English) languages, they reflect and amplify the biases of their training data. |
|
|
|
The biomedical setting poses additional threats, including: |
|
|
|
- Disparities in research focus, demographic representation, and reporting standards |
|
- Reinforcement of existing medical paradigms that overlooks emerging or alternative viewpoints, hindering innovation and comprehensive care
|
- Generation of incorrect information and false claims, potentially leading to incorrect medical decisions |
|
|
|
This model is therefore **not** intended to be used as-is for any medical-related task.
|
|
|
## Training and evaluation data |
|
|
|
Work in progress |
|
|
|
## Evaluation |
|
|
|
Work in progress |
|
|
|
## Credits |
|
|
|
Developed by [Tommaso M. Buonocore](https://huggingface.co/Detsutut) and [Simone Rancati](https://huggingface.co/SimoRancati). |