
---
language:
- en
license: cc-by-nc-2.0
size_categories:
- "10K-100K"
task_categories:
- text-classification
- multimodal
pretty_name: vision and language bias
configs:
- config_name: default
  data_files:
  - split: train
    path: data/dataset_cleaned.parquet
---
# NewsMediaBias-Plus Dataset

## Overview
NewsMediaBias-Plus is a multimodal dataset designed for the analysis of media bias and disinformation through the combination of textual and visual data from news articles. This dataset aims to foster research and development in detecting, categorizing, and understanding the nuances of biased reporting and the dissemination of information in media outlets.
## Dataset Description
The NewsMediaBias-Plus dataset comprises news articles paired with relevant images, complete with annotations that reflect perceived biases and the reliability of the content. It extends existing datasets by adding a multimodal dimension, offering new opportunities for comprehensive bias detection in news media.
## Additional Resources

- Dataset website
- Full Version and Images Download
## Contents

- `unique_id`: Unique identifier for each news item. Each `unique_id` is associated with the image (top image) for the same news article.
- `outlet`: Publisher of the news article.
- `headline`: Headline of the news article.
- `article_text`: Full text content of the news article.
- `img_description`: Description of the image paired with the article.
- `image`: File path of the image associated with the article.
- `url`: Original URL of the news article.
## Annotation Labels

- `nlp_label`: Annotation for the textual content, indicating:
  - `'Likely to Bias'`: Likely to be disinformation.
  - `'Likely to UnBias'`: Unlikely to be disinformation.
- `nlp_img_label`: Annotation for the combined text snippet (first paragraph of the news story) and image content, assessing:
  - `'Likely to Bias'`: Likely to be disinformation.
  - `'Likely to UnBias'`: Unlikely to be disinformation.
## Getting Started

### Prerequisites
- Python 3.6 or later
- Pandas
- Datasets (from Hugging Face)
- Hugging Face Hub
### Installation

```bash
pip install pandas datasets
```
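After installation, the dataset can be loaded from the Hub with `datasets.load_dataset("vector-institute/newsmediabias-plus")`. As an offline sketch of the documented schema, the snippet below builds a toy `pandas` frame with a few of the card's columns and filters it by `nlp_label`; the rows themselves are invented for illustration:

```python
import pandas as pd

# Toy rows mirroring some of the documented columns; all values are invented.
df = pd.DataFrame(
    {
        "unique_id": ["n001", "n002", "n003"],
        "outlet": ["Outlet A", "Outlet B", "Outlet A"],
        "headline": ["Headline 1", "Headline 2", "Headline 3"],
        "nlp_label": ["Likely to Bias", "Likely to UnBias", "Likely to Bias"],
    }
)

# Keep only articles annotated as likely disinformation.
biased = df[df["nlp_label"] == "Likely to Bias"]
print(len(biased))                  # -> 2
print(sorted(biased["unique_id"]))  # -> ['n001', 'n003']
```

The same boolean-indexing pattern works on the real data once it is loaded and converted with `to_pandas()`.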
## Contributions
Contributions to this dataset are welcome. You can contribute in several ways:
- Data Contribution: Add more data points to enhance the dataset's utility.
- Annotation Improvement: Help refine the annotations for better accuracy.
- Usage Examples: Contribute usage examples to help the community understand how to leverage this dataset effectively.
To contribute, please fork the repository and create a pull request with your proposed changes.
## License

This dataset is released under the CC BY-NC (Attribution-NonCommercial) 4.0 license, which permits non-commercial use only.
## Papers and Citations

1. **ViLBias: A Framework for Bias Detection Using Linguistic and Visual Cues**

Citation:

```bibtex
@article{raza2024vilbias,
  title={ViLBias: A Framework for Bias Detection Using Linguistic and Visual Cues},
  author={Raza, Shaina and Saleh, Caesar and Hasan, Emrul and Ogidi, Franklin and Powers, Maximus and Chatrath, Veronica and Lotif, Marcelo and Javadi, Roya and Zahid, Anam and Khazaie, Vahid Reza},
  journal={arXiv preprint arXiv:2412.17052},
  year={2024},
  url={https://arxiv.org/pdf/2412.17052}
}
```
**Dataset Usage**
- Uses approximately 40% of this dataset (all fields except the labels).
- The original dataset labels are not used for bias detection.
- Employs multiple LLM-based annotation strategies to generate new bias-related labels, focusing on linguistic and visual cues.
- Provides evaluations of large language models versus small language models.
2. **Perceived Confidence Scoring for Data Annotation with Zero-Shot LLMs**

Citation:

```bibtex
@article{salimian2025perceived,
  title={Perceived Confidence Scoring for Data Annotation with Zero-Shot LLMs},
  author={Salimian, Sina and Uddin, Gias and Jahan, Most Husne and Raza, Shaina},
  journal={arXiv preprint arXiv:2502.07186},
  year={2025},
  url={https://arxiv.org/abs/2502.07186}
}
```
**Dataset Usage**
- Selectively re-annotates a 10% subset of the data (starting from the original labels).
- Introduces confidence scores generated through multiple zero-shot LLM approaches.
- Focuses on how annotation reliability can be assessed via perceived model confidence.
**Key Contribution**
Examines the feasibility and effectiveness of using zero-shot LLMs to label data with minimal human intervention, proposing a confidence-based metric to gauge annotation quality.
## Contact
For any questions or support related to this dataset, please contact [email protected].
## Disclaimer and Guidance for Users

**Disclaimer:** The classifications of 'Likely' and 'Unlikely' disinformation are based on LLM annotations and assessments by content experts and are intended for informational purposes only. They should not be treated as definitive, nor used to label entities conclusively without further analysis.
**Guidance for Users:** This dataset is intended to encourage critical engagement with media content. Users are advised to treat these annotations as a starting point for deeper analysis and to cross-reference findings with reliable sources. Please approach the data with an understanding of its intended use as a tool for research and awareness, not as a conclusive judgment.
