- Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models (arXiv:2411.14257)
- Distinguishing Ignorance from Error in LLM Hallucinations (arXiv:2410.22071)
- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations (arXiv:2410.18860)
- MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (arXiv:2410.11779)
Collections including paper arXiv:2402.03744:
- Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation (arXiv:2208.05309)
- LLM-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models (arXiv:2305.13711)
- Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation (arXiv:2302.09664)
- BARTScore: Evaluating Generated Text as Text Generation (arXiv:2106.11520)
- Partially Rewriting a Transformer in Natural Language (arXiv:2501.18838)
- AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse Autoencoders (arXiv:2501.17148)
- Sparse Autoencoders Trained on the Same Data Learn Different Features (arXiv:2501.16615)
- Open Problems in Mechanistic Interpretability (arXiv:2501.16496)
- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment (arXiv:2303.16634)
- miracl/miracl-corpus (dataset)
- Judging LLM-as-a-judge with MT-Bench and Chatbot Arena (arXiv:2306.05685)
- How is ChatGPT's behavior changing over time? (arXiv:2307.09009)