Collections
Collections including paper arXiv:2201.11903 (Chain-of-Thought Prompting Elicits Reasoning in Large Language Models)
Collection:
- Lost in the Middle: How Language Models Use Long Contexts (Paper • 2307.03172 • Published • 40)
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper • 1810.04805 • Published • 17)
- Attention Is All You Need (Paper • 1706.03762 • Published • 53)
- Llama 2: Open Foundation and Fine-Tuned Chat Models (Paper • 2307.09288 • Published • 244)

Collection:
- Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models (Paper • 2402.14848 • Published • 18)
- Teaching Large Language Models to Reason with Reinforcement Learning (Paper • 2403.04642 • Published • 46)
- How Far Are We from Intelligent Visual Deductive Reasoning? (Paper • 2403.04732 • Published • 21)
- Learning to Reason and Memorize with Self-Notes (Paper • 2305.00833 • Published • 5)

Collection:
- Large Language Model Alignment: A Survey (Paper • 2309.15025 • Published • 2)
- Aligning Large Language Models with Human: A Survey (Paper • 2307.12966 • Published • 1)
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Paper • 2305.18290 • Published • 53)
- SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF (Paper • 2310.05344 • Published • 1)

Collection:
- Lost in the Middle: How Language Models Use Long Contexts (Paper • 2307.03172 • Published • 40)
- Efficient Estimation of Word Representations in Vector Space (Paper • 1301.3781 • Published • 6)
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper • 1810.04805 • Published • 17)
- Attention Is All You Need (Paper • 1706.03762 • Published • 53)

Collection:
- TinyLlama: An Open-Source Small Language Model (Paper • 2401.02385 • Published • 92)
- MM-LLMs: Recent Advances in MultiModal Large Language Models (Paper • 2401.13601 • Published • 47)
- SliceGPT: Compress Large Language Models by Deleting Rows and Columns (Paper • 2401.15024 • Published • 72)
- Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling (Paper • 2401.16380 • Published • 49)

Collection:
- Attention Is All You Need (Paper • 1706.03762 • Published • 53)
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper • 1810.04805 • Published • 17)
- RoBERTa: A Robustly Optimized BERT Pretraining Approach (Paper • 1907.11692 • Published • 7)
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter (Paper • 1910.01108 • Published • 14)

Collection:
- Attention Is All You Need (Paper • 1706.03762 • Published • 53)
- Language Models are Few-Shot Learners (Paper • 2005.14165 • Published • 13)
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Paper • 2201.11903 • Published • 11)
- Orca 2: Teaching Small Language Models How to Reason (Paper • 2311.11045 • Published • 73)

Collection:
- RA-DIT: Retrieval-Augmented Dual Instruction Tuning (Paper • 2310.01352 • Published • 7)
- Self-Consistency Improves Chain of Thought Reasoning in Language Models (Paper • 2203.11171 • Published • 4)
- MemGPT: Towards LLMs as Operating Systems (Paper • 2310.08560 • Published • 6)
- Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models (Paper • 2310.06117 • Published • 2)

Collection:
- Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure (Paper • 2311.07590 • Published • 17)
- A Survey on Language Models for Code (Paper • 2311.07989 • Published • 22)
- Llamas Know What GPTs Don't Show: Surrogate Models for Confidence Estimation (Paper • 2311.08877 • Published • 7)
- A Challenger to GPT-4V? Early Explorations of Gemini in Visual Expertise (Paper • 2312.12436 • Published • 14)