- Mistral 7B
  Paper • 2310.06825 • Published • 46
- BloombergGPT: A Large Language Model for Finance
  Paper • 2303.17564 • Published • 22
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 17
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14

Collections including paper arxiv:1706.03762

- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 40
- Efficient Estimation of Word Representations in Vector Space
  Paper • 1301.3781 • Published • 6
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 17
- Attention Is All You Need
  Paper • 1706.03762 • Published • 53

- Attention Is All You Need
  Paper • 1706.03762 • Published • 53
- Training Generative Adversarial Networks with Limited Data
  Paper • 2006.06676 • Published
- A survey of Generative AI Applications
  Paper • 2306.02781 • Published
- Stable Diffusion 2-1 • 11k
  🔥 Generate images from text descriptions

- Attention Is All You Need
  Paper • 1706.03762 • Published • 53
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 17
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 7
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14

- Attention Is All You Need
  Paper • 1706.03762 • Published • 53
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 35
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 53
- Lost in the Middle: How Language Models Use Long Contexts
  Paper • 2307.03172 • Published • 40

- Attention Is All You Need
  Paper • 1706.03762 • Published • 53
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 13
- Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
  Paper • 2201.11903 • Published • 11
- Orca 2: Teaching Small Language Models How to Reason
  Paper • 2311.11045 • Published • 73

- Attention Is All You Need
  Paper • 1706.03762 • Published • 53
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 17
- Universal Language Model Fine-tuning for Text Classification
  Paper • 1801.06146 • Published • 6
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 13

- The Impact of Depth and Width on Transformer Language Model Generalization
  Paper • 2310.19956 • Published • 10
- Retentive Network: A Successor to Transformer for Large Language Models
  Paper • 2307.08621 • Published • 170
- RWKV: Reinventing RNNs for the Transformer Era
  Paper • 2305.13048 • Published • 15
- Attention Is All You Need
  Paper • 1706.03762 • Published • 53