The Ultra-Scale Playbook 🌌 The ultimate guide to training LLMs on large GPU clusters
Open NotebookLM 🎙 Personalised Podcasts For All - Available in 13 Languages
Writing in the Margins: Better Inference Pattern for Long Context Retrieval Paper • 2408.14906 • Published Aug 27, 2024 • 141
GoldFinch: High Performance RWKV/Transformer Hybrid with Linear Pre-Fill and Extreme KV-Cache Compression Paper • 2407.12077 • Published Jul 16, 2024 • 55