Excited to share insights about LinkedIn's innovative approach to content search, recently detailed in a groundbreaking paper by their Mountain View team. This advancement represents a significant shift from traditional keyword-based search to semantic understanding.
>> Technical Architecture
The new search engine employs a sophisticated two-layer architecture:
Retrieval Layer
- Token Based Retriever (TBR) for exact keyword matching
- Embedding Based Retriever (EBR) using a two-tower model with multilingual-e5 embeddings (a minimal sketch follows after this list)
- Pre-computed post embeddings stored in a dedicated embedding store for efficient retrieval
Multi-Stage Ranking
- L1 Stage: Initial filtering using a lightweight model
- L2 Stage: Advanced ranking with complex features, including:
  - Query-post semantic matching
  - Author reputation analysis
  - User engagement metrics
  - Content freshness evaluation
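To make the embedding-based side concrete, here is a minimal sketch using the public intfloat/multilingual-e5-base checkpoint with sentence-transformers. This is my own illustration, not LinkedIn's code: the "query:"/"passage:" prefixes follow the e5 model card, and the toy posts and cosine scoring stand in for the production two-tower retriever and embedding store.

```python
# Minimal sketch of embedding-based retrieval with multilingual-e5.
# Illustrative only; this is not LinkedIn's implementation.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("intfloat/multilingual-e5-base")

# e5 models expect "query: " / "passage: " prefixes per the model card.
posts = [
    "passage: Negotiating salary: prepare market data before the meeting.",
    "passage: Our team shipped a new search feature this quarter.",
]
post_embeddings = model.encode(posts, normalize_embeddings=True)  # pre-computed offline

query = "query: how to ask for a raise?"
query_embedding = model.encode(query, normalize_embeddings=True)

# Cosine similarity (dot product on normalized vectors) ranks candidate posts.
scores = util.cos_sim(query_embedding, post_embeddings)[0]
best = scores.argmax().item()
print(posts[best], scores[best].item())
```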
>> Performance Improvements
The system has achieved remarkable results:
- 10%+ improvement in both on-topic rate and long-dwell metrics
- Enhanced ability to handle complex natural language queries
- Significant boost in sitewide engagement
This advancement enables LinkedIn to better serve complex queries like "how to ask for a raise?" while maintaining high performance at scale. The system balances exact keyword matching with semantic understanding, serving both navigational and conceptual searches well.
What impresses me most is how the team solved the scale challenge - processing billions of posts efficiently using pre-computed embeddings and approximate nearest neighbor search. This is enterprise-scale AI at its finest.
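For intuition on the pre-computed-embeddings-plus-ANN part, here is a toy FAISS setup. Again, this is my own sketch rather than the paper's infrastructure; the index type, corpus size, and nprobe value are arbitrary choices.

```python
# Toy ANN index over pre-computed post embeddings with FAISS (illustrative only).
import numpy as np
import faiss

dim = 768                              # embedding dimension (e5-base size)
post_embeddings = np.random.rand(100_000, dim).astype("float32")
faiss.normalize_L2(post_embeddings)    # so inner product == cosine similarity

# IVF index: cluster the corpus, then search only a few clusters per query.
quantizer = faiss.IndexFlatIP(dim)
index = faiss.IndexIVFFlat(quantizer, dim, 1024, faiss.METRIC_INNER_PRODUCT)
index.train(post_embeddings)
index.add(post_embeddings)
index.nprobe = 16                      # clusters probed per query (speed/recall trade-off)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 10)  # top-10 approximate neighbors
print(ids[0], scores[0])
```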
New Model Release: zamal/Molmo-7B-GPTQ-4bit
Hello lovely community,
The zamal/Molmo-7B-GPTQ-4bit model is now available for all! It has been heavily quantized, reducing its size by almost six times. It now occupies significantly less space and VRAM, making it a great fit for deployment on resource-constrained devices without compromising performance.
Now we get:
- Efficient Performance: Maintains high accuracy while being highly quantized.
- Reduced Size: The model size is reduced by nearly six times, optimizing storage and memory usage.
- Versatile Application: Ideal for integrating a powerful visual language model into various projects, particularly multi-RAG chains.

Check it out!
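If you want to try it, a loading sketch along these lines should work. It follows the standard Molmo usage pattern (the custom processor.process and generate_from_batch helpers loaded via trust_remote_code); the image path and prompt are placeholders, the GPTQ checkpoint may need an extra quantization backend installed (e.g. auto-gptq), and the model card remains the authoritative snippet.

```python
# Hedged sketch following the upstream Molmo usage pattern; check the
# zamal/Molmo-7B-GPTQ-4bit model card for the authoritative loading code.
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig
from PIL import Image

model_id = "zamal/Molmo-7B-GPTQ-4bit"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True, device_map="auto")
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, device_map="auto")

# Describe a single image using Molmo's custom processor/generation helpers.
inputs = processor.process(images=[Image.open("photo.jpg")], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}
output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
print(processor.tokenizer.decode(output[0, inputs["input_ids"].size(1):], skip_special_tokens=True))
```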
I used my Poco X6 camera phone and self-taken solo images
My dataset is far from ready, so it contains many repeated and nearly identical images, but this was rather experimental
Hopefully I will continue taking more shots, improve the dataset, and reduce its size in the future
I trained the CLIP-L and T5-XXL text encoders as well
Since there was a lot of pushback from the community claiming my workflow won't work with expressions, I had to take a break from research and use whatever I have
I used my own researched training workflow with Kohya GUI, plus my self-developed SUPIR batch upscaling app with face upscaling and automatic LLaVA caption improvement
Download the images to see them at full size; the last provided grid is 50% downscaled
Workflow
Gather a dataset with the expressions and perspectives you want to see after training; this is crucial, since whatever you include is what the model learns to generate well
Follow one of the LoRA training tutorials / guides
After training your LoRA, use your favorite UI to generate images
I prefer SwarmUI; the prompts used here (you can add specific expressions to the prompts) include face inpainting:
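For anyone who prefers a scripted alternative to SwarmUI, a diffusers sketch along these lines should generate with the trained LoRA. I am assuming a FLUX-based LoRA here (CLIP-L and T5-XXL are the FLUX text encoders); the base model, LoRA path, and trigger word are placeholders, and face inpainting is left to the UI.

```python
# Hedged sketch: generating with a Kohya-trained LoRA via diffusers instead of SwarmUI.
# Base model, LoRA filename, and the "ohwx man" trigger token are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("path/to/lora_dir", weight_name="my_lora.safetensors")

prompt = "portrait photo of ohwx man, smiling expression, studio lighting"
image = pipe(prompt, num_inference_steps=30, guidance_scale=3.5).images[0]
image.save("output.png")
```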
NuMind has just released 3 new state-of-the-art GLiNER models for Named Entity Recognition/Information Extraction. These GLiNER models allow you to specify any label that you want, and it'll find spans in the text corresponding to your label. It's been shown to work quite well on unusual domains, e.g. celestial entities in my picture.
There are 3 models released:
- numind/NuNER_Zero: The primary model, SOTA & can detect really long entities.
- numind/NuNER_Zero-span: Slightly better performance than NuNER Zero, but can't detect entities longer than 12 tokens.
- numind/NuNER_Zero-4k: Slightly worse than NuNER Zero, but has a context length of 4k tokens.
Some more details about these models in general:
- They are *really* small, orders of magnitude smaller than LLMs, which don't reach this level of performance.
- Because they're small, they're fast: <1s per sentence on free GPUs.
- They have an MIT license: free commercial usage.
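For anyone who wants to try them, here is a minimal usage sketch with the gliner Python package. The example text, labels, and threshold are my own; note that the NuNER Zero card recommends lower-cased labels.

```python
# Minimal GLiNER usage sketch (my own example, not from the NuMind release notes).
from gliner import GLiNER

model = GLiNER.from_pretrained("numind/NuNER_Zero")

labels = ["person", "award", "organization"]  # any labels you want, lower-cased
text = "Marie Curie received the Nobel Prize in Physics while working in Paris."

# Returns spans with their predicted label and confidence score.
entities = model.predict_entities(text, labels, threshold=0.3)
for e in entities:
    print(e["text"], "->", e["label"], round(e["score"], 2))
```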