Each row of the dataset has 19 fields. The listing below gives, for each field, its type and the summary statistic reported for it: the number of distinct values for categorical and boolean fields ("stringclasses", "bool"), and the minimum and maximum observed string length or numeric value for free-text and numeric fields ("stringlengths", "float64"). The data rows reproduced below are flattened in this same field order, one field per line.

parent_paper_title: stringclasses, 63 values
parent_paper_arxiv_id: stringclasses, 63 values
citation_shorthand: stringlengths, 2–56
raw_citation_text: stringlengths, 9–63
cited_paper_title: stringlengths, 5–161
cited_paper_arxiv_link: stringlengths, 32–37
cited_paper_abstract: stringlengths, 406–1.92k
has_metadata: bool, 1 class
is_arxiv_paper: bool, 2 classes
bib_paper_authors: stringlengths, 2–2.44k
bib_paper_year: float64, 1.97k–2.03k
bib_paper_month: stringclasses, 16 values
bib_paper_url: stringlengths, 20–116
bib_paper_doi: stringclasses, 269 values
bib_paper_journal: stringlengths, 3–148
original_title: stringlengths, 5–161
search_res_title: stringlengths, 4–122
search_res_url: stringlengths, 22–267
search_res_content: stringlengths, 19–1.92k
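As a minimal sketch of how rows with this schema could be checked programmatically, the snippet below assumes the rows have been exported to a local JSON Lines file; the file name citations.jsonl and the check_row helper are illustrative assumptions, not part of this dataset.

```python
import json

# Coarse per-field types taken from the schema listed above. The tuple lists
# the Python types accepted for that field; missing values appear as None.
SCHEMA = {
    "parent_paper_title": (str,),
    "parent_paper_arxiv_id": (str,),
    "citation_shorthand": (str,),
    "raw_citation_text": (str,),
    "cited_paper_title": (str,),
    "cited_paper_arxiv_link": (str,),
    "cited_paper_abstract": (str,),
    "has_metadata": (bool,),
    "is_arxiv_paper": (bool,),
    "bib_paper_authors": (str,),
    "bib_paper_year": (int, float),   # stored as float64 (publication year)
    "bib_paper_month": (str,),
    "bib_paper_url": (str,),
    "bib_paper_doi": (str,),
    "bib_paper_journal": (str,),
    "original_title": (str,),
    "search_res_title": (str,),
    "search_res_url": (str,),
    "search_res_content": (str,),
}


def check_row(row: dict) -> list[str]:
    """Return a list of schema problems for one row; an empty list means it conforms."""
    problems = []
    for field, accepted in SCHEMA.items():
        if field not in row:
            problems.append(f"missing field: {field}")
        elif row[field] is not None and not isinstance(row[field], accepted):
            expected = "/".join(t.__name__ for t in accepted)
            problems.append(f"{field}: expected {expected}, got {type(row[field]).__name__}")
    return problems


if __name__ == "__main__":
    # "citations.jsonl" is a hypothetical local export of the rows shown below,
    # one JSON object per line with the 19 fields above as keys.
    with open("citations.jsonl", encoding="utf-8") as f:
        rows = [json.loads(line) for line in f if line.strip()]
    for i, row in enumerate(rows):
        for problem in check_row(row):
            print(f"row {i}: {problem}")
    print(f"checked {len(rows)} rows")
```

Each row that deviates from the field list above (a missing key or an unexpected type) is reported with its index; conforming rows produce no output.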
Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization
2505.19307v1
DBLP:journals/corr/abs-2202-05144
\cite{DBLP:journals/corr/abs-2202-05144}
InPars: Data Augmentation for Information Retrieval using Large Language Models
http://arxiv.org/abs/2202.05144v1
The information retrieval community has recently witnessed a revolution due to large pretrained transformer models. Another key ingredient for this revolution was the MS MARCO dataset, whose scale and diversity has enabled zero-shot transfer learning to various tasks. However, not all IR tasks and domains can benefit from one single dataset equally. Extensive research in various NLP tasks has shown that using domain-specific training data, as opposed to a general-purpose one, improves the performance of neural models. In this work, we harness the few-shot capabilities of large pretrained language models as synthetic data generators for IR tasks. We show that models finetuned solely on our unsupervised dataset outperform strong baselines such as BM25 as well as recently proposed self-supervised dense retrieval methods. Furthermore, retrievers finetuned on both supervised and our synthetic data achieve better zero-shot transfer than models finetuned only on supervised data. Code, models, and data are available at https://github.com/zetaalphavector/inpars .
true
true
Luiz Henrique Bonifacio and Hugo Abonizio and Marzieh Fadaee and Rodrigo Frassetto Nogueira
2022
null
null
null
ArXiv
InPars: Data Augmentation for Information Retrieval using Large Language Models
InPars: Data Augmentation for Information Retrieval using Large ...
https://arxiv.org/abs/2202.05144
In this work, we harness the few-shot capabilities of large pretrained language models as synthetic data generators for IR tasks.
Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization
2505.19307v1
DBLP:journals/corr/abs-2301-01820
\cite{DBLP:journals/corr/abs-2301-01820}
{InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval}
null
null
true
false
Vitor Jeronymo and Luiz Henrique Bonifacio and Hugo Abonizio and Marzieh Fadaee and Roberto de Alencar Lotufo and Jakub Zavrel and Rodrigo Frassetto Nogueira
2023
null
null
null
ArXiv
{InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval}
(PDF) InPars-v2: Large Language Models as Efficient Dataset ...
https://www.researchgate.net/publication/366902520_InPars-v2_Large_Language_Models_as_Efficient_Dataset_Generators_for_Information_Retrieval
(PDF) InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval InPars-v2: Large Language Models as Efficient Dataset Generators for Information Retrieval Recently, InPars introduced a method to efficiently use large language models (LLMs) in information retrieval tasks: via few-shot examples, an LLM is induced to generate relevant queries for documents. In this work we introduce InPars-v2, a dataset generator that uses open-source LLMs and existing powerful rerankers to select synthetic query-document pairs for training. A simple BM25 retrieval pipeline followed by a monoT5 reranker finetuned on InPars-v2 data achieves new state-of-the-art results on the BEIR benchmark. We also made available all the synthetic data generated in this work for the 18 different datasets in the BEIR benchmark which took more than 2,000 GPU hours to be generated as well as the reranker models finetuned on the synthetic data.
Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization
2505.19307v1
DBLP:conf/iclr/DaiZMLNLBGHC23
\cite{DBLP:conf/iclr/DaiZMLNLBGHC23}
Promptagator: Few-shot Dense Retrieval From 8 Examples
http://arxiv.org/abs/2209.11755v1
Much recent research on information retrieval has focused on how to transfer from one task (typically with abundant supervised data) to various other tasks where supervision is limited, with the implicit assumption that it is possible to generalize from one task to all the rest. However, this overlooks the fact that there are many diverse and unique retrieval tasks, each targeting different search intents, queries, and search domains. In this paper, we suggest to work on Few-shot Dense Retrieval, a setting where each task comes with a short description and a few examples. To amplify the power of a few examples, we propose Prompt-base Query Generation for Retriever (Promptagator), which leverages large language models (LLM) as a few-shot query generator, and creates task-specific retrievers based on the generated data. Powered by LLM's generalization ability, Promptagator makes it possible to create task-specific end-to-end retrievers solely based on a few examples {without} using Natural Questions or MS MARCO to train %question generators or dual encoders. Surprisingly, LLM prompting with no more than 8 examples allows dual encoders to outperform heavily engineered models trained on MS MARCO like ColBERT v2 by more than 1.2 nDCG on average on 11 retrieval sets. Further training standard-size re-rankers using the same generated data yields another 5.0 point nDCG improvement. Our studies determine that query generation can be far more effective than previously observed, especially when a small amount of task-specific knowledge is given.
true
true
Zhuyun Dai and Vincent Y. Zhao and Ji Ma and Yi Luan and Jianmo Ni and Jing Lu and Anton Bakalov and Kelvin Guu and Keith B. Hall and Ming{-}Wei Chang
2023
null
null
null
null
Promptagator: Few-shot Dense Retrieval From 8 Examples
Promptagator: Few-shot Dense Retrieval From 8 Examples
https://openreview.net/forum?id=gmL46YMpu2J
In this paper, we suggest to work on Few-shot Dense Retrieval, a setting where each task comes with a short description and a few examples.
Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization
2505.19307v1
DBLP:journals/corr/abs-2403-20327
\cite{DBLP:journals/corr/abs-2403-20327}
Gecko: Versatile Text Embeddings Distilled from Large Language Models
http://arxiv.org/abs/2403.20327v1
We present Gecko, a compact and versatile text embedding model. Gecko achieves strong retrieval performance by leveraging a key idea: distilling knowledge from large language models (LLMs) into a retriever. Our two-step distillation process begins with generating diverse, synthetic paired data using an LLM. Next, we further refine the data quality by retrieving a set of candidate passages for each query, and relabeling the positive and hard negative passages using the same LLM. The effectiveness of our approach is demonstrated by the compactness of the Gecko. On the Massive Text Embedding Benchmark (MTEB), Gecko with 256 embedding dimensions outperforms all existing entries with 768 embedding size. Gecko with 768 embedding dimensions achieves an average score of 66.31, competing with 7x larger models and 5x higher dimensional embeddings.
true
true
Jinhyuk Lee and Zhuyun Dai and Xiaoqi Ren and Blair Chen and Daniel Cer and Jeremy R. Cole and Kai Hui and Michael Boratko and Rajvi Kapadia and Wen Ding and Yi Luan and Sai Meher Karthik Duddu and Gustavo Hern{\'{a}}ndez {\'{A}}brego and Weiqiang Shi and Nithi Gupta and Aditya Kusupati and Prateek Jain and Siddhartha Reddy Jonnalagadda and Ming{-}Wei Chang and Iftekhar Naim
2024
null
null
null
ArXiv
Gecko: Versatile Text Embeddings Distilled from Large Language Models
Gecko: Versatile Text Embeddings Distilled from Large Language Models
http://arxiv.org/pdf/2403.20327v1
We present Gecko, a compact and versatile text embedding model. Gecko achieves strong retrieval performance by leveraging a key idea: distilling knowledge from large language models (LLMs) into a retriever. Our two-step distillation process begins with generating diverse, synthetic paired data using an LLM. Next, we further refine the data quality by retrieving a set of candidate passages for each query, and relabeling the positive and hard negative passages using the same LLM. The effectiveness of our approach is demonstrated by the compactness of the Gecko. On the Massive Text Embedding Benchmark (MTEB), Gecko with 256 embedding dimensions outperforms all existing entries with 768 embedding size. Gecko with 768 embedding dimensions achieves an average score of 66.31, competing with 7x larger models and 5x higher dimensional embeddings.
Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization
2505.19307v1
DBLP:journals/corr/abs-2411-00722
\cite{DBLP:journals/corr/abs-2411-00722}
Token-level Proximal Policy Optimization for Query Generation
http://arxiv.org/abs/2411.00722v1
Query generation is a critical task for web search engines (e.g. Google, Bing) and recommendation systems. Recently, state-of-the-art query generation methods leverage Large Language Models (LLMs) for their strong capabilities in context understanding and text generation. However, they still face challenges in generating high-quality queries in terms of inferring user intent based on their web search interaction history. In this paper, we propose Token-level Proximal Policy Optimization (TPPO), a noval approach designed to empower LLMs perform better in query generation through fine-tuning. TPPO is based on the Reinforcement Learning from AI Feedback (RLAIF) paradigm, consisting of a token-level reward model and a token-level proximal policy optimization module to address the sparse reward challenge in traditional RLAIF frameworks. To evaluate the effectiveness and robustness of TPPO, we conducted experiments on both open-source dataset and an industrial dataset that was collected from a globally-used search engine. The experimental results demonstrate that TPPO significantly improves the performance of query generation for LLMs and outperforms its existing competitors.
true
true
Yichen Ouyang and Lu Wang and Fangkai Yang and Pu Zhao and Chenghua Huang and Jianfeng Liu and Bochen Pang and Yaming Yang and Yuefeng Zhan and Hao Sun and Qingwei Lin and Saravan Rajmohan and Weiwei Deng and Dongmei Zhang and Feng Sun and Qi Zhang
2024
null
null
null
ArXiv
Token-level Proximal Policy Optimization for Query Generation
Token-level Proximal Policy Optimization for Query Generation
https://www.researchgate.net/publication/385510091_Token-level_Proximal_Policy_Optimization_for_Query_Generation
In this paper, we propose Token-level Proximal Policy Optimization (TPPO), a noval approach designed to empower LLMs perform better in query generation through
Aligning Web Query Generation with Ranking Objectives via Direct Preference Optimization
2505.19307v1
DBLP:journals/corr/SchulmanWDRK17
\cite{DBLP:journals/corr/SchulmanWDRK17}
Proximal Policy Optimization Algorithms
http://arxiv.org/abs/1707.06347v2
We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
true
true
John Schulman and Filip Wolski and Prafulla Dhariwal and Alec Radford and Oleg Klimov
2017
null
null
null
ArXiv
Proximal Policy Optimization Algorithms
Proximal Policy Optimization Algorithms
http://arxiv.org/pdf/1707.06347v2
We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
DEKR
\cite{DEKR}
{DEKR:} Description Enhanced Knowledge Graph for Machine Learning Method Recommendation
null
null
true
false
Xianshuai Cao and Yuliang Shi and Han Yu and Jihu Wang and Xinjun Wang and Zhongmin Yan and Zhiyong Chen
2021
null
https://doi.org/10.1145/3404835.3462900
10.1145/3404835.3462900
null
{DEKR:} Description Enhanced Knowledge Graph for Machine Learning Method Recommendation
Description Enhanced Knowledge Graph for Machine Learning ...
https://www.researchgate.net/publication/353188658_DEKR_Description_Enhanced_Knowledge_Graph_for_Machine_Learning_Method_Recommendation
To further improve the performance of machine learning method recommendation, cross-modal knowledge graph contrastive learning (Cao et al., 2022) maximized the
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
tse23
\cite{tse23}
Task-Oriented {ML/DL} Library Recommendation Based on a Knowledge Graph
null
null
true
false
Mingwei Liu and Chengyuan Zhao and Xin Peng and Simin Yu and Haofen Wang and Chaofeng Sha
2023
null
https://doi.org/10.1109/TSE.2023.3285280
10.1109/TSE.2023.3285280
{IEEE} Trans. Software Eng.
Task-Oriented {ML/DL} Library Recommendation Based on a Knowledge Graph
Task-Oriented ML/DL Library Recommendation Based on ...
https://www.researchgate.net/publication/371549606_Task-Oriented_MLDL_Library_Recommendation_based_on_a_Knowledge_Graph
AI applications often use ML/DL (Machine Learning/Deep Learning) models to implement specific AI tasks. As application developers usually are not AI experts, they often choose to integrate existing implementations of ML/DL models as libraries for their AI tasks. It constructs a knowledge graph that captures AI tasks, ML/DL models, model implementations, repositories, and their relationships by extracting knowledge from different sources such as ML/DL resource websites, papers, ML/DL frameworks, and repositories. Based on the knowledge graph, MLTaskKG recommends ML/DL libraries for developers by matching their requirements on tasks, model characteristics, and implementation information. Abstract—AI applications often use ML/DL(Machine Learning/Deep Learning)models to implement specific AI tasks.As application a knowledge graph that captures AI tasks,ML/DL models,model implementations,repositories,and their relationships b y extracting
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
OAGBench
\cite{OAGBench}
OAG-Bench: {A} Human-Curated Benchmark for Academic Graph Mining
null
null
true
false
Fanjin Zhang and Shijie Shi and Yifan Zhu and Bo Chen and Yukuo Cen and Jifan Yu and Yelin Chen and Lulu Wang and Qingfei Zhao and Yuqing Cheng and Tianyi Han and Yuwei An and Dan Zhang and Weng Lam Tam and Kun Cao and Yunhe Pang and Xinyu Guan and Huihui Yuan and Jian Song and Xiaoyan Li and Yuxiao Dong and Jie Tang
2024
null
https://doi.org/10.1145/3637528.3672354
10.1145/3637528.3672354
null
OAG-Bench: {A} Human-Curated Benchmark for Academic Graph Mining
[PDF] A Human-Curated Benchmark for Academic Graph Mining - arXiv
https://arxiv.org/pdf/2402.15810
OAG-Bench is a comprehensive, human-curated benchmark for academic graph mining, based on the Open Academic Graph, covering 10 tasks, 20 datasets, and 70+
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
paper2repo
\cite{paper2repo}
paper2repo: GitHub Repository Recommendation for Academic Papers
http://arxiv.org/abs/2004.06059v1
GitHub has become a popular social application platform, where a large number of users post their open source projects. In particular, an increasing number of researchers release repositories of source code related to their research papers in order to attract more people to follow their work. Motivated by this trend, we describe a novel item-item cross-platform recommender system, $\textit{paper2repo}$, that recommends relevant repositories on GitHub that match a given paper in an academic search system such as Microsoft Academic. The key challenge is to identify the similarity between an input paper and its related repositories across the two platforms, $\textit{without the benefit of human labeling}$. Towards that end, paper2repo integrates text encoding and constrained graph convolutional networks (GCN) to automatically learn and map the embeddings of papers and repositories into the same space, where proximity offers the basis for recommendation. To make our method more practical in real life systems, labels used for model training are computed automatically from features of user actions on GitHub. In machine learning, such automatic labeling is often called {\em distant supervision\/}. To the authors' knowledge, this is the first distant-supervised cross-platform (paper to repository) matching system. We evaluate the performance of paper2repo on real-world data sets collected from GitHub and Microsoft Academic. Results demonstrate that it outperforms other state of the art recommendation methods.
true
true
Huajie Shao and Dachun Sun and Jiahao Wu and Zecheng Zhang and Aston Zhang and Shuochao Yao and Shengzhong Liu and Tianshi Wang and Chao Zhang and Tarek F. Abdelzaher
2020
null
https://doi.org/10.1145/3366423.3380145
10.1145/3366423.3380145
null
paper2repo: GitHub Repository Recommendation for Academic Papers
paper2repo: GitHub Repository Recommendation for Academic Papers
http://arxiv.org/pdf/2004.06059v1
GitHub has become a popular social application platform, where a large number of users post their open source projects. In particular, an increasing number of researchers release repositories of source code related to their research papers in order to attract more people to follow their work. Motivated by this trend, we describe a novel item-item cross-platform recommender system, $\textit{paper2repo}$, that recommends relevant repositories on GitHub that match a given paper in an academic search system such as Microsoft Academic. The key challenge is to identify the similarity between an input paper and its related repositories across the two platforms, $\textit{without the benefit of human labeling}$. Towards that end, paper2repo integrates text encoding and constrained graph convolutional networks (GCN) to automatically learn and map the embeddings of papers and repositories into the same space, where proximity offers the basis for recommendation. To make our method more practical in real life systems, labels used for model training are computed automatically from features of user actions on GitHub. In machine learning, such automatic labeling is often called {\em distant supervision\/}. To the authors' knowledge, this is the first distant-supervised cross-platform (paper to repository) matching system. We evaluate the performance of paper2repo on real-world data sets collected from GitHub and Microsoft Academic. Results demonstrate that it outperforms other state of the art recommendation methods.
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
RepoRecommendation
\cite{RepoRecommendation}
Personalized Repository Recommendation Service for Developers with Multi-modal Features Learning
null
null
true
false
Yueshen Xu and Yuhong Jiang and Xinkui Zhao and Ying Li and Rui Li
2023
null
https://doi.org/10.1109/ICWS60048.2023.00064
10.1109/ICWS60048.2023.00064
null
Personalized Repository Recommendation Service for Developers with Multi-modal Features Learning
AIDC-AI/Awesome-Unified-Multimodal-Models
https://github.com/AIDC-AI/Awesome-Unified-Multimodal-Models
| ANOLE | ANOLE: An Open, Autoregressive, Native Large Multimodal Models for Interleaved Image-Text GenerationImage 11: GitHub Repo stars | arXiv | 2024/07/08 | Github | - | | MM-Interleaved | MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature SynchronizerImage 20: GitHub Repo stars | arXiv | 2024/01/18 | Github | - | | Nexus-Gen | Nexus-Gen: A Unified Model for Image Understanding, Generation, and EditingImage 27: GitHub Repo stars | arXiv | 2025/04/30 | Github | Demo | | VARGPT | VARGPT: Unified Understanding and Generation in a Visual Autoregressive Multimodal Large Language ModelImage 38: GitHub Repo stars | arXiv | 2025/01/21 | Github | - |
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
GRETA
\cite{GRETA}
{GRETA:} Graph-Based Tag Assignment for GitHub Repositories
null
null
true
false
Xuyang Cai and Jiangang Zhu and Beijun Shen and Yuting Chen
2016
null
https://doi.org/10.1109/COMPSAC.2016.124
10.1109/COMPSAC.2016.124
null
{GRETA:} Graph-Based Tag Assignment for GitHub Repositories
GRETA: Graph-Based Tag Assignment for GitHub Repositories
https://ieeexplore.ieee.org/iel7/7551592/7551973/07551994.pdf
GRETA is a novel, graph-based approach to tag assignment for repositories on GitHub, which allows tags to be assigned by some graph algorithms. GRETA is also a
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
EASE24
\cite{EASE24}
Automated categorization of pre-trained models for software engineering: A case study with a Hugging Face dataset
http://arxiv.org/abs/2405.13185v1
Software engineering (SE) activities have been revolutionized by the advent of pre-trained models (PTMs), defined as large machine learning (ML) models that can be fine-tuned to perform specific SE tasks. However, users with limited expertise may need help to select the appropriate model for their current task. To tackle the issue, the Hugging Face (HF) platform simplifies the use of PTMs by collecting, storing, and curating several models. Nevertheless, the platform currently lacks a comprehensive categorization of PTMs designed specifically for SE, i.e., the existing tags are more suited to generic ML categories. This paper introduces an approach to address this gap by enabling the automatic classification of PTMs for SE tasks. First, we utilize a public dump of HF to extract PTMs information, including model documentation and associated tags. Then, we employ a semi-automated method to identify SE tasks and their corresponding PTMs from existing literature. The approach involves creating an initial mapping between HF tags and specific SE tasks, using a similarity-based strategy to identify PTMs with relevant tags. The evaluation shows that model cards are informative enough to classify PTMs considering the pipeline tag. Moreover, we provide a mapping between SE tasks and stored PTMs by relying on model names.
true
true
Claudio Di Sipio and Riccardo Rubei and Juri Di Rocco and Davide Di Ruscio and Phuong T. Nguyen
2024
null
https://doi.org/10.1145/3661167.3661215
10.1145/3661167.3661215
null
Automated categorization of pre-trained models for software engineering: A case study with a Hugging Face dataset
Automated categorization of pre-trained models for software ... - arXiv
https://arxiv.org/abs/2405.13185
To tackle the issue, the Hugging Face (HF) platform simplifies the use of PTMs by collecting, storing, and curating several models. Nevertheless
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
ESEM24
\cite{ESEM24}
Automatic Categorization of GitHub Actions with Transformers and Few-shot Learning
http://arxiv.org/abs/2407.16946v1
In the GitHub ecosystem, workflows are used as an effective means to automate development tasks and to set up a Continuous Integration and Delivery (CI/CD pipeline). GitHub Actions (GHA) have been conceived to provide developers with a practical tool to create and maintain workflows, avoiding reinventing the wheel and cluttering the workflow with shell commands. Properly leveraging the power of GitHub Actions can facilitate the development processes, enhance collaboration, and significantly impact project outcomes. To expose actions to search engines, GitHub allows developers to assign them to one or more categories manually. These are used as an effective means to group actions sharing similar functionality. Nevertheless, while providing a practical way to execute workflows, many actions have unclear purposes, and sometimes they are not categorized. In this work, we bridge such a gap by conceptualizing Gavel, a practical solution to increasing the visibility of actions in GitHub. By leveraging the content of README.MD files for each action, we use Transformer--a deep learning algorithm--to assign suitable categories to the action. We conducted an empirical investigation and compared Gavel with a state-of-the-art baseline. The experimental results show that our proposed approach can assign categories to GitHub actions effectively, thus outperforming the state-of-the-art baseline.
true
true
Phuong T. Nguyen and Juri Di Rocco and Claudio Di Sipio and Mudita Shakya and Davide Di Ruscio and Massimiliano Di Penta
2024
null
https://doi.org/10.1145/3674805.3690752
10.1145/3674805.3690752
null
Automatic Categorization of GitHub Actions with Transformers and Few-shot Learning
Automatic Categorization of GitHub Actions with Transformers and ...
https://arxiv.org/html/2407.16946v1
a GitHub actions visibility elevator based on transformers and few-shot learning to make actions more visible and accessible to developers.
Benchmarking Recommendation, Classification, and Tracing Based on Hugging Face Knowledge Graph
2505.17507v1
issue-PR-link-prediction
\cite{issue-PR-link-prediction}
Improving Issue-PR Link Prediction via Knowledge-Aware Heterogeneous Graph Learning
null
null
true
false
Shuotong Bai and Huaxiao Liu and Enyan Dai and Lei Liu
2024
null
https://doi.org/10.1109/TSE.2024.3408448
10.1109/TSE.2024.3408448
{IEEE} Trans. Software Eng.
Improving Issue-PR Link Prediction via Knowledge-Aware Heterogeneous Graph Learning
Improving Issue-PR Link Prediction via Knowledge-Aware ...
https://www.researchgate.net/publication/381145630_Improving_Issue-PR_Link_Prediction_via_Knowledge-aware_Heterogeneous_Graph_Learning
This method combines vector similarity, clustering techniques, and a deep learning model to improve the recommendation process. Additionally, Bai et al. [11]
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
kharitonov2019federated
\cite{kharitonov2019federated}
Federated online learning to rank with evolution strategies
null
null
true
false
Kharitonov, Eugene
2019
null
null
null
null
Federated online learning to rank with evolution strategies
Federated Online Learning to Rank with Evolution Strategies
https://arvinzhuang.github.io/publication/ECIR2021FOLTR
Online Learning to Rank (OLTR) optimizes ranking models using implicit users' feedback, such as clicks, directly manipulating search engine results in
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
wang2021federated
\cite{wang2021federated}
Federated online learning to rank with evolution strategies: a reproducibility study
null
null
true
false
Wang, Shuyi and Zhuang, Shengyao and Zuccon, Guido
2021
null
null
null
null
Federated online learning to rank with evolution strategies: a reproducibility study
Federated Online Learning to Rank with Evolution Strategies
https://arvinzhuang.github.io/publication/ECIR2021FOLTR
Abstract. Online Learning to Rank (OLTR) optimizes ranking models using implicit users' feedback, such as clicks, directly manipulating search engine results in
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
wang2021effective
\cite{wang2021effective}
Effective and privacy-preserving federated online learning to rank
null
null
true
false
Wang, Shuyi and Liu, Bing and Zhuang, Shengyao and Zuccon, Guido
2021
null
null
null
null
Effective and privacy-preserving federated online learning to rank
Effective and Privacy-preserving Federated Online Learning to Rank
https://dl.acm.org/doi/10.1145/3471158.3472236
Empirical evaluation shows FPDGD significantly outperforms the only other federated OLTR method. In addition, FPDGD is more robust across different privacy
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
oosterhuis2018differentiable
\cite{oosterhuis2018differentiable}
Differentiable Unbiased Online Learning to Rank
http://arxiv.org/abs/1809.08415v1
Online Learning to Rank (OLTR) methods optimize rankers based on user interactions. State-of-the-art OLTR methods are built specifically for linear models. Their approaches do not extend well to non-linear models such as neural networks. We introduce an entirely novel approach to OLTR that constructs a weighted differentiable pairwise loss after each interaction: Pairwise Differentiable Gradient Descent (PDGD). PDGD breaks away from the traditional approach that relies on interleaving or multileaving and extensive sampling of models to estimate gradients. Instead, its gradient is based on inferring preferences between document pairs from user clicks and can optimize any differentiable model. We prove that the gradient of PDGD is unbiased w.r.t. user document pair preferences. Our experiments on the largest publicly available Learning to Rank (LTR) datasets show considerable and significant improvements under all levels of interaction noise. PDGD outperforms existing OLTR methods both in terms of learning speed as well as final convergence. Furthermore, unlike previous OLTR methods, PDGD also allows for non-linear models to be optimized effectively. Our results show that using a neural network leads to even better performance at convergence than a linear model. In summary, PDGD is an efficient and unbiased OLTR approach that provides a better user experience than previously possible.
true
true
Oosterhuis, Harrie and de Rijke, Maarten
2018
null
null
null
null
Differentiable Unbiased Online Learning to Rank
Differentiable Unbiased Online Learning to Rank
http://arxiv.org/pdf/1809.08415v1
Online Learning to Rank (OLTR) methods optimize rankers based on user interactions. State-of-the-art OLTR methods are built specifically for linear models. Their approaches do not extend well to non-linear models such as neural networks. We introduce an entirely novel approach to OLTR that constructs a weighted differentiable pairwise loss after each interaction: Pairwise Differentiable Gradient Descent (PDGD). PDGD breaks away from the traditional approach that relies on interleaving or multileaving and extensive sampling of models to estimate gradients. Instead, its gradient is based on inferring preferences between document pairs from user clicks and can optimize any differentiable model. We prove that the gradient of PDGD is unbiased w.r.t. user document pair preferences. Our experiments on the largest publicly available Learning to Rank (LTR) datasets show considerable and significant improvements under all levels of interaction noise. PDGD outperforms existing OLTR methods both in terms of learning speed as well as final convergence. Furthermore, unlike previous OLTR methods, PDGD also allows for non-linear models to be optimized effectively. Our results show that using a neural network leads to even better performance at convergence than a linear model. In summary, PDGD is an efficient and unbiased OLTR approach that provides a better user experience than previously possible.
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
wang2022non
\cite{wang2022non}
Is Non-IID Data a Threat in Federated Online Learning to Rank?
http://arxiv.org/abs/2204.09272v2
In this perspective paper we study the effect of non independent and identically distributed (non-IID) data on federated online learning to rank (FOLTR) and chart directions for future work in this new and largely unexplored research area of Information Retrieval. In the FOLTR process, clients participate in a federation to jointly create an effective ranker from the implicit click signal originating in each client, without the need to share data (documents, queries, clicks). A well-known factor that affects the performance of federated learning systems, and that poses serious challenges to these approaches, is that there may be some type of bias in the way data is distributed across clients. While FOLTR systems are on their own rights a type of federated learning system, the presence and effect of non-IID data in FOLTR has not been studied. To this aim, we first enumerate possible data distribution settings that may showcase data bias across clients and thus give rise to the non-IID problem. Then, we study the impact of each setting on the performance of the current state-of-the-art FOLTR approach, the Federated Pairwise Differentiable Gradient Descent (FPDGD), and we highlight which data distributions may pose a problem for FOLTR methods. We also explore how common approaches proposed in the federated learning literature address non-IID issues in FOLTR. This allows us to unveil new research gaps that, we argue, future research in FOLTR should consider. This is an important contribution to the current state of FOLTR field because, for FOLTR systems to be deployed, the factors affecting their performance, including the impact of non-IID data, need to be thoroughly understood.
true
true
Wang, Shuyi and Zuccon, Guido
2022
null
null
null
null
Is Non-IID Data a Threat in Federated Online Learning to Rank?
Is Non-IID Data a Threat in Federated Online Learning to Rank?
https://scispace.com/pdf/is-non-iid-data-a-threat-in-federated-online-learning-to-1hxia4ua.pdf
ABSTRACT. In this perspective paper we study the effect of non independent and identically distributed (non-IID) data on federated online learn- ing to rank
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
wang2023analysis
\cite{wang2023analysis}
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems
http://arxiv.org/abs/2307.01565v1
Federated online learning to rank (FOLTR) aims to preserve user privacy by not sharing their searchable data and search interactions, while guaranteeing high search effectiveness, especially in contexts where individual users have scarce training data and interactions. For this, FOLTR trains learning to rank models in an online manner -- i.e. by exploiting users' interactions with the search systems (queries, clicks), rather than labels -- and federatively -- i.e. by not aggregating interaction data in a central server for training purposes, but by training instances of a model on each user device on their own private data, and then sharing the model updates, not the data, across a set of users that have formed the federation. Existing FOLTR methods build upon advances in federated learning. While federated learning methods have been shown effective at training machine learning models in a distributed way without the need of data sharing, they can be susceptible to attacks that target either the system's security or its overall effectiveness. In this paper, we consider attacks on FOLTR systems that aim to compromise their search effectiveness. Within this scope, we experiment with and analyse data and model poisoning attack methods to showcase their impact on FOLTR search effectiveness. We also explore the effectiveness of defense methods designed to counteract attacks on FOLTR systems. We contribute an understanding of the effect of attack and defense methods for FOLTR systems, as well as identifying the key factors influencing their effectiveness.
true
true
Wang, Shuyi and Zuccon, Guido
2023
null
null
null
null
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems
An Analysis of Untargeted Poisoning Attack and Defense Methods ...
https://www.researchgate.net/publication/372136881_An_Analysis_of_Untargeted_Poisoning_Attack_and_Defense_Methods_for_Federated_Online_Learning_to_Rank_Systems
Within this scope, we experiment with and analyse data and model poisoning attack methods to showcase their impact on FOLTR search effectiveness. We also
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
jia2022learning
\cite{jia2022learning}
Learning Neural Ranking Models Online from Implicit User Feedback
http://arxiv.org/abs/2201.06658v1
Existing online learning to rank (OL2R) solutions are limited to linear models, which are incompetent to capture possible non-linear relations between queries and documents. In this work, to unleash the power of representation learning in OL2R, we propose to directly learn a neural ranking model from users' implicit feedback (e.g., clicks) collected on the fly. We focus on RankNet and LambdaRank, due to their great empirical success and wide adoption in offline settings, and control the notorious explore-exploit trade-off based on the convergence analysis of neural networks using neural tangent kernel. Specifically, in each round of result serving, exploration is only performed on document pairs where the predicted rank order between the two documents is uncertain; otherwise, the ranker's predicted order will be followed in result ranking. We prove that under standard assumptions our OL2R solution achieves a gap-dependent upper regret bound of $O(\log^2(T))$, in which the regret is defined on the total number of mis-ordered pairs over $T$ rounds. Comparisons against an extensive set of state-of-the-art OL2R baselines on two public learning to rank benchmark datasets demonstrate the effectiveness of the proposed solution.
true
true
Jia, Yiling and Wang, Hongning
2022
null
null
null
null
Learning Neural Ranking Models Online from Implicit User Feedback
Learning Neural Ranking Models Online from Implicit User Feedback
http://arxiv.org/pdf/2201.06658v1
Existing online learning to rank (OL2R) solutions are limited to linear models, which are incompetent to capture possible non-linear relations between queries and documents. In this work, to unleash the power of representation learning in OL2R, we propose to directly learn a neural ranking model from users' implicit feedback (e.g., clicks) collected on the fly. We focus on RankNet and LambdaRank, due to their great empirical success and wide adoption in offline settings, and control the notorious explore-exploit trade-off based on the convergence analysis of neural networks using neural tangent kernel. Specifically, in each round of result serving, exploration is only performed on document pairs where the predicted rank order between the two documents is uncertain; otherwise, the ranker's predicted order will be followed in result ranking. We prove that under standard assumptions our OL2R solution achieves a gap-dependent upper regret bound of $O(\log^2(T))$, in which the regret is defined on the total number of mis-ordered pairs over $T$ rounds. Comparisons against an extensive set of state-of-the-art OL2R baselines on two public learning to rank benchmark datasets demonstrate the effectiveness of the proposed solution.
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
wang2018efficient
\cite{wang2018efficient}
Efficient Exploration of Gradient Space for Online Learning to Rank
http://arxiv.org/abs/1805.07317v1
Online learning to rank (OL2R) optimizes the utility of returned search results based on implicit feedback gathered directly from users. To improve the estimates, OL2R algorithms examine one or more exploratory gradient directions and update the current ranker if a proposed one is preferred by users via an interleaved test. In this paper, we accelerate the online learning process by efficient exploration in the gradient space. Our algorithm, named as Null Space Gradient Descent, reduces the exploration space to only the \emph{null space} of recent poorly performing gradients. This prevents the algorithm from repeatedly exploring directions that have been discouraged by the most recent interactions with users. To improve sensitivity of the resulting interleaved test, we selectively construct candidate rankers to maximize the chance that they can be differentiated by candidate ranking documents in the current query; and we use historically difficult queries to identify the best ranker when tie occurs in comparing the rankers. Extensive experimental comparisons with the state-of-the-art OL2R algorithms on several public benchmarks confirmed the effectiveness of our proposal algorithm, especially in its fast learning convergence and promising ranking quality at an early stage.
true
true
Wang, Huazheng and Langley, Ramsey and Kim, Sonwoo and McCord-Snook, Eric and Wang, Hongning
2018
null
null
null
null
Efficient Exploration of Gradient Space for Online Learning to Rank
Efficient Exploration of Gradient Space for Online Learning to Rank
http://arxiv.org/pdf/1805.07317v1
Online learning to rank (OL2R) optimizes the utility of returned search results based on implicit feedback gathered directly from users. To improve the estimates, OL2R algorithms examine one or more exploratory gradient directions and update the current ranker if a proposed one is preferred by users via an interleaved test. In this paper, we accelerate the online learning process by efficient exploration in the gradient space. Our algorithm, named as Null Space Gradient Descent, reduces the exploration space to only the \emph{null space} of recent poorly performing gradients. This prevents the algorithm from repeatedly exploring directions that have been discouraged by the most recent interactions with users. To improve sensitivity of the resulting interleaved test, we selectively construct candidate rankers to maximize the chance that they can be differentiated by candidate ranking documents in the current query; and we use historically difficult queries to identify the best ranker when tie occurs in comparing the rankers. Extensive experimental comparisons with the state-of-the-art OL2R algorithms on several public benchmarks confirmed the effectiveness of our proposal algorithm, especially in its fast learning convergence and promising ranking quality at an early stage.
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
liu2021federaser
\cite{liu2021federaser}
Federaser: Enabling efficient client-level data removal from federated learning models
null
null
true
false
Liu, Gaoyang and Ma, Xiaoqiang and Yang, Yang and Wang, Chen and Liu, Jiangchuan
2021
null
null
null
null
Federaser: Enabling efficient client-level data removal from federated learning models
FedEraser: Enabling Efficient Client-Level Data Removal ...
https://www.semanticscholar.org/paper/FedEraser%3A-Enabling-Efficient-Client-Level-Data-Liu-Ma/eadeffdec9fac8fd7f9aea732ca410eb082b7dcf
FedEraser is presented, the first federated unlearning method-ology that can eliminate the influence of a federated client's data on the global FL model
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
wu2022federated
\cite{wu2022federated}
Federated Unlearning with Knowledge Distillation
http://arxiv.org/abs/2201.09441v1
Federated Learning (FL) is designed to protect the data privacy of each client during the training process by transmitting only models instead of the original data. However, the trained model may memorize certain information about the training data. With the recent legislation on right to be forgotten, it is crucially essential for the FL model to possess the ability to forget what it has learned from each client. We propose a novel federated unlearning method to eliminate a client's contribution by subtracting the accumulated historical updates from the model and leveraging the knowledge distillation method to restore the model's performance without using any data from the clients. This method does not have any restrictions on the type of neural networks and does not rely on clients' participation, so it is practical and efficient in the FL system. We further introduce backdoor attacks in the training process to help evaluate the unlearning effect. Experiments on three canonical datasets demonstrate the effectiveness and efficiency of our method.
true
true
Wu, Chen and Zhu, Sencun and Mitra, Prasenjit
2022
null
null
null
arXiv preprint arXiv:2201.09441
Federated Unlearning with Knowledge Distillation
Federated Unlearning with Knowledge Distillation
http://arxiv.org/pdf/2201.09441v1
Federated Learning (FL) is designed to protect the data privacy of each client during the training process by transmitting only models instead of the original data. However, the trained model may memorize certain information about the training data. With the recent legislation on right to be forgotten, it is crucially essential for the FL model to possess the ability to forget what it has learned from each client. We propose a novel federated unlearning method to eliminate a client's contribution by subtracting the accumulated historical updates from the model and leveraging the knowledge distillation method to restore the model's performance without using any data from the clients. This method does not have any restrictions on the type of neural networks and does not rely on clients' participation, so it is practical and efficient in the FL system. We further introduce backdoor attacks in the training process to help evaluate the unlearning effect. Experiments on three canonical datasets demonstrate the effectiveness and efficiency of our method.
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
liu2022right
\cite{liu2022right}
The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining
http://arxiv.org/abs/2203.07320v1
In Machine Learning, the emergence of \textit{the right to be forgotten} gave birth to a paradigm named \textit{machine unlearning}, which enables data holders to proactively erase their data from a trained model. Existing machine unlearning techniques focus on centralized training, where access to all holders' training data is a must for the server to conduct the unlearning process. It remains largely underexplored about how to achieve unlearning when full access to all training data becomes unavailable. One noteworthy example is Federated Learning (FL), where each participating data holder trains locally, without sharing their training data to the central server. In this paper, we investigate the problem of machine unlearning in FL systems. We start with a formal definition of the unlearning problem in FL and propose a rapid retraining approach to fully erase data samples from a trained FL model. The resulting design allows data holders to jointly conduct the unlearning process efficiently while keeping their training data locally. Our formal convergence and complexity analysis demonstrate that our design can preserve model utility with high efficiency. Extensive evaluations on four real-world datasets illustrate the effectiveness and performance of our proposed realization.
true
true
Liu, Yi and Xu, Lei and Yuan, Xingliang and Wang, Cong and Li, Bo
2022
null
null
null
null
The Right to be Forgotten in Federated Learning: An Efficient Realization with Rapid Retraining
The Right to be Forgotten in Federated Learning: An Efficient ...
https://ieeexplore.ieee.org/iel7/9796607/9796652/09796721.pdf
This paper proposes a rapid retraining approach in Federated Learning to erase data samples, using a distributed Newton-type model update algorithm.
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
halimi2022federated
\cite{halimi2022federated}
Federated Unlearning: How to Efficiently Erase a Client in FL?
http://arxiv.org/abs/2207.05521v3
With privacy legislation empowering the users with the right to be forgotten, it has become essential to make a model amenable for forgetting some of its training data. However, existing unlearning methods in the machine learning context can not be directly applied in the context of distributed settings like federated learning due to the differences in learning protocol and the presence of multiple actors. In this paper, we tackle the problem of federated unlearning for the case of erasing a client by removing the influence of their entire local data from the trained global model. To erase a client, we propose to first perform local unlearning at the client to be erased, and then use the locally unlearned model as the initialization to run very few rounds of federated learning between the server and the remaining clients to obtain the unlearned global model. We empirically evaluate our unlearning method by employing multiple performance measures on three datasets, and demonstrate that our unlearning method achieves comparable performance as the gold standard unlearning method of federated retraining from scratch, while being significantly efficient. Unlike prior works, our unlearning method neither requires global access to the data used for training nor the history of the parameter updates to be stored by the server or any of the clients.
true
true
Halimi, Anisa and Kadhe, Swanand and Rawat, Ambrish and Baracaldo, Nathalie
2022
null
null
null
arXiv preprint arXiv:2207.05521
Federated Unlearning: How to Efficiently Erase a Client in FL?
Federated Unlearning: How to Efficiently Erase a Client in FL?
http://arxiv.org/pdf/2207.05521v3
With privacy legislation empowering the users with the right to be forgotten, it has become essential to make a model amenable for forgetting some of its training data. However, existing unlearning methods in the machine learning context can not be directly applied in the context of distributed settings like federated learning due to the differences in learning protocol and the presence of multiple actors. In this paper, we tackle the problem of federated unlearning for the case of erasing a client by removing the influence of their entire local data from the trained global model. To erase a client, we propose to first perform local unlearning at the client to be erased, and then use the locally unlearned model as the initialization to run very few rounds of federated learning between the server and the remaining clients to obtain the unlearned global model. We empirically evaluate our unlearning method by employing multiple performance measures on three datasets, and demonstrate that our unlearning method achieves comparable performance as the gold standard unlearning method of federated retraining from scratch, while being significantly efficient. Unlike prior works, our unlearning method neither requires global access to the data used for training nor the history of the parameter updates to be stored by the server or any of the clients.
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
yuan2023federated
\cite{yuan2023federated}
Federated Unlearning for On-Device Recommendation
http://arxiv.org/abs/2210.10958v2
The increasing data privacy concerns in recommendation systems have made federated recommendations (FedRecs) attract more and more attention. Existing FedRecs mainly focus on how to effectively and securely learn personal interests and preferences from their on-device interaction data. Still, none of them considers how to efficiently erase a user's contribution to the federated training process. We argue that such a dual setting is necessary. First, from the privacy protection perspective, ``the right to be forgotten'' requires that users have the right to withdraw their data contributions. Without the reversible ability, FedRecs risk breaking data protection regulations. On the other hand, enabling a FedRec to forget specific users can improve its robustness and resistance to malicious clients' attacks. To support user unlearning in FedRecs, we propose an efficient unlearning method FRU (Federated Recommendation Unlearning), inspired by the log-based rollback mechanism of transactions in database management systems. It removes a user's contribution by rolling back and calibrating the historical parameter updates and then uses these updates to speed up federated recommender reconstruction. However, storing all historical parameter updates on resource-constrained personal devices is challenging and even infeasible. In light of this challenge, we propose a small-sized negative sampling method to reduce the number of item embedding updates and an importance-based update selection mechanism to store only important model updates. To evaluate the effectiveness of FRU, we propose an attack method to disturb FedRecs via a group of compromised users and use FRU to recover recommenders by eliminating these users' influence. Finally, we conduct experiments on two real-world recommendation datasets with two widely used FedRecs to show the efficiency and effectiveness of our proposed approaches.
true
true
Yuan, Wei and Yin, Hongzhi and Wu, Fangzhao and Zhang, Shijie and He, Tieke and Wang, Hao
2023
null
null
null
null
Federated Unlearning for On-Device Recommendation
Federated Unlearning for On-Device Recommendation
https://dl.acm.org/doi/10.1145/3539597.3570463
To support user unlearning in federated recommendation systems, we propose an efficient unlearning method FRU (Federated Recommendation Unlearning), inspired by
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
zhu2023heterogeneous
\cite{zhu2023heterogeneous}
Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning
http://arxiv.org/abs/2302.02069v2
Federated Learning (FL) recently emerges as a paradigm to train a global machine learning model across distributed clients without sharing raw data. Knowledge Graph (KG) embedding represents KGs in a continuous vector space, serving as the backbone of many knowledge-driven applications. As a promising combination, federated KG embedding can fully take advantage of knowledge learned from different clients while preserving the privacy of local data. However, realistic problems such as data heterogeneity and knowledge forgetting still remain to be concerned. In this paper, we propose FedLU, a novel FL framework for heterogeneous KG embedding learning and unlearning. To cope with the drift between local optimization and global convergence caused by data heterogeneity, we propose mutual knowledge distillation to transfer local knowledge to global, and absorb global knowledge back. Moreover, we present an unlearning method based on cognitive neuroscience, which combines retroactive interference and passive decay to erase specific knowledge from local clients and propagate to the global model by reusing knowledge distillation. We construct new datasets for assessing realistic performance of the state-of-the-arts. Extensive experiments show that FedLU achieves superior results in both link prediction and knowledge forgetting.
true
true
Zhu, Xiangrong and Li, Guangyao and Hu, Wei
2023
null
null
null
null
Heterogeneous Federated Knowledge Graph Embedding Learning and Unlearning
Heterogeneous Federated Knowledge Graph Embedding ...
https://dl.acm.org/doi/10.1145/3543507.3583305
In this paper, we propose FedLU, a novel FL framework for heterogeneous KG embedding learning and unlearning. To cope with the drift between
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
wang2024forget
\cite{wang2024forget}
How to Forget Clients in Federated Online Learning to Rank?
http://arxiv.org/abs/2401.13410v1
Data protection legislation like the European Union's General Data Protection Regulation (GDPR) establishes the \textit{right to be forgotten}: a user (client) can request contributions made using their data to be removed from learned models. In this paper, we study how to remove the contributions made by a client participating in a Federated Online Learning to Rank (FOLTR) system. In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model. Local updates are learned in an online manner at a client-level using queries and implicit interactions that have occurred within that specific client. By doing so, each client's local data is not shared with other clients or with a centralised search service, while at the same time clients can benefit from an effective global ranking model learned from contributions of each client in the federation. In this paper, we study an effective and efficient unlearning method that can remove a client's contribution without compromising the overall ranker effectiveness and without needing to retrain the global ranker from scratch. A key challenge is how to measure whether the model has unlearned the contributions from the client $c^*$ that has requested removal. For this, we instruct $c^*$ to perform a poisoning attack (add noise to this client updates) and then we measure whether the impact of the attack is lessened when the unlearning process has taken place. Through experiments on four datasets, we demonstrate the effectiveness and efficiency of the unlearning strategy under different combinations of parameter settings.
true
true
Wang, Shuyi and Liu, Bing and Zuccon, Guido
2,024
null
null
null
null
How to Forget Clients in Federated Online Learning to Rank?
How to Forget Clients in Federated Online Learning to Rank?
http://arxiv.org/pdf/2401.13410v1
Data protection legislation like the European Union's General Data Protection Regulation (GDPR) establishes the \textit{right to be forgotten}: a user (client) can request contributions made using their data to be removed from learned models. In this paper, we study how to remove the contributions made by a client participating in a Federated Online Learning to Rank (FOLTR) system. In a FOLTR system, a ranker is learned by aggregating local updates to the global ranking model. Local updates are learned in an online manner at a client-level using queries and implicit interactions that have occurred within that specific client. By doing so, each client's local data is not shared with other clients or with a centralised search service, while at the same time clients can benefit from an effective global ranking model learned from contributions of each client in the federation. In this paper, we study an effective and efficient unlearning method that can remove a client's contribution without compromising the overall ranker effectiveness and without needing to retrain the global ranker from scratch. A key challenge is how to measure whether the model has unlearned the contributions from the client $c^*$ that has requested removal. For this, we instruct $c^*$ to perform a poisoning attack (add noise to this client updates) and then we measure whether the impact of the attack is lessened when the unlearning process has taken place. Through experiments on four datasets, we demonstrate the effectiveness and efficiency of the unlearning strategy under different combinations of parameter settings.
Unlearning for Federated Online Learning to Rank: A Reproducibility Study
2505.12791v1
shejwalkar2021manipulating
\cite{shejwalkar2021manipulating}
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning
null
null
true
false
Shejwalkar, Virat and Houmansadr, Amir
2,021
null
null
null
null
Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning
Optimizing Model Poisoning Attacks and Defenses for Federat...
https://www.youtube.com/watch?v=G2VYRnLqAXE
SESSION 6C-3 Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning Federated learning (FL)
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
2505.07166v1
karpukhin2020dense
\cite{karpukhin2020dense}
Dense Passage Retrieval for Open-Domain Question Answering
http://arxiv.org/abs/2004.04906v3
Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.
true
true
Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau
2,020
null
null
null
null
Dense Passage Retrieval for Open-Domain Question Answering
[2004.04906] Dense Passage Retrieval for Open-Domain ...
https://arxiv.org/abs/2004.04906
arXiv:2004.04906 (cs). Authors: Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. Dense Passage Retrieval for Open-Domain Question Answering.
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
2505.07166v1
izacard2021contriever
\cite{izacard2021contriever}
Contriever: A Fully Unsupervised Dense Retriever
null
null
true
false
Izacard, Gautier and Grave, Edouard
2,021
null
null
null
null
Contriever: A Fully Unsupervised Dense Retriever
Unsupervised Dense Information Retrieval with Contrastive Learning
https://fanpu.io/summaries/2024-10-07-unsupervised-dense-information-retrieval-with-contrastive-learning/
Contriever is one of the most competitive & popular baselines for retrievers, and shows how unsupervised techniques have broad appeal. Not
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
2505.07166v1
reimers2019sentence
\cite{reimers2019sentence}
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
http://arxiv.org/abs/1908.10084v1
BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering. In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT. We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods.
true
true
Reimers, Nils and Gurevych, Iryna
2,019
null
null
null
null
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
[PDF] Sentence Embeddings using Siamese BERT-Networks
https://aclanthology.org/D19-1410.pdf
© 2019 Association for Computational Linguistics. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. Nils Reimers and Iryna Gurevych, Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt. BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). We fine-tune SBERT on NLI data, which creates sentence embeddings that significantly outperform other state-of-the-art sentence embedding methods like InferSent (Conneau et al., 2017) and Universal Sentence Encoder (Cer et al., 2018).
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
2505.07166v1
gao2021simcse
\cite{gao2021simcse}
SimCSE: Simple Contrastive Learning of Sentence Embeddings
http://arxiv.org/abs/2104.08821v4
This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.
true
true
Gao, Tianyu and Yao, Xingcheng and Chen, Danqi
2,021
null
null
null
null
SimCSE: Simple Contrastive Learning of Sentence Embeddings
SimCSE: Simple Contrastive Learning of Sentence Embeddings
http://arxiv.org/pdf/2104.08821v4
This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearman's correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings' anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
2505.07166v1
replama2021
\cite{replama2021}
RePLAMA: A Decoder-based Dense Retriever for Open-Domain Question Answering
null
null
true
false
Smith, John and Doe, Jane
2,021
null
null
null
null
RePLAMA: A Decoder-based Dense Retriever for Open-Domain Question Answering
A Reproducibility Study on Dense Retrieval Knowledge Acquisition
https://dl.acm.org/doi/10.1145/3726302.3730332
RePLAMA: A Decoder-based Dense Retriever for Open-Domain Question Answering. In Proceedings of the 2021 Conference on Information Retrieval
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
2505.07166v1
promptreps2021
\cite{promptreps2021}
PromptReps: Enhancing Dense Retrieval with Prompt-based Representations
null
null
true
false
Lee, Alex and Kumar, Rahul
2,021
null
null
null
null
PromptReps: Enhancing Dense Retrieval with Prompt-based Representations
[2404.18424] PromptReps: Prompting Large Language Models to ...
https://arxiv.org/abs/2404.18424
In this paper, we propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus.
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
2505.07166v1
msmarco
\cite{msmarco}
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
http://arxiv.org/abs/1611.09268v3
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
true
true
Nguyen, Tri and others
2,016
null
null
null
null
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
http://arxiv.org/pdf/1611.09268v3
We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.
Pre-training vs. Fine-tuning: A Reproducibility Study on Dense Retrieval Knowledge Acquisition
2505.07166v1
naturalquestions
\cite{naturalquestions}
Natural Questions: A Benchmark for Question Answering
null
null
true
false
Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and others
2,019
null
null
null
null
Natural Questions: A Benchmark for Question Answering
Natural Questions: A Benchmark for Question Answering Research
https://scispace.com/papers/natural-questions-a-benchmark-for-question-answering-10mm1ytgmc
The Natural Questions corpus, a question answering data set, is presented, introducing robust metrics for the purposes of evaluating question answering systems.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
Frequency23
\cite{Frequency23}
Frequency Enhanced Hybrid Attention Network for Sequential Recommendation
http://arxiv.org/abs/2304.09184v3
The self-attention mechanism, which equips with a strong capability of modeling long-range dependencies, is one of the extensively used techniques in the sequential recommendation field. However, many recent studies represent that current self-attention based models are low-pass filters and are inadequate to capture high-frequency information. Furthermore, since the items in the user behaviors are intertwined with each other, these models are incomplete to distinguish the inherent periodicity obscured in the time domain. In this work, we shift the perspective to the frequency domain, and propose a novel Frequency Enhanced Hybrid Attention Network for Sequential Recommendation, namely FEARec. In this model, we firstly improve the original time domain self-attention in the frequency domain with a ramp structure to make both low-frequency and high-frequency information could be explicitly learned in our approach. Moreover, we additionally design a similar attention mechanism via auto-correlation in the frequency domain to capture the periodic characteristics and fuse the time and frequency level attention in a union model. Finally, both contrastive learning and frequency regularization are utilized to ensure that multiple views are aligned in both the time domain and frequency domain. Extensive experiments conducted on four widely used benchmark datasets demonstrate that the proposed model performs significantly better than the state-of-the-art approaches.
true
true
Du, Xinyu and Yuan, Huanhuan and Zhao, Pengpeng and Qu, Jianfeng and Zhuang, Fuzhen and Liu, Guanfeng and Liu, Yanchi and Sheng, Victor S
2,023
null
null
null
null
Frequency Enhanced Hybrid Attention Network for Sequential Recommendation
Frequency Enhanced Hybrid Attention Network for ...
https://arxiv.org/pdf/2304.09184
by X Du · 2023 · Cited by 108 — FEARec is a Frequency Enhanced Hybrid Attention Network for sequential recommendation, improving self-attention in the frequency domain to capture both low- and high-frequency information.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
DL4
\cite{DL4}
Deep learning based recommender system: A survey and new perspectives
null
null
true
false
Zhang, Shuai and Yao, Lina and Sun, Aixin and Tay, Yi
2,019
null
null
null
CSUR
Deep learning based recommender system: A survey and new perspectives
Deep Learning based Recommender System: A Survey and New Perspectives
http://arxiv.org/pdf/1707.07435v7
With the ever-growing volume of online information, recommender systems have been an effective strategy to overcome such information overload. The utility of recommender systems cannot be overstated, given its widespread adoption in many web applications, along with its potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. Evidently, the field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning based recommender systems. More concretely, we provide and devise a taxonomy of deep learning based recommendation models, along with providing a comprehensive summary of the state-of-the-art. Finally, we expand on current trends and provide new perspectives pertaining to this new exciting development of the field.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
Xavier
\cite{Xavier}
Understanding the difficulty of training deep feedforward neural networks
null
null
true
false
Glorot, Xavier and Bengio, Yoshua
2,010
null
null
null
null
Understanding the difficulty of training deep feedforward neural networks
Understanding the difficulty of training deep feedforward ...
https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf
by X Glorot · Cited by 28103 — Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
sse-pt
\cite{sse-pt}
SSE-PT: Sequential recommendation via personalized transformer
null
null
true
false
Wu, Liwei and Li, Shuqing and Hsieh, Cho-Jui and Sharpnack, James
2,020
null
null
null
null
SSE-PT: Sequential recommendation via personalized transformer
SSE-PT: Sequential Recommendation Via Personalized Transformer
https://www.researchgate.net/publication/347834874_SSE-PT_Sequential_Recommendation_Via_Personalized_Transformer
Sequential recommendation systems process a user's history of interactions into a time-ordered sequence that reflects the evolution of their
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
zhao2023embedding
\cite{zhao2023embedding}
Embedding in Recommender Systems: A Survey
http://arxiv.org/abs/2310.18608v2
Recommender systems have become an essential component of many online platforms, providing personalized recommendations to users. A crucial aspect is embedding techniques that coverts the high-dimensional discrete features, such as user and item IDs, into low-dimensional continuous vectors and can enhance the recommendation performance. Applying embedding techniques captures complex entity relationships and has spurred substantial research. In this survey, we provide an overview of the recent literature on embedding techniques in recommender systems. This survey covers embedding methods like collaborative filtering, self-supervised learning, and graph-based techniques. Collaborative filtering generates embeddings capturing user-item preferences, excelling in sparse data. Self-supervised methods leverage contrastive or generative learning for various tasks. Graph-based techniques like node2vec exploit complex relationships in network-rich environments. Addressing the scalability challenges inherent to embedding methods, our survey delves into innovative directions within the field of recommendation systems. These directions aim to enhance performance and reduce computational complexity, paving the way for improved recommender systems. Among these innovative approaches, we will introduce Auto Machine Learning (AutoML), hash techniques, and quantization techniques in this survey. We discuss various architectures and techniques and highlight the challenges and future directions in these aspects. This survey aims to provide a comprehensive overview of the state-of-the-art in this rapidly evolving field and serve as a useful resource for researchers and practitioners working in the area of recommender systems.
true
true
Zhao, Xiangyu and Wang, Maolin and Zhao, Xinjian and Li, Jiansheng and Zhou, Shucheng and Yin, Dawei and Li, Qing and Tang, Jiliang and Guo, Ruocheng
2,023
null
null
null
arXiv preprint arXiv:2310.18608
Embedding in Recommender Systems: A Survey
Embedding in Recommender Systems: A Survey
http://arxiv.org/pdf/2310.18608v2
Recommender systems have become an essential component of many online platforms, providing personalized recommendations to users. A crucial aspect is embedding techniques that coverts the high-dimensional discrete features, such as user and item IDs, into low-dimensional continuous vectors and can enhance the recommendation performance. Applying embedding techniques captures complex entity relationships and has spurred substantial research. In this survey, we provide an overview of the recent literature on embedding techniques in recommender systems. This survey covers embedding methods like collaborative filtering, self-supervised learning, and graph-based techniques. Collaborative filtering generates embeddings capturing user-item preferences, excelling in sparse data. Self-supervised methods leverage contrastive or generative learning for various tasks. Graph-based techniques like node2vec exploit complex relationships in network-rich environments. Addressing the scalability challenges inherent to embedding methods, our survey delves into innovative directions within the field of recommendation systems. These directions aim to enhance performance and reduce computational complexity, paving the way for improved recommender systems. Among these innovative approaches, we will introduce Auto Machine Learning (AutoML), hash techniques, and quantization techniques in this survey. We discuss various architectures and techniques and highlight the challenges and future directions in these aspects. This survey aims to provide a comprehensive overview of the state-of-the-art in this rapidly evolving field and serve as a useful resource for researchers and practitioners working in the area of recommender systems.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
FMLP
\cite{FMLP}
Filter-enhanced MLP is All You Need for Sequential Recommendation
http://arxiv.org/abs/2202.13556v1
Recently, deep neural networks such as RNN, CNN and Transformer have been applied in the task of sequential recommendation, which aims to capture the dynamic preference characteristics from logged user behavior data for accurate recommendation. However, in online platforms, logged user behavior data is inevitable to contain noise, and deep recommendation models are easy to overfit on these logged data. To tackle this problem, we borrow the idea of filtering algorithms from signal processing that attenuates the noise in the frequency domain. In our empirical experiments, we find that filtering algorithms can substantially improve representative sequential recommendation models, and integrating simple filtering algorithms (eg Band-Stop Filter) with an all-MLP architecture can even outperform competitive Transformer-based models. Motivated by it, we propose \textbf{FMLP-Rec}, an all-MLP model with learnable filters for sequential recommendation task. The all-MLP architecture endows our model with lower time complexity, and the learnable filters can adaptively attenuate the noise information in the frequency domain. Extensive experiments conducted on eight real-world datasets demonstrate the superiority of our proposed method over competitive RNN, CNN, GNN and Transformer-based methods. Our code and data are publicly available at the link: \textcolor{blue}{\url{https://github.com/RUCAIBox/FMLP-Rec}}.
true
true
Zhou, Kun and Yu, Hui and Zhao, Wayne Xin and Wen, Ji-Rong
2,022
null
null
null
null
Filter-enhanced MLP is All You Need for Sequential Recommendation
Filter-enhanced MLP is All You Need for Sequential Recommendation
https://dl.acm.org/doi/10.1145/3485447.3512111
We propose FMLP-Rec, an all-MLP model with learnable filters for sequential recommendation task. The all-MLP architecture endows our model with lower time
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
strec
\cite{strec}
STRec: Sparse Transformer for Sequential Recommendations
null
null
true
false
Li, Chengxi and Wang, Yejing and Liu, Qidong and Zhao, Xiangyu and Wang, Wanyu and Wang, Yiqi and Zou, Lixin and Fan, Wenqi and Li, Qing
2,023
null
null
null
null
STRec: Sparse Transformer for Sequential Recommendations
CITE
https://aml-cityu.github.io/bibtex/li2023strec.html
@inproceedings{li2023strec, title={STRec: Sparse Transformer for Sequential Recommendations}, author={Li, Chengxi and Wang, Yejing and Liu, Qidong and Zhao
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
MLM4Rec
\cite{MLM4Rec}
Learning Global and Multi-granularity Local Representation with MLP for Sequential Recommendation
null
null
true
false
Long, Chao and Yuan, Huanhuan and Fang, Junhua and Xian, Xuefeng and Liu, Guanfeng and Sheng, Victor S and Zhao, Pengpeng
2,024
null
null
null
ACM Transactions on Knowledge Discovery from Data
Learning Global and Multi-granularity Local Representation with MLP for Sequential Recommendation
Learning Global and Multi-granularity Local ...
https://openreview.net/forum?id=CtsUBneYhu&referrer=%5Bthe%20profile%20of%20Junhua%20Fang%5D(%2Fprofile%3Fid%3D~Junhua_Fang1)
Learning Global and Multi-granularity Local Representation with MLP for Sequential Recommendation (OpenReview). Usually, users' global and local preferences jointly affect the final recommendation result in different ways. Most existing works use transformers to globally model sequences, which makes them face the dilemma of quadratic computational complexity when dealing with long sequences. To this end, we proposed a parallel architecture for capturing global representation and Multi-granularity Local dependencies with MLP for sequential Recommendation (MLM4Rec). For global representation, we utilize modified MLP-Mixer to capture global information of user sequences due to its simplicity and efficiency. For local representation, we incorporate convolution into MLP and propose a multi-granularity local awareness mechanism for capturing richer local semantic information.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
PEPNet
\cite{PEPNet}
PEPNet: Parameter and Embedding Personalized Network for Infusing with Personalized Prior Information
http://arxiv.org/abs/2302.01115v3
With the increase of content pages and interactive buttons in online services such as online-shopping and video-watching websites, industrial-scale recommender systems face challenges in multi-domain and multi-task recommendations. The core of multi-task and multi-domain recommendation is to accurately capture user interests in multiple scenarios given multiple user behaviors. In this paper, we propose a plug-and-play \textit{\textbf{P}arameter and \textbf{E}mbedding \textbf{P}ersonalized \textbf{Net}work (\textbf{PEPNet})} for multi-domain and multi-task recommendation. PEPNet takes personalized prior information as input and dynamically scales the bottom-level Embedding and top-level DNN hidden units through gate mechanisms. \textit{Embedding Personalized Network (EPNet)} performs personalized selection on Embedding to fuse features with different importance for different users in multiple domains. \textit{Parameter Personalized Network (PPNet)} executes personalized modification on DNN parameters to balance targets with different sparsity for different users in multiple tasks. We have made a series of special engineering optimizations combining the Kuaishou training framework and the online deployment environment. By infusing personalized selection of Embedding and personalized modification of DNN parameters, PEPNet tailored to the interests of each individual obtains significant performance gains, with online improvements exceeding 1\% in multiple task metrics across multiple domains. We have deployed PEPNet in Kuaishou apps, serving over 300 million users every day.
true
true
Chang, Jianxin and Zhang, Chenbin and Hui, Yiqun and Leng, Dewei and Niu, Yanan and Song, Yang and Gai, Kun
2,023
null
null
null
null
PEPNet: Parameter and Embedding Personalized Network for Infusing with Personalized Prior Information
[PDF] PEPNet: Parameter and Embedding Personalized Network ... - arXiv
https://arxiv.org/pdf/2302.01115
Missing: 04/08/2025
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
mb-str
\cite{mb-str}
Multi-behavior sequential transformer recommender
null
null
true
false
Yuan, Enming and Guo, Wei and He, Zhicheng and Guo, Huifeng and Liu, Chengkai and Tang, Ruiming
2,022
null
null
null
null
Multi-behavior sequential transformer recommender
Multi-Behavior Sequential Transformer Recommender
https://dl.acm.org/doi/10.1145/3477495.3532023
The proposed framework MB-STR, a Multi-Behavior Sequential Transformer Recommender, is equipped with the multi-behavior transformer layer (MB-Trans), the multi
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
lightsan
\cite{lightsan}
Lighter and better: low-rank decomposed self-attention networks for next-item recommendation
null
null
true
false
Fan, Xinyan and Liu, Zheng and Lian, Jianxun and Zhao, Wayne Xin and Xie, Xing and Wen, Ji-Rong
2,021
null
null
null
null
Lighter and better: low-rank decomposed self-attention networks for next-item recommendation
[PDF] Low-Rank Decomposed Self-Attention Networks for Next-Item ...
https://www.microsoft.com/en-us/research/wp-content/uploads/2021/05/LighterandBetter_Low-RankDecomposedSelf-AttentionNetworksforNext-ItemRecommendation.pdf
Lighter and Better: Low-Rank Decomposed Self-Attention Networks for Next-Item Recommendation. Xinyan Fan, Zheng Liu, Jianxun Lian, Wayne Xin Zhao, Xing Xie, and Ji-Rong Wen (Gaoling School of Artificial Intelligence, Renmin University of China; Beijing Key Laboratory of Big Data Management and Analysis Methods; Microsoft Research Asia). Self-attention networks (SANs) have been intensively applied for sequential recommenders, but they are limited due to: (1) the quadratic complexity and vulnerability to over-parameterization in self-attention; (2) inaccurate modeling of sequential relations between items due to the implicit position encoding. Our main contributions are summarized as follows: a novel SANs-based sequential recommender, LightSANs, with two advantages: (1) the low-rank decomposed self-attention for more efficient and precise modeling of context-aware representations; (2) the decoupled position encoding for more effective modeling of sequential relations between items.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
autoseqrec
\cite{autoseqrec}
AutoSeqRec: Autoencoder for Efficient Sequential Recommendation
http://arxiv.org/abs/2308.06878v1
Sequential recommendation demonstrates the capability to recommend items by modeling the sequential behavior of users. Traditional methods typically treat users as sequences of items, overlooking the collaborative relationships among them. Graph-based methods incorporate collaborative information by utilizing the user-item interaction graph. However, these methods sometimes face challenges in terms of time complexity and computational efficiency. To address these limitations, this paper presents AutoSeqRec, an incremental recommendation model specifically designed for sequential recommendation tasks. AutoSeqRec is based on autoencoders and consists of an encoder and three decoders within the autoencoder architecture. These components consider both the user-item interaction matrix and the rows and columns of the item transition matrix. The reconstruction of the user-item interaction matrix captures user long-term preferences through collaborative filtering. In addition, the rows and columns of the item transition matrix represent the item out-degree and in-degree hopping behavior, which allows for modeling the user's short-term interests. When making incremental recommendations, only the input matrices need to be updated, without the need to update parameters, which makes AutoSeqRec very efficient. Comprehensive evaluations demonstrate that AutoSeqRec outperforms existing methods in terms of accuracy, while showcasing its robustness and efficiency.
true
true
Liu, Sijia and Liu, Jiahao and Gu, Hansu and Li, Dongsheng and Lu, Tun and Zhang, Peng and Gu, Ning
2,023
null
null
null
null
AutoSeqRec: Autoencoder for Efficient Sequential Recommendation
AutoSeqRec: Autoencoder for Efficient Sequential Recommendation
http://arxiv.org/pdf/2308.06878v1
Sequential recommendation demonstrates the capability to recommend items by modeling the sequential behavior of users. Traditional methods typically treat users as sequences of items, overlooking the collaborative relationships among them. Graph-based methods incorporate collaborative information by utilizing the user-item interaction graph. However, these methods sometimes face challenges in terms of time complexity and computational efficiency. To address these limitations, this paper presents AutoSeqRec, an incremental recommendation model specifically designed for sequential recommendation tasks. AutoSeqRec is based on autoencoders and consists of an encoder and three decoders within the autoencoder architecture. These components consider both the user-item interaction matrix and the rows and columns of the item transition matrix. The reconstruction of the user-item interaction matrix captures user long-term preferences through collaborative filtering. In addition, the rows and columns of the item transition matrix represent the item out-degree and in-degree hopping behavior, which allows for modeling the user's short-term interests. When making incremental recommendations, only the input matrices need to be updated, without the need to update parameters, which makes AutoSeqRec very efficient. Comprehensive evaluations demonstrate that AutoSeqRec outperforms existing methods in terms of accuracy, while showcasing its robustness and efficiency.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
HRNN
\cite{HRNN}
Personalizing Session-based Recommendations with Hierarchical Recurrent Neural Networks
http://arxiv.org/abs/1706.04148v5
Session-based recommendations are highly relevant in many modern on-line services (e.g. e-commerce, video streaming) and recommendation settings. Recently, Recurrent Neural Networks have been shown to perform very well in session-based settings. While in many session-based recommendation domains user identifiers are hard to come by, there are also domains in which user profiles are readily available. We propose a seamless way to personalize RNN models with cross-session information transfer and devise a Hierarchical RNN model that relays end evolves latent hidden states of the RNNs across user sessions. Results on two industry datasets show large improvements over the session-only RNNs.
true
true
Quadrana, Massimo and Karatzoglou, Alexandros and Hidasi, Bal{\'a}zs and Cremonesi, Paolo
2,017
null
null
null
null
Personalizing Session-based Recommendations with Hierarchical Recurrent Neural Networks
Personalizing Session-based Recommendations with Hierarchical ...
https://www.slideshare.net/slideshow/personalizing-sessionbased-recommendations-with-hierarchical-recurrent-neural-networks/79285884
This document summarizes a research paper on personalizing session-based recommendations with hierarchical recurrent neural networks (HRNNs).
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
zhao2023user
\cite{zhao2023user}
User Retention-oriented Recommendation with Decision Transformer
http://arxiv.org/abs/2303.06347v1
Improving user retention with reinforcement learning~(RL) has attracted increasing attention due to its significant importance in boosting user engagement. However, training the RL policy from scratch without hurting users' experience is unavoidable due to the requirement of trial-and-error searches. Furthermore, the offline methods, which aim to optimize the policy without online interactions, suffer from the notorious stability problem in value estimation or unbounded variance in counterfactual policy evaluation. To this end, we propose optimizing user retention with Decision Transformer~(DT), which avoids the offline difficulty by translating the RL as an autoregressive problem. However, deploying the DT in recommendation is a non-trivial problem because of the following challenges: (1) deficiency in modeling the numerical reward value; (2) data discrepancy between the policy learning and recommendation generation; (3) unreliable offline performance evaluation. In this work, we, therefore, contribute a series of strategies for tackling the exposed issues. We first articulate an efficient reward prompt by weighted aggregation of meta embeddings for informative reward embedding. Then, we endow a weighted contrastive learning method to solve the discrepancy between training and inference. Furthermore, we design two robust offline metrics to measure user retention. Finally, the significant improvement in the benchmark datasets demonstrates the superiority of the proposed method.
true
true
Zhao, Kesen and Zou, Lixin and Zhao, Xiangyu and Wang, Maolin and Yin, Dawei
2,023
null
null
null
null
User Retention-oriented Recommendation with Decision Transformer
User Retention-oriented Recommendation with Decision ...
https://arxiv.org/pdf/2303.06347
by K Zhao · 2023 · Cited by 31 — This paper proposes using Decision Transformer (DT) to optimize user retention in recommendation by translating reinforcement learning as an autoregressive problem.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
DMAN
\cite{DMAN}
Dynamic Memory based Attention Network for Sequential Recommendation
http://arxiv.org/abs/2102.09269v1
Sequential recommendation has become increasingly essential in various online services. It aims to model the dynamic preferences of users from their historical interactions and predict their next items. The accumulated user behavior records on real systems could be very long. This rich data brings opportunities to track actual interests of users. Prior efforts mainly focus on making recommendations based on relatively recent behaviors. However, the overall sequential data may not be effectively utilized, as early interactions might affect users' current choices. Also, it has become intolerable to scan the entire behavior sequence when performing inference for each user, since real-world system requires short response time. To bridge the gap, we propose a novel long sequential recommendation model, called Dynamic Memory-based Attention Network (DMAN). It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users. To improve memory fidelity, DMAN dynamically abstracts each user's long-term interest into its own memory blocks by minimizing an auxiliary reconstruction loss. Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation. Empirical results over four benchmark datasets demonstrate the superiority of our model in capturing long-term dependency over various state-of-the-art sequential models.
true
true
Tan, Qiaoyu and Zhang, Jianwei and Liu, Ninghao and Huang, Xiao and Yang, Hongxia and Zhou, Jingren and Hu, Xia
2,021
null
null
null
null
Dynamic Memory based Attention Network for Sequential Recommendation
Dynamic Memory based Attention Network for Sequential Recommendation
http://arxiv.org/pdf/2102.09269v1
Sequential recommendation has become increasingly essential in various online services. It aims to model the dynamic preferences of users from their historical interactions and predict their next items. The accumulated user behavior records on real systems could be very long. This rich data brings opportunities to track actual interests of users. Prior efforts mainly focus on making recommendations based on relatively recent behaviors. However, the overall sequential data may not be effectively utilized, as early interactions might affect users' current choices. Also, it has become intolerable to scan the entire behavior sequence when performing inference for each user, since real-world system requires short response time. To bridge the gap, we propose a novel long sequential recommendation model, called Dynamic Memory-based Attention Network (DMAN). It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users. To improve memory fidelity, DMAN dynamically abstracts each user's long-term interest into its own memory blocks by minimizing an auxiliary reconstruction loss. Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation. Empirical results over four benchmark datasets demonstrate the superiority of our model in capturing long-term dependency over various state-of-the-art sequential models.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
koren2009matrix
\cite{koren2009matrix}
Content-boosted Matrix Factorization Techniques for Recommender Systems
http://arxiv.org/abs/1210.5631v2
Many businesses are using recommender systems for marketing outreach. Recommendation algorithms can be either based on content or driven by collaborative filtering. We study different ways to incorporate content information directly into the matrix factorization approach of collaborative filtering. These content-boosted matrix factorization algorithms not only improve recommendation accuracy, but also provide useful insights about the contents, as well as make recommendations more easily interpretable.
true
true
Koren, Yehuda and Bell, Robert and Volinsky, Chris
2,009
null
null
null
Computer
Content-boosted Matrix Factorization Techniques for Recommender Systems
Content-boosted Matrix Factorization Techniques for Recommender ...
https://arxiv.org/abs/1210.5631
arXiv:1210.5631 (stat.ML). Content-boosted Matrix Factorization Techniques for Recommender Systems, by Jennifer Nguyen and 1 other author.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
Kang01
\cite{Kang01}
Self-Attentive Sequential Recommendation
http://arxiv.org/abs/1808.09781v1
Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the `context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are `relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences.
true
true
Kang, Wang-Cheng and McAuley, Julian
2,018
null
null
null
null
Self-Attentive Sequential Recommendation
Self Attention on Recommendation System - Jeffery chiang
https://medium.com/analytics-vidhya/self-attention-on-recommendation-system-self-attentive-sequential-recommendation-review-c94796dde001
Self-attention is a powerful mechanism used in deep learning to process sequential data, such as sentences or time-series data, by considering the relationship
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
bert4rec
\cite{bert4rec}
BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer
http://arxiv.org/abs/1904.06690v2
Modeling users' dynamic and evolving preferences from their historical behaviors is challenging and crucial for recommendation systems. Previous methods employ sequential neural networks (e.g., Recurrent Neural Network) to encode users' historical interactions from left to right into hidden representations for making recommendations. Although these methods achieve satisfactory results, they often assume a rigidly ordered sequence which is not always practical. We argue that such left-to-right unidirectional architectures restrict the power of the historical sequence representations. For this purpose, we introduce a Bidirectional Encoder Representations from Transformers for sequential Recommendation (BERT4Rec). However, jointly conditioning on both left and right context in deep bidirectional model would make the training become trivial since each item can indirectly "see the target item". To address this problem, we train the bidirectional model using the Cloze task, predicting the masked items in the sequence by jointly conditioning on their left and right context. Comparing with predicting the next item at each position in a sequence, the Cloze task can produce more samples to train a more powerful bidirectional model. Extensive experiments on four benchmark datasets show that our model outperforms various state-of-the-art sequential models consistently.
true
true
Sun, Fei and Liu, Jun and Wu, Jian and Pei, Changhua and Lin, Xiao and Ou, Wenwu and Jiang, Peng
2,019
null
null
null
null
BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer
BERT4Rec: Sequential Recommendation with Bidirectional Encoder ...
https://dl.acm.org/doi/10.1145/3357384.3357895
We proposed a sequential recommendation model called BERT4Rec, which employs the deep bidirectional self-attention to model user behavior sequences.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
Linrec
\cite{Linrec}
LinRec: Linear Attention Mechanism for Long-term Sequential Recommender Systems
http://arxiv.org/abs/2411.01537v1
Transformer models have achieved remarkable success in sequential recommender systems (SRSs). However, computing the attention matrix in traditional dot-product attention mechanisms results in a quadratic complexity with sequence lengths, leading to high computational costs for long-term sequential recommendation. Motivated by the above observation, we propose a novel L2-Normalized Linear Attention for the Transformer-based Sequential Recommender Systems (LinRec), which theoretically improves efficiency while preserving the learning capabilities of the traditional dot-product attention. Specifically, by thoroughly examining the equivalence conditions of efficient attention mechanisms, we show that LinRec possesses linear complexity while preserving the property of attention mechanisms. In addition, we reveal its latent efficiency properties by interpreting the proposed LinRec mechanism through a statistical lens. Extensive experiments are conducted based on two public benchmark datasets, demonstrating that the combination of LinRec and Transformer models achieves comparable or even superior performance than state-of-the-art Transformer-based SRS models while significantly improving time and memory efficiency.
true
true
Liu, Langming and Cai, Liu and Zhang, Chi and Zhao, Xiangyu and Gao, Jingtong and Wang, Wanyu and Lv, Yifu and Fan, Wenqi and Wang, Yiqi and He, Ming and others
2,023
null
null
null
null
LinRec: Linear Attention Mechanism for Long-term Sequential Recommender Systems
GLINT-RU: Gated Lightweight Intelligent Recurrent Units for ...
https://www.atailab.cn/seminar2025Spring/pdf/2025_KDD_GLINT-RU_Gated%20Lightweight%20Intelligent%20Recurrent%20Units%20for%20Sequential%20Recommender%20Systems.pdf
by S Zhang · 2025 · Cited by 6 — Linrec: Linear attention mechanism for long-term sequential recommender systems. In Proceedings of the 46th International ACM SIGIR Conference on Research
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
GRU4Rec
\cite{GRU4Rec}
Session-based Recommendations with Recurrent Neural Networks
http://arxiv.org/abs/1511.06939v4
We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.
true
true
Hidasi, Bal{\'a}zs and Karatzoglou, Alexandros and Baltrunas, Linas and Tikk, Domonkos
2,015
null
null
null
arXiv preprint arXiv:1511.06939
Session-based Recommendations with Recurrent Neural Networks
Session-based Recommendations with Recurrent Neural Networks
https://www.semanticscholar.org/paper/Session-based-Recommendations-with-Recurrent-Neural-Hidasi-Karatzoglou/e0021d61c2ab1334bc725852edd44597f4c65dff
It is argued that by modeling the whole session, more accurate recommendations can be provided by an RNN-based approach for session-based recommendations.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
GLINTours25
\cite{GLINTours25}
GLINT-RU: Gated Lightweight Intelligent Recurrent Units for Sequential Recommender Systems
null
null
true
false
Zhang, Sheng and Wang, Maolin and Zhao, Xiangyu
2,024
null
null
null
arXiv preprint arXiv:2406.10244
GLINT-RU: Gated Lightweight Intelligent Recurrent Units for Sequential Recommender Systems
GLINT-RU: Gated Lightweight Intelligent Recurrent Units for Sequential Recommender Systems
http://arxiv.org/pdf/2406.10244v3
Transformer-based models have gained significant traction in sequential recommender systems (SRSs) for their ability to capture user-item interactions effectively. However, these models often suffer from high computational costs and slow inference. Meanwhile, existing efficient SRS approaches struggle to embed high-quality semantic and positional information into latent representations. To tackle these challenges, this paper introduces GLINT-RU, a lightweight and efficient SRS leveraging a single-layer dense selective Gated Recurrent Units (GRU) module to accelerate inference. By incorporating a dense selective gate, GLINT-RU adaptively captures temporal dependencies and fine-grained positional information, generating high-quality latent representations. Additionally, a parallel mixing block infuses fine-grained positional features into user-item interactions, enhancing both recommendation quality and efficiency. Extensive experiments on three datasets demonstrate that GLINT-RU achieves superior prediction accuracy and inference speed, outperforming baselines based on RNNs, Transformers, MLPs, and SSMs. These results establish GLINT-RU as a powerful and efficient solution for SRSs.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
HiPPOs21
\cite{HiPPOs21}
There is HOPE to Avoid HiPPOs for Long-memory State Space Models
null
null
true
false
Yu, Annan and Mahoney, Michael W and Erichson, N Benjamin
2,024
null
null
null
arXiv preprint arXiv:2405.13975
There is HOPE to Avoid HiPPOs for Long-memory State Space Models
There is HOPE to Avoid HiPPOs for Long-memory State ...
https://www.researchgate.net/publication/380820131_There_is_HOPE_to_Avoid_HiPPOs_for_Long-memory_State_Space_Models
State-space models (SSMs) that utilize linear, time-invariant (LTI) systems are known for their effectiveness in learning long sequences.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
16Dual
\cite{16Dual}
Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation
http://arxiv.org/abs/2403.18257v2
Transformers have been the most successful architecture for various speech modeling tasks, including speech separation. However, the self-attention mechanism in transformers with quadratic complexity is inefficient in computation and memory. Recent models incorporate new layers and modules along with transformers for better performance but also introduce extra model complexity. In this work, we replace transformers with Mamba, a selective state space model, for speech separation. We propose dual-path Mamba, which models short-term and long-term forward and backward dependency of speech signals using selective state spaces. Our experimental results on the WSJ0-2mix data show that our dual-path Mamba models of comparably smaller sizes outperform state-of-the-art RNN model DPRNN, CNN model WaveSplit, and transformer model Sepformer. Code: https://github.com/xi-j/Mamba-TasNet
true
true
Jiang, Xilin and Han, Cong and Mesgarani, Nima
2,024
null
null
null
arXiv preprint arXiv:2403.18257
Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation
Dual-path Mamba: Short and Long-term Bidirectional Selective ...
https://arxiv.org/abs/2403.18257
We propose dual-path Mamba, which models short-term and long-term forward and backward dependency of speech signals using selective state spaces.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
gu2023mamba
\cite{gu2023mamba}
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
http://arxiv.org/abs/2312.00752v2
Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$ higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.
true
true
Gu, Albert and Dao, Tri
2,023
null
null
null
arXiv preprint arXiv:2312.00752
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
https://openreview.net/forum?id=tEYskw1VY2
This paper proposes Mamba, a linear-time sequence model with an intra-layer combination of Selective S4D, Short Convolution and Gated Linear Unit.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
qu2024survey
\cite{qu2024survey}
A Survey of Mamba
http://arxiv.org/abs/2408.01129v6
As one of the most representative DL techniques, Transformer architecture has empowered numerous advanced models, especially the large language models (LLMs) that comprise billions of parameters, becoming a cornerstone in deep learning. Despite the impressive achievements, Transformers still face inherent limitations, particularly the time-consuming inference resulting from the quadratic computation complexity of attention calculation. Recently, a novel architecture named Mamba, drawing inspiration from classical state space models (SSMs), has emerged as a promising alternative for building foundation models, delivering comparable modeling abilities to Transformers while preserving near-linear scalability concerning sequence length. This has sparked an increasing number of studies actively exploring Mamba's potential to achieve impressive performance across diverse domains. Given such rapid evolution, there is a critical need for a systematic review that consolidates existing Mamba-empowered models, offering a comprehensive understanding of this emerging model architecture. In this survey, we therefore conduct an in-depth investigation of recent Mamba-associated studies, covering three main aspects: the advancements of Mamba-based models, the techniques of adapting Mamba to diverse data, and the applications where Mamba can excel. Specifically, we first review the foundational knowledge of various representative deep learning models and the details of Mamba-1&2 as preliminaries. Then, to showcase the significance of Mamba for AI, we comprehensively review the related studies focusing on Mamba models' architecture design, data adaptability, and applications. Finally, we present a discussion of current limitations and explore various promising research directions to provide deeper insights for future investigations.
true
true
Qu, Haohao and Ning, Liangbo and An, Rui and Fan, Wenqi and Derr, Tyler and Liu, Hui and Xu, Xin and Li, Qing
2,024
null
null
null
arXiv preprint arXiv:2408.01129
A Survey of Mamba
A Survey of Mamba
http://arxiv.org/pdf/2408.01129v6
As one of the most representative DL techniques, Transformer architecture has empowered numerous advanced models, especially the large language models (LLMs) that comprise billions of parameters, becoming a cornerstone in deep learning. Despite the impressive achievements, Transformers still face inherent limitations, particularly the time-consuming inference resulting from the quadratic computation complexity of attention calculation. Recently, a novel architecture named Mamba, drawing inspiration from classical state space models (SSMs), has emerged as a promising alternative for building foundation models, delivering comparable modeling abilities to Transformers while preserving near-linear scalability concerning sequence length. This has sparked an increasing number of studies actively exploring Mamba's potential to achieve impressive performance across diverse domains. Given such rapid evolution, there is a critical need for a systematic review that consolidates existing Mamba-empowered models, offering a comprehensive understanding of this emerging model architecture. In this survey, we therefore conduct an in-depth investigation of recent Mamba-associated studies, covering three main aspects: the advancements of Mamba-based models, the techniques of adapting Mamba to diverse data, and the applications where Mamba can excel. Specifically, we first review the foundational knowledge of various representative deep learning models and the details of Mamba-1&2 as preliminaries. Then, to showcase the significance of Mamba for AI, we comprehensively review the related studies focusing on Mamba models' architecture design, data adaptability, and applications. Finally, we present a discussion of current limitations and explore various promising research directions to provide deeper insights for future investigations.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
dao2024transformers
\cite{dao2024transformers}
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
http://arxiv.org/abs/2405.21060v1
While Transformers have been the main architecture behind deep learning's success in language modeling, state-space models (SSMs) such as Mamba have recently been shown to match or outperform Transformers at small to medium scale. We show that these families of models are actually quite closely related, and develop a rich framework of theoretical connections between SSMs and variants of attention, connected through various decompositions of a well-studied class of structured semiseparable matrices. Our state space duality (SSD) framework allows us to design a new architecture (Mamba-2) whose core layer is an a refinement of Mamba's selective SSM that is 2-8X faster, while continuing to be competitive with Transformers on language modeling.
true
true
Dao, Tri and Gu, Albert
2,024
null
null
null
arXiv preprint arXiv:2405.21060
Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
Transformers are SSMs: Generalized Models and Efficient ...
https://openreview.net/pdf/54bf495d93336f1f195f264c1b6c2805169b3492.pdf
Note that the fully recurrent mode, where the recurrence is evolved one step at a time, is simply an instantiation of the state-passing mode with chunk size k=1.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
MambaRec
\cite{MambaRec}
Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation
http://arxiv.org/abs/2403.16371v1
Sequential Recommenders have been widely applied in various online services, aiming to model users' dynamic interests from their sequential interactions. With users increasingly engaging with online platforms, vast amounts of lifelong user behavioral sequences have been generated. However, existing sequential recommender models often struggle to handle such lifelong sequences. The primary challenges stem from computational complexity and the ability to capture long-range dependencies within the sequence. Recently, a state space model featuring a selective mechanism (i.e., Mamba) has emerged. In this work, we investigate the performance of Mamba for lifelong sequential recommendation (i.e., length>=2k). More specifically, we leverage the Mamba block to model lifelong user sequences selectively. We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong sequences. Experiments on two real-world datasets demonstrate the superiority of Mamba. We found that RecMamba achieves performance comparable to the representative model while significantly reducing training duration by approximately 70% and memory costs by 80%. Codes and data are available at \url{https://github.com/nancheng58/RecMamba}.
true
true
Yang, Jiyuan and Li, Yuanzi and Zhao, Jingyu and Wang, Hanbing and Ma, Muyang and Ma, Jun and Ren, Zhaochun and Zhang, Mengqi and Xin, Xin and Chen, Zhumin and others
2,024
null
null
null
arXiv preprint arXiv:2403.16371
Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation
[PDF] Uncovering Selective State Space Model's Capabilities in Lifelong ...
https://arxiv.org/pdf/2403.16371
We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong sequences.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
wang2024echomamba4rec
\cite{wang2024echomamba4rec}
EchoMamba4Rec: Harmonizing Bidirectional State Space Models with Spectral Filtering for Advanced Sequential Recommendation
http://arxiv.org/abs/2406.02638v2
Predicting user preferences and sequential dependencies based on historical behavior is the core goal of sequential recommendation. Although attention-based models have shown effectiveness in this field, they often struggle with inference inefficiency due to the quadratic computational complexity inherent in attention mechanisms, especially with long-range behavior sequences. Drawing inspiration from the recent advancements of state space models (SSMs) in control theory, which provide a robust framework for modeling and controlling dynamic systems, we introduce EchoMamba4Rec. Control theory emphasizes the use of SSMs for managing long-range dependencies and maintaining inferential efficiency through structured state matrices. EchoMamba4Rec leverages these control relationships in sequential recommendation and integrates bi-directional processing with frequency-domain filtering to capture complex patterns and dependencies in user interaction data more effectively. Our model benefits from the ability of state space models (SSMs) to learn and perform parallel computations, significantly enhancing computational efficiency and scalability. It features a bi-directional Mamba module that incorporates both forward and reverse Mamba components, leveraging information from both past and future interactions. Additionally, a filter layer operates in the frequency domain using learnable Fast Fourier Transform (FFT) and learnable filters, followed by an inverse FFT to refine item embeddings and reduce noise. We also integrate Gate Linear Units (GLU) to dynamically control information flow, enhancing the model's expressiveness and training stability. Experimental results demonstrate that EchoMamba significantly outperforms existing models, providing more accurate and personalized recommendations.
true
true
Wang, Yuda and He, Xuxin and Zhu, Shengxin
2,024
null
null
null
arXiv preprint arXiv:2406.02638
EchoMamba4Rec: Harmonizing Bidirectional State Space Models with Spectral Filtering for Advanced Sequential Recommendation
EchoMamba4Rec: Harmonizing Bidirectional State Space ...
https://www.researchgate.net/publication/381190112_EchoMamba4Rec_Harmonizing_Bidirectional_State_Space_Models_with_Spectral_Filtering_for_Advanced_Sequential_Recommendation
EchoMamba4Rec leverages these control relationships in sequential recommendation and integrates bi-directional processing with frequency-domain filtering.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
cao2024mamba4kt
\cite{cao2024mamba4kt}
Mamba4KT:An Efficient and Effective Mamba-based Knowledge Tracing Model
http://arxiv.org/abs/2405.16542v1
Knowledge tracing (KT) enhances student learning by leveraging past performance to predict future performance. Current research utilizes models based on attention mechanisms and recurrent neural network structures to capture long-term dependencies and correlations between exercises, aiming to improve model accuracy. Due to the growing amount of data in smart education scenarios, this poses a challenge in terms of time and space consumption for knowledge tracing models. However, existing research often overlooks the efficiency of model training and inference and the constraints of training resources. Recognizing the significance of prioritizing model efficiency and resource usage in knowledge tracing, we introduce Mamba4KT. This novel model is the first to explore enhanced efficiency and resource utilization in knowledge tracing. We also examine the interpretability of the Mamba structure both sequence-level and exercise-level to enhance model interpretability. Experimental findings across three public datasets demonstrate that Mamba4KT achieves comparable prediction accuracy to state-of-the-art models while significantly improving training and inference efficiency and resource utilization. As educational data continues to grow, our work suggests a promising research direction for knowledge tracing that improves model prediction accuracy, model efficiency, resource utilization, and interpretability simultaneously.
true
true
Cao, Yang and Zhang, Wei
2,024
null
null
null
arXiv preprint arXiv:2405.16542
Mamba4KT:An Efficient and Effective Mamba-based Knowledge Tracing Model
Mamba4KT:An Efficient and Effective Mamba-based ...
https://arxiv.org/html/2405.16542v1
We introduce a knowledge tracing model Mamba4KT based on selective state space model, which improves the training and inference efficiency and
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
liu2024bidirectional
\cite{liu2024bidirectional}
Bidirectional gated mamba for sequential recommendation
null
null
true
false
Liu, Ziwei and Liu, Qidong and Wang, Yejing and Wang, Wanyu and Jia, Pengyue and Wang, Maolin and Liu, Zitao and Chang, Yi and Zhao, Xiangyu
2,024
null
null
null
arXiv preprint arXiv:2408.11451
Bidirectional gated mamba for sequential recommendation
Bidirectional Gated Mamba for Sequential Recommendation
https://openreview.net/forum?id=xaJx6aRwRG
Bidirectional Gated Mamba for Sequential Recommendation | OpenReview. To overcome these issues, we introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation. This framework leverages a Partially Flipped Mamba (PF-Mamba) to construct a bidirectional architecture specifically tailored to improve contextual modeling. Additionally, an input-sensitive Dense Selective Gate (DS Gate) is employed to optimize directional weights and enhance the processing of sequential information in PF-Mamba.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
yang2024uncovering
\cite{yang2024uncovering}
Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation
http://arxiv.org/abs/2403.16371v1
Sequential Recommenders have been widely applied in various online services, aiming to model users' dynamic interests from their sequential interactions. With users increasingly engaging with online platforms, vast amounts of lifelong user behavioral sequences have been generated. However, existing sequential recommender models often struggle to handle such lifelong sequences. The primary challenges stem from computational complexity and the ability to capture long-range dependencies within the sequence. Recently, a state space model featuring a selective mechanism (i.e., Mamba) has emerged. In this work, we investigate the performance of Mamba for lifelong sequential recommendation (i.e., length>=2k). More specifically, we leverage the Mamba block to model lifelong user sequences selectively. We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong sequences. Experiments on two real-world datasets demonstrate the superiority of Mamba. We found that RecMamba achieves performance comparable to the representative model while significantly reducing training duration by approximately 70% and memory costs by 80%. Codes and data are available at \url{https://github.com/nancheng58/RecMamba}.
true
true
Yang, Jiyuan and Li, Yuanzi and Zhao, Jingyu and Wang, Hanbing and Ma, Muyang and Ma, Jun and Ren, Zhaochun and Zhang, Mengqi and Xin, Xin and Chen, Zhumin and others
2,024
null
null
null
arXiv preprint arXiv:2403.16371
Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation
[PDF] Uncovering Selective State Space Model's Capabilities in Lifelong ...
https://arxiv.org/pdf/2403.16371
We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong sequences.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
Visionzhu
\cite{Visionzhu}
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
http://arxiv.org/abs/2401.09417v3
Recently the state space models (SSMs) with efficient hardware-aware designs, i.e., the Mamba deep learning model, have shown great potential for long sequence modeling. Meanwhile building efficient and generic vision backbones purely upon SSMs is an appealing direction. However, representing visual data is challenging for SSMs due to the position-sensitivity of visual data and the requirement of global context for visual understanding. In this paper, we show that the reliance on self-attention for visual representation learning is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models. On ImageNet classification, COCO object detection, and ADE20k semantic segmentation tasks, Vim achieves higher performance compared to well-established vision transformers like DeiT, while also demonstrating significantly improved computation & memory efficiency. For example, Vim is 2.8$\times$ faster than DeiT and saves 86.8% GPU memory when performing batch inference to extract features on images with a resolution of 1248$\times$1248. The results demonstrate that Vim is capable of overcoming the computation & memory constraints on performing Transformer-style understanding for high-resolution images and it has great potential to be the next-generation backbone for vision foundation models. Code is available at https://github.com/hustvl/Vim.
true
true
Zhu, Lianghui and Liao, Bencheng and Zhang, Qian and Wang, Xinlong and Liu, Wenyu and Wang, Xinggang
2,024
null
null
null
arXiv preprint arXiv:2401.09417
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
Vision Mamba: Efficient Visual Representation Learning with ... - arXiv
https://arxiv.org/abs/2401.09417
In this paper, we show that the reliance on self-attention for visual representation learning is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models.
STAR-Rec: Making Peace with Length Variance and Pattern Diversity in Sequential Recommendation
2505.03484v1
mamba4rec
\cite{mamba4rec}
Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models
http://arxiv.org/abs/2403.03900v2
Sequential recommendation aims to estimate the dynamic user preferences and sequential dependencies among historical user behaviors. Although Transformer-based models have proven to be effective for sequential recommendation, they suffer from the inference inefficiency problem stemming from the quadratic computational complexity of attention operators, especially for long behavior sequences. Inspired by the recent success of state space models (SSMs), we propose Mamba4Rec, which is the first work to explore the potential of selective SSMs for efficient sequential recommendation. Built upon the basic Mamba block which is a selective SSM with an efficient hardware-aware parallel algorithm, we design a series of sequential modeling techniques to further promote model performance while maintaining inference efficiency. Through experiments on public datasets, we demonstrate how Mamba4Rec effectively tackles the effectiveness-efficiency dilemma, outperforming both RNN- and attention-based baselines in terms of both effectiveness and efficiency. The code is available at https://github.com/chengkai-liu/Mamba4Rec.
true
true
Liu, Chengkai and Lin, Jianghao and Wang, Jianling and Liu, Hanzhou and Caverlee, James
2,024
null
null
null
arXiv preprint arXiv:2403.03900
Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models
Towards Efficient Sequential Recommendation with ...
https://arxiv.org/pdf/2403.03900
by C Liu · 2024 · Cited by 66 — We describe how Mamba4Rec constructs a sequential recommendation model through an embedding layer, selective state space models, and a prediction layer.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
perozziDeepwalk2014
\cite{perozziDeepwalk2014}
Deep{W}alk: Online learning of social representations
null
null
true
false
Perozzi, Bryan and Al-Rfou, Rami and Skiena, Steven
2,014
null
null
null
null
Deep{W}alk: Online learning of social representations
DeepWalk: online learning of social representations
https://dl.acm.org/doi/10.1145/2623330.2623732
We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
groverNode2vecScalableFeature2016
\cite{groverNode2vecScalableFeature2016}
node2vec: Scalable Feature Learning for Networks
http://arxiv.org/abs/1607.00653v1
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
true
true
Grover, Aditya and Leskovec, Jure
2,016
null
null
null
null
node2vec: Scalable Feature Learning for Networks
node2vec: Scalable Feature Learning for Networks
http://arxiv.org/pdf/1607.00653v1
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
huangGraphRecurrentNetworks2019
\cite{huangGraphRecurrentNetworks2019}
Graph recurrent networks with attributed random walks
null
null
true
false
Huang, Xiao and Song, Qingquan and Li, Yuening and Hu, Xia
2,019
null
null
null
null
Graph recurrent networks with attributed random walks
[PDF] Attributed Random Walks for Graph Recurrent Networks
https://www4.comp.polyu.edu.hk/~xiaohuang/docs/Xiao_KDD19_slides.pdf
Apply random walks on attributed networks to boost deep node representation learning.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
nikolentzosRandomwalkgraphneuralnetworks2020
\cite{nikolentzosRandomwalkgraphneuralnetworks2020}
Random walk graph neural networks
null
null
true
false
Nikolentzos, Giannis and Vazirgiannis, Michalis
2,020
null
null
null
null
Random walk graph neural networks
Random Walk Graph Neural Networks
https://proceedings.neurips.cc/paper/2020/file/ba95d78a7c942571185308775a97a3a0-Paper.pdf
by G Nikolentzos · 2020 · Cited by 160 — In this paper, we propose a more intuitive and transparent architecture for graph-structured data, the so-called Random Walk Graph Neural Network (RWNN).
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
jinRawgnn2022
\cite{jinRawgnn2022}
Raw-{GNN}: Random walk aggregation based graph neural network
null
null
true
false
Jin, Di and Wang, Rui and Ge, Meng and He, Dongxiao and Li, Xiang and Lin, Wei and Zhang, Weixiong
2,022
null
null
null
arXiv:2206.13953
Raw-{GNN}: Random walk aggregation based graph neural network
RAndom Walk Aggregation based Graph Neural Network
https://www.ijcai.org/proceedings/2022/0293.pdf
by D Jin · Cited by 59 — Here, we introduce a novel aggregation mechanism and develop a RAndom Walk Aggregation-based Graph Neural Network (called RAW-GNN) method.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
wangNonConvGNN2024
\cite{wangNonConvGNN2024}
Non-convolutional Graph Neural Networks
http://arxiv.org/abs/2408.00165v3
Rethink convolution-based graph neural networks (GNN) -- they characteristically suffer from limited expressiveness, over-smoothing, and over-squashing, and require specialized sparse kernels for efficient computation. Here, we design a simple graph learning module entirely free of convolution operators, coined random walk with unifying memory (RUM) neural network, where an RNN merges the topological and semantic graph features along the random walks terminating at each node. Relating the rich literature on RNN behavior and graph topology, we theoretically show and experimentally verify that RUM attenuates the aforementioned symptoms and is more expressive than the Weisfeiler-Lehman (WL) isomorphism test. On a variety of node- and graph-level classification and regression tasks, RUM not only achieves competitive performance, but is also robust, memory-efficient, scalable, and faster than the simplest convolutional GNNs.
true
true
Wang, Yuanqing and Cho, Kyunghyun
2,024
null
null
null
arXiv:2408.00165
Non-convolutional Graph Neural Networks
[2408.00165] Non-convolutional Graph Neural Networks
https://arxiv.org/abs/2408.00165
by Y Wang · 2024 · Cited by 12 — We design a simple graph learning module entirely free of convolution operators, coined random walk with unifying memory (RUM) neural network.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
kipfSemiSupervisedClassificationGraph2017
\cite{kipfSemiSupervisedClassificationGraph2017}
Semi-Supervised Classification with Graph Convolutional Networks
http://arxiv.org/abs/1609.02907v4
We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.
true
true
Kipf, Thomas N and Welling, Max
2,016
null
null
null
null
Semi-Supervised Classification with Graph Convolutional Networks
Semi-Supervised Classification with Graph Convolutional Networks
https://openreview.net/forum?id=SJU4ayYgl
Semi-Supervised Classification with Graph Convolutional Networks | OpenReview. Abstract: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. TL;DR: Semi-supervised classification with a CNN model for graphs.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
wuSimplifyingGraphConvolutional2019
\cite{wuSimplifyingGraphConvolutional2019}
Simplifying Graph Convolutional Networks
http://arxiv.org/abs/1902.07153v2
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
true
true
Wu, Felix and Souza, Amauri and Zhang, Tianyi and Fifty, Christopher and Yu, Tao and Weinberger, Kilian
2,019
null
null
null
null
Simplifying Graph Convolutional Networks
Simplifying Graph Convolutional Networks
http://arxiv.org/pdf/1902.07153v2
Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
hamiltonInductiveRepresentationLearning2017
\cite{hamiltonInductiveRepresentationLearning2017}
Inductive Representation Learning in Large Attributed Graphs
http://arxiv.org/abs/1710.09471v2
Graphs (networks) are ubiquitous and allow us to model entities (nodes) and the dependencies (edges) between them. Learning a useful feature representation from graph data lies at the heart and success of many machine learning tasks such as classification, anomaly detection, link prediction, among many others. Many existing techniques use random walks as a basis for learning features or estimating the parameters of a graph model for a downstream prediction task. Examples include recent node embedding methods such as DeepWalk, node2vec, as well as graph-based deep learning algorithms. However, the simple random walk used by these methods is fundamentally tied to the identity of the node. This has three main disadvantages. First, these approaches are inherently transductive and do not generalize to unseen nodes and other graphs. Second, they are not space-efficient as a feature vector is learned for each node which is impractical for large graphs. Third, most of these approaches lack support for attributed graphs. To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathrm{\rm \bf x} \rightarrow w$ that maps a node attribute vector $\mathrm{\rm \bf x}$ to a type $w$. This framework serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many other previous methods that leverage traditional random walks.
true
true
Hamilton, Will and Ying, Zhitao and Leskovec, Jure
2,017
null
null
null
null
Inductive Representation Learning in Large Attributed Graphs
Inductive Representation Learning in Large Attributed Graphs
http://arxiv.org/pdf/1710.09471v2
Graphs (networks) are ubiquitous and allow us to model entities (nodes) and the dependencies (edges) between them. Learning a useful feature representation from graph data lies at the heart and success of many machine learning tasks such as classification, anomaly detection, link prediction, among many others. Many existing techniques use random walks as a basis for learning features or estimating the parameters of a graph model for a downstream prediction task. Examples include recent node embedding methods such as DeepWalk, node2vec, as well as graph-based deep learning algorithms. However, the simple random walk used by these methods is fundamentally tied to the identity of the node. This has three main disadvantages. First, these approaches are inherently transductive and do not generalize to unseen nodes and other graphs. Second, they are not space-efficient as a feature vector is learned for each node which is impractical for large graphs. Third, most of these approaches lack support for attributed graphs. To make these methods more generally applicable, we propose a framework for inductive network representation learning based on the notion of attributed random walk that is not tied to node identity and is instead based on learning a function $\Phi : \mathrm{\rm \bf x} \rightarrow w$ that maps a node attribute vector $\mathrm{\rm \bf x}$ to a type $w$. This framework serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many other previous methods that leverage traditional random walks.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
gilmerNeuralMessagePassing2017
\cite{gilmerNeuralMessagePassing2017}
Neural Message Passing for Quantum Chemistry
http://arxiv.org/abs/1704.01212v2
Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels.
true
true
Gilmer, Justin and Schoenholz, Samuel S. and Riley, Patrick F. and Vinyals, Oriol and Dahl, George E.
2,017
null
null
null
null
Neural Message Passing for Quantum Chemistry
Neural Message Passing for Quantum Chemistry
http://arxiv.org/pdf/1704.01212v2
Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
velickovicDeepGraphInfomax2018
\cite{velickovicDeepGraphInfomax2018}
Deep Graph Infomax
http://arxiv.org/abs/1809.10341v2
We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs---both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node-wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.
true
true
Velickovic, Petar and Fedus, William and Hamilton, William L and Li{\`o}, Pietro and Bengio, Yoshua and Hjelm, R Devon
2,019
null
null
null
null
Deep Graph Infomax
[1809.10341] Deep Graph Infomax - arXiv
https://arxiv.org/abs/1809.10341
Abstract: We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
xuHowPowerfulAre2019
\cite{xuHowPowerfulAre2019}
How Powerful are Graph Neural Networks?
http://arxiv.org/abs/1810.00826v3
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
true
true
Xu, Keyulu and Hu, Weihua and Leskovec, Jure and Jegelka, Stefanie
2,018
null
null
null
arXiv:1810.00826
How Powerful are Graph Neural Networks?
How Powerful are Graph Neural Networks?
http://arxiv.org/pdf/1810.00826v3
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
defferrardConvolutionalNeuralNetworks2016
\cite{defferrardConvolutionalNeuralNetworks2016}
Convolutional neural networks on graphs with fast localized spectral filtering
null
null
true
false
Defferrard, Micha{\"e}l and Bresson, Xavier and Vandergheynst, Pierre
2,016
null
null
null
null
Convolutional neural networks on graphs with fast localized spectral filtering
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
http://arxiv.org/pdf/1606.09375v3
In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
chienAdaptiveUniversalGeneralized2021
\cite{chienAdaptiveUniversalGeneralized2021}
Adaptive Universal Generalized PageRank Graph Neural Network
http://arxiv.org/abs/2006.07988v6
In many important graph data processing applications the acquired information includes both node features and observations of the graph topology. Graph neural networks (GNNs) are designed to exploit both sources of evidence but they do not optimally trade-off their utility and integrate them in a manner that is also universal. Here, universality refers to independence on homophily or heterophily graph assumptions. We address these issues by introducing a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic. Learned GPR weights automatically adjust to the node label pattern, irrelevant on the type of initialization, and thereby guarantee excellent learning performance for label patterns that are usually hard to handle. Furthermore, they allow one to avoid feature over-smoothing, a process which renders feature information nondiscriminative, without requiring the network to be shallow. Our accompanying theoretical analysis of the GPR-GNN method is facilitated by novel synthetic benchmark datasets generated by the so-called contextual stochastic block model. We also compare the performance of our GNN architecture with that of several state-of-the-art GNNs on the problem of node-classification, using well-known benchmark homophilic and heterophilic datasets. The results demonstrate that GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data.
true
true
Chien, Eli and Peng, Jianhao and Li, Pan and Milenkovic, Olgica
2,020
null
null
null
arXiv:2006.07988
Adaptive Universal Generalized PageRank Graph Neural Network
Adaptive Universal Generalized PageRank Graph Neural Network
http://arxiv.org/pdf/2006.07988v6
In many important graph data processing applications the acquired information includes both node features and observations of the graph topology. Graph neural networks (GNNs) are designed to exploit both sources of evidence but they do not optimally trade-off their utility and integrate them in a manner that is also universal. Here, universality refers to independence on homophily or heterophily graph assumptions. We address these issues by introducing a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic. Learned GPR weights automatically adjust to the node label pattern, irrelevant on the type of initialization, and thereby guarantee excellent learning performance for label patterns that are usually hard to handle. Furthermore, they allow one to avoid feature over-smoothing, a process which renders feature information nondiscriminative, without requiring the network to be shallow. Our accompanying theoretical analysis of the GPR-GNN method is facilitated by novel synthetic benchmark datasets generated by the so-called contextual stochastic block model. We also compare the performance of our GNN architecture with that of several state-of-the-art GNNs on the problem of node-classification, using well-known benchmark homophilic and heterophilic datasets. The results demonstrate that GPR-GNN offers significant performance improvement compared to existing techniques on both synthetic and benchmark data.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
heBernNetLearningArbitrary2021
\cite{heBernNetLearningArbitrary2021}
Bern{N}et: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation
null
null
true
false
He, Mingguo and Wei, Zhewei and Huang, zengfeng and Xu, Hongteng
2,021
null
null
null
null
Bern{N}et: Learning Arbitrary Graph Spectral Filters via Bernstein Approximation
[PDF] Learning Arbitrary Graph Spectral Filters via Bernstein Approximation
https://proceedings.neurips.cc/paper/2021/file/76f1cfd7754a6e4fc3281bcccb3d0902-Paper.pdf
BernNet is a graph neural network that learns arbitrary graph spectral filters using Bernstein polynomial approximation, designing spectral properties by
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
chenRevisitingGraphBased2020
\cite{chenRevisitingGraphBased2020}
Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach
http://arxiv.org/abs/2001.10167v1
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models by iteratively stacking multiple layers of convolution aggregation operations and non-linear activation operations. Recently, in Collaborative Filtering (CF) based Recommender Systems (RS), by treating the user-item interaction behavior as a bipartite graph, some researchers model higher-layer collaborative signals with GCNs. These GCN based recommender models show superior performance compared to traditional works. However, these models suffer from training difficulty with non-linear activations for large user-item graphs. Besides, most GCN based models could not model deeper layers due to the over smoothing effect with the graph convolution operation. In this paper, we revisit GCN based CF models from two aspects. First, we empirically show that removing non-linearities would enhance recommendation performance, which is consistent with the theories in simple graph convolutional networks. Second, we propose a residual network structure that is specifically designed for CF with user-item interaction modeling, which alleviates the over smoothing problem in graph convolution aggregation operation with sparse user-item interaction data. The proposed model is a linear model and it is easy to train, scale to large datasets, and yield better efficiency and effectiveness on two real datasets. We publish the source code at https://github.com/newlei/LRGCCF.
true
true
Chen, Lei and Wu, Le and Hong, Richang and Zhang, Kun and Wang, Meng
2,020
null
null
null
null
Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach
Revisiting Graph Based Collaborative Filtering: A Linear Residual ...
https://ojs.aaai.org/index.php/AAAI/article/view/5330
In this paper, we revisit GCN based CF models from two aspects. First, we empirically show that removing non-linearities would enhance recommendation
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
wangNeuralGraphCollaborative2019
\cite{wangNeuralGraphCollaborative2019}
Neural Graph Collaborative Filtering
http://arxiv.org/abs/1905.08108v2
Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering.
true
true
Wang, Xiang and He, Xiangnan and Wang, Meng and Feng, Fuli and Chua, Tat-Seng
2,019
null
null
null
null
Neural Graph Collaborative Filtering
Neural Graph Collaborative Filtering
http://arxiv.org/pdf/1905.08108v2
Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering.
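For reference, the embedding-propagation layer that this abstract refers to can be written roughly as below (as best recalled from the NGCF paper; e_u and e_i are user/item embeddings, N_u the items user u interacted with, and W_1, W_2 layer-specific transforms):

```latex
\mathbf{e}_u^{(l+1)} = \mathrm{LeakyReLU}\!\Big(
  \mathbf{W}_1^{(l)}\mathbf{e}_u^{(l)}
  + \sum_{i\in\mathcal{N}_u}\frac{1}{\sqrt{|\mathcal{N}_u|\,|\mathcal{N}_i|}}
    \big(\mathbf{W}_1^{(l)}\mathbf{e}_i^{(l)}
    + \mathbf{W}_2^{(l)}(\mathbf{e}_i^{(l)}\odot\mathbf{e}_u^{(l)})\big)\Big)
```

The element-wise product term is what injects the interaction ("collaborative") signal into the message, beyond plain neighborhood averaging.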
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
heLightGCNSimplifyingPowering2020
\cite{heLightGCNSimplifyingPowering2020}
Light{GCN}: Simplifying and Powering Graph Convolution Network for Recommendation
null
null
true
false
He, Xiangnan and Deng, Kuan and Wang, Xiang and Li, Yan and Zhang, YongDong and Wang, Meng
2,020
null
null
null
null
Light{GCN}: Simplifying and Powering Graph Convolution Network for Recommendation
[PDF] LightGCN: Simplifying and Powering Graph Convolution Network for ...
https://arxiv.org/pdf/2002.02126
In this work, we aim to simplify the design of GCN to make it more concise and appropriate for recommendation. We propose a new model named
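LightGCN's simplification amounts to dropping feature transforms and non-linearities, propagating embeddings linearly over the normalized user-item bipartite graph, and averaging the per-layer embeddings. A minimal dense-matrix sketch follows; real implementations use sparse operations, and the uniform layer weights 1/(K+1) are the paper's default choice.

```python
import numpy as np

def lightgcn_embeddings(R, E0_user, E0_item, num_layers=3):
    """Linear propagation on the bipartite graph followed by layer averaging."""
    n_users, n_items = R.shape
    # bipartite adjacency [[0, R], [R^T, 0]] with symmetric normalization
    A = np.block([[np.zeros((n_users, n_users)), R],
                  [R.T, np.zeros((n_items, n_items))]])
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    A_norm = (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

    E = np.vstack([E0_user, E0_item])
    layers = [E]
    for _ in range(num_layers):
        E = A_norm @ E          # no transform, no activation: aggregation only
        layers.append(E)
    E_final = np.mean(layers, axis=0)   # uniform combination, alpha_k = 1/(K+1)
    return E_final[:n_users], E_final[n_users:]

# toy usage: 3 users, 4 items, 8-dimensional embeddings (random for illustration)
R = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 1, 0, 1]], dtype=float)
users, items = lightgcn_embeddings(R, np.random.randn(3, 8), np.random.randn(4, 8))
```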
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
maoUltraGCNUltraSimplification2021
\cite{maoUltraGCNUltraSimplification2021}
Ultra{GCN}: Ultra Simplification of Graph Convolutional Networks for Recommendation
null
null
true
false
Mao, Kelong and Zhu, Jieming and Xiao, Xi and Lu, Biao and Wang, Zhaowei and He, Xiuqiang
2,021
null
null
null
null
Ultra{GCN}: Ultra Simplification of Graph Convolutional Networks for Recommendation
UltraGCN: Ultra Simplification of Graph Convolutional Networks for ...
https://arxiv.org/abs/2110.15114
In this paper, we take one step further to propose an ultra-simplified formulation of GCNs (dubbed UltraGCN), which skips infinite layers of message passing for efficient recommendation.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
heSGCF2023
\cite{heSGCF2023}
Simplifying graph-based collaborative filtering for recommendation
null
null
true
false
He, Li and Wang, Xianzhi and Wang, Dingxian and Zou, Haoyuan and Yin, Hongzhi and Xu, Guandong
2,023
null
null
null
null
Simplifying graph-based collaborative filtering for recommendation
Simplifying Graph-based Collaborative Filtering for ...
https://opus.lib.uts.edu.au/bitstream/10453/164889/4/Simplifying%20Graph-based%20Collaborative%20Filtering%20for%20Recommendation.pdf
by L He · 2023 · Cited by 28 — First, we remove non-linearities to enhance recommendation performance, which is consistent with the theories in simple graph convolutional networks. Second,
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
sunNeighborInteractionAware2020
\cite{sunNeighborInteractionAware2020}
Neighbor Interaction Aware Graph Convolution Networks for Recommendation
null
null
true
false
Sun, Jianing and Zhang, Yingxue and Guo, Wei and Guo, Huifeng and Tang, Ruiming and He, Xiuqiang and Ma, Chen and Coates, Mark
2,020
null
null
null
null
Neighbor Interaction Aware Graph Convolution Networks for Recommendation
Neighbor Interaction Aware Graph Convolution Networks ...
https://dl.acm.org/doi/10.1145/3397271.3401123
Neighbor Interaction Aware Graph Convolution Networks for Recommendation | Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
wangDisentangledGraphCollaborative2020
\cite{wangDisentangledGraphCollaborative2020}
Disentangled Graph Collaborative Filtering
http://arxiv.org/abs/2007.01764v1
Learning informative representations of users and items from the interaction data is of crucial importance to collaborative filtering (CF). Present embedding functions exploit user-item relationships to enrich the representations, evolving from a single user-item instance to the holistic interaction graph. Nevertheless, they largely model the relationships in a uniform manner, while neglecting the diversity of user intents on adopting the items, which could be to pass time, for interest, or shopping for others like families. Such uniform approach to model user interests easily results in suboptimal representations, failing to model diverse relationships and disentangle user intents in representations. In this work, we pay special attention to user-item relationships at the finer granularity of user intents. We hence devise a new model, Disentangled Graph Collaborative Filtering (DGCF), to disentangle these factors and yield disentangled representations. Specifically, by modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations. Meanwhile, we encourage independence of different intents. This leads to disentangled representations, effectively distilling information pertinent to each intent. We conduct extensive experiments on three benchmark datasets, and DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE. Further analyses offer insights into the advantages of DGCF on the disentanglement of user intents and interpretability of representations. Our codes are available in https://github.com/xiangwang1223/disentangled_graph_collaborative_filtering.
true
true
Wang, Xiang and Jin, Hongye and Zhang, An and He, Xiangnan and Xu, Tong and Chua, Tat-Seng
2,020
null
null
null
null
Disentangled Graph Collaborative Filtering
Disentangled Graph Collaborative Filtering
http://arxiv.org/pdf/2007.01764v1
Learning informative representations of users and items from the interaction data is of crucial importance to collaborative filtering (CF). Present embedding functions exploit user-item relationships to enrich the representations, evolving from a single user-item instance to the holistic interaction graph. Nevertheless, they largely model the relationships in a uniform manner, while neglecting the diversity of user intents on adopting the items, which could be to pass time, for interest, or shopping for others like families. Such uniform approach to model user interests easily results in suboptimal representations, failing to model diverse relationships and disentangle user intents in representations. In this work, we pay special attention to user-item relationships at the finer granularity of user intents. We hence devise a new model, Disentangled Graph Collaborative Filtering (DGCF), to disentangle these factors and yield disentangled representations. Specifically, by modeling a distribution over intents for each user-item interaction, we iteratively refine the intent-aware interaction graphs and representations. Meanwhile, we encourage independence of different intents. This leads to disentangled representations, effectively distilling information pertinent to each intent. We conduct extensive experiments on three benchmark datasets, and DGCF achieves significant improvements over several state-of-the-art models like NGCF, DisenGCN, and MacridVAE. Further analyses offer insights into the advantages of DGCF on the disentanglement of user intents and interpretability of representations. Our codes are available in https://github.com/xiangwang1223/disentangled_graph_collaborative_filtering.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
liuInterestawareMessagePassingGCN2021
\cite{liuInterestawareMessagePassingGCN2021}
Interest-aware Message-Passing GCN for Recommendation
http://arxiv.org/abs/2102.10044v2
Graph Convolution Networks (GCNs) manifest great potential in recommendation. This is attributed to their capability on learning good user and item embeddings by exploiting the collaborative signals from the high-order neighbors. Like other GCN models, the GCN based recommendation models also suffer from the notorious over-smoothing problem - when stacking more layers, node embeddings become more similar and eventually indistinguishable, resulted in performance degradation. The recently proposed LightGCN and LR-GCN alleviate this problem to some extent, however, we argue that they overlook an important factor for the over-smoothing problem in recommendation, that is, high-order neighboring users with no common interests of a user can be also involved in the user's embedding learning in the graph convolution operation. As a result, the multi-layer graph convolution will make users with dissimilar interests have similar embeddings. In this paper, we propose a novel Interest-aware Message-Passing GCN (IMP-GCN) recommendation model, which performs high-order graph convolution inside subgraphs. The subgraph consists of users with similar interests and their interacted items. To form the subgraphs, we design an unsupervised subgraph generation module, which can effectively identify users with common interests by exploiting both user feature and graph structure. To this end, our model can avoid propagating negative information from high-order neighbors into embedding learning. Experimental results on three large-scale benchmark datasets show that our model can gain performance improvement by stacking more layers and outperform the state-of-the-art GCN-based recommendation models significantly.
true
true
Liu, Fan and Cheng, Zhiyong and Zhu, Lei and Gao, Zan and Nie, Liqiang
2,021
null
null
null
null
Interest-aware Message-Passing GCN for Recommendation
Interest-aware Message-Passing GCN for Recommendation
https://dl.acm.org/doi/10.1145/3442381.3449986
In this paper, we propose a novel Interest-aware Message-Passing GCN (IMP-GCN) recommendation model, which performs high-order graph convolution inside
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
kongLinearNonLinearThat2022
\cite{kongLinearNonLinearThat2022}
Linear, or Non-Linear, That is the Question!
http://arxiv.org/abs/2111.07265v2
There were fierce debates on whether the non-linear embedding propagation of GCNs is appropriate to GCN-based recommender systems. It was recently found that the linear embedding propagation shows better accuracy than the non-linear embedding propagation. Since this phenomenon was discovered especially in recommender systems, it is required that we carefully analyze the linearity and non-linearity issue. In this work, therefore, we revisit the issues of i) which of the linear or non-linear propagation is better and ii) which factors of users/items decide the linearity/non-linearity of the embedding propagation. We propose a novel Hybrid Method of Linear and non-linEar collaborative filTering method (HMLET, pronounced as Hamlet). In our design, there exist both linear and non-linear propagation steps, when processing each user or item node, and our gating module chooses one of them, which results in a hybrid model of the linear and non-linear GCN-based collaborative filtering (CF). The proposed model yields the best accuracy in three public benchmark datasets. Moreover, we classify users/items into the following three classes depending on our gating modules' selections: Full-Non-Linearity (FNL), Partial-Non-Linearity (PNL), and Full-Linearity (FL). We found that there exist strong correlations between nodes' centrality and their class membership, i.e., important user/item nodes exhibit more preferences towards the non-linearity during the propagation steps. To our knowledge, we are the first who design a hybrid method and report the correlation between the graph centrality and the linearity/non-linearity of nodes. All HMLET codes and datasets are available at: https://github.com/qbxlvnf11/HMLET.
true
true
Kong, Taeyong and Kim, Taeri and Jeon, Jinsung and Choi, Jeongwhan and Lee, Yeon-Chang and Park, Noseong and Kim, Sang-Wook
2,022
null
null
null
null
Linear, or Non-Linear, That is the Question!
[2111.07265] Linear, or Non-Linear, That is the Question! - arXiv
https://arxiv.org/abs/2111.07265
It was recently found that the linear embedding propagation shows better accuracy than the non-linear embedding propagation.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
fanGraphTrendFiltering2022
\cite{fanGraphTrendFiltering2022}
Graph Trend Filtering Networks for Recommendations
http://arxiv.org/abs/2108.05552v2
Recommender systems aim to provide personalized services to users and are playing an increasingly important role in our daily lives. The key of recommender systems is to predict how likely users will interact with items based on their historical online behaviors, e.g., clicks, add-to-cart, purchases, etc. To exploit these user-item interactions, there are increasing efforts on considering the user-item interactions as a user-item bipartite graph and then performing information propagation in the graph via Graph Neural Networks (GNNs). Given the power of GNNs in graph representation learning, these GNNs-based recommendation methods have remarkably boosted the recommendation performance. Despite their success, most existing GNNs-based recommender systems overlook the existence of interactions caused by unreliable behaviors (e.g., random/bait clicks) and uniformly treat all the interactions, which can lead to sub-optimal and unstable performance. In this paper, we investigate the drawbacks (e.g., non-adaptive propagation and non-robustness) of existing GNN-based recommendation methods. To address these drawbacks, we introduce a principled graph trend collaborative filtering method and propose the Graph Trend Filtering Networks for recommendations (GTN) that can capture the adaptive reliability of the interactions. Comprehensive experiments and ablation studies are presented to verify and understand the effectiveness of the proposed framework. Our implementation based on PyTorch is available at https://github.com/wenqifan03/GTN-SIGIR2022.
true
true
Fan, Wenqi and Liu, Xiaorui and Jin, Wei and Zhao, Xiangyu and Tang, Jiliang and Li, Qing
2,022
null
null
null
null
Graph Trend Filtering Networks for Recommendations
Graph Trend Filtering Networks for Recommendations
http://arxiv.org/pdf/2108.05552v2
Recommender systems aim to provide personalized services to users and are playing an increasingly important role in our daily lives. The key of recommender systems is to predict how likely users will interact with items based on their historical online behaviors, e.g., clicks, add-to-cart, purchases, etc. To exploit these user-item interactions, there are increasing efforts on considering the user-item interactions as a user-item bipartite graph and then performing information propagation in the graph via Graph Neural Networks (GNNs). Given the power of GNNs in graph representation learning, these GNNs-based recommendation methods have remarkably boosted the recommendation performance. Despite their success, most existing GNNs-based recommender systems overlook the existence of interactions caused by unreliable behaviors (e.g., random/bait clicks) and uniformly treat all the interactions, which can lead to sub-optimal and unstable performance. In this paper, we investigate the drawbacks (e.g., non-adaptive propagation and non-robustness) of existing GNN-based recommendation methods. To address these drawbacks, we introduce a principled graph trend collaborative filtering method and propose the Graph Trend Filtering Networks for recommendations (GTN) that can capture the adaptive reliability of the interactions. Comprehensive experiments and ablation studies are presented to verify and understand the effectiveness of the proposed framework. Our implementation based on PyTorch is available at https://github.com/wenqifan03/GTN-SIGIR2022.
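The "graph trend" idea that GTN adapts is usually traced to the classical graph trend filtering objective; in its first-order form it replaces the quadratic Laplacian smoothness penalty of standard graph convolution with an l1 penalty on edge differences, which is what makes the aggregation robust to unreliable interactions. A commonly cited form of that objective (the GTN paper's exact user-item formulation may differ) is:

```latex
\min_{\mathbf{x}\in\mathbb{R}^{n}}\;
\tfrac{1}{2}\,\lVert \mathbf{y}-\mathbf{x}\rVert_2^2
\;+\;\lambda\,\lVert \Delta \mathbf{x}\rVert_1,
\qquad
(\Delta \mathbf{x})_{(u,v)} = x_u - x_v \ \ \text{for each edge } (u,v).
```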
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
guoJGCF2023
\cite{guoJGCF2023}
On Manipulating Signals of User-Item Graph: A Jacobi Polynomial-based Graph Collaborative Filtering
http://arxiv.org/abs/2306.03624v1
Collaborative filtering (CF) is an important research direction in recommender systems that aims to make recommendations given the information on user-item interactions. Graph CF has attracted more and more attention in recent years due to its effectiveness in leveraging high-order information in the user-item bipartite graph for better recommendations. Specifically, recent studies show the success of graph neural networks (GNN) for CF is attributed to its low-pass filtering effects. However, current researches lack a study of how different signal components contributes to recommendations, and how to design strategies to properly use them well. To this end, from the view of spectral transformation, we analyze the important factors that a graph filter should consider to achieve better performance. Based on the discoveries, we design JGCF, an efficient and effective method for CF based on Jacobi polynomial bases and frequency decomposition strategies. Extensive experiments on four widely used public datasets show the effectiveness and efficiency of the proposed methods, which brings at most 27.06% performance gain on Alibaba-iFashion. Besides, the experimental results also show that JGCF is better at handling sparse datasets, which shows potential in making recommendations for cold-start users.
true
true
Guo, Jiayan and Du, Lun and Chen, Xu and Ma, Xiaojun and Fu, Qiang and Han, Shi and Zhang, Dongmei and Zhang, Yan
2,023
null
null
null
null
On Manipulating Signals of User-Item Graph: A Jacobi Polynomial-based Graph Collaborative Filtering
A Jacobi Polynomial-based Graph Collaborative Filtering
https://www.bohrium.com/paper-details/on-manipulating-signals-of-user-item-graph-a-jacobi-polynomial-based-graph-collaborative-filtering/873226422896820882-108611
On Manipulating Signals of User-Item Graph: A Jacobi Polynomial-based Graph Collaborative Filtering ... 2025-06-16. ACM Transactions on
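JGCF builds its filters on a Jacobi polynomial basis; as a generic illustration of the basis-expanded polynomial graph filtering this line of work relies on (here using the Chebyshev three-term recurrence rather than JGCF's Jacobi basis), a sketch is given below. L_tilde is assumed to be a propagation matrix rescaled so its spectrum lies in [-1, 1].

```python
import numpy as np

def chebyshev_filter(L_tilde, x, coeffs):
    """Polynomial graph filter sum_k coeffs[k] * T_k(L_tilde) @ x via the
    Chebyshev recurrence T_0 = I, T_1 = L~, T_{k+1} = 2 L~ T_k - T_{k-1}."""
    Tx_prev = x                      # T_0(L~) x
    out = coeffs[0] * Tx_prev
    if len(coeffs) > 1:
        Tx_curr = L_tilde @ x        # T_1(L~) x
        out = out + coeffs[1] * Tx_curr
        for c in coeffs[2:]:
            Tx_next = 2 * (L_tilde @ Tx_curr) - Tx_prev
            out = out + c * Tx_next
            Tx_prev, Tx_curr = Tx_curr, Tx_next
    return out
```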
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
wangCollaborationAwareGraphConvolutional2023
\cite{wangCollaborationAwareGraphConvolutional2023}
Collaboration-Aware Graph Convolutional Network for Recommender Systems
http://arxiv.org/abs/2207.06221v4
Graph Neural Networks (GNNs) have been successfully adopted in recommender systems by virtue of the message-passing that implicitly captures collaborative effect. Nevertheless, most of the existing message-passing mechanisms for recommendation are directly inherited from GNNs without scrutinizing whether the captured collaborative effect would benefit the prediction of user preferences. In this paper, we first analyze how message-passing captures the collaborative effect and propose a recommendation-oriented topological metric, Common Interacted Ratio (CIR), which measures the level of interaction between a specific neighbor of a node with the rest of its neighbors. After demonstrating the benefits of leveraging collaborations from neighbors with higher CIR, we propose a recommendation-tailored GNN, Collaboration-Aware Graph Convolutional Network (CAGCN), that goes beyond 1-Weisfeiler-Lehman(1-WL) test in distinguishing non-bipartite-subgraph-isomorphic graphs. Experiments on six benchmark datasets show that the best CAGCN variant outperforms the most representative GNN-based recommendation model, LightGCN, by nearly 10% in Recall@20 and also achieves around 80% speedup. Our code is publicly available at https://github.com/YuWVandy/CAGCN.
true
true
Wang, Yu and Zhao, Yuying and Zhang, Yi and Derr, Tyler
2,023
null
null
null
null
Collaboration-Aware Graph Convolutional Network for Recommender Systems
Collaboration-Aware Graph Convolutional Network for ...
https://dl.acm.org/doi/abs/10.1145/3543507.3583229
by Y Wang · 2023 · Cited by 70 — We propose a recommendation-tailored GNN, Collaboration-Aware Graph Convolutional Network (CAGCN), that goes beyond 1-Weisfeiler-Lehman(1-WL) test.
Graph Spectral Filtering with Chebyshev Interpolation for Recommendation
2505.00552v1
zhuGiffCF2024
\cite{zhuGiffCF2024}
Graph Signal Diffusion Model for Collaborative Filtering
http://arxiv.org/abs/2311.08744v3
Collaborative filtering is a critical technique in recommender systems. It has been increasingly viewed as a conditional generative task for user feedback data, where newly developed diffusion model shows great potential. However, existing studies on diffusion model lack effective solutions for modeling implicit feedback. Particularly, the standard isotropic diffusion process overlooks correlation between items, misaligned with the graphical structure of the interaction space. Meanwhile, Gaussian noise destroys personalized information in a user's interaction vector, causing difficulty in its reconstruction. In this paper, we adapt standard diffusion model and propose a novel Graph Signal Diffusion Model for Collaborative Filtering (named GiffCF). To better represent the correlated distribution of user-item interactions, we define a generalized diffusion process using heat equation on the item-item similarity graph. Our forward process smooths interaction signals with an advanced family of graph filters, introducing the graph adjacency as beneficial prior knowledge for recommendation. Our reverse process iteratively refines and sharpens latent signals in a noise-free manner, where the updates are conditioned on the user's history and computed from a carefully designed two-stage denoiser, leading to high-quality reconstruction. Finally, through extensive experiments, we show that GiffCF effectively leverages the advantages of both diffusion model and graph signal processing, and achieves state-of-the-art performance on three benchmark datasets.
true
true
Zhu, Yunqin and Wang, Chao and Zhang, Qi and Xiong, Hui
2,024
null
null
null
null
Graph Signal Diffusion Model for Collaborative Filtering
Graph Signal Diffusion Model for Collaborative Filtering
http://arxiv.org/pdf/2311.08744v3
Collaborative filtering is a critical technique in recommender systems. It has been increasingly viewed as a conditional generative task for user feedback data, where newly developed diffusion model shows great potential. However, existing studies on diffusion model lack effective solutions for modeling implicit feedback. Particularly, the standard isotropic diffusion process overlooks correlation between items, misaligned with the graphical structure of the interaction space. Meanwhile, Gaussian noise destroys personalized information in a user's interaction vector, causing difficulty in its reconstruction. In this paper, we adapt standard diffusion model and propose a novel Graph Signal Diffusion Model for Collaborative Filtering (named GiffCF). To better represent the correlated distribution of user-item interactions, we define a generalized diffusion process using heat equation on the item-item similarity graph. Our forward process smooths interaction signals with an advanced family of graph filters, introducing the graph adjacency as beneficial prior knowledge for recommendation. Our reverse process iteratively refines and sharpens latent signals in a noise-free manner, where the updates are conditioned on the user's history and computed from a carefully designed two-stage denoiser, leading to high-quality reconstruction. Finally, through extensive experiments, we show that GiffCF effectively leverages the advantages of both diffusion model and graph signal processing, and achieves state-of-the-art performance on three benchmark datasets.
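GiffCF's forward process smooths interaction signals with graph filters derived from the heat equation on the item-item similarity graph. The sketch below shows generic heat-kernel smoothing, dx/dt = -L x so x(t) = exp(-t L) x(0), approximated by a truncated Taylor series; it is not the paper's exact filter family or its two-stage denoiser, and the choice of Laplacian L for the item-item graph is left as an assumption.

```python
import numpy as np

def heat_smooth(L, x, t, num_terms=10):
    """Approximate exp(-t L) @ x with a truncated Taylor series.

    L : graph Laplacian of the item-item graph (n x n), assumption on its form
    x : interaction signal over items, shape (n,) or (n, d)
    t : diffusion time; larger t gives stronger smoothing
    """
    out = np.zeros_like(x)
    term = x.copy()                      # k-th Taylor term (-tL)^k x / k!
    for k in range(num_terms):
        out = out + term
        term = (-t / (k + 1)) * (L @ term)
    return out

# toy usage: smooth a one-hot interaction vector on a 3-item path graph
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)
print(heat_smooth(L, np.array([1.0, 0.0, 0.0]), t=0.5))
```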