| Column | Dtype | Value statistics |
|---|---|---|
| parent_paper_title | string | 63 classes |
| parent_paper_arxiv_id | string | 63 classes |
| citation_shorthand | string | lengths 2–56 |
| raw_citation_text | string | lengths 9–63 |
| cited_paper_title | string | lengths 5–161 |
| cited_paper_arxiv_link | string | lengths 32–37; nullable |
| cited_paper_abstract | string | lengths 406–1.92k; nullable |
| has_metadata | bool | 1 class |
| is_arxiv_paper | bool | 2 classes |
| bib_paper_authors | string | lengths 2–2.44k; nullable |
| bib_paper_year | float64 | values 1.97k–2.03k; nullable |
| bib_paper_month | string | 16 classes |
| bib_paper_url | string | lengths 20–116; nullable |
| bib_paper_doi | string | 269 classes |
| bib_paper_journal | string | lengths 3–148; nullable |
| original_title | string | lengths 5–161 |
| search_res_title | string | lengths 4–122 |
| search_res_url | string | lengths 22–267 |
| search_res_content | string | lengths 19–1.92k |

Each record below lists these 19 fields in this order, delimited by `|`.
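For programmatic inspection, records in this shape can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example, not part of the card itself; the repository id `example-org/citation-metadata` is a placeholder, since the card does not name one.

```python
from collections import defaultdict

from datasets import load_dataset

# Placeholder repository id -- the actual dataset name is not given in this card.
ds = load_dataset("example-org/citation-metadata", split="train")

# The features should match the 19-column schema in the table above.
print(ds.features)

# Group cited-paper keys under the arXiv id of the citing (parent) paper.
by_parent = defaultdict(list)
for row in ds:
    by_parent[row["parent_paper_arxiv_id"]].append(row["citation_shorthand"])

for arxiv_id, keys in list(by_parent.items())[:3]:
    print(arxiv_id, "cites", len(keys), "papers, e.g.", keys[:3])
```

Grouping on `parent_paper_arxiv_id` reassembles each paper's bibliography from the flat per-citation rows.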
NLCTables: A Dataset for Marrying Natural Language Conditions with Table
Discovery
|
2504.15849v1
|
Table2022dong
|
\cite{Table2022dong}
|
Table Enrichment System for Machine Learning
|
http://arxiv.org/abs/2204.08235v1
|
Data scientists are constantly facing the problem of how to improve
prediction accuracy with insufficient tabular data. We propose a table
enrichment system that enriches a query table by adding external attributes
(columns) from data lakes and improves the accuracy of machine learning
predictive models. Our system has four stages, join row search, task-related
table selection, row and column alignment, and feature selection and
evaluation, to efficiently create an enriched table for a given query table and
a specified machine learning task. We demonstrate our system with a web UI to
show the use cases of table enrichment.
| true | true |
Dong, Yuyang and Oyamada, Masafumi
| 2,022 | null | null |
10.1145/3477495.3531678
| null |
Table Enrichment System for Machine Learning
|
Table Enrichment System for Machine Learning
|
http://arxiv.org/pdf/2204.08235v1
|
Data scientists are constantly facing the problem of how to improve
prediction accuracy with insufficient tabular data. We propose a table
enrichment system that enriches a query table by adding external attributes
(columns) from data lakes and improves the accuracy of machine learning
predictive models. Our system has four stages, join row search, task-related
table selection, row and column alignment, and feature selection and
evaluation, to efficiently create an enriched table for a given query table and
a specified machine learning task. We demonstrate our system with a web UI to
show the use cases of table enrichment.
|
NLCTables: A Dataset for Marrying Natural Language Conditions with Table
Discovery
|
2504.15849v1
|
zhang_ad_2018
|
\cite{zhang_ad_2018}
|
Ad Hoc Table Retrieval using Semantic Similarity
|
http://arxiv.org/abs/1802.06159v3
|
We introduce and address the problem of ad hoc table retrieval: answering a
keyword query with a ranked list of tables. This task is not only interesting
on its own account, but is also being used as a core component in many other
table-based information access scenarios, such as table completion or table
mining. The main novel contribution of this work is a method for performing
semantic matching between queries and tables. Specifically, we (i) represent
queries and tables in multiple semantic spaces (both discrete sparse and
continuous dense vector representations) and (ii) introduce various similarity
measures for matching those semantic representations. We consider all possible
combinations of semantic representations and similarity measures and use these
as features in a supervised learning model. Using a purpose-built test
collection based on Wikipedia tables, we demonstrate significant and
substantial improvements over a state-of-the-art baseline.
| true | true |
Zhang, Shuo and Balog, Krisztian
| 2,018 | null | null |
10.1145/3178876.3186067
| null |
Ad Hoc Table Retrieval using Semantic Similarity
|
Ad Hoc Table Retrieval using Semantic Similarity
|
http://arxiv.org/pdf/1802.06159v3
|
We introduce and address the problem of ad hoc table retrieval: answering a
keyword query with a ranked list of tables. This task is not only interesting
on its own account, but is also being used as a core component in many other
table-based information access scenarios, such as table completion or table
mining. The main novel contribution of this work is a method for performing
semantic matching between queries and tables. Specifically, we (i) represent
queries and tables in multiple semantic spaces (both discrete sparse and
continuous dense vector representations) and (ii) introduce various similarity
measures for matching those semantic representations. We consider all possible
combinations of semantic representations and similarity measures and use these
as features in a supervised learning model. Using a purpose-built test
collection based on Wikipedia tables, we demonstrate significant and
substantial improvements over a state-of-the-art baseline.
|
NLCTables: A Dataset for Marrying Natural Language Conditions with Table
Discovery
|
2504.15849v1
|
deng2024lakebench
|
\cite{deng2024lakebench}
|
LakeBench: A Benchmark for Discovering Joinable and Unionable Tables in Data Lakes
| null | null | true | false |
Deng, Yuhao and Chai, Chengliang and Cao, Lei and Yuan, Qin and Chen, Siyuan and Yu, Yanrui and Sun, Zhaoze and Wang, Junyi and Li, Jiajun and Cao, Ziqi and others
| 2,024 | null | null | null |
Proc. VLDB Endow.
|
LakeBench: A Benchmark for Discovering Joinable and Unionable Tables in Data Lakes
|
[PDF] LakeBench: A Benchmark for Discovering Joinable and Unionable ...
|
https://www.vldb.org/pvldb/vol17/p1925-chai.pdf
|
Discovering tables from poorly maintained data lakes is a significant challenge in data management. Two key tasks are identifying joinable and unionable
|
NLCTables: A Dataset for Marrying Natural Language Conditions with Table
Discovery
|
2504.15849v1
|
opendata
|
\cite{opendata}
|
OpenData
| null | null | true | false | null | null | null |
https://open.canada.ca/
| null | null |
OpenData
|
NYC Open Data -
|
https://opendata.cityofnewyork.us/
|
Open Data for All New Yorkers: Open Data is free public data published by New York City agencies and other partners. View details on Open Data APIs. Ask a question, leave a comment, or suggest a dataset to the NYC Open Data team. View recently published and most popular datasets on the data catalog.
|
NLCTables: A Dataset for Marrying Natural Language Conditions with Table
Discovery
|
2504.15849v1
|
venetis_recovering_2011
|
\cite{venetis_recovering_2011}
|
Recovering semantics of tables on the web
| null | null | true | false |
Venetis, Petros and Halevy, Alon and Madhavan, Jayant and Paşca, Marius and Shen, Warren and Wu, Fei and Miao, Gengxin and Wu, Chung
| 2,011 | null | null |
10.14778/2002938.2002939
|
Proc. VLDB Endow.
|
Recovering semantics of tables on the web
|
[PDF] Recovering Semantics of Tables on the Web - VLDB Endowment
|
http://www.vldb.org/pvldb/vol4/p528-venetis.pdf
|
To recover semantics of tables, we leverage a database of class labels and relationships automatically extracted from the Web. The database of classes and
|
NLCTables: A Dataset for Marrying Natural Language Conditions with Table
Discovery
|
2504.15849v1
|
cafarella2009data
|
\cite{cafarella2009data}
|
Data integration for the relational web
| null | null | true | false |
Cafarella, Michael J and Halevy, Alon and Khoussainova, Nodira
| 2,009 | null | null | null |
Proc. VLDB Endow.
|
Data integration for the relational web
|
Data Integration for the Relational Web.
|
https://dblp.org/rec/journals/pvldb/CafarellaHK09
|
Michael J. Cafarella, Alon Y. Halevy, Nodira Khoussainova: Data Integration for the Relational Web. Proc. VLDB Endow. 2(1): 1090-1101 (2009).
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
lifairness
|
\cite{lifairness}
|
Fairness in Recommendation: Foundations, Methods and Applications
|
http://arxiv.org/abs/2205.13619v6
|
As one of the most pervasive applications of machine learning, recommender
systems are playing an important role on assisting human decision making. The
satisfaction of users and the interests of platforms are closely related to the
quality of the generated recommendation results. However, as a highly
data-driven system, recommender system could be affected by data or algorithmic
bias and thus generate unfair results, which could weaken the reliance of the
systems. As a result, it is crucial to address the potential unfairness
problems in recommendation settings. Recently, there has been growing attention
on fairness considerations in recommender systems with more and more literature
on approaches to promote fairness in recommendation. However, the studies are
rather fragmented and lack a systematic organization, thus making it difficult
to penetrate for new researchers to the domain. This motivates us to provide a
systematic survey of existing works on fairness in recommendation. This survey
focuses on the foundations for fairness in recommendation literature. It first
presents a brief introduction about fairness in basic machine learning tasks
such as classification and ranking in order to provide a general overview of
fairness research, as well as introduce the more complex situations and
challenges that need to be considered when studying fairness in recommender
systems. After that, the survey will introduce fairness in recommendation with
a focus on the taxonomies of current fairness definitions, the typical
techniques for improving fairness, as well as the datasets for fairness studies
in recommendation. The survey also talks about the challenges and opportunities
in fairness research with the hope of promoting the fair recommendation
research area and beyond.
| true | true |
Li, Yunqi and Chen, Hanxiong and Xu, Shuyuan and Ge, Yingqiang and Tan, Juntao and Liu, Shuchang and Zhang, Yongfeng
| null | null | null | null |
ACM Transactions on Intelligent Systems and Technology
|
Fairness in Recommendation: Foundations, Methods and Applications
|
Fairness in Recommendation: Foundations, Methods and Applications
|
http://arxiv.org/pdf/2205.13619v6
|
As one of the most pervasive applications of machine learning, recommender
systems are playing an important role on assisting human decision making. The
satisfaction of users and the interests of platforms are closely related to the
quality of the generated recommendation results. However, as a highly
data-driven system, recommender system could be affected by data or algorithmic
bias and thus generate unfair results, which could weaken the reliance of the
systems. As a result, it is crucial to address the potential unfairness
problems in recommendation settings. Recently, there has been growing attention
on fairness considerations in recommender systems with more and more literature
on approaches to promote fairness in recommendation. However, the studies are
rather fragmented and lack a systematic organization, thus making it difficult
to penetrate for new researchers to the domain. This motivates us to provide a
systematic survey of existing works on fairness in recommendation. This survey
focuses on the foundations for fairness in recommendation literature. It first
presents a brief introduction about fairness in basic machine learning tasks
such as classification and ranking in order to provide a general overview of
fairness research, as well as introduce the more complex situations and
challenges that need to be considered when studying fairness in recommender
systems. After that, the survey will introduce fairness in recommendation with
a focus on the taxonomies of current fairness definitions, the typical
techniques for improving fairness, as well as the datasets for fairness studies
in recommendation. The survey also talks about the challenges and opportunities
in fairness research with the hope of promoting the fair recommendation
research area and beyond.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
lipani2016fairness
|
\cite{lipani2016fairness}
|
Fairness in Information Retrieval
| null | null | true | false |
Lipani, Aldo
| 2,016 | null | null | null | null |
Fairness in Information Retrieval
|
FAIR: Fairness-Aware Information Retrieval Evaluation
|
https://arxiv.org/abs/2106.08527
|
by R Gao · 2021 · Cited by 33 — We propose a new metric called FAIR. By unifying standard IR metrics and fairness measures into an integrated metric, this metric offers a new perspective for
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
deldjoo2022survey
|
\cite{deldjoo2022survey}
|
A Survey of Research on Fair Recommender Systems
| null | null | true | false |
Deldjoo, Yashar and Jannach, Dietmar and Bellogin, Alejandro and Difonzo, Alessandro and Zanzonelli, Dario
| 2,022 | null | null | null |
arXiv preprint arXiv:2205.11127
|
A Survey of Research on Fair Recommender Systems
|
A Survey of Research on Fair Recommender Systems - OpenReview
|
https://openreview.net/forum?id=K7emU6kWa9
|
In this survey, we first review the fundamental concepts and notions of fairness that were put forward in the area in the recent past.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
xu2025fairdiversecomprehensivetoolkitfair
|
\cite{xu2025fairdiversecomprehensivetoolkitfair}
|
FairDiverse: A Comprehensive Toolkit for Fair and Diverse Information
Retrieval Algorithms
|
http://arxiv.org/abs/2502.11883v1
|
In modern information retrieval (IR), achieving more than just accuracy is
essential to sustaining a healthy ecosystem, especially when addressing
fairness and diversity considerations. To meet these needs, various datasets,
algorithms, and evaluation frameworks have been introduced. However, these
algorithms are often tested across diverse metrics, datasets, and experimental
setups, leading to inconsistencies and difficulties in direct comparisons. This
highlights the need for a comprehensive IR toolkit that enables standardized
evaluation of fairness- and diversity-aware algorithms across different IR
tasks. To address this challenge, we present FairDiverse, an open-source and
standardized toolkit. FairDiverse offers a framework for integrating fair and
diverse methods, including pre-processing, in-processing, and post-processing
techniques, at different stages of the IR pipeline. The toolkit supports the
evaluation of 28 fairness and diversity algorithms across 16 base models,
covering two core IR tasks (search and recommendation) thereby establishing a
comprehensive benchmark. Moreover, FairDiverse is highly extensible, providing
multiple APIs that empower IR researchers to swiftly develop and evaluate their
own fairness and diversity aware models, while ensuring fair comparisons with
existing baselines. The project is open-sourced and available on
https://github.com/XuChen0427/FairDiverse.
| true | true |
Chen Xu and Zhirui Deng and Clara Rus and Xiaopeng Ye and Yuanna Liu and Jun Xu and Zhicheng Dou and Ji-Rong Wen and Maarten de Rijke
| 2,025 | null |
https://arxiv.org/abs/2502.11883
| null | null |
FairDiverse: A Comprehensive Toolkit for Fair and Diverse Information
Retrieval Algorithms
|
FairDiverse: A Comprehensive Toolkit for Fair and Diverse ... - arXiv
|
https://arxiv.org/html/2502.11883v1
|
FairDiverse offers a framework for integrating fairness- and diversity-focused methods, including pre-processing, in-processing, and post-processing techniques.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
Calmon17
|
\cite{Calmon17}
|
Optimized Data Pre-Processing for Discrimination Prevention
|
http://arxiv.org/abs/1704.03354v1
|
Non-discrimination is a recognized objective in algorithmic decision making.
In this paper, we introduce a novel probabilistic formulation of data
pre-processing for reducing discrimination. We propose a convex optimization
for learning a data transformation with three goals: controlling
discrimination, limiting distortion in individual data samples, and preserving
utility. We characterize the impact of limited sample size in accomplishing
this objective, and apply two instances of the proposed optimization to
datasets, including one on real-world criminal recidivism. The results
demonstrate that all three criteria can be simultaneously achieved and also
reveal interesting patterns of bias in American society.
| true | true |
Calmon, Flavio P. and Wei, Dennis and Vinzamuri, Bhanukiran and Ramamurthy, Karthikeyan Natesan and Varshney, Kush R.
| 2,017 | null | null | null | null |
Optimized Data Pre-Processing for Discrimination Prevention
|
[PDF] Optimized Pre-Processing for Discrimination Prevention - NIPS
|
http://papers.neurips.cc/paper/6988-optimized-pre-processing-for-discrimination-prevention.pdf
|
We propose a convex optimization for learning a data transformation with three goals: controlling discrimination, limiting distortion in individual data samples
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
xiong2024fairwasp
|
\cite{xiong2024fairwasp}
|
FairWASP: Fast and Optimal Fair Wasserstein Pre-processing
|
http://arxiv.org/abs/2311.00109v3
|
Recent years have seen a surge of machine learning approaches aimed at
reducing disparities in model outputs across different subgroups. In many
settings, training data may be used in multiple downstream applications by
different users, which means it may be most effective to intervene on the
training data itself. In this work, we present FairWASP, a novel pre-processing
approach designed to reduce disparities in classification datasets without
modifying the original data. FairWASP returns sample-level weights such that
the reweighted dataset minimizes the Wasserstein distance to the original
dataset while satisfying (an empirical version of) demographic parity, a
popular fairness criterion. We show theoretically that integer weights are
optimal, which means our method can be equivalently understood as duplicating
or eliminating samples. FairWASP can therefore be used to construct datasets
which can be fed into any classification method, not just methods which accept
sample weights. Our work is based on reformulating the pre-processing task as a
large-scale mixed-integer program (MIP), for which we propose a highly
efficient algorithm based on the cutting plane method. Experiments demonstrate
that our proposed optimization algorithm significantly outperforms
state-of-the-art commercial solvers in solving both the MIP and its linear
program relaxation. Further experiments highlight the competitive performance
of FairWASP in reducing disparities while preserving accuracy in downstream
classification settings.
| true | true |
Xiong, Zikai and Dalmasso, Niccol{\`o} and Mishler, Alan and Potluru, Vamsi K and Balch, Tucker and Veloso, Manuela
| 2,024 | null | null | null | null |
FairWASP: Fast and Optimal Fair Wasserstein Pre-processing
|
[PDF] FairWASP: Fast and Optimal Fair Wasserstein Pre-processing
|
https://ojs.aaai.org/index.php/AAAI/article/view/29545/30909
|
In this work, we present FairWASP, a novel pre-processing approach designed to reduce disparities in classification datasets without modifying the original
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
Tang23FairBias
|
\cite{Tang23FairBias}
|
When Fairness meets Bias: a Debiased Framework for Fairness aware Top-N Recommendation
| null | null | true | false |
Tang, Jiakai and Shen, Shiqi and Wang, Zhipeng and Gong, Zhi and Zhang, Jingsen and Chen, Xu
| 2,023 | null | null |
10.1145/3604915.3608770
| null |
When Fairness meets Bias: a Debiased Framework for Fairness aware Top-N Recommendation
|
a Debiased Framework for Fairness aware Top-N ...
|
https://openreview.net/forum?id=gb0XymwzJq&referrer=%5Bthe%20profile%20of%20Jiakai%20Tang%5D(%2Fprofile%3Fid%3D~Jiakai_Tang1)
|
To study this problem, in this paper, we formally define a novel task named as unbiased fairness aware Top-N recommendation. For solving this task, we firstly
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
xu2023p
|
\cite{xu2023p}
|
P-MMF: Provider Max-min Fairness Re-ranking in Recommender System
| null | null | true | false |
Xu, Chen and Chen, Sirui and Xu, Jun and Shen, Weiran and Zhang, Xiao and Wang, Gang and Dong, Zhenhua
| 2,023 | null | null | null | null |
P-MMF: Provider Max-min Fairness Re-ranking in Recommender System
|
[2303.06660] P-MMF: Provider Max-min Fairness Re- ...
|
https://arxiv.org/abs/2303.06660
|
In this paper, we proposed an online re-ranking model named Provider Max-min Fairness Re-ranking (P-MMF) to tackle the problem.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
fairrec
|
\cite{fairrec}
|
FairRec: Two-Sided Fairness for Personalized Recommendations in
Two-Sided Platforms
|
http://arxiv.org/abs/2002.10764v2
|
We investigate the problem of fair recommendation in the context of two-sided
online platforms, comprising customers on one side and producers on the other.
Traditionally, recommendation services in these platforms have focused on
maximizing customer satisfaction by tailoring the results according to the
personalized preferences of individual customers. However, our investigation
reveals that such customer-centric design may lead to unfair distribution of
exposure among the producers, which may adversely impact their well-being. On
the other hand, a producer-centric design might become unfair to the customers.
Thus, we consider fairness issues that span both customers and producers. Our
approach involves a novel mapping of the fair recommendation problem to a
constrained version of the problem of fairly allocating indivisible goods. Our
proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure
for most of the producers and Envy-Free up to One item (EF1) fairness for every
customer. Extensive evaluations over multiple real-world datasets show the
effectiveness of FairRec in ensuring two-sided fairness while incurring a
marginal loss in the overall recommendation quality.
| true | true |
Patro, Gourab K. and Biswas, Arpita and Ganguly, Niloy and Gummadi, Krishna P. and Chakraborty, Abhijnan
| 2,020 | null | null | null | null |
FairRec: Two-Sided Fairness for Personalized Recommendations in
Two-Sided Platforms
|
Two-Sided Fairness for Personalized Recommendations in ...
|
https://github.com/gourabkumarpatro/FairRec_www_2020
|
FairRec: Two-Sided Fairness for Personalized Recommendations in Two-Sided Platforms. Gourab K Patro, Arpita Biswas, Niloy Ganguly, Krishna P. Gummadi and
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
abdollahpouri2020multistakeholder
|
\cite{abdollahpouri2020multistakeholder}
|
Multistakeholder Recommendation: Survey and Research Directions
| null | null | true | false |
Abdollahpouri, Himan and Adomavicius, Gediminas and Burke, Robin and Guy, Ido and Jannach, Dietmar and Kamishima, Toshihiro and Krasnodebski, Jan and Pizzato, Luiz
| 2,020 | null | null | null |
User Modeling and User-Adapted Interaction
|
Multistakeholder Recommendation: Survey and Research Directions
|
Multistakeholder recommendation: Survey and research directions
|
https://experts.colorado.edu/display/pubid_280350
|
Multistakeholder recommendation: Survey and research directions (CU Experts, CU Boulder).
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
abdollahpouri2019multi
|
\cite{abdollahpouri2019multi}
|
Multi-stakeholder Recommendation and its Connection to Multi-sided
Fairness
|
http://arxiv.org/abs/1907.13158v1
|
There is growing research interest in recommendation as a multi-stakeholder
problem, one where the interests of multiple parties should be taken into
account. This category subsumes some existing well-established areas of
recommendation research including reciprocal and group recommendation, but a
detailed taxonomy of different classes of multi-stakeholder recommender systems
is still lacking. Fairness-aware recommendation has also grown as a research
area, but its close connection with multi-stakeholder recommendation is not
always recognized. In this paper, we define the most commonly observed classes
of multi-stakeholder recommender systems and discuss how different fairness
concerns may come into play in such systems.
| true | true |
Abdollahpouri, Himan and Burke, Robin
| 2,019 | null | null | null |
arXiv preprint arXiv:1907.13158
|
Multi-stakeholder Recommendation and its Connection to Multi-sided
Fairness
|
Multi-stakeholder Recommendation and its Connection to ...
|
https://www.researchgate.net/publication/334821953_Multi-stakeholder_Recommendation_and_its_Connection_to_Multi-sided_Fairness
|
In this paper, we define the most commonly observed classes of multi-stakeholder recommender systems and discuss how different fairness concerns may come into
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
abdollahpouri2019unfairness
|
\cite{abdollahpouri2019unfairness}
|
The Unfairness of Popularity Bias in Recommendation
|
http://arxiv.org/abs/1907.13286v3
|
Recommender systems are known to suffer from the popularity bias problem:
popular (i.e. frequently rated) items get a lot of exposure while less popular
ones are under-represented in the recommendations. Research in this area has
been mainly focusing on finding ways to tackle this issue by increasing the
number of recommended long-tail items or otherwise the overall catalog
coverage. In this paper, however, we look at this problem from the users'
perspective: we want to see how popularity bias causes the recommendations to
deviate from what the user expects to get from the recommender system. We
define three different groups of users according to their interest in popular
items (Niche, Diverse and Blockbuster-focused) and show the impact of
popularity bias on the users in each group. Our experimental results on a movie
dataset show that in many recommendation algorithms the recommendations the
users get are extremely concentrated on popular items even if a user is
interested in long-tail and non-popular items showing an extreme bias
disparity.
| true | true |
Abdollahpouri, Himan and Mansoury, Masoud and Burke, Robin and Mobasher, Bamshad
| 2,019 | null | null | null |
arXiv preprint arXiv:1907.13286
|
The Unfairness of Popularity Bias in Recommendation
|
The Unfairness of Popularity Bias in Recommendation
|
http://arxiv.org/pdf/1907.13286v3
|
Recommender systems are known to suffer from the popularity bias problem:
popular (i.e. frequently rated) items get a lot of exposure while less popular
ones are under-represented in the recommendations. Research in this area has
been mainly focusing on finding ways to tackle this issue by increasing the
number of recommended long-tail items or otherwise the overall catalog
coverage. In this paper, however, we look at this problem from the users'
perspective: we want to see how popularity bias causes the recommendations to
deviate from what the user expects to get from the recommender system. We
define three different groups of users according to their interest in popular
items (Niche, Diverse and Blockbuster-focused) and show the impact of
popularity bias on the users in each group. Our experimental results on a movie
dataset show that in many recommendation algorithms the recommendations the
users get are extremely concentrated on popular items even if a user is
interested in long-tail and non-popular items showing an extreme bias
disparity.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
li2021user
|
\cite{li2021user}
|
User-oriented Fairness in Recommendation
|
http://arxiv.org/abs/2104.10671v1
|
As a highly data-driven application, recommender systems could be affected by
data bias, resulting in unfair results for different data groups, which could
be a reason that affects the system performance. Therefore, it is important to
identify and solve the unfairness issues in recommendation scenarios. In this
paper, we address the unfairness problem in recommender systems from the user
perspective. We group users into advantaged and disadvantaged groups according
to their level of activity, and conduct experiments to show that current
recommender systems will behave unfairly between two groups of users.
Specifically, the advantaged users (active) who only account for a small
proportion in data enjoy much higher recommendation quality than those
disadvantaged users (inactive). Such bias can also affect the overall
performance since the disadvantaged users are the majority. To solve this
problem, we provide a re-ranking approach to mitigate this unfairness problem
by adding constraints over evaluation metrics. The experiments we conducted on
several real-world datasets with various recommendation algorithms show that
our approach can not only improve group fairness of users in recommender
systems, but also achieve better overall recommendation performance.
| true | true |
Li, Yunqi and Chen, Hanxiong and Fu, Zuohui and Ge, Yingqiang and Zhang, Yongfeng
| 2,021 | null | null | null | null |
User-oriented Fairness in Recommendation
|
User-oriented Fairness in Recommendation
|
https://dl.acm.org/doi/10.1145/3442381.3449866
|
In this paper, we address the unfairness problem in recommender systems from the user perspective. We group users into advantaged and disadvantaged groups.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
TaxRank
|
\cite{TaxRank}
|
A Taxation Perspective for Fair Re-ranking
|
http://arxiv.org/abs/2404.17826v1
|
Fair re-ranking aims to redistribute ranking slots among items more equitably
to ensure responsibility and ethics. The exploration of redistribution problems
has a long history in economics, offering valuable insights for conceptualizing
fair re-ranking as a taxation process. Such a formulation provides us with a
fresh perspective to re-examine fair re-ranking and inspire the development of
new methods. From a taxation perspective, we theoretically demonstrate that
most previous fair re-ranking methods can be reformulated as an item-level tax
policy. Ideally, a good tax policy should be effective and conveniently
controllable to adjust ranking resources. However, both empirical and
theoretical analyses indicate that the previous item-level tax policy cannot
meet two ideal controllable requirements: (1) continuity, ensuring minor
changes in tax rates result in small accuracy and fairness shifts; (2)
controllability over accuracy loss, ensuring precise estimation of the accuracy
loss under a specific tax rate. To overcome these challenges, we introduce a
new fair re-ranking method named Tax-rank, which levies taxes based on the
difference in utility between two items. Then, we efficiently optimize such an
objective by utilizing the Sinkhorn algorithm in optimal transport. Upon a
comprehensive analysis, Our model Tax-rank offers a superior tax policy for
fair re-ranking, theoretically demonstrating both continuity and
controllability over accuracy loss. Experimental results show that Tax-rank
outperforms all state-of-the-art baselines in terms of effectiveness and
efficiency on recommendation and advertising tasks.
| true | true |
Xu, Chen and Ye, Xiaopeng and Wang, Wenjie and Pang, Liang and Xu, Jun and Chua, Tat-Seng
| 2,024 | null |
https://doi.org/10.1145/3626772.3657766
|
10.1145/3626772.3657766
| null |
A Taxation Perspective for Fair Re-ranking
|
[PDF] A Taxation Perspective for Fair Re-ranking
|
https://gsai.ruc.edu.cn/uploads/20240924/2da852a5ebce07442e6392b4505ea4aa.pdf
|
Fair re-ranking aims to redistribute ranking slots among items more equitably to ensure responsibility and ethics.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
singh2019policy
|
\cite{singh2019policy}
|
Policy Learning for Fairness in Ranking
|
http://arxiv.org/abs/1902.04056v2
|
Conventional Learning-to-Rank (LTR) methods optimize the utility of the
rankings to the users, but they are oblivious to their impact on the ranked
items. However, there has been a growing understanding that the latter is
important to consider for a wide range of ranking applications (e.g. online
marketplaces, job placement, admissions). To address this need, we propose a
general LTR framework that can optimize a wide range of utility metrics (e.g.
NDCG) while satisfying fairness of exposure constraints with respect to the
items. This framework expands the class of learnable ranking functions to
stochastic ranking policies, which provides a language for rigorously
expressing fairness specifications. Furthermore, we provide a new LTR algorithm
called Fair-PG-Rank for directly searching the space of fair ranking policies
via a policy-gradient approach. Beyond the theoretical evidence in deriving the
framework and the algorithm, we provide empirical results on simulated and
real-world datasets verifying the effectiveness of the approach in individual
and group-fairness settings.
| true | true |
Singh, Ashudeep and Joachims, Thorsten
| 2,019 | null | null | null |
Advances in Neural Information Processing Systems
|
Policy Learning for Fairness in Ranking
|
Policy Learning for Fairness in Ranking
|
http://arxiv.org/pdf/1902.04056v2
|
Conventional Learning-to-Rank (LTR) methods optimize the utility of the
rankings to the users, but they are oblivious to their impact on the ranked
items. However, there has been a growing understanding that the latter is
important to consider for a wide range of ranking applications (e.g. online
marketplaces, job placement, admissions). To address this need, we propose a
general LTR framework that can optimize a wide range of utility metrics (e.g.
NDCG) while satisfying fairness of exposure constraints with respect to the
items. This framework expands the class of learnable ranking functions to
stochastic ranking policies, which provides a language for rigorously
expressing fairness specifications. Furthermore, we provide a new LTR algorithm
called Fair-PG-Rank for directly searching the space of fair ranking policies
via a policy-gradient approach. Beyond the theoretical evidence in deriving the
framework and the algorithm, we provide empirical results on simulated and
real-world datasets verifying the effectiveness of the approach in individual
and group-fairness settings.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
jaenich2024fairness
|
\cite{jaenich2024fairness}
|
Fairness-Aware Exposure Allocation via Adaptive Reranking
| null | null | true | false |
Jaenich, Thomas and McDonald, Graham and Ounis, Iadh
| 2,024 | null | null | null | null |
Fairness-Aware Exposure Allocation via Adaptive Reranking
|
[PDF] Fairness-Aware Exposure Allocation via Adaptive Reranking
|
https://eprints.gla.ac.uk/323883/1/323883.pdf
|
In this paper, we explore how adaptive re-ranking affects the fair distribution of exposure, compared to a standard re-ranking.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
TaoSIGIRAP
|
\cite{TaoSIGIRAP}
|
Vertical Allocation-based Fair Exposure Amortizing in Ranking
|
http://arxiv.org/abs/2204.03046v2
|
Result ranking often affects consumer satisfaction as well as the amount of
exposure each item receives in the ranking services. Myopically maximizing
customer satisfaction by ranking items only according to relevance will lead to
unfair distribution of exposure for items, followed by unfair opportunities and
economic gains for item producers/providers. Such unfairness will force
providers to leave the system and discourage new providers from coming in.
Eventually, fewer purchase options would be left for consumers, and the
utilities of both consumers and providers would be harmed. Thus, to maintain a
balance between ranking relevance and fairness is crucial for both parties. In
this paper, we focus on the exposure fairness in ranking services. We
demonstrate that existing methods for amortized fairness optimization could be
suboptimal in terms of fairness-relevance tradeoff because they fail to utilize
the prior knowledge of consumers. We further propose a novel algorithm named
Vertical Allocation-based Fair Exposure Amortizing in Ranking, or VerFair, to
reach a better balance between exposure fairness and ranking performance.
Extensive experiments on three real-world datasets show that VerFair
significantly outperforms state-of-the-art fair ranking algorithms in
fairness-performance trade-offs from both the individual level and the group
level.
| true | true |
Yang, Tao and Xu, Zhichao and Ai, Qingyao
| 2,023 | null |
https://doi.org/10.1145/3624918.3625313
|
10.1145/3624918.3625313
| null |
Vertical Allocation-based Fair Exposure Amortizing in Ranking
|
Vertical Allocation-based Fair Exposure Amortizing in ...
|
https://arxiv.org/abs/2204.03046
|
by T Yang · 2022 · Cited by 10 — A novel algorithm named Vertical Allocation-based Fair Exposure Amortizing in Ranking, or VerFair, to reach a better balance between exposure fairness and
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
do2022optimizing
|
\cite{do2022optimizing}
|
Optimizing generalized Gini indices for fairness in rankings
|
http://arxiv.org/abs/2204.06521v4
|
There is growing interest in designing recommender systems that aim at being
fair towards item producers or their least satisfied users. Inspired by the
domain of inequality measurement in economics, this paper explores the use of
generalized Gini welfare functions (GGFs) as a means to specify the normative
criterion that recommender systems should optimize for. GGFs weight individuals
depending on their ranks in the population, giving more weight to worse-off
individuals to promote equality. Depending on these weights, GGFs minimize the
Gini index of item exposure to promote equality between items, or focus on the
performance on specific quantiles of least satisfied users. GGFs for ranking
are challenging to optimize because they are non-differentiable. We resolve
this challenge by leveraging tools from non-smooth optimization and projection
operators used in differentiable sorting. We present experiments using real
datasets with up to 15k users and items, which show that our approach obtains
better trade-offs than the baselines on a variety of recommendation tasks and
fairness criteria.
| true | true |
Do, Virginie and Usunier, Nicolas
| 2,022 | null | null | null | null |
Optimizing generalized Gini indices for fairness in rankings
|
Optimizing generalized Gini indices for fairness in rankings
|
http://arxiv.org/pdf/2204.06521v4
|
There is growing interest in designing recommender systems that aim at being
fair towards item producers or their least satisfied users. Inspired by the
domain of inequality measurement in economics, this paper explores the use of
generalized Gini welfare functions (GGFs) as a means to specify the normative
criterion that recommender systems should optimize for. GGFs weight individuals
depending on their ranks in the population, giving more weight to worse-off
individuals to promote equality. Depending on these weights, GGFs minimize the
Gini index of item exposure to promote equality between items, or focus on the
performance on specific quantiles of least satisfied users. GGFs for ranking
are challenging to optimize because they are non-differentiable. We resolve
this challenge by leveraging tools from non-smooth optimization and projection
operators used in differentiable sorting. We present experiments using real
datasets with up to 15k users and items, which show that our approach obtains
better trade-offs than the baselines on a variety of recommendation tasks and
fairness criteria.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
cpfair
|
\cite{cpfair}
|
CPFair: Personalized Consumer and Producer Fairness Re-ranking for
Recommender Systems
|
http://arxiv.org/abs/2204.08085v1
|
Recently, there has been a rising awareness that when machine learning (ML)
algorithms are used to automate choices, they may treat/affect individuals
unfairly, with legal, ethical, or economic consequences. Recommender systems
are prominent examples of such ML systems that assist users in making
high-stakes judgments. A common trend in the previous literature research on
fairness in recommender systems is that the majority of works treat user and
item fairness concerns separately, ignoring the fact that recommender systems
operate in a two-sided marketplace. In this work, we present an
optimization-based re-ranking approach that seamlessly integrates fairness
constraints from both the consumer and producer-side in a joint objective
framework. We demonstrate through large-scale experiments on 8 datasets that
our proposed method is capable of improving both consumer and producer fairness
without reducing overall recommendation quality, demonstrating the role
algorithms may play in minimizing data biases.
| true | true |
Naghiaei, Mohammadmehdi and Rahmani, Hossein A and Deldjoo, Yashar
| 2,022 | null | null | null |
arXiv preprint arXiv:2204.08085
|
CPFair: Personalized Consumer and Producer Fairness Re-ranking for
Recommender Systems
|
CPFair: Personalized Consumer and Producer Fairness Re-ranking ...
|
https://arxiv.org/abs/2204.08085
|
We present an optimization-based re-ranking approach that seamlessly integrates fairness constraints from both the consumer and producer-side in a joint
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
wu2021tfrom
|
\cite{wu2021tfrom}
|
TFROM: A Two-sided Fairness-Aware Recommendation Model for Both
Customers and Providers
|
http://arxiv.org/abs/2104.09024v1
|
At present, most research on the fairness of recommender systems is conducted
either from the perspective of customers or from the perspective of product (or
service) providers. However, such a practice ignores the fact that when
fairness is guaranteed to one side, the fairness and rights of the other side
are likely to reduce. In this paper, we consider recommendation scenarios from
the perspective of two sides (customers and providers). From the perspective of
providers, we consider the fairness of the providers' exposure in recommender
system. For customers, we consider the fairness of the reduced quality of
recommendation results due to the introduction of fairness measures. We
theoretically analyzed the relationship between recommendation quality,
customers fairness, and provider fairness, and design a two-sided
fairness-aware recommendation model (TFROM) for both customers and providers.
Specifically, we design two versions of TFROM for offline and online
recommendation. The effectiveness of the model is verified on three real-world
data sets. The experimental results show that TFROM provides better two-sided
fairness while still maintaining a higher level of personalization than the
baseline algorithms.
| true | true |
Wu, Yao and Cao, Jian and Xu, Guandong and Tan, Yudong
| 2,021 | null | null | null | null |
TFROM: A Two-sided Fairness-Aware Recommendation Model for Both
Customers and Providers
|
TFROM: A Two-sided Fairness-Aware Recommendation Model for ...
|
https://arxiv.org/abs/2104.09024
|
In this paper, we consider recommendation scenarios from the perspective of two sides (customers and providers). From the perspective of
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
fairrecplus
|
\cite{fairrecplus}
|
Towards Fair Recommendation in Two-Sided Platforms
|
http://arxiv.org/abs/2201.01180v1
|
Many online platforms today (such as Amazon, Netflix, Spotify, LinkedIn, and
AirBnB) can be thought of as two-sided markets with producers and customers of
goods and services. Traditionally, recommendation services in these platforms
have focused on maximizing customer satisfaction by tailoring the results
according to the personalized preferences of individual customers. However, our
investigation reinforces the fact that such customer-centric design of these
services may lead to unfair distribution of exposure to the producers, which
may adversely impact their well-being. On the other hand, a pure
producer-centric design might become unfair to the customers. As more and more
people are depending on such platforms to earn a living, it is important to
ensure fairness to both producers and customers. In this work, by mapping a
fair personalized recommendation problem to a constrained version of the
problem of fairly allocating indivisible goods, we propose to provide fairness
guarantees for both sides. Formally, our proposed {\em FairRec} algorithm
guarantees Maxi-Min Share ($\alpha$-MMS) of exposure for the producers, and
Envy-Free up to One Item (EF1) fairness for the customers. Extensive
evaluations over multiple real-world datasets show the effectiveness of {\em
FairRec} in ensuring two-sided fairness while incurring a marginal loss in
overall recommendation quality. Finally, we present a modification of FairRec
(named as FairRecPlus) that at the cost of additional computation time,
improves the recommendation performance for the customers, while maintaining
the same fairness guarantees.
| true | true |
Biswas, Arpita and Patro, Gourab K. and Ganguly, Niloy and Gummadi, Krishna P and Chakraborty, Abhijnan
| 2,021 | null | null | null |
ACM Transactions on the Web (TWEB)
|
Towards Fair Recommendation in Two-Sided Platforms
|
Toward Fair Recommendation in Two-sided Platforms
|
https://dl.acm.org/doi/10.1145/3503624
|
While FairRec provides two-sided fair recommendations, it can be further tweaked to improve the recommendation performance for the customers. We
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
zafar2019fairness
|
\cite{zafar2019fairness}
|
Fairness Constraints: A Flexible Approach for Fair Classification
| null | null | true | false |
Zafar, Muhammad Bilal and Valera, Isabel and Gomez-Rodriguez, Manuel and Gummadi, Krishna P
| 2,019 | null | null | null |
The Journal of Machine Learning Research
|
Fairness Constraints: A Flexible Approach for Fair Classification
|
Fairness Constraints: A Flexible Approach for Fair Classification
|
https://jmlr.org/papers/v20/18-262.html
|
In this context, there is a need for computational techniques to limit unfairness in algorithmic decision making. In this work, we take a step forward to fulfill that need and introduce a flexible constraint-based framework to enable the design of fair margin-based classifiers. The main technical innovation of our framework is a general and intuitive measure of decision boundary unfairness, which serves as a tractable proxy to several of the most popular computational definitions of unfairness from the literature. Leveraging our measure, we can reduce the design of fair margin-based classifiers to adding tractable constraints on their decision boundaries. Experiments on multiple synthetic and real-world datasets show that our framework is able to successfully limit unfairness, often at a small cost in terms of accuracy.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
lambert1992distribution
|
\cite{lambert1992distribution}
|
The Distribution and Redistribution of Income
| null | null | true | false |
Lambert, Peter J.
| 1,992 | null | null | null | null |
The Distribution and Redistribution of Income
|
[PDF] The distribution and redistribution of income - Cornell eCommons
|
https://ecommons.cornell.edu/bitstreams/4ec59bd5-8672-42b0-985c-9efd84472f75/download
|
This book seeks "to bring together, in a single body, the many strands of formal analysis of income distribution and redistribution which have developed since
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
saito2022fair
|
\cite{saito2022fair}
|
Fair Ranking as Fair Division: Impact-Based Individual Fairness in
Ranking
|
http://arxiv.org/abs/2206.07247v2
|
Rankings have become the primary interface in two-sided online markets. Many
have noted that the rankings not only affect the satisfaction of the users
(e.g., customers, listeners, employers, travelers), but that the position in
the ranking allocates exposure -- and thus economic opportunity -- to the
ranked items (e.g., articles, products, songs, job seekers, restaurants,
hotels). This has raised questions of fairness to the items, and most existing
works have addressed fairness by explicitly linking item exposure to item
relevance. However, we argue that any particular choice of such a link function
may be difficult to defend, and we show that the resulting rankings can still
be unfair. To avoid these shortcomings, we develop a new axiomatic approach
that is rooted in principles of fair division. This not only avoids the need to
choose a link function, but also more meaningfully quantifies the impact on the
items beyond exposure. Our axioms of envy-freeness and dominance over uniform
ranking postulate that for a fair ranking policy every item should prefer their
own rank allocation over that of any other item, and that no item should be
actively disadvantaged by the rankings. To compute ranking policies that are
fair according to these axioms, we propose a new ranking objective related to
the Nash Social Welfare. We show that the solution has guarantees regarding its
envy-freeness, its dominance over uniform rankings for every item, and its
Pareto optimality. In contrast, we show that conventional exposure-based
fairness can produce large amounts of envy and have a highly disparate impact
on the items. Beyond these theoretical results, we illustrate empirically how
our framework controls the trade-off between impact-based individual item
fairness and user utility.
| true | true |
Saito, Yuta and Joachims, Thorsten
| 2,022 | null | null | null | null |
Fair Ranking as Fair Division: Impact-Based Individual Fairness in
Ranking
|
[PDF] Fair Ranking as Fair Division: Impact-Based Individual Fairness in ...
|
https://www.cs.cornell.edu/people/tj/publications/saito_joachims_22b
|
Our axioms of envy-freeness and dominance over uniform ranking postulate that for a fair ranking policy every item should prefer their own rank allocation over
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
hanlon2010review
|
\cite{hanlon2010review}
|
A Review of Tax Research
| null | null | true | false |
Hanlon, Michelle and Heitzman, Shane
| 2,010 | null | null | null |
Journal of accounting and Economics
|
A Review of Tax Research
|
A Review of Tax Research by Michelle Hanlon, Shane Heitzman
|
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1476561
|
A Review of Tax Research by Michelle Hanlon and Shane Heitzman (July 25, 2010), SSRN.
|
Understanding Accuracy-Fairness Trade-offs in Re-ranking through
Elasticity in Economics
|
2504.14991v1
|
nerre2001concept
|
\cite{nerre2001concept}
|
The Concept of Tax Culture
| null | null | true | false |
Nerr{\'e}, Birger
| 2,001 | null | null | null | null |
The Concept of Tax Culture
|
THE CONCEPT OF TAX CULTURE IN CONTEMPORARY TIMES
|
https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/iusplr13&section=21
|
Accordingly, tax culture is more than "culture of taxation" and "tax-paying culture", and studies the motives which impact on voluntary tax compliance,
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
HB
|
\cite{HB}
|
Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers
| null | null | true | false |
Zadrozny, Bianca and Elkan, Charles
| 2,001 | null |
https://dl.acm.org/doi/10.5555/645530.655658
| null | null |
Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers
|
(PDF) Obtaining Calibrated Probability Estimates from Decision ...
|
https://www.researchgate.net/publication/2368094_Obtaining_Calibrated_Probability_Estimates_from_Decision_Trees_and_Naive_Bayesian_Classifiers
|
This paper presents simple but successful methods for obtaining calibrated probability estimates from decision tree and naive Bayesian classifiers.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
MBCT
|
\cite{MBCT}
|
MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty
Calibration
|
http://arxiv.org/abs/2202.04348v2
|
Most machine learning classifiers only concern classification accuracy, while
certain applications (such as medical diagnosis, meteorological forecasting,
and computation advertising) require the model to predict the true probability,
known as a calibrated estimate. In previous work, researchers have developed
several calibration methods to post-process the outputs of a predictor to
obtain calibrated values, such as binning and scaling methods. Compared with
scaling, binning methods are shown to have distribution-free theoretical
guarantees, which motivates us to prefer binning methods for calibration.
However, we notice that existing binning methods have several drawbacks: (a)
the binning scheme only considers the original prediction values, thus limiting
the calibration performance; and (b) the binning approach is non-individual,
mapping multiple samples in a bin to the same value, and thus is not suitable
for order-sensitive applications. In this paper, we propose a feature-aware
binning framework, called Multiple Boosting Calibration Trees (MBCT), along
with a multi-view calibration loss to tackle the above issues. Our MBCT
optimizes the binning scheme by the tree structures of features, and adopts a
linear function in a tree node to achieve individual calibration. Our MBCT is
non-monotonic, and has the potential to improve order accuracy, due to its
learnable binning scheme and the individual calibration. We conduct
comprehensive experiments on three datasets in different fields. Results show
that our method outperforms all competing models in terms of both calibration
error and order accuracy. We also conduct simulation experiments, justifying
that the proposed multi-view calibration loss is a better metric in modeling
calibration error.
| true | true |
Huang, Siguang and Wang, Yunli and Mou, Lili and Zhang, Huayue and Zhu, Han and Yu, Chuan and Zheng, Bo
| 2,022 | null |
https://doi.org/10.1145/3485447.3512096
|
10.1145/3485447.3512096
| null |
MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty
Calibration
|
MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty ...
|
https://dl.acm.org/doi/10.1145/3485447.3512096
|
Our MBCT is non-monotonic, and has the potential to improve order accuracy, due to its learnable binning scheme and the individual calibration.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
IR
|
\cite{IR}
|
Transforming classifier scores into accurate multiclass probability estimates
| null | null | true | false |
Zadrozny, Bianca and Elkan, Charles
| 2,002 | null |
https://doi.org/10.1145/775047.775151
|
10.1145/775047.775151
| null |
Transforming classifier scores into accurate multiclass probability estimates
|
(PDF) Transforming Classifier Scores into Accurate Multiclass ...
|
https://www.researchgate.net/publication/2571315_Transforming_Classifier_Scores_into_Accurate_Multiclass_Probability_Estimates
|
Here, we show how to obtain accurate probability estimates for multiclass problems by combining calibrated binary probability estimates.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
SIR
|
\cite{SIR}
|
Calibrating User Response Predictions in Online Advertising
| null | null | true | false |
Deng, Chao and Wang, Hao and Tan, Qing and Xu, Jian and Gai, Kun
| 2,020 | null |
https://doi.org/10.1007/978-3-030-67667-4_13
|
10.1007/978-3-030-67667-4_13
| null |
Calibrating User Response Predictions in Online Advertising
|
Calibrating User Response Predictions in Online Advertising
|
https://dl.acm.org/doi/abs/10.1007/978-3-030-67667-4_13
|
To obtain accurate probability, calibration is usually used to transform predicted probabilities to posterior probabilities.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
PlattScaling
|
\cite{PlattScaling}
|
Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods
| null | null | true | false |
Platt, John and others
| 1,999 | null |
https://home.cs.colorado.edu/~mozer/Teaching/syllabi/6622/papers/Platt1999.pdf
| null |
Advances in large margin classifiers
|
Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods
|
[PDF] Probabilistic Outputs for Support Vector Machines and Comparisons ...
|
https://home.cs.colorado.edu/~mozer/Teaching/syllabi/6622/papers/Platt1999.pdf
|
This chapter compares classification error rate and likelihood scores for an SVM plus sigmoid versus a kernel method trained with a regularized.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
TemperatureScaling
|
\cite{TemperatureScaling}
|
Revisiting the Calibration of Modern Neural Networks
|
http://arxiv.org/abs/2106.07998v2
|
Accurate estimation of predictive uncertainty (model calibration) is
essential for the safe application of neural networks. Many instances of
miscalibration in modern neural networks have been reported, suggesting a trend
that newer, more accurate models produce poorly calibrated predictions. Here,
we revisit this question for recent state-of-the-art image classification
models. We systematically relate model calibration and accuracy, and find that
the most recent models, notably those not using convolutions, are among the
best calibrated. Trends observed in prior model generations, such as decay of
calibration with distribution shift or model size, are less pronounced in
recent architectures. We also show that model size and amount of pretraining do
not fully explain these differences, suggesting that architecture is a major
determinant of calibration properties.
| true | true |
Guo, Chuan and Pleiss, Geoff and Sun, Yu and Weinberger, Kilian Q.
| 2,017 | null |
https://dl.acm.org/doi/10.5555/3305381.3305518
| null | null |
Revisiting the Calibration of Modern Neural Networks
|
Revisiting the Calibration of Modern Neural Networks
|
http://arxiv.org/pdf/2106.07998v2
|
Accurate estimation of predictive uncertainty (model calibration) is
essential for the safe application of neural networks. Many instances of
miscalibration in modern neural networks have been reported, suggesting a trend
that newer, more accurate models produce poorly calibrated predictions. Here,
we revisit this question for recent state-of-the-art image classification
models. We systematically relate model calibration and accuracy, and find that
the most recent models, notably those not using convolutions, are among the
best calibrated. Trends observed in prior model generations, such as decay of
calibration with distribution shift or model size, are less pronounced in
recent architectures. We also show that model size and amount of pretraining do
not fully explain these differences, suggesting that architecture is a major
determinant of calibration properties.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
BetaCalib
|
\cite{BetaCalib}
|
Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers
| null | null | true | false |
Kull, Meelis and Silva Filho, Telmo and Flach, Peter
| 2,017 | null |
http://proceedings.mlr.press/v54/kull17a.html
| null | null |
Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers
|
Beta calibration: a well-founded and easily implemented ...
|
https://research-information.bris.ac.uk/en/publications/beta-calibration-a-well-founded-and-easily-implemented-improvemen
|
by M Kull · 2017 · Cited by 281 — Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. Meelis Kull, Telmo De Menezes E Silva
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
GammaGauss
|
\cite{GammaGauss}
|
Obtaining Calibrated Probabilities with Personalized Ranking Models
|
http://arxiv.org/abs/2112.07428v2
|
For personalized ranking models, the well-calibrated probability of an item
being preferred by a user has great practical value. While existing work shows
promising results in image classification, probability calibration has not been
much explored for personalized ranking. In this paper, we aim to estimate the
calibrated probability of how likely a user will prefer an item. We investigate
various parametric distributions and propose two parametric calibration
methods, namely Gaussian calibration and Gamma calibration. Each proposed
method can be seen as a post-processing function that maps the ranking scores
of pre-trained models to well-calibrated preference probabilities, without
affecting the recommendation performance. We also design the unbiased empirical
risk minimization framework that guides the calibration methods to learning of
true preference probability from the biased user-item interaction dataset.
Extensive evaluations with various personalized ranking models on real-world
datasets show that both the proposed calibration methods and the unbiased
empirical risk minimization significantly improve the calibration performance.
| true | true |
Kweon, Wonbin and Kang, SeongKu and Yu, Hwanjo
| 2,022 | null |
https://aaai.org/papers/04083-obtaining-calibrated-probabilities-with-personalized-ranking-models/
| null | null |
Obtaining Calibrated Probabilities with Personalized Ranking Models
|
Obtaining Calibrated Probabilities with Personalized Ranking Models
|
http://arxiv.org/pdf/2112.07428v2
|
For personalized ranking models, the well-calibrated probability of an item
being preferred by a user has great practical value. While existing work shows
promising results in image classification, probability calibration has not been
much explored for personalized ranking. In this paper, we aim to estimate the
calibrated probability of how likely a user will prefer an item. We investigate
various parametric distributions and propose two parametric calibration
methods, namely Gaussian calibration and Gamma calibration. Each proposed
method can be seen as a post-processing function that maps the ranking scores
of pre-trained models to well-calibrated preference probabilities, without
affecting the recommendation performance. We also design the unbiased empirical
risk minimization framework that guides the calibration methods to learning of
true preference probability from the biased user-item interaction dataset.
Extensive evaluations with various personalized ranking models on real-world
datasets show that both the proposed calibration methods and the unbiased
empirical risk minimization significantly improve the calibration performance.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
ConfCalib
|
\cite{ConfCalib}
|
Confidence-Aware Multi-Field Model Calibration
|
http://arxiv.org/abs/2402.17655v2
|
Accurately predicting the probabilities of user feedback, such as clicks and
conversions, is critical for advertisement ranking and bidding. However, there
often exist unwanted mismatches between predicted probabilities and true
likelihoods due to the rapid shift of data distributions and intrinsic model
biases. Calibration aims to address this issue by post-processing model
predictions, and field-aware calibration can adjust model output on different
feature field values to satisfy fine-grained advertising demands.
Unfortunately, the observed samples corresponding to certain field values can
be seriously limited to make confident calibrations, which may yield bias
amplification and online disturbance. In this paper, we propose a
confidence-aware multi-field calibration method, which adaptively adjusts the
calibration intensity based on confidence levels derived from sample
statistics. It also utilizes multiple fields for joint model calibration
according to their importance to mitigate the impact of data sparsity on a
single field. Extensive offline and online experiments show the superiority of
our method in boosting advertising performance and reducing prediction
deviations.
| true | true |
Zhao, Yuang and Wu, Chuhan and Jia, Qinglin and Zhu, Hong and Yan, Jia and Zong, Libin and Zhang, Linxuan and Dong, Zhenhua and Zhang, Muyu
| 2,024 | null |
https://doi.org/10.1145/3627673.3680043
|
10.1145/3627673.3680043
| null |
Confidence-Aware Multi-Field Model Calibration
|
[PDF] Confidence-Aware Multi-Field Model Calibration - arXiv
|
https://arxiv.org/pdf/2402.17655
|
In this paper, we propose a confidence-aware multi-field calibration method, which adaptively adjusts the calibration intensity based on confidence levels
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
LiRank
|
\cite{LiRank}
|
LiRank: Industrial Large Scale Ranking Models at LinkedIn
|
http://arxiv.org/abs/2402.06859v2
|
We present LiRank, a large-scale ranking framework at LinkedIn that brings to
production state-of-the-art modeling architectures and optimization methods. We
unveil several modeling improvements, including Residual DCN, which adds
attention and residual connections to the famous DCNv2 architecture. We share
insights into combining and tuning SOTA architectures to create a unified
model, including Dense Gating, Transformers and Residual DCN. We also propose
novel techniques for calibration and describe how we productionalized deep
learning based explore/exploit methods. To enable effective, production-grade
serving of large ranking models, we detail how to train and compress models
using quantization and vocabulary compression. We provide details about the
deployment setup for large-scale use cases of Feed ranking, Jobs
Recommendations, and Ads click-through rate (CTR) prediction. We summarize our
learnings from various A/B tests by elucidating the most effective technical
approaches. These ideas have contributed to relative metrics improvements
across the board at LinkedIn: +0.5% member sessions in the Feed, +1.76%
qualified job applications for Jobs search and recommendations, and +4.3% for
Ads CTR. We hope this work can provide practical insights and solutions for
practitioners interested in leveraging large-scale deep ranking systems.
| true | true |
Borisyuk, Fedor and Zhou, Mingzhou and Song, Qingquan and Zhu, Siyu and Tiwana, Birjodh and Parameswaran, Ganesh and Dangi, Siddharth and Hertel, Lars and Xiao, Qiang Charles and Hou, Xiaochen and Ouyang, Yunbo and Gupta, Aman and Singh, Sheallika and Liu, Dan and Cheng, Hailing and Le, Lei and Hung, Jonathan and Keerthi, Sathiya and Wang, Ruoyan and Zhang, Fengyu and Kothari, Mohit and Zhu, Chen and Sun, Daqi and Dai, Yun and Luan, Xun and Zhu, Sirou and Wang, Zhiwei and Daftary, Neil and Shen, Qianqi and Jiang, Chengming and Wei, Haichao and Varshney, Maneesh and Ghoting, Amol and Ghosh, Souvik
| 2,024 | null |
https://doi.org/10.1145/3637528.3671561
|
10.1145/3637528.3671561
| null |
LiRank: Industrial Large Scale Ranking Models at LinkedIn
|
LiRank: Industrial Large Scale Ranking Models at LinkedIn
|
http://arxiv.org/pdf/2402.06859v2
|
We present LiRank, a large-scale ranking framework at LinkedIn that brings to
production state-of-the-art modeling architectures and optimization methods. We
unveil several modeling improvements, including Residual DCN, which adds
attention and residual connections to the famous DCNv2 architecture. We share
insights into combining and tuning SOTA architectures to create a unified
model, including Dense Gating, Transformers and Residual DCN. We also propose
novel techniques for calibration and describe how we productionalized deep
learning based explore/exploit methods. To enable effective, production-grade
serving of large ranking models, we detail how to train and compress models
using quantization and vocabulary compression. We provide details about the
deployment setup for large-scale use cases of Feed ranking, Jobs
Recommendations, and Ads click-through rate (CTR) prediction. We summarize our
learnings from various A/B tests by elucidating the most effective technical
approaches. These ideas have contributed to relative metrics improvements
across the board at LinkedIn: +0.5% member sessions in the Feed, +1.76%
qualified job applications for Jobs search and recommendations, and +4.3% for
Ads CTR. We hope this work can provide practical insights and solutions for
practitioners interested in leveraging large-scale deep ranking systems.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
NeuralCalib
|
\cite{NeuralCalib}
|
Field-aware Calibration: A Simple and Empirically Strong Method for
Reliable Probabilistic Predictions
|
http://arxiv.org/abs/1905.10713v3
|
It is often observed that the probabilistic predictions given by a machine
learning model can disagree with averaged actual outcomes on specific subsets
of data, which is also known as the issue of miscalibration. It is responsible
for the unreliability of practical machine learning systems. For example, in
online advertising, an ad can receive a click-through rate prediction of 0.1
over some population of users where its actual click rate is 0.15. In such
cases, the probabilistic predictions have to be fixed before the system can be
deployed.
In this paper, we first introduce a new evaluation metric named field-level
calibration error that measures the bias in predictions over the sensitive
input field that the decision-maker concerns. We show that existing post-hoc
calibration methods have limited improvements in the new field-level metric and
other non-calibration metrics such as the AUC score. To this end, we propose
Neural Calibration, a simple yet powerful post-hoc calibration method that
learns to calibrate by making full use of the field-aware information over the
validation set. We present extensive experiments on five large-scale datasets.
The results showed that Neural Calibration significantly improves against
uncalibrated predictions in common metrics such as the negative log-likelihood,
Brier score and AUC, as well as the proposed field-level calibration error.
| true | true |
Pan, Feiyang and Ao, Xiang and Tang, Pingzhong and Lu, Min and Liu, Dapeng and Xiao, Lei and He, Qing
| 2,020 | null |
https://doi.org/10.1145/3366423.3380154
|
10.1145/3366423.3380154
| null |
Field-aware Calibration: A Simple and Empirically Strong Method for
Reliable Probabilistic Predictions
|
Field-aware Calibration-A Simple and Empirically Strong Method for ...
|
https://zhuanlan.zhihu.com/p/527521112
|
... Reliable Probabilistic Prediction ... Field-aware Calibration - A Simple and Empirically Strong Method for Reliable Probabilistic Predictions.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
AdaCalib
|
\cite{AdaCalib}
|
Posterior Probability Matters: Doubly-Adaptive Calibration for Neural
Predictions in Online Advertising
|
http://arxiv.org/abs/2205.07295v2
|
Predicting user response probabilities is vital for ad ranking and bidding.
We hope that predictive models can produce accurate probabilistic predictions
that reflect true likelihoods. Calibration techniques aim to post-process model
predictions to posterior probabilities. Field-level calibration -- which
performs calibration w.r.t. to a specific field value -- is fine-grained and
more practical. In this paper we propose a doubly-adaptive approach AdaCalib.
It learns an isotonic function family to calibrate model predictions with the
guidance of posterior statistics, and field-adaptive mechanisms are designed to
ensure that the posterior is appropriate for the field value to be calibrated.
Experiments verify that AdaCalib achieves significant improvement on
calibration performance. It has been deployed online and beats previous
approach.
| true | true |
Wei, Penghui and Zhang, Weimin and Hou, Ruijie and Liu, Jinquan and Liu, Shaoguo and Wang, Liang and Zheng, Bo
| 2,022 | null |
https://doi.org/10.1145/3477495.3531911
|
10.1145/3477495.3531911
| null |
Posterior Probability Matters: Doubly-Adaptive Calibration for Neural
Predictions in Online Advertising
|
Posterior Probability Matters: Doubly-Adaptive Calibration ...
|
https://www.researchgate.net/publication/360640754_Posterior_Probability_Matters_Doubly-Adaptive_Calibration_for_Neural_Predictions_in_Online_Advertising
|
In this paper we propose a doubly-adaptive approach AdaCalib. It learns an isotonic function family to calibrate model predictions with the
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
SBCR
|
\cite{SBCR}
|
A Self-boosted Framework for Calibrated Ranking
|
http://arxiv.org/abs/2406.08010v1
|
Scale-calibrated ranking systems are ubiquitous in real-world applications
nowadays, which pursue accurate ranking quality and calibrated probabilistic
predictions simultaneously. For instance, in the advertising ranking system,
the predicted click-through rate (CTR) is utilized for ranking and required to
be calibrated for the downstream cost-per-click ads bidding. Recently,
multi-objective based methods have been wildly adopted as a standard approach
for Calibrated Ranking, which incorporates the combination of two loss
functions: a pointwise loss that focuses on calibrated absolute values and a
ranking loss that emphasizes relative orderings. However, when applied to
industrial online applications, existing multi-objective CR approaches still
suffer from two crucial limitations. First, previous methods need to aggregate
the full candidate list within a single mini-batch to compute the ranking loss.
Such aggregation strategy violates extensive data shuffling which has long been
proven beneficial for preventing overfitting, and thus degrades the training
effectiveness. Second, existing multi-objective methods apply the two
inherently conflicting loss functions on a single probabilistic prediction,
which results in a sub-optimal trade-off between calibration and ranking. To
tackle the two limitations, we propose a Self-Boosted framework for Calibrated
Ranking (SBCR).
| true | true |
Zhang, Shunyu and Liu, Hu and Bao, Wentian and Yu, Enyun and Song, Yang
| 2,024 | null |
https://doi.org/10.1145/3637528.3671570
|
10.1145/3637528.3671570
| null |
A Self-boosted Framework for Calibrated Ranking
|
A Self-boosted Framework for Calibrated Ranking
|
https://arxiv.org/html/2406.08010v1
|
We propose a Self-Boosted framework for Calibrated Ranking (SBCR). In SBCR, the predicted ranking scores by the online deployed model are dumped into context
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
error
|
\cite{error}
|
On the error of linear interpolation and the orientation, aspect ratio, and internal angles of a triangle
| null | null | true | false |
Cao, Weiming
| 2,005 | null |
https://dl.acm.org/doi/abs/10.1137/S0036142903433492
| null |
SIAM journal on numerical analysis
|
On the error of linear interpolation and the orientation, aspect ratio, and internal angles of a triangle
|
Quirk in VertexColors interpolation when displaying Polygon
|
https://mathematica.stackexchange.com/questions/16168/quirk-in-vertexcolors-interpolation-when-displaying-polygon
|
The best general way to deal with this is to (1) triangulate your large polygon (2) assign vertex colors to the newly introduced vertices (could be tricky, in
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
DESC
|
\cite{DESC}
|
Deep Ensemble Shape Calibration: Multi-Field Post-hoc Calibration in
Online Advertising
|
http://arxiv.org/abs/2401.09507v2
|
In the e-commerce advertising scenario, estimating the true probabilities
(known as a calibrated estimate) on Click-Through Rate (CTR) and Conversion
Rate (CVR) is critical. Previous research has introduced numerous solutions for
addressing the calibration problem. These methods typically involve the
training of calibrators using a validation set and subsequently applying these
calibrators to correct the original estimated values during online inference.
However, what sets e-commerce advertising scenarios apart is the challenge of
multi-field calibration. Multi-field calibration requires achieving calibration
in each field. In order to achieve multi-field calibration, it is necessary to
have a strong data utilization ability. Because the quantity of pCTR specified
range for a single field-value (such as user ID and item ID) sample is
relatively small, this makes the calibrator more difficult to train. However,
existing methods have difficulty effectively addressing these issues.
To solve these problems, we propose a new method named Deep Ensemble Shape
Calibration (DESC). In terms of business understanding and interpretability, we
decompose multi-field calibration into value calibration and shape calibration.
We introduce innovative basis calibration functions, which enhance both
function expression capabilities and data utilization by combining these basis
calibration functions. A significant advancement lies in the development of an
allocator capable of allocating the most suitable calibrators to different
estimation error distributions within diverse fields and values. We achieve
significant improvements in both public and industrial datasets. In online
experiments, we observe a +2.5% increase in CVR and +4.0% in GMV (Gross
Merchandise Volume). Our code is now available at:
https://github.com/HaoYang0123/DESC.
| true | true |
Yang, Shuai and Yang, Hao and Zou, Zhuang and Xu, Linhe and Yuan, Shuo and Zeng, Yifan
| 2,024 | null |
https://doi.org/10.1145/3637528.3671529
|
10.1145/3637528.3671529
| null |
Deep Ensemble Shape Calibration: Multi-Field Post-hoc Calibration in
Online Advertising
|
Multi-Field Post-hoc Calibration in Online Advertising - arXiv
|
https://arxiv.org/abs/2401.09507
|
arXiv:2401.09507. Title: Deep Ensemble Shape Calibration: Multi-Field Post-hoc Calibration in Online Advertising, by Shuai Yang and 5 other authors.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
ScaleCalib
|
\cite{ScaleCalib}
|
Scale Calibration of Deep Ranking Models
| null | null | true | false |
Yan, Le and Qin, Zhen and Wang, Xuanhui and Bendersky, Michael and Najork, Marc
| 2,022 | null |
https://doi.org/10.1145/3534678.3539072
|
10.1145/3534678.3539072
| null |
Scale Calibration of Deep Ranking Models
|
Scale Calibration of Deep Ranking Models - Google Research
|
https://research.google/pubs/scale-calibration-of-deep-ranking-models/
|
Scale Calibration of Deep Ranking Models. Learning-to-Rank (LTR) systems are ubiquitous in web applications nowadays. However, virtually all advanced ranking functions are not scale calibrated. This is a major reason that existing ads ranking methods use scale calibrated pointwise loss functions that may sacrifice ranking performance. Our results show that our proposed calibrated ranking losses can achieve nearly optimal results in terms of both ranking quality and score scale calibration.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
JRC
|
\cite{JRC}
|
Joint Optimization of Ranking and Calibration with Contextualized Hybrid
Model
|
http://arxiv.org/abs/2208.06164v2
|
Despite the development of ranking optimization techniques, pointwise loss
remains the dominating approach for click-through rate prediction. It can be
attributed to the calibration ability of the pointwise loss since the
prediction can be viewed as the click probability. In practice, a CTR
prediction model is also commonly assessed with the ranking ability. To
optimize the ranking ability, ranking loss (e.g., pairwise or listwise loss)
can be adopted as they usually achieve better rankings than pointwise loss.
Previous studies have experimented with a direct combination of the two losses
to obtain the benefit from both losses and observed an improved performance.
However, previous studies break the meaning of output logit as the
click-through rate, which may lead to sub-optimal solutions. To address this
issue, we propose an approach that can Jointly optimize the Ranking and
Calibration abilities (JRC for short). JRC improves the ranking ability by
contrasting the logit value for the sample with different labels and constrains
the predicted probability to be a function of the logit subtraction. We further
show that JRC consolidates the interpretation of logits, where the logits model
the joint distribution. With such an interpretation, we prove that JRC
approximately optimizes the contextualized hybrid discriminative-generative
objective. Experiments on public and industrial datasets and online A/B testing
show that our approach improves both ranking and calibration abilities. Since
May 2022, JRC has been deployed on the display advertising platform of Alibaba
and has obtained significant performance improvements.
| true | true |
Sheng, Xiang-Rong and Gao, Jingyue and Cheng, Yueyao and Yang, Siran and Han, Shuguang and Deng, Hongbo and Jiang, Yuning and Xu, Jian and Zheng, Bo
| 2,023 | null |
https://doi.org/10.1145/3580305.3599851
|
10.1145/3580305.3599851
| null |
Joint Optimization of Ranking and Calibration with Contextualized Hybrid
Model
|
[PDF] Joint Optimization of Ranking and Calibration with Contextualized ...
|
https://arxiv.org/pdf/2208.06164
|
The proposed JRC method extends the idea of hybrid modeling with contextualization for CTR prediction. Incorporating context information further enables our.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
RCR
|
\cite{RCR}
|
Regression Compatible Listwise Objectives for Calibrated Ranking with
Binary Relevance
|
http://arxiv.org/abs/2211.01494v2
|
As Learning-to-Rank (LTR) approaches primarily seek to improve ranking
quality, their output scores are not scale-calibrated by design. This
fundamentally limits LTR usage in score-sensitive applications. Though a simple
multi-objective approach that combines a regression and a ranking objective can
effectively learn scale-calibrated scores, we argue that the two objectives are
not necessarily compatible, which makes the trade-off less ideal for either of
them. In this paper, we propose a practical regression compatible ranking (RCR)
approach that achieves a better trade-off, where the two ranking and regression
components are proved to be mutually aligned. Although the same idea applies to
ranking with both binary and graded relevance, we mainly focus on binary labels
in this paper. We evaluate the proposed approach on several public LTR
benchmarks and show that it consistently achieves either best or competitive
result in terms of both regression and ranking metrics, and significantly
improves the Pareto frontiers in the context of multi-objective optimization.
Furthermore, we evaluated the proposed approach on YouTube Search and found
that it not only improved the ranking quality of the production pCTR model, but
also brought gains to the click prediction accuracy. The proposed approach has
been successfully deployed in the YouTube production system.
| true | true |
Bai, Aijun and Jagerman, Rolf and Qin, Zhen and Yan, Le and Kar, Pratyush and Lin, Bing-Rong and Wang, Xuanhui and Bendersky, Michael and Najork, Marc
| 2,023 | null |
https://doi.org/10.1145/3583780.3614712
|
10.1145/3583780.3614712
| null |
Regression Compatible Listwise Objectives for Calibrated Ranking with
Binary Relevance
|
[PDF] Regression Compatible Listwise Objectives for Calibrated Ranking ...
|
https://arxiv.org/pdf/2211.01494
|
In this paper, we propose a practical regression compatible ranking (RCR) approach where the two ranking and regression components are proved to be mutually
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
CLID
|
\cite{CLID}
|
Calibration-compatible Listwise Distillation of Privileged Features for
CTR Prediction
|
http://arxiv.org/abs/2312.08727v1
|
In machine learning systems, privileged features refer to the features that
are available during offline training but inaccessible for online serving.
Previous studies have recognized the importance of privileged features and
explored ways to tackle online-offline discrepancies. A typical practice is
privileged features distillation (PFD): train a teacher model using all
features (including privileged ones) and then distill the knowledge from the
teacher model using a student model (excluding the privileged features), which
is then employed for online serving. In practice, the pointwise cross-entropy
loss is often adopted for PFD. However, this loss is insufficient to distill
the ranking ability for CTR prediction. First, it does not consider the
non-i.i.d. characteristic of the data distribution, i.e., other items on the
same page significantly impact the click probability of the candidate item.
Second, it fails to consider the relative item order ranked by the teacher
model's predictions, which is essential to distill the ranking ability. To
address these issues, we first extend the pointwise-based PFD to the
listwise-based PFD. We then define the calibration-compatible property of
distillation loss and show that commonly used listwise losses do not satisfy
this property when employed as distillation loss, thus compromising the model's
calibration ability, which is another important measure for CTR prediction. To
tackle this dilemma, we propose Calibration-compatible LIstwise Distillation
(CLID), which employs carefully-designed listwise distillation loss to achieve
better ranking ability than the pointwise-based PFD while preserving the
model's calibration ability. We theoretically prove it is
calibration-compatible. Extensive experiments on public datasets and a
production dataset collected from the display advertising system of Alibaba
further demonstrate the effectiveness of CLID.
| true | true |
Gui, Xiaoqiang and Cheng, Yueyao and Sheng, Xiang-Rong and Zhao, Yunfeng and Yu, Guoxian and Han, Shuguang and Jiang, Yuning and Xu, Jian and Zheng, Bo
| 2,024 | null |
https://doi.org/10.1145/3616855.3635810
|
10.1145/3616855.3635810
| null |
Calibration-compatible Listwise Distillation of Privileged Features for
CTR Prediction
|
[PDF] Calibration-compatible Listwise Distillation of Privileged Features for ...
|
https://arxiv.org/pdf/2312.08727
|
In the ranking stage, a CTR prediction model typically takes the user's features and candidate items' features as input. The model then predicts.
|
Unconstrained Monotonic Calibration of Predictions in Deep Ranking
Systems
|
2504.14243v1
|
BBP
|
\cite{BBP}
|
Beyond Binary Preference: Leveraging Bayesian Approaches for Joint Optimization of Ranking and Calibration
| null | null | true | false |
Liu, Chang and Wang, Qiwei and Lin, Wenqing and Ding, Yue and Lu, Hongtao
| 2,024 | null |
https://doi.org/10.1145/3637528.3671577
|
10.1145/3637528.3671577
| null |
Beyond Binary Preference: Leveraging Bayesian Approaches for Joint Optimization of Ranking and Calibration
|
Leveraging Bayesian Approaches for Joint Optimization of Ranking ...
|
https://www.researchgate.net/publication/383420396_Beyond_Binary_Preference_Leveraging_Bayesian_Approaches_for_Joint_Optimization_of_Ranking_and_Calibration
|
BBP [28] tackles the issue of insufficient samples for ranking loss by estimating beta distributions for users and items, generating continuously comparable
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
PORGraph
|
\cite{PORGraph}
|
Hierarchical Fashion Graph Network for Personalized Outfit
Recommendation
|
http://arxiv.org/abs/2005.12566v1
|
Fashion outfit recommendation has attracted increasing attentions from online
shopping services and fashion communities. Distinct from other scenarios (e.g.,
social networking or content sharing) which recommend a single item (e.g., a
friend or picture) to a user, outfit recommendation predicts user preference on
a set of well-matched fashion items.Hence, performing high-quality personalized
outfit recommendation should satisfy two requirements -- 1) the nice
compatibility of fashion items and 2) the consistence with user preference.
However, present works focus mainly on one of the requirements and only
consider either user-outfit or outfit-item relationships, thereby easily
leading to suboptimal representations and limiting the performance. In this
work, we unify two tasks, fashion compatibility modeling and personalized
outfit recommendation. Towards this end, we develop a new framework,
Hierarchical Fashion Graph Network(HFGN), to model relationships among users,
items, and outfits simultaneously. In particular, we construct a hierarchical
structure upon user-outfit interactions and outfit-item mappings. We then get
inspirations from recent graph neural networks, and employ the embedding
propagation on such hierarchical graph, so as to aggregate item information
into an outfit representation, and then refine a user's representation via
his/her historical outfits. Furthermore, we jointly train these two tasks to
optimize these representations. To demonstrate the effectiveness of HFGN, we
conduct extensive experiments on a benchmark dataset, and HFGN achieves
significant improvements over the state-of-the-art compatibility matching
models like NGNN and outfit recommenders like FHN.
| true | true |
Xingchen Li and
Xiang Wang and
Xiangnan He and
Long Chen and
Jun Xiao and
Tat{-}Seng Chua
| 2,020 | null | null | null | null |
Hierarchical Fashion Graph Network for Personalized Outfit
Recommendation
|
xcppy/hierarchical_fashion_graph_network - GitHub
|
https://github.com/xcppy/hierarchical_fashion_graph_network
|
Hierarchical Fashion Graph Network (HFGN) is a new recommendation framework for personalized outfit recommendation task based on hierarchical graph structure.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
PORAnchors
|
\cite{PORAnchors}
|
Personalized Outfit Recommendation With Learnable Anchors
| null | null | true | false |
Zhi Lu and
Yang Hu and
Yan Chen and
Bing Zeng
| 2,021 | null | null | null | null |
Personalized Outfit Recommendation With Learnable Anchors
|
[PDF] Personalized Outfit Recommendation With Learnable Anchors
|
https://openaccess.thecvf.com/content/CVPR2021/papers/Lu_Personalized_Outfit_Recommendation_With_Learnable_Anchors_CVPR_2021_paper.pdf
|
The fashion recommendation task, which is based on fashion compatibility learning, is to predict whether a set of fashion items are well matched.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
A-FKG
|
\cite{A-FKG}
|
{\textdollar}A{\^{}}3{\textdollar}-FKG: Attentive Attribute-Aware
Fashion Knowledge Graph for Outfit Preference Prediction
| null | null | true | false |
Huijing Zhan and
Jie Lin and
Kenan Emir Ak and
Boxin Shi and
Ling{-}Yu Duan and
Alex C. Kot
| 2,022 | null | null | null |
{IEEE} Trans. Multim.
|
{\textdollar}A{\^{}}3{\textdollar}-FKG: Attentive Attribute-Aware
Fashion Knowledge Graph for Outfit Preference Prediction
|
[PDF] -FKG: Attentive Attribute-Aware Fashion Knowledge Graph for Outfit ...
|
http://www.jdl.link/doc/2011/20211231_Zhan_TMM21.pdf
|
In this paper, we address the task of personalized outfit preference prediction via a novel Attentive Attribute-Aware Fashion Knowledge Graph (A3-FKG), which
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
FashionRecSurvey-23
|
\cite{FashionRecSurvey-23}
|
Computational Technologies for Fashion Recommendation: A Survey
|
http://arxiv.org/abs/2306.03395v2
|
Fashion recommendation is a key research field in computational fashion
research and has attracted considerable interest in the computer vision,
multimedia, and information retrieval communities in recent years. Due to the
great demand for applications, various fashion recommendation tasks, such as
personalized fashion product recommendation, complementary (mix-and-match)
recommendation, and outfit recommendation, have been posed and explored in the
literature. The continuing research attention and advances impel us to look
back and in-depth into the field for a better understanding. In this paper, we
comprehensively review recent research efforts on fashion recommendation from a
technological perspective. We first introduce fashion recommendation at a macro
level and analyse its characteristics and differences with general
recommendation tasks. We then clearly categorize different fashion
recommendation efforts into several sub-tasks and focus on each sub-task in
terms of its problem formulation, research focus, state-of-the-art methods, and
limitations. We also summarize the datasets proposed in the literature for use
in fashion recommendation studies to give readers a brief illustration.
Finally, we discuss several promising directions for future research in this
field. Overall, this survey systematically reviews the development of fashion
recommendation research. It also discusses the current limitations and gaps
between academic research and the real needs of the fashion industry. In the
process, we offer a deep insight into how the fashion industry could benefit
from the computational technologies of fashion recommendation.
| true | true |
Yujuan Ding and
Zhihui Lai and
P. Y. Mok and
Tat{-}Seng Chua
| 2,024 | null | null | null |
{ACM} Comput. Surv.
|
Computational Technologies for Fashion Recommendation: A Survey
|
Computational Technologies for Fashion Recommendation: A Survey
|
http://arxiv.org/pdf/2306.03395v2
|
Fashion recommendation is a key research field in computational fashion
research and has attracted considerable interest in the computer vision,
multimedia, and information retrieval communities in recent years. Due to the
great demand for applications, various fashion recommendation tasks, such as
personalized fashion product recommendation, complementary (mix-and-match)
recommendation, and outfit recommendation, have been posed and explored in the
literature. The continuing research attention and advances impel us to look
back and in-depth into the field for a better understanding. In this paper, we
comprehensively review recent research efforts on fashion recommendation from a
technological perspective. We first introduce fashion recommendation at a macro
level and analyse its characteristics and differences with general
recommendation tasks. We then clearly categorize different fashion
recommendation efforts into several sub-tasks and focus on each sub-task in
terms of its problem formulation, research focus, state-of-the-art methods, and
limitations. We also summarize the datasets proposed in the literature for use
in fashion recommendation studies to give readers a brief illustration.
Finally, we discuss several promising directions for future research in this
field. Overall, this survey systematically reviews the development of fashion
recommendation research. It also discusses the current limitations and gaps
between academic research and the real needs of the fashion industry. In the
process, we offer a deep insight into how the fashion industry could benefit
from the computational technologies of fashion recommendation.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
personalCom
|
\cite{personalCom}
|
Personalized Capsule Wardrobe Creation with Garment and User Modeling
| null | null | true | false |
Xue Dong and
Xuemeng Song and
Fuli Feng and
Peiguang Jing and
Xin{-}Shun Xu and
Liqiang Nie
| 2,019 | null | null | null | null |
Personalized Capsule Wardrobe Creation with Garment and User Modeling
|
Personalized Capsule Wardrobe Creation with Garment ...
|
https://www.researchgate.net/publication/336708181_Personalized_Capsule_Wardrobe_Creation_with_Garment_and_User_Modeling
|
[14] introduce a combinatorial optimization based personalized capsule wardrobe creation framework, which jointly integrates user modeling and garment modeling.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
PFOG
|
\cite{PFOG}
|
Personalized fashion outfit generation with user coordination preference learning
| null | null | true | false |
Yujuan Ding and
P. Y. Mok and
Yunshan Ma and
Yi Bin
| 2,023 | null | null | null |
Inf. Process. Manag.
|
Personalized fashion outfit generation with user coordination preference learning
|
Personalized fashion outfit generation with user coordination ...
|
https://www.sciencedirect.com/science/article/pii/S0306457323001711
|
Fashion outfit recommendation, aiming to model personal preference of users on outfits, is one of the most widely studied outfit-related tasks. In contrast, the task of fashion outfit generation (Bettaney et al., 2021, Li et al., 2019, Lorbert et al., 2021, Madan et al., 2021) specifically focuses on the generation process of fashion outfits based on individual items, while neglecting user preferences, making the generated outfits less attractive to users. This paper addressed the personalized outfit generation problem by introducing user coordination preference, which refers to the template preference that users have when combining different categories of fashion items.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
POG
|
\cite{POG}
|
POG: Personalized Outfit Generation for Fashion Recommendation at
Alibaba iFashion
|
http://arxiv.org/abs/1905.01866v3
|
Increasing demand for fashion recommendation raises a lot of challenges for
online shopping platforms and fashion communities. In particular, there exist
two requirements for fashion outfit recommendation: the Compatibility of the
generated fashion outfits, and the Personalization in the recommendation
process. In this paper, we demonstrate these two requirements can be satisfied
via building a bridge between outfit generation and recommendation. Through
large data analysis, we observe that people have similar tastes in individual
items and outfits. Therefore, we propose a Personalized Outfit Generation (POG)
model, which connects user preferences regarding individual items and outfits
with Transformer architecture. Extensive offline and online experiments provide
strong quantitative evidence that our method outperforms alternative methods
regarding both compatibility and personalization metrics. Furthermore, we
deploy POG on a platform named Dida in Alibaba to generate personalized outfits
for the users of the online application iFashion.
This work represents a first step towards an industrial-scale fashion outfit
generation and recommendation solution, which goes beyond generating outfits
based on explicit queries, or merely recommending from existing outfit pools.
As part of this work, we release a large-scale dataset consisting of 1.01
million outfits with rich context information, and 0.28 billion user click
actions from 3.57 million users. To the best of our knowledge, this dataset is
the largest, publicly available, fashion related dataset, and the first to
provide user behaviors relating to both outfits and fashion items.
| true | true |
Wen Chen and
Pipei Huang and
Jiaming Xu and
Xin Guo and
Cheng Guo and
Fei Sun and
Chao Li and
Andreas Pfadler and
Huan Zhao and
Binqiang Zhao
| 2,019 | null | null | null | null |
POG: Personalized Outfit Generation for Fashion Recommendation at
Alibaba iFashion
|
iFashion Alibaba Dataset - Papers With Code
|
https://paperswithcode.com/dataset/ifashion-alibaba-pog
|
in POG: Personalized Outfit Generation for Fashion Recommendation at Alibaba iFashion. 1. 1.01 million outfits, 583K fashion items, with context information.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
MultiCBR
|
\cite{MultiCBR}
|
MultiCBR: Multi-view Contrastive Learning for Bundle Recommendation
|
http://arxiv.org/abs/2311.16751v3
|
Bundle recommendation seeks to recommend a bundle of related items to users
to improve both user experience and the profits of platform. Existing bundle
recommendation models have progressed from capturing only user-bundle
interactions to the modeling of multiple relations among users, bundles and
items. CrossCBR, in particular, incorporates cross-view contrastive learning
into a two-view preference learning framework, significantly improving SOTA
performance. It does, however, have two limitations: 1) the two-view
formulation does not fully exploit all the heterogeneous relations among users,
bundles and items; and 2) the "early contrast and late fusion" framework is
less effective in capturing user preference and difficult to generalize to
multiple views. In this paper, we present MultiCBR, a novel Multi-view
Contrastive learning framework for Bundle Recommendation. First, we devise a
multi-view representation learning framework capable of capturing all the
user-bundle, user-item and bundle-item relations, especially better utilizing
the bundle-item affiliations to enhance sparse bundles' representations.
Second, we innovatively adopt an "early fusion and late contrast" design that
first fuses the multi-view representations before performing self-supervised
contrastive learning. In comparison to existing approaches, our framework
reverses the order of fusion and contrast, introducing the following
advantages: 1)our framework is capable of modeling both cross-view and ego-view
preferences, allowing us to achieve enhanced user preference modeling; and 2)
instead of requiring quadratic number of cross-view contrastive losses, we only
require two self-supervised contrastive losses, resulting in minimal extra
costs. Experimental results on three public datasets indicate that our method
outperforms SOTA methods.
| true | true |
Yunshan Ma and
Yingzhi He and
Xiang Wang and
Yinwei Wei and
Xiaoyu Du and
Yuyangzi Fu and
Tat{-}Seng Chua
| 2,024 | null | null | null |
{ACM} Trans. Inf. Syst.
|
MultiCBR: Multi-view Contrastive Learning for Bundle Recommendation
|
Multi-view Contrastive Learning for Bundle Recommendation
|
https://dl.acm.org/doi/10.1145/3640810
|
In this article, we present MultiCBR, a novel Multi-view Contrastive learning framework for Bundle Recommendation. First, we devise a multi-view representation
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
EBRec
|
\cite{EBRec}
|
Enhancing Item-level Bundle Representation for Bundle Recommendation
|
http://arxiv.org/abs/2311.16892v1
|
Bundle recommendation approaches offer users a set of related items on a
particular topic. The current state-of-the-art (SOTA) method utilizes
contrastive learning to learn representations at both the bundle and item
levels. However, due to the inherent difference between the bundle-level and
item-level preferences, the item-level representations may not receive
sufficient information from the bundle affiliations to make accurate
predictions. In this paper, we propose a novel approach EBRec, short of
Enhanced Bundle Recommendation, which incorporates two enhanced modules to
explore inherent item-level bundle representations. First, we propose to
incorporate the bundle-user-item (B-U-I) high-order correlations to explore
more collaborative information, thus to enhance the previous bundle
representation that solely relies on the bundle-item affiliation information.
Second, we further enhance the B-U-I correlations by augmenting the observed
user-item interactions with interactions generated from pre-trained models,
thus improving the item-level bundle representations. We conduct extensive
experiments on three public datasets, and the results justify the effectiveness
of our approach as well as the two core modules. Codes and datasets are
available at https://github.com/answermycode/EBRec.
| true | true |
Du, Xiaoyu and Qian, Kun and Ma, Yunshan and Xiang, Xinguang
| 2,023 | null | null | null |
ACM Transactions on Recommender Systems
|
Enhancing Item-level Bundle Representation for Bundle Recommendation
|
Enhancing Item-level Bundle Representation ... - ACM Digital Library
|
https://dl.acm.org/doi/10.1145/3637067
|
In this article, we propose a novel approach, Enhanced Bundle Recommendation (EBRec), which incorporates two enhanced modules to explore inherent item-level
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
BundleMLLM
|
\cite{BundleMLLM}
|
Fine-tuning Multimodal Large Language Models for Product Bundling
|
http://arxiv.org/abs/2407.11712v4
|
Recent advances in product bundling have leveraged multimodal information
through sophisticated encoders, but remain constrained by limited semantic
understanding and a narrow scope of knowledge. Therefore, some attempts employ
In-context Learning (ICL) to explore the potential of large language models
(LLMs) for their extensive knowledge and complex reasoning abilities. However,
these efforts are inadequate in understanding mulitmodal data and exploiting
LLMs' knowledge for product bundling. To bridge the gap, we introduce
Bundle-MLLM, a novel framework that fine-tunes LLMs through a hybrid item
tokenization approach within a well-designed optimization strategy.
Specifically, we integrate textual, media, and relational data into a unified
tokenization, introducing a soft separation token to distinguish between
textual and non-textual tokens. Additionally, a streamlined yet powerful
multimodal fusion module is employed to embed all non-textual features into a
single, informative token, significantly boosting efficiency. To tailor product
bundling tasks for LLMs, we reformulate the task as a multiple-choice question
with candidate items as options. We further propose a progressive optimization
strategy that fine-tunes LLMs for disentangled objectives: 1) learning bundle
patterns and 2) enhancing multimodal semantic understanding specific to product
bundling. Extensive experiments on four datasets across two domains demonstrate
that our approach outperforms a range of state-of-the-art (SOTA) methods.
| true | true |
Xiaohao Liu and
Jie Wu and
Zhulin Tao and
Yunshan Ma and
Yinwei Wei and
Tat{-}Seng Chua
| 2,025 | null | null | null | null |
Fine-tuning Multimodal Large Language Models for Product Bundling
|
Fine-tuning Multimodal Large Language Models for Product Bundling
|
https://arxiv.org/abs/2407.11712
|
We further propose a progressive optimization strategy that fine-tunes LLMs for disentangled objectives: 1) learning bundle patterns and 2) enhancing multimodal semantic understanding specific to product bundling.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
SD
|
\cite{SD}
|
High-Resolution Image Synthesis with Latent Diffusion Models
| null | null | true | false |
Robin Rombach and
Andreas Blattmann and
Dominik Lorenz and
Patrick Esser and
Bj{\"{o}}rn Ommer
| 2,022 | null | null | null | null |
High-Resolution Image Synthesis with Latent Diffusion Models
|
[PDF] High-Resolution Image Synthesis With Latent Diffusion Models
|
https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf
|
High-Resolution Image Synthesis with Latent Diffusion Models. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. Ludwig Maximilian University of Munich & IWR, Heidelberg University, Germany; Runway ML. https://github.com/CompVis/latent-diffusion Abstract: By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Our latent diffusion models (LDMs) achieve new state of the art scores for image inpainting and class-conditional image synthesis and highly competitive performance on various tasks, including unconditional image generation, text-to-image synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
controlNet
|
\cite{controlNet}
|
Adding Conditional Control to Text-to-Image Diffusion Models
|
http://arxiv.org/abs/2302.05543v3
|
We present ControlNet, a neural network architecture to add spatial
conditioning controls to large, pretrained text-to-image diffusion models.
ControlNet locks the production-ready large diffusion models, and reuses their
deep and robust encoding layers pretrained with billions of images as a strong
backbone to learn a diverse set of conditional controls. The neural
architecture is connected with "zero convolutions" (zero-initialized
convolution layers) that progressively grow the parameters from zero and ensure
that no harmful noise could affect the finetuning. We test various conditioning
controls, eg, edges, depth, segmentation, human pose, etc, with Stable
Diffusion, using single or multiple conditions, with or without prompts. We
show that the training of ControlNets is robust with small (<50k) and large
(>1m) datasets. Extensive results show that ControlNet may facilitate wider
applications to control image diffusion models.
| true | true |
Lvmin Zhang and
Anyi Rao and
Maneesh Agrawala
| 2,023 | null | null | null | null |
Adding Conditional Control to Text-to-Image Diffusion Models
|
[PDF] Adding Conditional Control to Text-to-Image Diffusion Models
|
https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Adding_Conditional_Control_to_Text-to-Image_Diffusion_Models_ICCV_2023_paper.pdf
|
Abstract We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. This paper presents ControlNet, an end-to-end neural network architecture that learns conditional controls for large pretrained text-to-image diffusion models (Stable Diffusion in our implementation). In summary, (1) we propose ControlNet, a neural network architecture that can add spatially localized input conditions to a pretrained text-to-image diffusion model via efficient finetuning, (2) we present pretrained ControlNets to control Stable Diffusion, conditioned on Canny edges, Hough lines, user scribbles, human key points, segmentation maps, shape normals, depths, and cartoon line drawings, and (3) we validate the method with ablative experiments comparing to several alternative architectures, and conduct user studies focused on several previous baselines across different tasks.
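To make the "zero convolutions" in this record concrete, here is a minimal PyTorch sketch, with illustrative names rather than ControlNet's actual implementation: a 1x1 convolution whose weights and bias start at zero, so the control branch initially adds nothing to the frozen backbone and cannot inject harmful noise.

import torch
import torch.nn as nn

class ZeroConv2d(nn.Module):
    # 1x1 conv initialized to zero; its output is exactly zero at the start
    # of fine-tuning and grows only as gradient updates move it off zero.
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x):
        return self.conv(x)

h = torch.randn(1, 64, 32, 32)     # hidden state of the locked diffusion model
ctrl = torch.randn(1, 64, 32, 32)  # features from the trainable copy
out = h + ZeroConv2d(64)(ctrl)
assert torch.equal(out, h)         # holds at initialization only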
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
lora
|
\cite{lora}
|
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language
Models
| null | null | true | false |
Yuhui Xu and
Lingxi Xie and
Xiaotao Gu and
Xin Chen and
Heng Chang and
Hengheng Zhang and
Zhengsu Chen and
Xiaopeng Zhang and
Qi Tian
| 2,024 | null | null | null | null |
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language
Models
|
[PDF] QA-LORA: QUANTIZATION-AWARE LOW-RANK ADAPTATION OF ...
|
https://openreview.net/pdf?id=WvFoJccpo8
|
Hence, QA-LoRA is an effective and off-the-shelf method for joint quantization and adaptation of LLMs.
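Since the snippet above only names low-rank adaptation, a minimal LoRA layer is sketched below; QA-LoRA additionally quantizes the frozen base weight, which is omitted here, and all identifiers are illustrative.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Frozen base weight W plus a trainable low-rank update B @ A,
    # scaled by alpha / r; only A and B receive gradients.
    def __init__(self, base, r=8, alpha=16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
y = layer(torch.randn(4, 512))  # identical to the base layer's output at init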
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
DiFashion
|
\cite{DiFashion}
|
Diffusion Models for Generative Outfit Recommendation
|
http://arxiv.org/abs/2402.17279v3
|
Outfit Recommendation (OR) in the fashion domain has evolved through two
stages: Pre-defined Outfit Recommendation and Personalized Outfit Composition.
However, both stages are constrained by existing fashion products, limiting
their effectiveness in addressing users' diverse fashion needs. Recently, the
advent of AI-generated content provides the opportunity for OR to transcend
these limitations, showcasing the potential for personalized outfit generation
and recommendation.
To this end, we introduce a novel task called Generative Outfit
Recommendation (GOR), aiming to generate a set of fashion images and compose
them into a visually compatible outfit tailored to specific users. The key
objectives of GOR lie in the high fidelity, compatibility, and personalization
of generated outfits. To achieve these, we propose a generative outfit
recommender model named DiFashion, which empowers exceptional diffusion models
to accomplish the parallel generation of multiple fashion images. To ensure
three objectives, we design three kinds of conditions to guide the parallel
generation process and adopt Classifier-Free-Guidance to enhance the alignment
between the generated images and conditions. We apply DiFashion on both
personalized Fill-In-The-Blank and GOR tasks and conduct extensive experiments
on iFashion and Polyvore-U datasets. The quantitative and human-involved
qualitative evaluation demonstrate the superiority of DiFashion over
competitive baselines.
| true | true |
Yiyan Xu and
Wenjie Wang and
Fuli Feng and
Yunshan Ma and
Jizhi Zhang and
Xiangnan He
| 2,024 | null | null | null | null |
Diffusion Models for Generative Outfit Recommendation
|
Diffusion Models for Generative Outfit Recommendation
|
http://arxiv.org/pdf/2402.17279v3
|
Outfit Recommendation (OR) in the fashion domain has evolved through two
stages: Pre-defined Outfit Recommendation and Personalized Outfit Composition.
However, both stages are constrained by existing fashion products, limiting
their effectiveness in addressing users' diverse fashion needs. Recently, the
advent of AI-generated content provides the opportunity for OR to transcend
these limitations, showcasing the potential for personalized outfit generation
and recommendation.
To this end, we introduce a novel task called Generative Outfit
Recommendation (GOR), aiming to generate a set of fashion images and compose
them into a visually compatible outfit tailored to specific users. The key
objectives of GOR lie in the high fidelity, compatibility, and personalization
of generated outfits. To achieve these, we propose a generative outfit
recommender model named DiFashion, which empowers exceptional diffusion models
to accomplish the parallel generation of multiple fashion images. To ensure
three objectives, we design three kinds of conditions to guide the parallel
generation process and adopt Classifier-Free-Guidance to enhance the alignment
between the generated images and conditions. We apply DiFashion on both
personalized Fill-In-The-Blank and GOR tasks and conduct extensive experiments
on iFashion and Polyvore-U datasets. The quantitative and human-involved
qualitative evaluation demonstrate the superiority of DiFashion over
competitive baselines.
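The DiFashion record above leans on Classifier-Free Guidance (CFG) to align generated images with their conditions. CFG is a one-line combination of two noise predictions; a hedged sketch with assumed names:

import torch

def cfg_eps(eps_uncond, eps_cond, scale=7.5):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one; scale > 1 strengthens
    # adherence to the condition at some cost in diversity.
    return eps_uncond + scale * (eps_cond - eps_uncond)

e_u, e_c = torch.randn(2, 1, 4, 32, 32)
assert torch.allclose(cfg_eps(e_u, e_c, scale=1.0), e_c)  # scale 1 = conditional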
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
yang2018recommendation
|
\cite{yang2018recommendation}
|
From recommendation to generation: A novel fashion clothing advising framework
| null | null | true | false |
Yang, Zilin and Su, Zhuo and Yang, Yang and Lin, Ge
| 2,018 | null | null | null | null |
From recommendation to generation: A novel fashion clothing advising framework
|
From Recommendation to Generation: A Novel Fashion Clothing ...
|
https://ieeexplore.ieee.org/document/8634794
|
In this paper, we combine visual features of clothing images, user's implicit feedback and the price factor to construct a recommendation model based on Siamese network and Bayesian personalized ranking to recommend clothing satisfying user's preference and consumption level. A recommendation system is expected to excavate valid information from a large amount of history records to learn user's preference and the attributes of the clothing they wish to purchase.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
Compatibility
|
\cite{Compatibility}
|
Compatibility Family Learning for Item Recommendation and Generation
|
http://arxiv.org/abs/1712.01262v1
|
Compatibility between items, such as clothes and shoes, is a major factor
among customer's purchasing decisions. However, learning "compatibility" is
challenging due to (1) broader notions of compatibility than those of
similarity, (2) the asymmetric nature of compatibility, and (3) only a small
set of compatible and incompatible items are observed. We propose an end-to-end
trainable system to embed each item into a latent vector and project a query
item into K compatible prototypes in the same space. These prototypes reflect
the broad notions of compatibility. We refer to both the embedding and
prototypes as "Compatibility Family". In our learned space, we introduce a
novel Projected Compatibility Distance (PCD) function which is differentiable
and ensures diversity by aiming for at least one prototype to be close to a
compatible item, whereas none of the prototypes are close to an incompatible
item. We evaluate our system on a toy dataset, two Amazon product datasets, and
Polyvore outfit dataset. Our method consistently achieves state-of-the-art
performance. Finally, we show that we can visualize the candidate compatible
prototypes using a Metric-regularized Conditional Generative Adversarial
Network (MrCGAN), where the input is a projected prototype and the output is a
generated image of a compatible item. We ask human evaluators to judge the
relative compatibility between our generated images and images generated by
CGANs conditioned directly on query items. Our generated images are
significantly preferred, with roughly twice the number of votes as others.
| true | true |
Yong{-}Siang Shih and
Kai{-}Yueh Chang and
Hsuan{-}Tien Lin and
Min Sun
| 2,018 | null | null | null | null |
Compatibility Family Learning for Item Recommendation and Generation
|
Compatibility Family Learning for Item Recommendation and Generation
|
http://arxiv.org/pdf/1712.01262v1
|
Compatibility between items, such as clothes and shoes, is a major factor
among customer's purchasing decisions. However, learning "compatibility" is
challenging due to (1) broader notions of compatibility than those of
similarity, (2) the asymmetric nature of compatibility, and (3) only a small
set of compatible and incompatible items are observed. We propose an end-to-end
trainable system to embed each item into a latent vector and project a query
item into K compatible prototypes in the same space. These prototypes reflect
the broad notions of compatibility. We refer to both the embedding and
prototypes as "Compatibility Family". In our learned space, we introduce a
novel Projected Compatibility Distance (PCD) function which is differentiable
and ensures diversity by aiming for at least one prototype to be close to a
compatible item, whereas none of the prototypes are close to an incompatible
item. We evaluate our system on a toy dataset, two Amazon product datasets, and
Polyvore outfit dataset. Our method consistently achieves state-of-the-art
performance. Finally, we show that we can visualize the candidate compatible
prototypes using a Metric-regularized Conditional Generative Adversarial
Network (MrCGAN), where the input is a projected prototype and the output is a
generated image of a compatible item. We ask human evaluators to judge the
relative compatibility between our generated images and images generated by
CGANs conditioned directly on query items. Our generated images are
significantly preferred, with roughly twice the number of votes as others.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
FashionReGen24
|
\cite{FashionReGen24}
|
FashionReGen: LLM-Empowered Fashion Report Generation
|
http://arxiv.org/abs/2403.06660v1
|
Fashion analysis refers to the process of examining and evaluating trends,
styles, and elements within the fashion industry to understand and interpret
its current state, generating fashion reports. It is traditionally performed by
fashion professionals based on their expertise and experience, which requires
high labour cost and may also produce biased results for relying heavily on a
small group of people. In this paper, to tackle the Fashion Report Generation
(FashionReGen) task, we propose an intelligent Fashion Analyzing and Reporting
system based on advanced Large Language Models (LLMs), dubbed GPT-FAR.
Specifically, it tries to deliver FashionReGen based on effective catwalk
analysis, which is equipped with several key procedures, namely, catwalk
understanding, collective organization and analysis, and report generation. By
posing and exploring such an open-ended, complex and domain-specific task of
FashionReGen, it is able to test the general capability of LLMs in fashion
domain. It also inspires the explorations of more high-level tasks with
industrial significance in other domains. Video illustration and more materials
of GPT-FAR can be found in https://github.com/CompFashion/FashionReGen.
| true | true |
Yujuan Ding and
Yunshan Ma and
Wenqi Fan and
Yige Yao and
Tat{-}Seng Chua and
Qing Li
| 2,024 | null | null | null | null |
FashionReGen: LLM-Empowered Fashion Report Generation
|
FashionReGen: LLM-Empowered Fashion Report Generation
|
https://dl.acm.org/doi/10.1145/3589335.3651232
|
In this paper, to tackle the Fashion Report Generation (FashionReGen) task, we propose an intelligent Fashion Analyzing and Reporting system
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
CRAFT
|
\cite{CRAFT}
|
CRAFT: Complementary Recommendations Using Adversarial Feature
Transformer
|
http://arxiv.org/abs/1804.10871v3
|
Traditional approaches for complementary product recommendations rely on
behavioral and non-visual data such as customer co-views or co-buys. However,
certain domains such as fashion are primarily visual. We propose a framework
that harnesses visual cues in an unsupervised manner to learn the distribution
of co-occurring complementary items in real world images. Our model learns a
non-linear transformation between the two manifolds of source and target
complementary item categories (e.g., tops and bottoms in outfits). Given a
large dataset of images containing instances of co-occurring object categories,
we train a generative transformer network directly on the feature
representation space by casting it as an adversarial optimization problem. Such
a conditional generative model can produce multiple novel samples of
complementary items (in the feature space) for a given query item. The final
recommendations are selected from the closest real world examples to the
synthesized complementary features. We apply our framework to the task of
recommending complementary tops for a given bottom clothing item. The
recommendations made by our system are diverse, and are favored by human
experts over the baseline approaches.
| true | true |
Cong Phuoc Huynh and
Arri Ciptadi and
Ambrish Tyagi and
Amit Agrawal
| 2,018 | null | null | null |
CoRR
|
CRAFT: Complementary Recommendations Using Adversarial Feature
Transformer
|
[PDF] Complementary Recommendation by Adversarial Feature Transform
|
https://assets.amazon.science/ee/8c/533b6ca64dec898bf74950316de1/craft-complementary-recommendation-by-adversarial-feature-transform.pdf
|
The feature transformer in CRAFT samples a conditional distribution to generate diverse and relevant item recommendations for a given query.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
VITON
|
\cite{VITON}
|
VITON: An Image-based Virtual Try-on Network
|
http://arxiv.org/abs/1711.08447v4
|
We present an image-based VIirtual Try-On Network (VITON) without using 3D
information in any form, which seamlessly transfers a desired clothing item
onto the corresponding region of a person using a coarse-to-fine strategy.
Conditioned upon a new clothing-agnostic yet descriptive person representation,
our framework first generates a coarse synthesized image with the target
clothing item overlaid on that same person in the same pose. We further enhance
the initial blurry clothing area with a refinement network. The network is
trained to learn how much detail to utilize from the target clothing item, and
where to apply to the person in order to synthesize a photo-realistic image in
which the target item deforms naturally with clear visual patterns. Experiments
on our newly collected Zalando dataset demonstrate its promise in the
image-based virtual try-on task over state-of-the-art generative models.
| true | true |
Xintong Han and
Zuxuan Wu and
Zhe Wu and
Ruichi Yu and
Larry S. Davis
| 2,018 | null | null | null | null |
VITON: An Image-based Virtual Try-on Network
|
[1711.08447] VITON: An Image-based Virtual Try-on Network
|
https://arxiv.org/abs/1711.08447
|
by X Han · 2017 · Cited by 823 — We present an image-based VIirtual Try-On Network (VITON) without using 3D information in any form, which seamlessly transfers a desired clothing item onto the
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
GP-VTON
|
\cite{GP-VTON}
|
{GP-VTON:} Towards General Purpose Virtual Try-On via Collaborative
Local-Flow Global-Parsing Learning
| null | null | true | false |
Zhenyu Xie and
Zaiyu Huang and
Xin Dong and
Fuwei Zhao and
Haoye Dong and
Xijin Zhang and
Feida Zhu and
Xiaodan Liang
| 2,023 | null | null | null | null |
{GP-VTON:} Towards General Purpose Virtual Try-On via Collaborative
Local-Flow Global-Parsing Learning
|
Incorporating Visual Correspondence into Diffusion Model for Virtual ...
|
https://openreview.net/forum?id=XXzOzJRyOZ
|
Gp-vton: Towards general purpose virtual try-on via collaborative local-flow global-parsing learning. In CVPR, 2023. [5] Li, Xiu and Kampffmeyer, Michael
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
DCI-VTON
|
\cite{DCI-VTON}
|
Taming the Power of Diffusion Models for High-Quality Virtual Try-On
with Appearance Flow
|
http://arxiv.org/abs/2308.06101v1
|
Virtual try-on is a critical image synthesis task that aims to transfer
clothes from one image to another while preserving the details of both humans
and clothes. While many existing methods rely on Generative Adversarial
Networks (GANs) to achieve this, flaws can still occur, particularly at high
resolutions. Recently, the diffusion model has emerged as a promising
alternative for generating high-quality images in various applications.
However, simply using clothes as a condition for guiding the diffusion model to
inpaint is insufficient to maintain the details of the clothes. To overcome
this challenge, we propose an exemplar-based inpainting approach that leverages
a warping module to guide the diffusion model's generation effectively. The
warping module performs initial processing on the clothes, which helps to
preserve the local details of the clothes. We then combine the warped clothes
with clothes-agnostic person image and add noise as the input of diffusion
model. Additionally, the warped clothes is used as local conditions for each
denoising process to ensure that the resulting output retains as much detail as
possible. Our approach, namely Diffusion-based Conditional Inpainting for
Virtual Try-ON (DCI-VTON), effectively utilizes the power of the diffusion
model, and the incorporation of the warping module helps to produce
high-quality and realistic virtual try-on results. Experimental results on
VITON-HD demonstrate the effectiveness and superiority of our method.
| true | true |
Junhong Gou and
Siyu Sun and
Jianfu Zhang and
Jianlou Si and
Chen Qian and
Liqing Zhang
| 2,023 | null | null | null | null |
Taming the Power of Diffusion Models for High-Quality Virtual Try-On
with Appearance Flow
|
bcmi/DCI-VTON-Virtual-Try-On - GitHub
|
https://github.com/bcmi/DCI-VTON-Virtual-Try-On
|
[ACM Multimedia 2023] Taming the Power of Diffusion Models for High-Quality Virtual Try-On with Appearance Flow. We then combine the warped clothes with clothes-agnostic person image and add noise as the input of diffusion model. Our approach effectively utilizes the power of the diffusion model, and the incorporation of the warping module helps to produce high-quality and realistic virtual try-on results. After inference, you can put the results in the VITON-HD for inference and training of the diffusion model. To train a new model on VITON-HD, you should first modify the dataroot of VITON-HD dataset in `configs/viton512.yaml` and then use `main.py` for training.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
stableVTON
|
\cite{stableVTON}
|
StableVITON: Learning Semantic Correspondence with Latent Diffusion
Model for Virtual Try-On
|
http://arxiv.org/abs/2312.01725v1
|
Given a clothing image and a person image, an image-based virtual try-on aims
to generate a customized image that appears natural and accurately reflects the
characteristics of the clothing image. In this work, we aim to expand the
applicability of the pre-trained diffusion model so that it can be utilized
independently for the virtual try-on task.The main challenge is to preserve the
clothing details while effectively utilizing the robust generative capability
of the pre-trained model. In order to tackle these issues, we propose
StableVITON, learning the semantic correspondence between the clothing and the
human body within the latent space of the pre-trained diffusion model in an
end-to-end manner. Our proposed zero cross-attention blocks not only preserve
the clothing details by learning the semantic correspondence but also generate
high-fidelity images by utilizing the inherent knowledge of the pre-trained
model in the warping process. Through our proposed novel attention total
variation loss and applying augmentation, we achieve the sharp attention map,
resulting in a more precise representation of clothing details. StableVITON
outperforms the baselines in qualitative and quantitative evaluation, showing
promising quality in arbitrary person images. Our code is available at
https://github.com/rlawjdghek/StableVITON.
| true | true |
Jeongho Kim and
Gyojung Gu and
Minho Park and
Sunghyun Park and
Jaegul Choo
| 2,023 | null | null | null |
CoRR
|
StableVITON: Learning Semantic Correspondence with Latent Diffusion
Model for Virtual Try-On
|
[CVPR2024] StableVITON: Learning Semantic ...
|
https://github.com/rlawjdghek/StableVITON
|
This repository is the official implementation of StableVITON. StableVITON: Learning Semantic Correspondence with Latent Diffusion Model for Virtual Try-On
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
HMaVTON
|
\cite{HMaVTON}
|
Smart Fitting Room: A One-stop Framework for Matching-aware Virtual
Try-on
|
http://arxiv.org/abs/2401.16825v2
|
The development of virtual try-on has revolutionized online shopping by
allowing customers to visualize themselves in various fashion items, thus
extending the in-store try-on experience to the cyber space. Although virtual
try-on has attracted considerable research initiatives, existing systems only
focus on the quality of image generation, overlooking whether the fashion item
is a good match to the given person and clothes. Recognizing this gap, we
propose to design a one-stop Smart Fitting Room, with the novel formulation of
matching-aware virtual try-on. Following this formulation, we design a Hybrid
Matching-aware Virtual Try-On Framework (HMaVTON), which combines
retrieval-based and generative methods to foster a more personalized virtual
try-on experience. This framework integrates a hybrid mix-and-match module and
an enhanced virtual try-on module. The former can recommend fashion items
available on the platform to boost sales and generate clothes that meets the
diverse tastes of consumers. The latter provides high-quality try-on effects,
delivering a one-stop shopping service. To validate the effectiveness of our
approach, we enlist the expertise of fashion designers for a professional
evaluation, assessing the rationality and diversity of the clothes combinations
and conducting an evaluation matrix analysis. Our method significantly enhances
the practicality of virtual try-on. The code is available at
https://github.com/Yzcreator/HMaVTON.
| true | true |
Mingzhe Yu and
Yunshan Ma and
Lei Wu and
Kai Cheng and
Xue Li and
Lei Meng and
Tat{-}Seng Chua
| 2,024 | null | null | null | null |
Smart Fitting Room: A One-stop Framework for Matching-aware Virtual
Try-on
|
A One-stop Framework for Matching-aware Virtual Try-On
|
https://dl.acm.org/doi/10.1145/3652583.3658064
|
This framework integrates a hybrid mix-and-match module and an enhanced virtual try-on module. The former can recommend fashion items available
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
Jedi
|
\cite{Jedi}
|
JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized
Text-to-Image Generation
|
http://arxiv.org/abs/2407.06187v1
|
Personalized text-to-image generation models enable users to create images
that depict their individual possessions in diverse scenes, finding
applications in various domains. To achieve the personalization capability,
existing methods rely on finetuning a text-to-image foundation model on a
user's custom dataset, which can be non-trivial for general users,
resource-intensive, and time-consuming. Despite attempts to develop
finetuning-free methods, their generation quality is much lower compared to
their finetuning counterparts. In this paper, we propose Joint-Image Diffusion
(\jedi), an effective technique for learning a finetuning-free personalization
model. Our key idea is to learn the joint distribution of multiple related
text-image pairs that share a common subject. To facilitate learning, we
propose a scalable synthetic dataset generation technique. Once trained, our
model enables fast and easy personalization at test time by simply using
reference images as input during the sampling process. Our approach does not
require any expensive optimization process or additional modules and can
faithfully preserve the identity represented by any number of reference images.
Experimental results show that our model achieves state-of-the-art generation
quality, both quantitatively and qualitatively, significantly outperforming
both the prior finetuning-based and finetuning-free personalization baselines.
| true | true |
Yu Zeng and
Vishal M. Patel and
Haochen Wang and
Xun Huang and
Ting{-}Chun Wang and
Ming{-}Yu Liu and
Yogesh Balaji
| 2,024 | null | null | null | null |
JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized
Text-to-Image Generation
|
[PDF] JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized ...
|
https://openaccess.thecvf.com/content/CVPR2024/papers/Zeng_JeDi_Joint-Image_Diffusion_Models_for_Finetuning-Free_Personalized_Text-to-Image_Generation_CVPR_2024_paper.pdf
|
JeDi is a finetuning-free model for personalized text-to-image generation, learning from text-image pairs and using reference images for fast personalization.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
ELITE
|
\cite{ELITE}
|
ELITE: Encoding Visual Concepts into Textual Embeddings for Customized
Text-to-Image Generation
|
http://arxiv.org/abs/2302.13848v2
|
In addition to the unprecedented ability in imaginary creation, large
text-to-image models are expected to take customized concepts in image
generation. Existing works generally learn such concepts in an
optimization-based manner, yet bringing excessive computation or memory burden.
In this paper, we instead propose a learning-based encoder, which consists of a
global and a local mapping networks for fast and accurate customized
text-to-image generation. In specific, the global mapping network projects the
hierarchical features of a given image into multiple new words in the textual
word embedding space, i.e., one primary word for well-editable concept and
other auxiliary words to exclude irrelevant disturbances (e.g., background). In
the meantime, a local mapping network injects the encoded patch features into
cross attention layers to provide omitted details, without sacrificing the
editability of primary concepts. We compare our method with existing
optimization-based approaches on a variety of user-defined concepts, and
demonstrate that our method enables high-fidelity inversion and more robust
editability with a significantly faster encoding process. Our code is publicly
available at https://github.com/csyxwei/ELITE.
| true | true |
Yuxiang Wei and
Yabo Zhang and
Zhilong Ji and
Jinfeng Bai and
Lei Zhang and
Wangmeng Zuo
| 2,023 | null | null | null | null |
ELITE: Encoding Visual Concepts into Textual Embeddings for Customized
Text-to-Image Generation
|
ELITE: Encoding Visual Concepts into Textual Embeddings for ...
|
https://openaccess.thecvf.com/content/ICCV2023/papers/Wei_ELITE_Encoding_Visual_Concepts_into_Textual_Embeddings_for_Customized_Text-to-Image_ICCV_2023_paper.pdf
|
by Y Wei · 2023 · Cited by 417 — To achieve fast and accurate customized text-to-image generation, we propose an encoder ELITE to encode the visual concept into textual embeddings. As
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
PathchDPO
|
\cite{PathchDPO}
|
PatchDPO: Patch-level DPO for Finetuning-free Personalized Image
Generation
|
http://arxiv.org/abs/2412.03177v2
|
Finetuning-free personalized image generation can synthesize customized
images without test-time finetuning, attracting wide research interest owing to
its high efficiency. Current finetuning-free methods simply adopt a single
training stage with a simple image reconstruction task, and they typically
generate low-quality images inconsistent with the reference images during
test-time. To mitigate this problem, inspired by the recent DPO (i.e., direct
preference optimization) technique, this work proposes an additional training
stage to improve the pre-trained personalized generation models. However,
traditional DPO only determines the overall superiority or inferiority of two
samples, which is not suitable for personalized image generation because the
generated images are commonly inconsistent with the reference images only in
some local image patches. To tackle this problem, this work proposes PatchDPO
that estimates the quality of image patches within each generated image and
accordingly trains the model. To this end, PatchDPO first leverages the
pre-trained vision model with a proposed self-supervised training method to
estimate the patch quality. Next, PatchDPO adopts a weighted training approach
to train the model with the estimated patch quality, which rewards the image
patches with high quality while penalizing the image patches with low quality.
Experiment results demonstrate that PatchDPO significantly improves the
performance of multiple pre-trained personalized generation models, and
achieves state-of-the-art performance on both single-object and multi-object
personalized image generation. Our code is available at
https://github.com/hqhQAQ/PatchDPO.
| true | true |
Qihan Huang and
Long Chan and
Jinlong Liu and
Wanggui He and
Hao Jiang and
Mingli Song and
Jie Song
| 2,024 | null | null | null |
CoRR
|
PatchDPO: Patch-level DPO for Finetuning-free Personalized Image
Generation
|
[CVPR 2025] PatchDPO: Patch-level DPO for Finetuning- ...
|
https://github.com/hqhQAQ/PatchDPO
|
To tackle this problem, this work proposes PatchDPO that estimates the quality of image patches within each generated image and accordingly trains the model. With PatchDPO, our model achieves state-of-the-art performance on personalized image generation, with only 4 hours of training time on 8 GPUs, as shown in Table 1 & 2. In detail, `$output_dir` contains 30 subfolders (corresponding to 30 objects), and each subfolder saves the generated images for one object and is named after that object (i.e., the folder names are consistent with those in dreambench/dataset).
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
BDPO
|
\cite{BDPO}
|
Boost Your Own Human Image Generation Model via Direct Preference
Optimization with {AI} Feedback
| null | null | true | false |
Sanghyeon Na and
Yonggyu Kim and
Hyunjoon Lee
| 2,024 | null | null | null |
CoRR
|
Boost Your Own Human Image Generation Model via Direct Preference
Optimization with {AI} Feedback
|
Boost Your Own Human Image Generation Model via Direct ...
|
https://ui.adsabs.harvard.edu/abs/2024arXiv240520216N/abstract
|
Boost Your Human Image Generation Model via Direct Preference Optimization - Astrophysics Data System. Therefore, our approach, HG-DPO (Human image Generation through DPO), employs a novel curriculum learning framework that gradually improves the output of the model toward greater realism, making training more feasible.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
DPO
|
\cite{DPO}
|
Direct Preference Optimization: Your Language Model is Secretly a Reward
Model
|
http://arxiv.org/abs/2305.18290v3
|
While large-scale unsupervised language models (LMs) learn broad world
knowledge and some reasoning skills, achieving precise control of their
behavior is difficult due to the completely unsupervised nature of their
training. Existing methods for gaining such steerability collect human labels
of the relative quality of model generations and fine-tune the unsupervised LM
to align with these preferences, often with reinforcement learning from human
feedback (RLHF). However, RLHF is a complex and often unstable procedure, first
fitting a reward model that reflects the human preferences, and then
fine-tuning the large unsupervised LM using reinforcement learning to maximize
this estimated reward without drifting too far from the original model. In this
paper we introduce a new parameterization of the reward model in RLHF that
enables extraction of the corresponding optimal policy in closed form, allowing
us to solve the standard RLHF problem with only a simple classification loss.
The resulting algorithm, which we call Direct Preference Optimization (DPO), is
stable, performant, and computationally lightweight, eliminating the need for
sampling from the LM during fine-tuning or performing significant
hyperparameter tuning. Our experiments show that DPO can fine-tune LMs to align
with human preferences as well as or better than existing methods. Notably,
fine-tuning with DPO exceeds PPO-based RLHF in ability to control sentiment of
generations, and matches or improves response quality in summarization and
single-turn dialogue while being substantially simpler to implement and train.
| true | true |
Rafael Rafailov and
Archit Sharma and
Eric Mitchell and
Christopher D. Manning and
Stefano Ermon and
Chelsea Finn
| 2,023 | null | null | null | null |
Direct Preference Optimization: Your Language Model is Secretly a Reward
Model
|
Direct Preference Optimization: Your Language Model is Secretly a ...
|
https://arxiv.org/abs/2305.18290
|
arXiv:2305.18290 (cs): Direct Preference Optimization: Your Language Model is Secretly a Reward Model, by Rafael Rafailov and 5 other authors.
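Since this DPO record underpins the fine-tuning discussed throughout, a compact sketch of its published objective may help; the logistic loss on implicit-reward margins matches the standard DPO formulation, though the variable names here are mine:

import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # Inputs are summed token log-probs of the preferred (w) and
    # dispreferred (l) responses under the policy and a frozen reference.
    # The implicit reward is beta * log(pi / pi_ref); no reward model needed.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

lw, ll = torch.tensor([-10.0]), torch.tensor([-12.0])
print(dpo_loss(lw, ll, lw.clone(), ll.clone()))  # log(2) ~ 0.693 at init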
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
Diffusion-DPO
|
\cite{Diffusion-DPO}
|
Diffusion Model Alignment Using Direct Preference Optimization
|
http://arxiv.org/abs/2311.12908v1
|
Large language models (LLMs) are fine-tuned using human comparison data with
Reinforcement Learning from Human Feedback (RLHF) methods to make them better
aligned with users' preferences. In contrast to LLMs, human preference learning
has not been widely explored in text-to-image diffusion models; the best
existing approach is to fine-tune a pretrained model using carefully curated
high quality images and captions to improve visual appeal and text alignment.
We propose Diffusion-DPO, a method to align diffusion models to human
preferences by directly optimizing on human comparison data. Diffusion-DPO is
adapted from the recently developed Direct Preference Optimization (DPO), a
simpler alternative to RLHF which directly optimizes a policy that best
satisfies human preferences under a classification objective. We re-formulate
DPO to account for a diffusion model notion of likelihood, utilizing the
evidence lower bound to derive a differentiable objective. Using the Pick-a-Pic
dataset of 851K crowdsourced pairwise preferences, we fine-tune the base model
of the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with
Diffusion-DPO. Our fine-tuned base model significantly outperforms both base
SDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement
model in human evaluation, improving visual appeal and prompt alignment. We
also develop a variant that uses AI feedback and has comparable performance to
training on human preferences, opening the door for scaling of diffusion model
alignment methods.
| true | true |
Bram Wallace and
Meihua Dang and
Rafael Rafailov and
Linqi Zhou and
Aaron Lou and
Senthil Purushwalkam and
Stefano Ermon and
Caiming Xiong and
Shafiq Joty and
Nikhil Naik
| 2,023 | null | null | null |
CoRR
|
Diffusion Model Alignment Using Direct Preference Optimization
|
Diffusion Model Alignment Using Direct Preference Optimization
|
http://arxiv.org/pdf/2311.12908v1
|
Large language models (LLMs) are fine-tuned using human comparison data with
Reinforcement Learning from Human Feedback (RLHF) methods to make them better
aligned with users' preferences. In contrast to LLMs, human preference learning
has not been widely explored in text-to-image diffusion models; the best
existing approach is to fine-tune a pretrained model using carefully curated
high quality images and captions to improve visual appeal and text alignment.
We propose Diffusion-DPO, a method to align diffusion models to human
preferences by directly optimizing on human comparison data. Diffusion-DPO is
adapted from the recently developed Direct Preference Optimization (DPO), a
simpler alternative to RLHF which directly optimizes a policy that best
satisfies human preferences under a classification objective. We re-formulate
DPO to account for a diffusion model notion of likelihood, utilizing the
evidence lower bound to derive a differentiable objective. Using the Pick-a-Pic
dataset of 851K crowdsourced pairwise preferences, we fine-tune the base model
of the state-of-the-art Stable Diffusion XL (SDXL)-1.0 model with
Diffusion-DPO. Our fine-tuned base model significantly outperforms both base
SDXL-1.0 and the larger SDXL-1.0 model consisting of an additional refinement
model in human evaluation, improving visual appeal and prompt alignment. We
also develop a variant that uses AI feedback and has comparable performance to
training on human preferences, opening the door for scaling of diffusion model
alignment methods.
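Re-formulating DPO "to account for a diffusion model notion of likelihood", as the abstract above puts it, amounts to replacing response log-probabilities with per-sample denoising errors. A hedged sketch of that substitution, with the paper's timestep weighting and constants folded into beta:

import torch
import torch.nn.functional as F

def diffusion_dpo_loss(err_w, err_l, ref_err_w, ref_err_l, beta=500.0):
    # err_* are denoising MSEs ||eps - eps_theta(x_t, t)||^2 on the preferred
    # (w) and dispreferred (l) images; improving on the preferred image
    # relative to the frozen reference model increases the margin.
    margin = -beta * ((err_w - ref_err_w) - (err_l - ref_err_l))
    return -F.logsigmoid(margin).mean()

e = torch.rand(8)
print(diffusion_dpo_loss(e, e + 0.01, e, e))  # near zero: policy already favors w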
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
D3PO
|
\cite{D3PO}
|
Using Human Feedback to Fine-tune Diffusion Models without Any Reward
Model
|
http://arxiv.org/abs/2311.13231v3
|
Using reinforcement learning with human feedback (RLHF) has shown significant
promise in fine-tuning diffusion models. Previous methods start by training a
reward model that aligns with human preferences, then leverage RL techniques to
fine-tune the underlying models. However, crafting an efficient reward model
demands extensive datasets, optimal architecture, and manual hyperparameter
tuning, making the process both time and cost-intensive. The direct preference
optimization (DPO) method, effective in fine-tuning large language models,
eliminates the necessity for a reward model. However, the extensive GPU memory
requirement of the diffusion model's denoising process hinders the direct
application of the DPO method. To address this issue, we introduce the Direct
Preference for Denoising Diffusion Policy Optimization (D3PO) method to
directly fine-tune diffusion models. The theoretical analysis demonstrates that
although D3PO omits training a reward model, it effectively functions as the
optimal reward model trained using human feedback data to guide the learning
process. This approach requires no training of a reward model, proving to be
more direct, cost-effective, and minimizing computational overhead. In
experiments, our method uses the relative scale of objectives as a proxy for
human preference, delivering comparable results to methods using ground-truth
rewards. Moreover, D3PO demonstrates the ability to reduce image distortion
rates and generate safer images, overcoming challenges lacking robust reward
models. Our code is publicly available at https://github.com/yk7333/D3PO.
| true | true |
Kai Yang and
Jian Tao and
Jiafei Lyu and
Chunjiang Ge and
Jiaxin Chen and
Qimai Li and
Weihan Shen and
Xiaolong Zhu and
Xiu Li
| 2,023 | null | null | null |
CoRR
|
Using Human Feedback to Fine-tune Diffusion Models without Any Reward
Model
|
yk7333/d3po: [CVPR 2024] Code for the paper "Using ...
|
https://github.com/yk7333/d3po
|
D3PO can directly fine-tune the diffusion model through human feedback without the need to train a reward model. Our repository's code is referenced from DDPO.
|
FashionDPO:Fine-tune Fashion Outfit Generation Model using Direct
Preference Optimization
|
2504.12900v1
|
SPO
|
\cite{SPO}
|
Step-aware Preference Optimization: Aligning Preference with Denoising
Performance at Each Step
| null | null | true | false |
Zhanhao Liang and
Yuhui Yuan and
Shuyang Gu and
Bohan Chen and
Tiankai Hang and
Ji Li and
Liang Zheng
| 2,024 | null | null | null |
CoRR
|
Step-aware Preference Optimization: Aligning Preference with Denoising
Performance at Each Step
|
AK - X
|
https://x.com/_akhaliq/status/1798920414644642035?lang=en
|
Step-aware Preference Optimization: Aligning Preference with Denoising Performance at Each Step. Recently, Direct Preference Optimization (DPO)
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
liSurveyGenerativeIR2024
|
\cite{liSurveyGenerativeIR2024}
|
From Matching to Generation: A Survey on Generative Information
Retrieval
|
http://arxiv.org/abs/2404.14851v4
|
Information Retrieval (IR) systems are crucial tools for users to access
information, which have long been dominated by traditional methods relying on
similarity matching. With the advancement of pre-trained language models,
generative information retrieval (GenIR) emerges as a novel paradigm,
attracting increasing attention. Based on the form of information provided to
users, current research in GenIR can be categorized into two aspects:
\textbf{(1) Generative Document Retrieval} (GR) leverages the generative
model's parameters for memorizing documents, enabling retrieval by directly
generating relevant document identifiers without explicit indexing. \textbf{(2)
Reliable Response Generation} employs language models to directly generate
information users seek, breaking the limitations of traditional IR in terms of
document granularity and relevance matching while offering flexibility,
efficiency, and creativity to meet practical needs. This paper aims to
systematically review the latest research progress in GenIR. We will summarize
the advancements in GR regarding model training and structure, document
identifier, incremental learning, etc., as well as progress in reliable
response generation in aspects of internal knowledge memorization, external
knowledge augmentation, etc. We also review the evaluation, challenges and
future developments in GenIR systems. This review aims to offer a comprehensive
reference for researchers, encouraging further development in the GenIR field.
Github Repository: https://github.com/RUC-NLPIR/GenIR-Survey
| true | true |
Xiaoxi Li and Jiajie Jin and Yujia Zhou and Yuyao Zhang and Peitian Zhang and Yutao Zhu and Zhicheng Dou
| null | null |
https://doi.org/10.48550/arXiv.2404.14851
|
10.48550/ARXIV.2404.14851
|
CoRR
|
From Matching to Generation: A Survey on Generative Information
Retrieval
|
From Matching to Generation: A Survey on Generative Information ...
|
https://dl.acm.org/doi/10.1145/3722552
|
Currently, research in GenIR primarily focuses on two main patterns: (1) Generative Retrieval (GR), which involves retrieving documents by generating their
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
white2025surveyinformationaccess
|
\cite{white2025surveyinformationaccess}
|
Information Access in the Era of Generative AI
| null | null | true | false |
Ryen W. White and Chirag Shah
| null | null |
https://doi.org/10.1007/978-3-031-73147-1
| null | null |
Information Access in the Era of Generative AI
|
Information Access in the Era of Generative AI - SpringerLink
|
https://link.springer.com/book/10.1007/978-3-031-73147-1
|
This book discusses GenAI and its role in information access, covering topics like e.g. interactions, evaluations, recommendations and future developments.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
metzlerRethinkingSearch2021
|
\cite{metzlerRethinkingSearch2021}
|
Rethinking Search: Making Domain Experts out of Dilettantes
|
http://arxiv.org/abs/2105.02274v2
|
When experiencing an information need, users want to engage with a domain
expert, but often turn to an information retrieval system, such as a search
engine, instead. Classical information retrieval systems do not answer
information needs directly, but instead provide references to (hopefully
authoritative) answers. Successful question answering systems offer a limited
corpus created on-demand by human experts, which is neither timely nor
scalable. Pre-trained language models, by contrast, are capable of directly
generating prose that may be responsive to an information need, but at present
they are dilettantes rather than domain experts -- they do not have a true
understanding of the world, they are prone to hallucinating, and crucially they
are incapable of justifying their utterances by referring to supporting
documents in the corpus they were trained over. This paper examines how ideas
from classical information retrieval and pre-trained language models can be
synthesized and evolved into systems that truly deliver on the promise of
domain expert advice.
| true | true |
Metzler, Donald and Tay, Yi and Bahri, Dara and Najork, Marc
| null | null |
https://doi.org/10.1145/3476415.3476428
|
10.1145/3476415.3476428
|
SIGIR Forum
|
Rethinking Search: Making Domain Experts out of Dilettantes
|
Rethinking Search: Making Domain Experts out of Dilettantes
|
http://arxiv.org/pdf/2105.02274v2
|
When experiencing an information need, users want to engage with a domain
expert, but often turn to an information retrieval system, such as a search
engine, instead. Classical information retrieval systems do not answer
information needs directly, but instead provide references to (hopefully
authoritative) answers. Successful question answering systems offer a limited
corpus created on-demand by human experts, which is neither timely nor
scalable. Pre-trained language models, by contrast, are capable of directly
generating prose that may be responsive to an information need, but at present
they are dilettantes rather than domain experts -- they do not have a true
understanding of the world, they are prone to hallucinating, and crucially they
are incapable of justifying their utterances by referring to supporting
documents in the corpus they were trained over. This paper examines how ideas
from classical information retrieval and pre-trained language models can be
synthesized and evolved into systems that truly deliver on the promise of
domain expert advice.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
decaoAutoregressiveEntityRetrieval2020
|
\cite{decaoAutoregressiveEntityRetrieval2020}
|
Autoregressive Entity Retrieval
|
http://arxiv.org/abs/2010.00904v3
|
Entities are at the center of how we represent and aggregate knowledge. For
instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one
per Wikipedia article). The ability to retrieve such entities given a query is
fundamental for knowledge-intensive tasks such as entity linking and
open-domain question answering. Current approaches can be understood as
classifiers among atomic labels, one for each entity. Their weight vectors are
dense entity representations produced by encoding entity meta information such
as their descriptions. This approach has several shortcomings: (i) context and
entity affinity is mainly captured through a vector dot product, potentially
missing fine-grained interactions; (ii) a large memory footprint is needed to
store dense representations when considering large entity sets; (iii) an
appropriately hard set of negative data has to be subsampled at training time.
In this work, we propose GENRE, the first system that retrieves entities by
generating their unique names, left to right, token-by-token in an
autoregressive fashion. This mitigates the aforementioned technical issues
since: (i) the autoregressive formulation directly captures relations between
context and entity name, effectively cross encoding both; (ii) the memory
footprint is greatly reduced because the parameters of our encoder-decoder
architecture scale with vocabulary size, not entity count; (iii) the softmax
loss is computed without subsampling negative data. We experiment with more
than 20 datasets on entity disambiguation, end-to-end entity linking and
document retrieval tasks, achieving new state-of-the-art or very competitive
results while using a tiny fraction of the memory footprint of competing
systems. Finally, we demonstrate that new entities can be added by simply
specifying their names. Code and pre-trained models at
https://github.com/facebookresearch/GENRE.
| true | true |
Nicola De Cao and Gautier Izacard and Sebastian Riedel and Fabio Petroni
| null | null |
https://openreview.net/forum?id=5k8F6UU39V
| null | null |
Autoregressive Entity Retrieval
|
Autoregressive Entity Retrieval
|
http://arxiv.org/pdf/2010.00904v3
|
Entities are at the center of how we represent and aggregate knowledge. For
instance, Encyclopedias such as Wikipedia are structured by entities (e.g., one
per Wikipedia article). The ability to retrieve such entities given a query is
fundamental for knowledge-intensive tasks such as entity linking and
open-domain question answering. Current approaches can be understood as
classifiers among atomic labels, one for each entity. Their weight vectors are
dense entity representations produced by encoding entity meta information such
as their descriptions. This approach has several shortcomings: (i) context and
entity affinity is mainly captured through a vector dot product, potentially
missing fine-grained interactions; (ii) a large memory footprint is needed to
store dense representations when considering large entity sets; (iii) an
appropriately hard set of negative data has to be subsampled at training time.
In this work, we propose GENRE, the first system that retrieves entities by
generating their unique names, left to right, token-by-token in an
autoregressive fashion. This mitigates the aforementioned technical issues
since: (i) the autoregressive formulation directly captures relations between
context and entity name, effectively cross encoding both; (ii) the memory
footprint is greatly reduced because the parameters of our encoder-decoder
architecture scale with vocabulary size, not entity count; (iii) the softmax
loss is computed without subsampling negative data. We experiment with more
than 20 datasets on entity disambiguation, end-to-end entity linking and
document retrieval tasks, achieving new state-of-the-art or very competitive
results while using a tiny fraction of the memory footprint of competing
systems. Finally, we demonstrate that new entities can be added by simply
specifying their names. Code and pre-trained models at
https://github.com/facebookresearch/GENRE.
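GENRE's token-by-token generation stays valid because decoding is constrained by a prefix trie over all identifier token sequences, which is also the mechanism the parent paper interrogates. A minimal sketch with toy token ids; in practice the allowed-token lookup is plugged into a seq2seq decoder's constrained beam search (e.g., via a prefix_allowed_tokens_fn-style callback in HuggingFace generate).

class Trie:
    # Prefix tree over tokenized identifiers: at each decoding step only the
    # children of the current prefix are allowed, so every completed beam is
    # guaranteed to be a valid entity name / docid.
    def __init__(self, sequences):
        self.root = {}
        for seq in sequences:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})

    def allowed_tokens(self, prefix):
        node = self.root
        for tok in prefix:
            node = node.get(tok)
            if node is None:
                return []          # prefix not in the identifier set
        return list(node.keys())

trie = Trie([[101, 7, 42, 102], [101, 7, 99, 102]])  # two toy docids
assert trie.allowed_tokens([101, 7]) == [42, 99]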
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
sunLearningTokenizeGenerative2023
|
\cite{sunLearningTokenizeGenerative2023}
|
Learning to Tokenize for Generative Retrieval
|
http://arxiv.org/abs/2304.04171v1
|
Conventional document retrieval techniques are mainly based on the
index-retrieve paradigm. It is challenging to optimize pipelines based on this
paradigm in an end-to-end manner. As an alternative, generative retrieval
represents documents as identifiers (docid) and retrieves documents by
generating docids, enabling end-to-end modeling of document retrieval tasks.
However, it is an open question how one should define the document identifiers.
Current approaches to the task of defining document identifiers rely on fixed
rule-based docids, such as the title of a document or the result of clustering
BERT embeddings, which often fail to capture the complete semantic information
of a document. We propose GenRet, a document tokenization learning method to
address the challenge of defining document identifiers for generative
retrieval. GenRet learns to tokenize documents into short discrete
representations (i.e., docids) via a discrete auto-encoding approach. Three
components are included in GenRet: (i) a tokenization model that produces
docids for documents; (ii) a reconstruction model that learns to reconstruct a
document based on a docid; and (iii) a sequence-to-sequence retrieval model
that generates relevant document identifiers directly for a designated query.
By using an auto-encoding framework, GenRet learns semantic docids in a fully
end-to-end manner. We also develop a progressive training scheme to capture the
autoregressive nature of docids and to stabilize training. We conduct
experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the
effectiveness of GenRet. GenRet establishes the new state-of-the-art on the
NQ320K dataset. Especially, compared to generative retrieval baselines, GenRet
can achieve significant improvements on the unseen documents. GenRet also
outperforms comparable baselines on MS MARCO and BEIR, demonstrating the
method's generalizability.
| true | true |
Sun, Weiwei and Yan, Lingyong and Chen, Zheng and Wang, Shuaiqiang and Zhu, Haichao and Ren, Pengjie and Chen, Zhumin and Yin, Dawei and Rijke, Maarten and Ren, Zhaochun
| null | null |
https://proceedings.neurips.cc/paper_files/paper/2023/file/91228b942a4528cdae031c1b68b127e8-Paper-Conference.pdf
| null | null |
Learning to Tokenize for Generative Retrieval
|
Learning to Tokenize for Generative Retrieval
|
http://arxiv.org/pdf/2304.04171v1
|
Conventional document retrieval techniques are mainly based on the
index-retrieve paradigm. It is challenging to optimize pipelines based on this
paradigm in an end-to-end manner. As an alternative, generative retrieval
represents documents as identifiers (docid) and retrieves documents by
generating docids, enabling end-to-end modeling of document retrieval tasks.
However, it is an open question how one should define the document identifiers.
Current approaches to the task of defining document identifiers rely on fixed
rule-based docids, such as the title of a document or the result of clustering
BERT embeddings, which often fail to capture the complete semantic information
of a document. We propose GenRet, a document tokenization learning method to
address the challenge of defining document identifiers for generative
retrieval. GenRet learns to tokenize documents into short discrete
representations (i.e., docids) via a discrete auto-encoding approach. Three
components are included in GenRet: (i) a tokenization model that produces
docids for documents; (ii) a reconstruction model that learns to reconstruct a
document based on a docid; and (iii) a sequence-to-sequence retrieval model
that generates relevant document identifiers directly for a designated query.
By using an auto-encoding framework, GenRet learns semantic docids in a fully
end-to-end manner. We also develop a progressive training scheme to capture the
autoregressive nature of docids and to stabilize training. We conduct
experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the
effectiveness of GenRet. GenRet establishes the new state-of-the-art on the
NQ320K dataset. Especially, compared to generative retrieval baselines, GenRet
can achieve significant improvements on the unseen documents. GenRet also
outperforms comparable baselines on MS MARCO and BEIR, demonstrating the
method's generalizability.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
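The GenRet record above learns docids by discrete auto-encoding. A minimal NumPy sketch of the core idea, assuming a residual-quantization-style assignment (GenRet itself learns the tokenizer end-to-end with a reconstruction model; the random codebooks here are stand-ins, not the paper's method):

```python
import numpy as np

def residual_quantize(doc_embs, codebooks):
    """Assign each document a short discrete docid by residual quantization.

    doc_embs:  (n_docs, dim) document embeddings.
    codebooks: list of (n_codes, dim) arrays, one per docid position.
    Returns an (n_docs, len(codebooks)) integer array of docids.
    """
    residual = doc_embs.copy()
    codes = []
    for book in codebooks:
        # Pick the nearest code word for the current residual.
        dists = np.linalg.norm(residual[:, None, :] - book[None, :, :], axis=-1)
        idx = dists.argmin(axis=1)
        codes.append(idx)
        residual = residual - book[idx]  # the next position quantizes what remains
    return np.stack(codes, axis=1)

rng = np.random.default_rng(0)
docs = rng.normal(size=(8, 16))                       # toy "document embeddings"
books = [rng.normal(size=(4, 16)) for _ in range(3)]  # 3-token docids, 4 codes each
print(residual_quantize(docs, books))                 # e.g. [[2 0 3], ...]
```

Each document ends up with a short token sequence that a seq2seq retriever can generate, which is the role docids play in the abstract above.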
wangNeuralCorpusIndexer2023
|
\cite{wangNeuralCorpusIndexer2023}
|
A Neural Corpus Indexer for Document Retrieval
|
http://arxiv.org/abs/2206.02743v3
|
Current state-of-the-art document retrieval solutions mainly follow an
index-retrieve paradigm, where the index is hard to be directly optimized for
the final retrieval target. In this paper, we aim to show that an end-to-end
deep neural network unifying training and indexing stages can significantly
improve the recall performance of traditional methods. To this end, we propose
Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates
relevant document identifiers directly for a designated query. To optimize the
recall performance of NCI, we invent a prefix-aware weight-adaptive decoder
architecture, and leverage tailored techniques including query generation,
semantic document identifiers, and consistency-based regularization. Empirical
studies demonstrated the superiority of NCI on two commonly used academic
benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on
NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to
the best baseline method.
| true | true |
Yujing Wang and Yingyan Hou and Haonan Wang and Ziming Miao and Shibin Wu and Qi Chen and Yuqing Xia and Chengmin Chi and Guoshuai Zhao and Zheng Liu and Xing Xie and Hao Sun and Weiwei Deng and Qi Zhang and Mao Yang
| null | null |
http://papers.nips.cc/paper_files/paper/2022/hash/a46156bd3579c3b268108ea6aca71d13-Abstract-Conference.html
| null | null |
A Neural Corpus Indexer for Document Retrieval
|
A Neural Corpus Indexer for Document Retrieval
|
http://arxiv.org/pdf/2206.02743v3
|
Current state-of-the-art document retrieval solutions mainly follow an
index-retrieve paradigm, where the index is hard to be directly optimized for
the final retrieval target. In this paper, we aim to show that an end-to-end
deep neural network unifying training and indexing stages can significantly
improve the recall performance of traditional methods. To this end, we propose
Neural Corpus Indexer (NCI), a sequence-to-sequence network that generates
relevant document identifiers directly for a designated query. To optimize the
recall performance of NCI, we invent a prefix-aware weight-adaptive decoder
architecture, and leverage tailored techniques including query generation,
semantic document identifiers, and consistency-based regularization. Empirical
studies demonstrated the superiority of NCI on two commonly used academic
benchmarks, achieving +21.4% and +16.8% relative enhancement for Recall@1 on
NQ320k dataset and R-Precision on TriviaQA dataset, respectively, compared to
the best baseline method.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
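The NCI record above highlights a prefix-aware weight-adaptive decoder. A toy sketch of that idea, assuming (as a simplification) one output head per docid position plus a prefix embedding folded into the decoder state; NCI's actual weight adaptation is more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_codes, max_len = 16, 10, 3

# One output projection per decoding position ("prefix-aware" head, simplified).
position_heads = [rng.normal(size=(dim, n_codes)) for _ in range(max_len)]
prefix_proj = rng.normal(size=(n_codes, dim))  # embeds the previously decoded token

def decode_docid(hidden):
    """Greedy-decode a length-3 docid, using a different head per position
    and conditioning each step on the decoded prefix."""
    state, docid = hidden.copy(), []
    for t in range(max_len):
        logits = state @ position_heads[t]
        tok = int(logits.argmax())
        docid.append(tok)
        state = state + prefix_proj[tok]  # fold the prefix into the state
    return docid

print(decode_docid(rng.normal(size=dim)))  # e.g. [7, 2, 4]
```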
liLearningRankGenerative2023
|
\cite{liLearningRankGenerative2023}
|
Learning to Rank in Generative Retrieval
|
http://arxiv.org/abs/2306.15222v2
|
Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR.
| true | true |
Yongqi Li and Nan Yang and Liang Wang and Furu Wei and Wenjie Li
| null | null |
https://doi.org/10.1609/aaai.v38i8.28717
|
10.1609/AAAI.V38I8.28717
| null |
Learning to Rank in Generative Retrieval
|
Learning to Rank in Generative Retrieval
|
http://arxiv.org/pdf/2306.15222v2
|
Generative retrieval stands out as a promising new paradigm in text retrieval
that aims to generate identifier strings of relevant passages as the retrieval
target. This generative paradigm taps into powerful generative language models,
distinct from traditional sparse or dense retrieval methods. However, only
learning to generate is insufficient for generative retrieval. Generative
retrieval learns to generate identifiers of relevant passages as an
intermediate goal and then converts predicted identifiers into the final
passage rank list. The disconnect between the learning objective of
autoregressive models and the desired passage ranking target leads to a
learning gap. To bridge this gap, we propose a learning-to-rank framework for
generative retrieval, dubbed LTRGR. LTRGR enables generative retrieval to learn
to rank passages directly, optimizing the autoregressive model toward the final
passage ranking target via a rank loss. This framework only requires an
additional learning-to-rank training phase to enhance current generative
retrieval systems and does not add any burden to the inference stage. We
conducted experiments on three public benchmarks, and the results demonstrate
that LTRGR achieves state-of-the-art performance among generative retrieval
methods. The code and checkpoints are released at
https://github.com/liyongqi67/LTRGR.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
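The LTRGR record above adds a ranking objective on top of generation. A sketch of one standard margin rank loss over identifier log-likelihoods; the paper's exact formulation may differ:

```python
import numpy as np

def ltr_margin_loss(pos_scores, neg_scores, margin=1.0):
    """Margin rank loss over autoregressive identifier scores.

    pos_scores / neg_scores: sequence log-likelihoods the model assigns to
    identifiers of relevant / irrelevant passages for the same query.
    The loss pushes every relevant identifier above every irrelevant one.
    """
    diffs = margin - pos_scores[:, None] + neg_scores[None, :]
    return np.maximum(0.0, diffs).mean()

pos = np.array([-2.1, -2.8])        # log P(docid | query) for relevant passages
neg = np.array([-2.5, -4.0, -3.3])  # scores for sampled negatives
print(ltr_margin_loss(pos, neg))    # > 0 while some negative outranks a positive
```

Because the loss is defined on the same scores used to rank passages at inference, it directly targets the final ranking rather than only identifier generation.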
Zhuang2022BridgingTG
|
\cite{Zhuang2022BridgingTG}
|
Bridging the Gap Between Indexing and Retrieval for Differentiable
Search Index with Query Generation
|
http://arxiv.org/abs/2206.10128v3
|
The Differentiable Search Index (DSI) is an emerging paradigm for information
retrieval. Unlike traditional retrieval architectures where index and retrieval
are two different and separate components, DSI uses a single transformer model
to perform both indexing and retrieval.
In this paper, we identify and tackle an important issue of current DSI
models: the data distribution mismatch that occurs between the DSI indexing and
retrieval processes. Specifically, we argue that, at indexing, current DSI
methods learn to build connections between the text of long documents and the
identifier of the documents, but then retrieval of document identifiers is
based on queries that are commonly much shorter than the indexed documents.
This problem is further exacerbated when using DSI for cross-lingual retrieval,
where document text and query text are in different languages.
To address this fundamental problem of current DSI models, we propose a
simple yet effective indexing framework for DSI, called DSI-QG. When indexing,
DSI-QG represents documents with a number of potentially relevant queries
generated by a query generation model and re-ranked and filtered by a
cross-encoder ranker. The presence of these queries at indexing allows the DSI
models to connect a document identifier to a set of queries, hence mitigating
data distribution mismatches present between the indexing and the retrieval
phases. Empirical results on popular mono-lingual and cross-lingual passage
retrieval datasets show that DSI-QG significantly outperforms the original DSI
model.
| true | true |
Shengyao Zhuang and Houxing Ren and Linjun Shou and Jian Pei and Ming Gong and Guido Zuccon and Daxin Jiang
| null | null |
https://api.semanticscholar.org/CorpusID:249890267
| null |
ArXiv
|
Bridging the Gap Between Indexing and Retrieval for Differentiable
Search Index with Query Generation
|
Bridging the Gap Between Indexing and Retrieval for Differentiable ...
|
https://arxiv.org/abs/2206.10128
|
Missing: 04/08/2025
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
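The DSI-QG record above replaces document text with generated queries at indexing time. A sketch of that data flow, where `generate_queries` and `rank_filter` are hypothetical stand-ins for the query-generation model and the cross-encoder ranker named in the abstract:

```python
def build_dsi_qg_training_pairs(docs, generate_queries, rank_filter, k=5):
    """Build (query -> docid) training examples, DSI-QG style.

    generate_queries(text, n): hypothetical query-generation model (e.g. doc2query).
    rank_filter(text, queries): hypothetical cross-encoder keeping the best queries.
    Indexing on short queries matches what the model sees at retrieval time,
    which is the distribution mismatch the paper addresses.
    """
    pairs = []
    for docid, text in docs.items():
        queries = rank_filter(text, generate_queries(text, n=20))[:k]
        pairs.extend((q, docid) for q in queries)
    return pairs

# Toy stand-ins for the two models, just to show the data flow.
docs = {"d1": "transformers for retrieval", "d2": "indexing with query generation"}
gen = lambda text, n: [f"what is {w}?" for w in text.split()][:n]
keep = lambda text, qs: sorted(qs, key=len)  # pretend shorter = better ranked
print(build_dsi_qg_training_pairs(docs, gen, keep, k=2))
```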
Zhang2023TermSetsCB
|
\cite{Zhang2023TermSetsCB}
|
Term-Sets Can Be Strong Document Identifiers For Auto-Regressive Search Engines
| null | null | true | false |
Peitian Zhang and Zheng Liu and Yujia Zhou and Zhicheng Dou and Zhao Cao
| null | null |
https://api.semanticscholar.org/CorpusID:258841428
| null |
ArXiv
|
Term-Sets Can Be Strong Document Identifiers For Auto-Regressive Search Engines
|
[PDF] Term-Sets Can Be Strong Document Identifiers For Auto-Regressive ...
|
https://openreview.net/pdf?id=uZv73g6f1mL
|
We propose a novel framework AutoTSG for auto-regressive search engines. The proposed method is featured by its unordered term-based document identifier and the
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
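The AutoTSG record above uses an unordered term set as the document identifier. Since the abstract is unavailable, this is only a guess at the matching step implied by "unordered": any generation order of the identifier terms resolves to the same document (the paper's decoding and matching are certainly more sophisticated):

```python
def match_term_set(generated_terms, docid_term_sets):
    """Match an unordered term-set identifier, in the spirit of AutoTSG.

    The identifier is a *set* of salient terms, so no single permutation is
    privileged; a document matches once all of its identifier terms appear.
    """
    produced = set(generated_terms)
    return [doc for doc, terms in docid_term_sets.items() if terms <= produced]

ids = {"d1": {"neural", "indexer"}, "d2": {"term", "set", "identifier"}}
print(match_term_set(["indexer", "neural", "corpus"], ids))  # ['d1']
```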
yangAutoSearchIndexer2023
|
\cite{yangAutoSearchIndexer2023}
|
Auto Search Indexer for End-to-End Document Retrieval
|
http://arxiv.org/abs/2310.12455v2
|
Generative retrieval, which is a new advanced paradigm for document
retrieval, has recently attracted research interests, since it encodes all
documents into the model and directly generates the retrieved documents.
However, its power is still underutilized since it heavily relies on the
"preprocessed" document identifiers (docids), thus limiting its retrieval
performance and ability to retrieve new documents. In this paper, we propose a
novel fully end-to-end retrieval paradigm. It can not only end-to-end learn the
best docids for existing and new documents automatically via a semantic
indexing module, but also perform end-to-end document retrieval via an
encoder-decoder-based generative model, namely Auto Search Indexer (ASI).
Besides, we design a reparameterization mechanism to combine the above two
modules into a joint optimization framework. Extensive experimental results
demonstrate the superiority of our model over advanced baselines on both public
and industrial datasets and also verify the ability to deal with new documents.
| true | true |
Yang, Tianchi and Song, Minghui and Zhang, Zihan and Huang, Haizhen and Deng, Weiwei and Sun, Feng and Zhang, Qi
| null | null | null | null | null |
Auto Search Indexer for End-to-End Document Retrieval
|
Auto Search Indexer for End-to-End Document Retrieval
|
https://openreview.net/forum?id=ZhZFUOV5hb&noteId=ORsULzg9Ip
|
This paper presents an end-to-end generative information retrieval pipeline, Auto Search Indexer (ASI), that supports document-id assignment as well as
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
tang2023semantic
|
\cite{tang2023semantic}
|
Semantic-Enhanced Differentiable Search Index Inspired by Learning
Strategies
|
http://arxiv.org/abs/2305.15115v1
|
Recently, a new paradigm called Differentiable Search Index (DSI) has been
proposed for document retrieval, wherein a sequence-to-sequence model is
learned to directly map queries to relevant document identifiers. The key idea
behind DSI is to fully parameterize traditional ``index-retrieve'' pipelines
within a single neural model, by encoding all documents in the corpus into the
model parameters. In essence, DSI needs to resolve two major questions: (1) how
to assign an identifier to each document, and (2) how to learn the associations
between a document and its identifier. In this work, we propose a
Semantic-Enhanced DSI model (SE-DSI) motivated by Learning Strategies in the
area of Cognitive Psychology. Our approach advances original DSI in two ways:
(1) For the document identifier, we take inspiration from Elaboration
Strategies in human learning. Specifically, we assign each document an
Elaborative Description based on the query generation technique, which is more
meaningful than a string of integers in the original DSI; and (2) For the
associations between a document and its identifier, we take inspiration from
Rehearsal Strategies in human learning. Specifically, we select fine-grained
semantic features from a document as Rehearsal Contents to improve document
memorization. Both the offline and online experiments show improved retrieval
performance over prevailing baselines.
| true | true |
Tang, Yubao and Zhang, Ruqing and Guo, Jiafeng and Chen, Jiangui and Zhu, Zuowei and Wang, Shuaiqiang and Yin, Dawei and Cheng, Xueqi
| null | null |
https://doi.org/10.1145/3580305.3599903
|
10.1145/3580305.3599903
| null |
Semantic-Enhanced Differentiable Search Index Inspired by Learning
Strategies
|
Semantic-Enhanced Differentiable Search Index Inspired ...
|
https://dl.acm.org/doi/10.1145/3580305.3599903
|
In this work, we propose a Semantic-Enhanced DSI model (SE-DSI) motivated by Learning Strategies in the area of Cognitive Psychology.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
tang2024generative
|
\cite{tang2024generative}
|
Generative Retrieval Meets Multi-Graded Relevance
|
http://arxiv.org/abs/2409.18409v1
|
Generative retrieval represents a novel approach to information retrieval. It
uses an encoder-decoder architecture to directly produce relevant document
identifiers (docids) for queries. While this method offers benefits, current
approaches are limited to scenarios with binary relevance data, overlooking the
potential for documents to have multi-graded relevance. Extending generative
retrieval to accommodate multi-graded relevance poses challenges, including the
need to reconcile likelihood probabilities for docid pairs and the possibility
of multiple relevant documents sharing the same identifier. To address these
challenges, we introduce a framework called GRaded Generative Retrieval
(GR$^2$). GR$^2$ focuses on two key components: ensuring relevant and distinct
identifiers, and implementing multi-graded constrained contrastive training.
First, we create identifiers that are both semantically relevant and
sufficiently distinct to represent individual documents effectively. This is
achieved by jointly optimizing the relevance and distinctness of docids through
a combination of docid generation and autoencoder models. Second, we
incorporate information about the relationship between relevance grades to
guide the training process. We use a constrained contrastive training strategy
to bring the representations of queries and the identifiers of their relevant
documents closer together, based on their respective relevance grades.
Extensive experiments on datasets with both multi-graded and binary relevance
demonstrate the effectiveness of GR$^2$.
| true | true |
Yubao Tang and Ruqing Zhang and Jiafeng Guo and Maarten de Rijke and Wei Chen and Xueqi Cheng
| null | null |
https://openreview.net/forum?id=2xTkeyJFJb
| null | null |
Generative Retrieval Meets Multi-Graded Relevance
|
Generative Retrieval Meets Multi-Graded Relevance
|
https://proceedings.neurips.cc/paper_files/paper/2024/hash/853e781cb2af58956ed5c89aa59da3fc-Abstract-Conference.html
|
Generative retrieval represents a novel approach to information retrieval, utilizing an encoder-decoder architecture to directly produce relevant document
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
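The GR^2 record above trains with a multi-graded constrained contrastive objective. A sketch of one plausible graded contrastive loss, summing a softmax term per grade boundary so higher-graded documents outscore all lower-graded ones; the paper's constrained formulation differs in detail:

```python
import numpy as np

def graded_contrastive_loss(q, doc_vecs, grades, tau=0.1):
    """Multi-graded contrastive objective, in the spirit of GR^2.

    q: (dim,) query vector; doc_vecs: (n, dim); grades: (n,) ints, higher = better.
    For each grade boundary g, documents with grade >= g should jointly
    dominate the softmax over all documents at that boundary.
    """
    scores = doc_vecs @ q / tau
    loss = 0.0
    for g in sorted(set(grades.tolist()))[1:]:        # each grade boundary
        hi, lo = scores[grades >= g], scores[grades < g]
        both = np.concatenate([hi, lo])
        # -log of the probability mass assigned to the higher-graded documents
        loss += -np.log(np.exp(hi).sum() / np.exp(both).sum())
    return loss

rng = np.random.default_rng(2)
q, d = rng.normal(size=8), rng.normal(size=(5, 8))
print(graded_contrastive_loss(q, d, np.array([2, 2, 1, 0, 0])))
```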
wuGenerativeRetrievalMultiVector2024
|
\cite{wuGenerativeRetrievalMultiVector2024}
|
Generative Retrieval as Multi-Vector Dense Retrieval
|
http://arxiv.org/abs/2404.00684v1
|
Generative retrieval generates identifiers of relevant documents in an
end-to-end manner using a sequence-to-sequence architecture for a given query.
The relation between generative retrieval and other retrieval methods,
especially those based on matching within dense retrieval models, is not yet
fully comprehended. Prior work has demonstrated that generative retrieval with
atomic identifiers is equivalent to single-vector dense retrieval. Accordingly,
generative retrieval exhibits behavior analogous to hierarchical search within
a tree index in dense retrieval when using hierarchical semantic identifiers.
However, prior work focuses solely on the retrieval stage without considering
the deep interactions within the decoder of generative retrieval.
In this paper, we fill this gap by demonstrating that generative retrieval
and multi-vector dense retrieval share the same framework for measuring the
relevance to a query of a document. Specifically, we examine the attention
layer and prediction head of generative retrieval, revealing that generative
retrieval can be understood as a special case of multi-vector dense retrieval.
Both methods compute relevance as a sum of products of query and document
vectors and an alignment matrix. We then explore how generative retrieval
applies this framework, employing distinct strategies for computing document
token vectors and the alignment matrix. We have conducted experiments to verify
our conclusions and show that both paradigms exhibit commonalities of term
matching in their alignment matrix.
| true | true |
Shiguang Wu and Wenda Wei and Mengqi Zhang and Zhumin Chen and Jun Ma and Zhaochun Ren and Maarten de Rijke and Pengjie Ren
| null | null |
https://doi.org/10.1145/3626772.3657697
|
10.1145/3626772.3657697
| null |
Generative Retrieval as Multi-Vector Dense Retrieval
|
Generative Retrieval as Multi-Vector Dense Retrieval
|
https://dl.acm.org/doi/10.1145/3626772.3657697
|
Generative retrieval and multi-vector dense retrieval share the same framework for measuring the relevance to a query of a document.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
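The record above reduces generative and multi-vector dense retrieval to one scoring form: relevance as a sum of products of query vectors, document token vectors, and an alignment matrix. That shared form is directly computable:

```python
import numpy as np

def relevance(query_vecs, doc_vecs, alignment):
    """Shared relevance form from the paper: sum_ij A_ij * <q_i, d_j>.

    query_vecs: (m, dim), doc_vecs: (n, dim), alignment: (m, n).
    The two paradigms differ only in how doc_vecs and the alignment
    matrix are constructed.
    """
    sims = query_vecs @ doc_vecs.T          # all pairwise dot products
    return float((alignment * sims).sum())

rng = np.random.default_rng(3)
q, d = rng.normal(size=(4, 8)), rng.normal(size=(6, 8))
# Max-sim style alignment: each query vector attends to its best document vector.
align = np.zeros((4, 6))
align[np.arange(4), (q @ d.T).argmax(axis=1)] = 1.0
print(relevance(q, d, align))
```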
seal2022
|
\cite{seal2022}
|
Autoregressive Search Engines: Generating Substrings as Document
Identifiers
|
http://arxiv.org/abs/2204.10628v1
|
Knowledge-intensive language tasks require NLP systems to both provide the
correct answer and retrieve supporting evidence for it in a given corpus.
Autoregressive language models are emerging as the de-facto standard for
generating answers, with newer and more powerful systems emerging at an
astonishing pace. In this paper we argue that all this (and future) progress
can be directly applied to the retrieval problem with minimal intervention to
the models' architecture. Previous work has explored ways to partition the
search space into hierarchical structures and retrieve documents by
autoregressively generating their unique identifier. In this work we propose an
alternative that doesn't force any structure in the search space: using all
ngrams in a passage as its possible identifiers. This setup allows us to use an
autoregressive model to generate and score distinctive ngrams, that are then
mapped to full passages through an efficient data structure. Empirically, we
show this not only outperforms prior autoregressive approaches but also leads
to an average improvement of at least 10 points over more established retrieval
solutions for passage-level retrieval on the KILT benchmark, establishing new
state-of-the-art downstream performance on some datasets, while using a
considerably lighter memory footprint than competing systems. Code and
pre-trained models at https://github.com/facebookresearch/SEAL.
| true | true |
Bevilacqua, Michele and Ottaviano, Giuseppe and Lewis, Patrick and Yih, Scott and Riedel, Sebastian and Petroni, Fabio
| null | null | null | null |
Advances in Neural Information Processing Systems
|
Autoregressive Search Engines: Generating Substrings as Document
Identifiers
|
[PDF] Autoregressive Search Engines: Generating Substrings as ...
|
https://proceedings.neurips.cc/paper_files/paper/2022/file/cd88d62a2063fdaf7ce6f9068fb15dcd-Paper-Conference.pdf
|
One way to approach retrieval with autoregressive models makes use of unique identifiers, i.e., string pointers to documents that are in some way easier to
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
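The SEAL record above treats every n-gram of a passage as a possible identifier and maps generated n-grams back to passages. SEAL does this with a compressed FM-index; a plain dictionary is enough to show the mapping step for a single n:

```python
from collections import defaultdict

def build_ngram_index(passages, n=3):
    """Map every word n-gram of a passage to the passages containing it.

    A stand-in for SEAL's FM-index, which handles all n-gram lengths at once
    in compressed form and also constrains decoding to valid continuations.
    """
    index = defaultdict(set)
    for pid, text in passages.items():
        words = text.split()
        for i in range(len(words) - n + 1):
            index[" ".join(words[i:i + n])].add(pid)
    return index

passages = {"p1": "generating substrings as document identifiers",
            "p2": "substrings as document ids work well"}
index = build_ngram_index(passages, n=3)
# A generated, scored n-gram resolves to candidate passages:
print(index["substrings as document"])  # {'p1', 'p2'}
```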
tayTransformerMemoryDifferentiable2022a
|
\cite{tayTransformerMemoryDifferentiable2022a}
|
Transformer Memory as a Differentiable Search Index
|
http://arxiv.org/abs/2202.06991v3
|
In this paper, we demonstrate that information retrieval can be accomplished
with a single Transformer, in which all information about the corpus is encoded
in the parameters of the model. To this end, we introduce the Differentiable
Search Index (DSI), a new paradigm that learns a text-to-text model that maps
string queries directly to relevant docids; in other words, a DSI model answers
queries directly using only its parameters, dramatically simplifying the whole
retrieval process. We study variations in how documents and their identifiers
are represented, variations in training procedures, and the interplay between
models and corpus sizes. Experiments demonstrate that given appropriate design
choices, DSI significantly outperforms strong baselines such as dual encoder
models. Moreover, DSI demonstrates strong generalization capabilities,
outperforming a BM25 baseline in a zero-shot setup.
| true | true |
Yi Tay and Vinh Tran and Mostafa Dehghani and Jianmo Ni and Dara Bahri and Harsh Mehta and Zhen Qin and Kai Hui and Zhe Zhao and Jai Prakash Gupta and Tal Schuster and William W. Cohen and Donald Metzler
| null | null |
http://papers.nips.cc/paper_files/paper/2022/hash/892840a6123b5ec99ebaab8be1530fba-Abstract-Conference.html
| null | null |
Transformer Memory as a Differentiable Search Index
|
Transformer Memory as a Differentiable Search Index
|
http://arxiv.org/pdf/2202.06991v3
|
In this paper, we demonstrate that information retrieval can be accomplished
with a single Transformer, in which all information about the corpus is encoded
in the parameters of the model. To this end, we introduce the Differentiable
Search Index (DSI), a new paradigm that learns a text-to-text model that maps
string queries directly to relevant docids; in other words, a DSI model answers
queries directly using only its parameters, dramatically simplifying the whole
retrieval process. We study variations in how documents and their identifiers
are represented, variations in training procedures, and the interplay between
models and corpus sizes. Experiments demonstrate that given appropriate design
choices, DSI significantly outperforms strong baselines such as dual encoder
models. Moreover, DSI demonstrates strong generalization capabilities,
outperforming a BM25 baseline in a zero-shot setup.
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
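The DSI record above studies how docids are represented; its semantically structured variant is commonly built by recursively clustering document embeddings so that a docid is the path of cluster ids down the tree. A scikit-learn sketch of that construction, with toy branching and leaf sizes:

```python
import numpy as np
from sklearn.cluster import KMeans

def semantic_docids(embs, branch=2, leaf=2, prefix=""):
    """Hierarchical semantic docids: recursively k-means-cluster embeddings;
    each document's docid is its path of cluster ids down the tree.

    embs: dict mapping document name -> embedding vector.
    """
    if len(embs) <= leaf:
        return {k: prefix + str(j) for j, k in enumerate(embs)}
    labels = KMeans(n_clusters=branch, n_init=10, random_state=0).fit_predict(
        np.stack(list(embs.values())))
    out = {}
    for c in range(branch):
        sub = {k: v for (k, v), l in zip(embs.items(), labels) if l == c}
        out.update(semantic_docids(sub, branch, leaf, prefix + str(c)))
    return out

rng = np.random.default_rng(4)
embs = {f"doc{i}": rng.normal(size=8) for i in range(6)}
print(semantic_docids(embs))  # e.g. {'doc2': '00', 'doc5': '010', ...}
```

Because nearby documents share docid prefixes, a seq2seq model generating digits left to right effectively performs a coarse-to-fine search.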
dynamic-retriever2023
|
\cite{dynamic-retriever2023}
|
DynamicRetriever: A Pre-trained Model-based IR System Without an Explicit Index
| null | null | true | false |
Yujia Zhou and Jing Yao and Zhicheng Dou and Ledell Wu and Ji-Rong Wen
| null |
April
|
https://doi.org/10.1007/s11633-022-1373-9
| null |
Mach. Intell. Res.
|
DynamicRetriever: A Pre-trained Model-based IR System Without an Explicit Index
|
[PDF] DynamicRetriever: A Pre-training Model-based IR System ... - arXiv
|
https://arxiv.org/pdf/2203.00537
|
Specifically, we propose a pre-training model-based IR system with neither sparse not dense index, called DynamicRetriever. It is comprised
|
Constrained Auto-Regressive Decoding Constrains Generative Retrieval
|
2504.09935v1
|
nguyen-2023-generative
|
\cite{nguyen-2023-generative}
|
Generative Retrieval as Dense Retrieval
|
http://arxiv.org/abs/2306.11397v1
|
Generative retrieval is a promising new neural retrieval paradigm that aims
to optimize the retrieval pipeline by performing both indexing and retrieval
with a single transformer model. However, this new paradigm faces challenges
with updating the index and scaling to large collections. In this paper, we
analyze two prominent variants of generative retrieval and show that they can
be conceptually viewed as bi-encoders for dense retrieval. Specifically, we
analytically demonstrate that the generative retrieval process can be
decomposed into dot products between query and document vectors, similar to
dense retrieval. This analysis leads us to propose a new variant of generative
retrieval, called Tied-Atomic, which addresses the updating and scaling issues
by incorporating techniques from dense retrieval. In experiments on two
datasets, NQ320k and the full MSMARCO, we confirm that this approach does not
reduce retrieval effectiveness while enabling the model to scale to large
collections.
| true | true |
Thong Nguyen and Andrew Yates
| null | null |
https://doi.org/10.48550/arXiv.2306.11397
|
10.48550/ARXIV.2306.11397
|
CoRR
|
Generative Retrieval as Dense Retrieval
|
Generative Retrieval as Dense Retrieval
|
http://arxiv.org/pdf/2306.11397v1
|
Generative retrieval is a promising new neural retrieval paradigm that aims
to optimize the retrieval pipeline by performing both indexing and retrieval
with a single transformer model. However, this new paradigm faces challenges
with updating the index and scaling to large collections. In this paper, we
analyze two prominent variants of generative retrieval and show that they can
be conceptually viewed as bi-encoders for dense retrieval. Specifically, we
analytically demonstrate that the generative retrieval process can be
decomposed into dot products between query and document vectors, similar to
dense retrieval. This analysis leads us to propose a new variant of generative
retrieval, called Tied-Atomic, which addresses the updating and scaling issues
by incorporating techniques from dense retrieval. In experiments on two
datasets, NQ320k and the full MSMARCO, we confirm that this approach does not
reduce retrieval effectiveness while enabling the model to scale to large
collections.
|
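The record above argues that generative retrieval with atomic identifiers is a bi-encoder in disguise: with one output token per document, the decoder's first-step logits are just dot products between the query encoding and the docid rows of the output embedding table. A sketch of that decomposition:

```python
import numpy as np

def atomic_generative_scores(query_vec, docid_embeddings):
    """Atomic-docid generative retrieval as single-vector dense retrieval.

    query_vec:        (dim,) encoder output for the query.
    docid_embeddings: (n_docs, dim) output-embedding rows, one per document.
    The first-step softmax over docids is a softmax over dot products,
    i.e. a bi-encoder scoring every document at once.
    """
    logits = docid_embeddings @ query_vec   # one logit per document
    probs = np.exp(logits - logits.max())   # stable softmax
    return probs / probs.sum()

rng = np.random.default_rng(5)
q = rng.normal(size=8)
E = rng.normal(size=(4, 8))
print(atomic_generative_scores(q, E))       # a distribution over the 4 documents
```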