Parallel personalized pairwise learning to rank

Murat Yağcı, Tevfik Aytekin, Hürol Türen, Fikret Gürgen

Research output: Contribution to conference › Paper › peer-review

2 Citations (Scopus)

Abstract

Methods based on online matrix factorization are commonly used for personalized learning to rank (LtR). These methods often use stochastic gradient descent (SGD) for optimization. SGD is sequential in nature, which can be problematic for large relevance feedback datasets because of late convergence or when multiple passes over the dataset are required. In this paper, we investigate a shared-memory, lock-free parallel SGD scheme to address these problems in a personalized pairwise LtR setting. We show that such a parallel online algorithm is directly applicable to pairwise LtR, just as it is to pointwise LtR. We also show that a few modifications to the parallel online algorithm help solve several problems arising from stream processing of implicit user feedback. Experiments with the two proposed algorithms show remarkable results with respect to their comparative ranking ability and speedup patterns.
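As a rough illustration of the kind of scheme the abstract describes, the sketch below applies lock-free (Hogwild!-style) SGD updates from several threads to shared user and item factor matrices under a BPR-like pairwise objective. It is not the authors' implementation: the names and hyperparameters (P, Q, n_users, n_items, n_factors, lr, reg, the toy triple stream) are illustrative assumptions, and in CPython the threads mainly demonstrate the unsynchronized update pattern rather than real speedup.

# Minimal, hypothetical sketch of shared-memory lock-free parallel SGD
# for pairwise matrix-factorization LtR (BPR-style updates).
import threading
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 1000, 500, 16   # assumed toy dimensions
lr, reg = 0.05, 0.01                          # assumed learning rate / regularization

# Shared factor matrices; worker threads update rows without taking locks.
P = 0.1 * rng.standard_normal((n_users, n_factors))   # user factors
Q = 0.1 * rng.standard_normal((n_items, n_factors))   # item factors

def sgd_worker(triples):
    # One pairwise SGD step per (user, preferred item, non-preferred item) triple.
    for u, i, j in triples:
        pu, qi, qj = P[u].copy(), Q[i].copy(), Q[j].copy()
        x_uij = pu @ (qi - qj)            # score difference for the pair
        e = 1.0 / (1.0 + np.exp(x_uij))   # sigma(-x_uij), scalar in the BPR gradient
        P[u] += lr * (e * (qi - qj) - reg * pu)
        Q[i] += lr * (e * pu - reg * qi)
        Q[j] += lr * (-e * pu - reg * qj)

# Toy implicit-feedback stream: (user, positive item, sampled negative item).
stream = [(rng.integers(n_users), rng.integers(n_items), rng.integers(n_items))
          for _ in range(40_000)]

# Partition the stream across threads; writes to P and Q are deliberately
# unsynchronized, so occasional conflicting updates may overwrite each other.
n_threads = 4
chunks = [stream[k::n_threads] for k in range(n_threads)]
workers = [threading.Thread(target=sgd_worker, args=(c,)) for c in chunks]
for w in workers:
    w.start()
for w in workers:
    w.join()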

Original language: English
Pages: 53-58
Number of pages: 6
Publication status: Published - 2016
Externally published: Yes
Event: 2016 EURO Mini Conference: From Multicriteria Decision Aid to Preference Learning, DA2PL 2016 - Paderborn, Germany
Duration: 7 Nov 2016 – 8 Nov 2016

Conference

Conference: 2016 EURO Mini Conference: From Multicriteria Decision Aid to Preference Learning, DA2PL 2016
Country/Territory: Germany
City: Paderborn
Period: 7/11/16 – 8/11/16
