Abstract
Methods based on online matrix factorization are commonly used for personalized learning to rank (LtR), and they typically rely on stochastic gradient descent (SGD) for optimization. SGD is inherently sequential, which becomes problematic for large relevance-feedback datasets when convergence is slow or when multiple passes over the dataset are required. In this paper, we investigate a shared-memory lock-free parallel SGD scheme to address these problems in a personalized pairwise LtR setting. We show that such a parallel online algorithm is directly applicable to pairwise LtR, just as it is to pointwise LtR. We also show that a few modifications to the parallel online algorithm help solve several problems arising from stream processing of implicit user feedback. Experiments with the two proposed algorithms show remarkable results with respect to their comparative ranking ability and speedup patterns.
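The abstract itself contains no pseudocode; the following is a minimal sketch of the kind of shared-memory lock-free (Hogwild-style) parallel SGD update it describes, applied to a pairwise (BPR-like) matrix-factorization ranker. All function names, hyperparameters, and the exact update rule here are illustrative assumptions, not the paper's algorithm: workers apply unsynchronized gradient updates on shared factor matrices, tolerating the rare write conflicts.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def pairwise_sgd_lockfree(triples, n_users, n_items, k=8, lr=0.05, reg=0.01,
                          epochs=5, n_workers=4, seed=0):
    """Hogwild-style lock-free SGD sketch for a pairwise (BPR-like)
    matrix-factorization ranker. Names and defaults are illustrative."""
    rng = np.random.default_rng(seed)
    U = rng.normal(0, 0.1, (n_users, k))   # shared user factors
    V = rng.normal(0, 0.1, (n_items, k))   # shared item factors
    data = np.asarray(triples)

    def worker(chunk):
        for u, i, j in chunk:              # user u prefers item i over item j
            x = U[u] @ (V[i] - V[j])       # pairwise score difference
            g = 1.0 / (1.0 + np.exp(x))    # gradient scale of -log sigmoid(x)
            du = g * (V[i] - V[j]) - reg * U[u]
            di = g * U[u] - reg * V[i]
            dj = -g * U[u] - reg * V[j]
            U[u] += lr * du                # unsynchronized writes: conflicting
            V[i] += lr * di                # updates are tolerated because
            V[j] += lr * dj                # collisions are sparse

    for _ in range(epochs):
        rng.shuffle(data)                  # shuffle preference triples each pass
        with ThreadPoolExecutor(n_workers) as ex:
            for chunk in np.array_split(data, n_workers):
                ex.submit(worker, chunk)   # pool exit waits for all workers
    return U, V
```

After training on triples `(u, i, j)` meaning "user `u` prefers item `i` over item `j`", the preferred item should score higher, e.g. `U[u] @ V[i] > U[u] @ V[j]` for frequently observed pairs.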
Original language | English |
---|---|
Pages | 53-58 |
Number of pages | 6 |
Publication status | Published - 2016 |
Externally published | Yes |
Event | 2016 EURO Mini Conference: From Multicriteria Decision Aid to Preference Learning, DA2PL 2016 - Paderborn, Germany |
Duration | 7 Nov 2016 → 8 Nov 2016 |
Conference
Conference | 2016 EURO Mini Conference: From Multicriteria Decision Aid to Preference Learning, DA2PL 2016 |
---|---|
Country/Territory | Germany |
City | Paderborn |
Period | 7/11/16 → 8/11/16 |