Abstract
Pairwise learning to rank is well suited to a wide range of collaborative filtering applications. In this work, we show that its efficiency can be greatly improved with parallel stochastic gradient descent (SGD) schemes. Accordingly, we first extrapolate two such state-of-the-art schemes to the pairwise learning to rank problem setting. We then demonstrate the versatility of these proposals by showing that several important extensions commonly desired in practice remain applicable. Theoretical as well as extensive empirical analyses of our proposals show remarkable efficiency results for pairwise learning to rank in both offline and stream learning settings.
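To make the problem setting concrete, below is a minimal, hypothetical sketch of pairwise learning to rank with SGD on a matrix-factorization recommender (a BPR-style logistic pairwise loss). It is not the paper's algorithm; all sizes, rates, and the synthetic preference data are illustrative. The parallel schemes the abstract refers to would run many such per-triple updates concurrently (e.g. lock-free or with parameter partitioning), which this serial sketch only notes in comments.

```python
import numpy as np

# Illustrative sketch (not the paper's method): pairwise learning to rank
# with SGD on user/item factor matrices. Each training example is a triple
# (user u, preferred item i, less-preferred item j); the update pushes the
# model to score i above j for u.

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 30, 8                 # hypothetical sizes
U = 0.1 * rng.standard_normal((n_users, k))     # user factors
V = 0.1 * rng.standard_normal((n_items, k))     # item factors

def pairwise_sgd_step(u, i, j, lr=0.05, reg=0.01):
    """One SGD step on the logistic pairwise loss -log sigmoid(x_uij)."""
    x = U[u] @ (V[i] - V[j])                    # score margin of i over j
    g = 1.0 / (1.0 + np.exp(x))                 # gradient scale: sigmoid(-x)
    du = g * (V[i] - V[j]) - reg * U[u]
    di = g * U[u] - reg * V[i]
    dj = -g * U[u] - reg * V[j]
    U[u] += lr * du
    V[i] += lr * di
    V[j] += lr * dj

def sample_triple():
    # synthetic, consistent preference: lower item index is preferred
    u = int(rng.integers(n_users))
    i, j = sorted(int(t) for t in rng.choice(n_items, size=2, replace=False))
    return u, i, j

triples = [sample_triple() for _ in range(3000)]
# a parallel SGD scheme would distribute this loop across workers
for u, i, j in triples:
    pairwise_sgd_step(u, i, j)
```

After training, preferred items should score higher on average, i.e. the mean margin `U[u] @ (V[i] - V[j])` over the training triples is positive. Because each update touches only one user row and two item rows, updates on disjoint triples rarely conflict, which is what makes lock-free or partitioned parallel SGD schemes attractive here.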
Original language | English |
---|---|
Article number | e5141 |
Journal | Concurrency and Computation: Practice and Experience |
Volume | 31 |
Issue number | 15 |
DOIs | |
Publication status | Published - 10 Aug 2019 |
Externally published | Yes |
Keywords
- learning to rank
- pairwise loss
- parallel SGD
- recommender systems