On parallelizing SGD for pairwise learning to rank in collaborative filtering recommender systems

Murat Yagci, Tevfik Aytekin, Fikret Gurgen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Citations (Scopus)

Abstract

Learning to rank with pairwise loss functions has proven useful in collaborative filtering recommender systems. At web scale, the optimization is often based on matrix factorization with stochastic gradient descent (SGD), which is inherently sequential. We investigate two shared-memory, lock-free parallel SGD schemes for pairwise loss functions, one based on block partitioning and one using no partitioning. To speed up convergence, we adapt simple, practical algorithms from their original application to pointwise learning to rank. Experimental results show that the proposed algorithms compare favorably to their sequential counterpart in both ranking ability and speedup.
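To make the setting concrete: the abstract describes matrix-factorization models trained with SGD on a pairwise ranking loss, parallelized lock-free over shared factor matrices. The sketch below is a minimal illustration of that combination, assuming a BPR-style pairwise update and a Hogwild-style no-partitioning scheme; it is not the authors' exact algorithm, and all names, sizes, hyperparameters, and the toy data are invented for illustration. (In pure Python the GIL limits real speedup; the paper's shared-memory setting implies native threads.)

```python
import numpy as np
from threading import Thread

# Illustrative sizes and hyperparameters (not from the paper).
N_USERS, N_ITEMS, K = 1000, 2000, 32
LR, REG = 0.05, 0.01
N_THREADS, STEPS_PER_THREAD = 4, 10_000

rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((N_USERS, K))  # shared user factors
Q = 0.1 * rng.standard_normal((N_ITEMS, K))  # shared item factors

# Toy implicit feedback: each user's set of observed ("positive") items.
positives = [set(rng.choice(N_ITEMS, size=20, replace=False).tolist())
             for _ in range(N_USERS)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def worker(seed):
    """Hogwild-style worker: lock-free SGD steps on the shared P and Q."""
    local = np.random.default_rng(seed)
    for _ in range(STEPS_PER_THREAD):
        u = int(local.integers(N_USERS))
        i = int(local.choice(list(positives[u])))  # observed (positive) item
        j = int(local.integers(N_ITEMS))           # sampled negative item
        while j in positives[u]:
            j = int(local.integers(N_ITEMS))
        # BPR-style pairwise objective: maximize log sigmoid(x_ui - x_uj).
        x_uij = P[u] @ (Q[i] - Q[j])
        g = sigmoid(-x_uij)                        # scalar gradient weight
        pu, qi, qj = P[u].copy(), Q[i].copy(), Q[j].copy()
        P[u] += LR * (g * (qi - qj) - REG * pu)    # no locks: races tolerated
        Q[i] += LR * (g * pu - REG * qi)
        Q[j] += LR * (-g * pu - REG * qj)

threads = [Thread(target=worker, args=(s,)) for s in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("trained; sample score:", float(P[0] @ Q[0]))
```

The paper's other scheme, block partitioning, would instead assign disjoint user/item blocks to threads so that concurrent updates never touch the same factor rows; the lock-free loop above simply ignores such conflicts, relying on their rarity when the feedback data is sparse.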

Original language: English
Title of host publication: RecSys 2017 - Proceedings of the 11th ACM Conference on Recommender Systems
Publisher: Association for Computing Machinery, Inc
Pages: 37-41
Number of pages: 5
ISBN (Electronic): 9781450346528
DOIs:
Publication status: Published - 27 Aug 2017
Externally published: Yes
Event: 11th ACM Conference on Recommender Systems, RecSys 2017 - Como, Italy
Duration: 27 Aug 2017 - 31 Aug 2017

Publication series

Name: RecSys 2017 - Proceedings of the 11th ACM Conference on Recommender Systems

Conference

Conference: 11th ACM Conference on Recommender Systems, RecSys 2017
Country/Territory: Italy
City: Como
Period: 27/08/17 - 31/08/17

Keywords

  • Learning to rank
  • Pairwise loss
  • Parallel SGD
  • Personalization
