On parallelizing SGD for pairwise learning to rank in collaborative filtering recommender systems

Murat Yagci, Tevfik Aytekin, Fikret Gurgen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed

10 Citations (Scopus)

Abstract

Learning to rank with pairwise loss functions has been found useful in collaborative filtering recommender systems. At web scale, the optimization is often based on matrix factorization with stochastic gradient descent (SGD) which has a sequential nature. We investigate two different shared memory lock-free parallel SGD schemes based on block partitioning and no partitioning for use with pairwise loss functions. To speed up convergence to a solution, we extrapolate simple practical algorithms from their application to pointwise learning to rank. Experimental results show that the proposed algorithms are quite useful regarding their ranking ability and speedup patterns in comparison to their sequential counterpart.
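The lock-free "no partitioning" scheme described in the abstract can be illustrated with a minimal sketch (not the authors' code): worker threads sample (user, positive item, negative item) triples for a BPR-style pairwise loss and apply SGD updates to shared matrix-factorization parameters without locking, in the Hogwild style. Function and parameter names here (`bpr_hogwild`, `steps_per_worker`, etc.) are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of lock-free (Hogwild-style) parallel SGD with a
# BPR pairwise ranking loss on a matrix-factorization model.
import numpy as np
import threading

def bpr_hogwild(n_users, n_items, interactions, k=8, lr=0.05, reg=0.01,
                n_workers=4, steps_per_worker=3000, seed=0):
    """interactions: set of (user, item) pairs observed as positive feedback."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # shared user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # shared item factors
    pos_by_user = {}
    for u, i in interactions:
        pos_by_user.setdefault(u, []).append(i)
    users = list(pos_by_user)

    def worker(wid):
        r = np.random.default_rng(seed + 1 + wid)
        for _ in range(steps_per_worker):
            u = users[r.integers(len(users))]
            i = pos_by_user[u][r.integers(len(pos_by_user[u]))]
            j = int(r.integers(n_items))          # sampled "negative" item
            if (u, j) in interactions:
                continue
            x = P[u] @ (Q[i] - Q[j])              # pairwise score difference
            g = 1.0 / (1.0 + np.exp(x))           # sigmoid(-x), BPR gradient scale
            pu, qi, qj = P[u].copy(), Q[i].copy(), Q[j].copy()
            # Lock-free updates to shared P, Q: occasional races between
            # workers are simply tolerated (the Hogwild assumption).
            P[u] += lr * (g * (qi - qj) - reg * pu)
            Q[i] += lr * (g * pu - reg * qi)
            Q[j] += lr * (-g * pu - reg * qj)

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return P, Q
```

On a toy dataset where each user's positive items are known, the learned factors should rank positives above sampled negatives; the block-partitioning variant would instead assign disjoint (user-block, item-block) pairs to workers so that no two workers ever touch the same rows.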

Original language: English
Host publication title: RecSys 2017 - Proceedings of the 11th ACM Conference on Recommender Systems
Publisher: Association for Computing Machinery, Inc
Pages: 37-41
Number of pages: 5
ISBN (Electronic): 9781450346528
DOIs
Publication status: Published - 27 Aug 2017
Event: 11th ACM Conference on Recommender Systems, RecSys 2017 - Como, Italy
Duration: 27 Aug 2017 - 31 Aug 2017

Publication series

Name: RecSys 2017 - Proceedings of the 11th ACM Conference on Recommender Systems

Conference

Conference: 11th ACM Conference on Recommender Systems, RecSys 2017
Country/Region: Italy
City: Como
Period: 27/08/17 - 31/08/17
