Ing. Petr Kasalický

Publications

Bridging Offline-Online Evaluation with a Time-dependent and Popularity Bias-free Offline Metric for Recommenders

Year
2023
Published in
Proceedings of EvalRS: A Rounded Evaluation Of Recommender Systems 2023. CEUR-WS.org, 2023. ISSN 1613-0073.
Type
Conference paper
Abstract
The evaluation of recommender systems is a complex task: offline and online evaluation metrics are ambiguous in their true objectives, and the majority of recently published papers benchmark their methods using an ill-posed offline evaluation methodology that often fails to predict true online performance. Because of this, the impact that academic research has on the industry is reduced. The aim of our research is to investigate and compare the online performance of offline evaluation metrics. We show that penalizing popular items and considering the time of transactions during evaluation significantly improves our ability to choose the best recommendation model for a live recommender system. Our results, averaged over five large real-world datasets procured from live recommender systems, aim to help the academic community better understand offline evaluation and optimization criteria that are more relevant for real applications of recommender systems.
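
To make the idea concrete, below is a minimal Python sketch of a popularity-penalized, time-aware hit metric in the spirit described above; the function name, the half-life recency weighting, and the logarithmic popularity penalty are illustrative assumptions, not the exact formulation evaluated in the paper.

```python
# Hypothetical sketch of a popularity-penalized, time-aware hit metric.
# The weighting choices are illustrative, not the paper's exact formulation.
import numpy as np

def weighted_hit_rate(recommended, test_interactions, item_popularity,
                      eval_time, half_life_days=30.0, k=10):
    """Average credit over held-out interactions, where each hit is
    down-weighted by item popularity and up-weighted by recency.

    recommended: dict user -> ranked list of recommended items
    test_interactions: iterable of (user, item, unix_timestamp)
    item_popularity: dict item -> interaction count in the training data
    eval_time: unix timestamp of the evaluation point
    """
    score, total_weight = 0.0, 0.0
    for user, item, timestamp in test_interactions:
        # Recency weight: interactions closer to evaluation time matter more.
        age_days = (eval_time - timestamp) / 86400.0
        recency = 0.5 ** (age_days / half_life_days)
        # Popularity penalty: rarer items earn more credit when hit.
        pop_penalty = 1.0 / np.log2(2.0 + item_popularity.get(item, 0))
        weight = recency * pop_penalty
        total_weight += weight
        if item in recommended.get(user, [])[:k]:
            score += weight
    return score / total_weight if total_weight > 0 else 0.0
```

Under such a weighting, hitting a rare, recent interaction earns more credit than hitting a popular, old one, which is the property the paper argues correlates better with online performance.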

Uncertainty-adjusted Inductive Matrix Completion with Graph Neural Networks

Authors
Year
2023
Published in
RecSys '23: Proceedings of the 17th ACM Conference on Recommender Systems. New York: Association for Computing Machinery, 2023. p. 1169-1174. ISBN 979-8-4007-0241-9.
Type
Conference paper
Abstract
We propose a robust recommender systems model which performs matrix completion and rating-wise uncertainty estimation jointly. Whilst the prediction module is purely based on an implicit low-rank assumption imposed via nuclear norm regularization, our loss function is augmented by an uncertainty estimation module which learns an anomaly score for each individual rating via a Graph Neural Network: data points deemed more anomalous by the GNN are down-weighted in the loss function used to train the low-rank module. The whole model is trained in an end-to-end fashion, allowing the anomaly detection module to tap into the supervised information available in the form of ratings. Thus, our model's predictors enjoy the favourable generalization properties that come with being chosen from a small function space (i.e., low-rank matrices), whilst exhibiting the robustness to outliers and flexibility that come with deep learning methods. Furthermore, the anomaly scores themselves contain valuable qualitative information. Experiments on various real-life datasets demonstrate that our model outperforms standard matrix completion and other baselines, confirming the usefulness of the anomaly detection module.
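
As a rough illustration of how the two modules interact, the following PyTorch sketch combines a nuclear-norm-regularized reconstruction term with per-rating weights derived from anomaly scores; the function signature, the hyperparameter name, and the assumption that the GNN outputs scores in [0, 1] are mine, not taken from the paper.

```python
# Illustrative sketch of the joint objective: squared error on observed ratings,
# down-weighted by per-rating anomaly scores, plus nuclear-norm regularization.
# The GNN producing `anomaly_scores` and all names here are assumptions.
import torch

def uncertainty_adjusted_loss(pred_matrix, ratings, user_idx, item_idx,
                              anomaly_scores, nuc_weight=0.1):
    """pred_matrix: dense (n_users x n_items) predictions from the low-rank module.
    ratings: observed ratings at positions (user_idx, item_idx).
    anomaly_scores: per-rating scores in [0, 1] from the GNN (1 = most anomalous)."""
    residual = pred_matrix[user_idx, item_idx] - ratings
    # Ratings judged more anomalous by the GNN contribute less to the fit term.
    weights = 1.0 - anomaly_scores
    fit = (weights * residual ** 2).mean()
    # The nuclear norm keeps the completed matrix (approximately) low rank.
    nuclear = torch.linalg.matrix_norm(pred_matrix, ord='nuc')
    return fit + nuc_weight * nuclear
```

In the end-to-end setting described above, the gradient of such a loss would also flow back into the GNN that produces the anomaly scores, letting it exploit the supervision contained in the ratings.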

Scalable Linear Shallow Autoencoder for Collaborative Filtering

Year
2022
Published in
RecSys '22: Proceedings of the 16th ACM Conference on Recommender Systems. New York: Association for Computing Machinery, 2022. p. 604-609. ISBN 978-1-4503-9278-5.
Type
Conference paper
Abstract
Recently, the recommender systems (RS) research community has witnessed a surge in popularity of shallow autoencoder-based collaborative filtering (CF) methods. Thanks to its straightforward implementation and high accuracy on item retrieval metrics, EASE is arguably the most prominent of these models. Despite its accuracy and simplicity, EASE cannot be employed in some real-world recommender system applications because it does not scale to huge interaction matrices. In this paper, we propose ELSA, a scalable shallow autoencoder method for implicit-feedback recommenders. ELSA is a scalable autoencoder in which the hidden layer is factorizable into a low-rank plus sparse structure, thereby drastically lowering memory consumption and computation time. We conducted comprehensive offline experiments combining synthetic and several real-world datasets. We also validated our approach in an online setting by comparing ELSA to baselines in a live recommender system using an A/B test. The experiments demonstrate that ELSA is scalable and achieves competitive performance. Finally, we demonstrate the explainability of ELSA by illustrating the recovered latent space.
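
The scalability claim rests on never materializing a full item-item weight matrix. The sketch below shows the kind of low-rank scoring this enables; the row normalization, the omission of the sparse component, and the absence of any training loop are simplifying assumptions on my part rather than the paper's full method.

```python
# Minimal sketch of low-rank, ELSA-style scoring (sparse component and training
# omitted). A is a hypothetical item-embedding matrix learned elsewhere.
import torch

def elsa_like_scores(interactions: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    """interactions: (batch x n_items) implicit-feedback matrix.
    A: (n_items x latent_dim) item embeddings, L2-normalized per row."""
    A = torch.nn.functional.normalize(A, dim=-1)
    # Equivalent to interactions @ (A @ A.T - I), but the dense
    # n_items x n_items matrix is never built, so memory stays
    # O(n_items * latent_dim) instead of O(n_items ** 2).
    hidden = interactions @ A            # encode into the latent space
    return hidden @ A.T - interactions   # decode and remove the self-interaction term
```

Keeping the computation in the factorized form is what allows this style of model to handle interaction matrices far too large for a dense EASE-like weight matrix.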