CALAUZÈNES Clément
Supervision: Patrick GALLINARI
Co-supervision: Nicolas USUNIER
On the Consistency of Convex Surrogate Formulations for Ranking
This thesis studies machine learning techniques for ranking. Because of the nature of the applications (e.g. web search engines), scalability is a crucial requirement in learning to rank, yet direct optimization of the risk associated with the evaluation metric is usually NP-hard, hence intractable. Consequently, during the optimization step, it is common to substitute for the evaluation metric a surrogate loss that is easier to optimize (e.g. continuous, differentiable, convex). For this substitution to be sound, the risk associated with the surrogate loss must “behave well” with respect to the initial objective, namely the optimization of the evaluation risk. A desirable, if not mandatory, property of the surrogate loss is therefore that optimizing the surrogate risk leads to solutions that are also optimal for the evaluation risk. This property is equivalent to the notion of calibration of the surrogate loss with respect to the evaluation metric. In this thesis, we focus on the various ranking evaluation metrics and on the possibility of designing convex surrogate losses, which can be optimized efficiently, that are calibrated with respect to these metrics.

The central result of this thesis is a theorem that characterizes the ranking evaluation metrics for which calibrated convex surrogate losses exist. On the one hand, it allows us to show that several well-known ranking evaluation metrics, such as the Expected Reciprocal Rank, the Average Precision and the Pairwise Disagreement, admit no calibrated convex surrogate loss; optimizing a convex surrogate loss therefore leads to solutions that are not optimal for these evaluation metrics. On the other hand, we deduce from our main theorem a method that, given an evaluation metric, produces explicit formulae for convex surrogate losses. The resulting convex losses are calibrated with respect to the evaluation metric if (and only if) there exists at least one convex loss calibrated with respect to that metric. Finally, for evaluation metrics whose general structure is similar to the Discounted Cumulative Gain, we prove that the calibration of a surrogate loss induces a guarantee stronger than calibration itself: a regret bound, which relates the evaluation regret of a solution to its surrogate regret.
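To make these two notions concrete, here is a minimal sketch in standard surrogate-risk notation; the symbols Φ (surrogate loss), L (evaluation loss), R_Φ and R_L (their risks), f (a scoring function) and ψ are generic illustrations and are not taken verbatim from the thesis.

% Calibration: any sequence of scoring functions that is asymptotically
% optimal for the surrogate risk is also asymptotically optimal for the
% evaluation risk.
\[
  R_\Phi(f_n) \xrightarrow[n \to \infty]{} \inf_f R_\Phi(f)
  \quad \Longrightarrow \quad
  R_L(f_n) \xrightarrow[n \to \infty]{} \inf_f R_L(f).
\]
% Regret bound: a quantitative strengthening of calibration. There exists a
% non-decreasing function \psi, continuous at 0 with \psi(0) = 0, such that
% for every scoring function f,
\[
  R_L(f) - \inf_g R_L(g) \;\le\; \psi\!\Big( R_\Phi(f) - \inf_g R_\Phi(g) \Big).
\]

A regret bound is strictly more informative than calibration: it quantifies how close to optimal a solution is for the evaluation metric as soon as its surrogate regret is small, instead of only guaranteeing optimality in the limit.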
Defence: 07/19/2013
Jury members:
Francis Bach - INRIA / ENS [Rapporteur]
Eyke Hüllermeier - University of Marburg [Rapporteur]
Stephan Clémençon - Telecom ParisTech
Patrick Gallinari - Université Pierre et Marie Curie
Patrice Perny - Université Pierre et Marie Curie
Nicolas Usunier - Université de Technologie de Compiègne
2011-2013 Publications
2013
- C. Calauzènes: “On the Consistency of Convex Surrogate Formulations for Ranking”, PhD thesis, defended 07/19/2013, supervised by Patrick Gallinari, co-supervised by Nicolas Usunier (2013)
- C. Calauzènes, N. Usunier, P. Gallinari: “Calibration and regret bounds for order-preserving surrogate losses in learning to rank”, Machine Learning, vol. 93 (2-3), pp. 227-260 (Springer Verlag) (2013)
2012
- C. Calauzènes, N. Usunier, P. Gallinari: “On the (Non-)existence of Convex, Calibrated Surrogate Losses for Ranking”, Advances in Neural Information Processing Systems 25, Lake Tahoe, NV, United States, pp. 197-205 (Curran Associates, Inc.) (2012)
2011
- D. Buffoni, C. Calauzènes, P. Gallinari, N. Usunier: “Learning Scoring Functions with Order-Preserving Losses and Standardized Supervision”, The 28th International Conference on Machine Learning (ICML 2011), Bellevue, WA, United States, pp. 825-832 (2011)