XEFTERIS Michalis

PhD student at Sorbonne University
Team : RO
https://perso.lip6.fr/Michail.Xefteris

Supervision : Evripidis BAMPIS, Bruno ESCOFFIER

Online and Approximation Algorithms: Beyond Worst-Case Paradigms

This thesis explores the evolving landscape of algorithm analysis, from traditional worst-case measures to the more recent framework of learning-augmented algorithms. While worst-case analysis has been at the core of advances in theoretical computer science over the past 50 years, there are many real-world problems and algorithms for which it does not provide a convincing explanation. A well-known example is the online paging problem, also known as caching: worst-case (competitive) analysis cannot distinguish between the FIFO and LRU eviction policies, since both achieve the same competitive ratio of k (the cache size), even though LRU is clearly superior in practice.
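
To make the FIFO/LRU contrast concrete, the small simulation below (an illustrative sketch, not taken from the thesis; the cache size and request sequence are invented for the example) counts cache misses of the two eviction policies on a request sequence with strong temporal locality. Both policies have the same worst-case competitive ratio, yet LRU misses noticeably less often here.

from collections import OrderedDict, deque

def fifo_misses(requests, k):
    # Evict the page that entered the cache first.
    cache, order, misses = set(), deque(), 0
    for page in requests:
        if page not in cache:
            misses += 1
            if len(cache) == k:
                cache.remove(order.popleft())
            cache.add(page)
            order.append(page)
    return misses

def lru_misses(requests, k):
    # Evict the page whose most recent access is the oldest.
    cache, misses = OrderedDict(), 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # refresh recency
        else:
            misses += 1
            if len(cache) == k:
                cache.popitem(last=False)  # drop least recently used
            cache[page] = None
    return misses

# Pages 1 and 2 are "hot"; a fresh page appears every third request.
requests = [p for x in range(3, 13) for p in (1, 2, x)]
print("FIFO misses:", fifo_misses(requests, k=3))  # -> 20
print("LRU  misses:", lru_misses(requests, k=3))   # -> 12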

This serves as motivation to explore approaches that go beyond worst-case analysis, with a particular focus on the emerging field of algorithms with predictions (learning-augmented algorithms), inspired by recent progress in the machine learning community. Machine learning models can learn from past data and exploit underlying patterns in the application domain, leading to surprisingly effective solutions. In the learning-augmented framework, one designs algorithms that work in conjunction with a black-box machine learning model, which provides them with predictive information about the input. Since these predictions can be error-prone, the algorithm must handle them with care, a task that introduces various challenges.
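
As a concrete illustration of this pattern, the sketch below applies a well-known rent-or-buy (ski rental) scheme from the learning-augmented literature; it is only a toy example, not one of the problems studied in this thesis, and the cost values and the trade-off parameter lam are invented for illustration. The algorithm commits early when the prediction says the horizon is long and delays otherwise, so an accurate prediction yields near-optimal cost, while an arbitrarily wrong one still costs at most a bounded factor more than the offline optimum.

import math

def rent_or_buy(buy_cost, predicted_days, actual_days, lam=0.5):
    # Trust the prediction, but only up to a point controlled by lam in (0, 1]:
    # smaller lam means more trust (better consistency, weaker robustness).
    if predicted_days >= buy_cost:
        buy_day = math.ceil(lam * buy_cost)   # prediction says "buy": commit early
    else:
        buy_day = math.ceil(buy_cost / lam)   # prediction says "rent": commit late
    if actual_days < buy_day:
        return actual_days                    # season ended before we bought
    return (buy_day - 1) + buy_cost           # rented buy_day - 1 days, then bought

buy_cost = 10
for predicted, actual in [(30, 30), (3, 3), (30, 3), (3, 30)]:
    alg = rent_or_buy(buy_cost, predicted, actual)
    opt = min(buy_cost, actual)               # offline optimum knows the season length
    print(f"prediction={predicted:2d} reality={actual:2d}  ALG={alg:2d}  OPT={opt:2d}")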

The core of our work lies in online algorithms, which must make decisions without complete knowledge of the input sequence. We begin with a purely online problem that illustrates the need to go beyond worst-case analysis. We then explore a new variant of a classical online problem that gives more power to the online algorithm by providing additional information about the input.

Next, we study two different online problems through the lens of learning-augmented algorithms. We design algorithms that use predictions to overcome known computational barriers whenever the predictions are sufficiently accurate. Even when the predictions are inaccurate, our algorithms maintain worst-case guarantees that are close to the best achievable without predictions.

Finally, we introduce learning-augmented approximation schemes for NP-hard optimization problems. We show that even a small number of predictions suffices to improve the running time of well-known approximation algorithms, going beyond known worst-case computational limits.


PhD defence : 12/17/2024

Jury members :

Vianney Perchet, ENSAE [Rapporteur]
Ola Svensson, EPFL [Rapporteur]
Johanne Cohen, Université Paris-Saclay
Christoph Dürr, Sorbonne Université
Nicole Megow, Universität Bremen
Evripidis Bampis, Sorbonne Université
Bruno Escoffier, Sorbonne Université

Departure date : 12/31/2024

2022-2024 Publications

  • 2023
    • E. Bampis, B. Escoffier, N. Hahn, M. Xefteris : “Online TSP with Known Locations”, Algorithms and Data Structures Symposium (WADS), vol. 14079, Lecture Notes in Computer Science, Montreal, Canada, pp. 65-78, (Springer Nature Switzerland) (2023)
    • E. Bampis, B. Escoffier, Th. Gouleakis, N. Hahn, K. Lakis, G. Shahkarami, M. Xefteris : “Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost) Everywhere Else”, 31st Annual European Symposium on Algorithms (ESA 2023), vol. 274, Leibniz International Proceedings in Informatics (LIPIcs), Amsterdam, Netherlands, pp. 12:1-12:17, (Schloss Dagstuhl - Leibniz-Zentrum für Informatik) (2023)
    • N. Hahn, M. Xefteris : “The Covering Canadian Traveller Problem Revisited”, International Symposium on Mathematical Foundations of Computer Science, vol. 272, Leibniz International Proceedings in Informatics (LIPIcs), Bordeaux, France, (Schloss Dagstuhl - Leibniz-Zentrum für Informatik) (2023)
  • 2022
    • E. Bampis, B. Escoffier, M. Xefteris : “Canadian Traveller Problem with Predictions”, 20th International Workshop on Approximation and Online Algorithms, WAOA 2022, vol. 13538, Lecture Notes in Computer Science, Potsdam, Germany, pp. 116-133 (2022)