This thesis lies at the intersection of decision theory, artificial intelligence, and machine learning, focusing on preference learning and modeling over sets. The work addresses the modeling of decision-maker preferences expressed through pairwise comparisons over sets of elements, pursuing two distinct objectives: predicting unobserved preferences and prescribing potentially optimal alternatives. Within this framework, particular attention is paid to preference models with interactions, their theoretical properties, and the practical challenges raised by real-world applications.
The contributions are structured along three complementary axes. The first extends the Robust Ordinal Regression (ROR) method to enhance prediction robustness in the presence of multiple compatible models, particularly when handling positive or negative interactions between elements of a set. The second introduces a hybrid approach combining Gaussian processes and linear programming, establishing a probabilistic framework for managing inconsistencies in preference statements. The third extends the classical additive model by incorporating bilinear terms, enabling the representation of intransitive preferences while preserving the computational tractability of parameter learning.
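To make the third axis concrete, one common way to augment an additive model with bilinear terms is sketched below; the exact formulation is developed in the thesis body, and the symbols used here ($u_i$, $W$) are illustrative assumptions rather than the thesis's own notation:

```latex
% Hypothetical sketch: additive difference model plus a bilinear term.
% u_i : marginal value functions; W : interaction matrix, taken skew-symmetric
% (W = -W^T) so that P(x,y) = -P(y,x), i.e. preferences remain antisymmetric.
P(x, y) \;=\; \sum_{i=1}^{n} \bigl( u_i(x_i) - u_i(y_i) \bigr) \;+\; x^{\top} W y,
\qquad W = -W^{\top}.
```

With $W = 0$ this reduces to the usual additive model, which is necessarily transitive; a nonzero skew-symmetric $W$ can produce cycles such as $P(x,y) > 0$, $P(y,z) > 0$, $P(z,x) > 0$ (a rock-paper-scissors pattern), while the model stays linear in its parameters, which is what keeps learning tractable.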