Distributionally Robust Learning (DRL) is a machine learning framework that aims to ensure robust performance under distribution shift, which matters when the data distribution at test time differs from the training distribution, as it often does in real-world applications. Rather than minimizing risk under a single estimated distribution, DRL optimizes worst-case performance over a set of plausible distributions, known as the uncertainty set; the central challenge is to specify this set tightly enough that the resulting solution is not overly conservative. Shape-constrained approaches address this by encoding prior knowledge about how the target distribution may differ from the estimated one, typically by assuming that the density ratio between the target and estimated distributions is isotonic with respect to some partial order. Empirical studies on both synthetic and real data report improved accuracy for such approaches.
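To make the shape-constrained idea concrete, the sketch below computes a worst-case reweighted risk of the form max over admissible ratios r of (1/n) Σ r_i ℓ_i, where the density ratio r between target and training distributions is assumed nondecreasing along a total order of the samples (for instance, sorted by a covariate along which the shift is believed monotone). This is a minimal illustration under stated assumptions, not the construction from any particular paper: the function name worst_case_risk_isotonic, the upper bound on the ratio, and the linear-programming formulation via scipy.optimize.linprog are all hypothetical choices.

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_risk_isotonic(losses, upper_bound=5.0):
    """Worst-case reweighted risk under an isotonic density-ratio constraint.

    Illustrative sketch (assumed formulation, not a specific paper's method):
        maximize  (1/n) * sum_i r_i * losses[i]
        s.t.      r_1 <= r_2 <= ... <= r_n   (ratio nondecreasing in the order)
                  (1/n) * sum_i r_i = 1      (r averages to one, like a ratio)
                  0 <= r_i <= upper_bound    (assumed cap on the shift)
    """
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    c = -losses  # linprog minimizes, so negate to maximize the weighted risk

    # Isotonicity along the given order: r_i - r_{i+1} <= 0 for consecutive i.
    A_ub = np.zeros((n - 1, n))
    for i in range(n - 1):
        A_ub[i, i] = 1.0
        A_ub[i, i + 1] = -1.0
    b_ub = np.zeros(n - 1)

    # Mean-one constraint so r behaves like a density ratio on average.
    A_eq = np.ones((1, n))
    b_eq = np.array([float(n)])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, upper_bound)] * n)
    r = res.x
    return (r * losses).mean(), r

# Example: per-sample losses sorted by a covariate along which the
# target distribution is assumed to place increasing mass.
rng = np.random.default_rng(0)
losses = np.sort(rng.exponential(size=20))
risk, ratio = worst_case_risk_isotonic(losses)
print(f"worst-case risk: {risk:.3f}")
```

Because the objective and all constraints are linear in r, the inner worst-case problem is a linear program; in a full DRL procedure one would presumably alternate this inner maximization with an outer model update on the reweighted losses.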