Description
Distributional generalization is a form of generalization which roughly states that the outputs of a classifier at train and test time are close as distributions, not merely close in average error. Classical generalization captures only the average error, saying nothing about how errors are distributed over the input domain.
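To make the distinction concrete, here is a minimal synthetic sketch (all labels and error rates are illustrative assumptions, not from any paper): two classifiers with nearly identical average test error whose prediction distributions differ, which average-error metrics alone cannot detect.

```python
import numpy as np

rng = np.random.default_rng(0)
y_test = rng.integers(0, 2, size=1000)  # synthetic binary labels

# Classifier A: flips labels uniformly at ~10% on both classes.
pred_a = y_test.copy()
flip_a = rng.random(1000) < 0.10
pred_a[flip_a] = 1 - pred_a[flip_a]

# Classifier B: flips only class-0 labels at ~20% (~10% overall error
# with balanced classes), so its errors concentrate on one class.
pred_b = y_test.copy()
flip_b = (y_test == 0) & (rng.random(1000) < 0.20)
pred_b[flip_b] = 1 - pred_b[flip_b]

def avg_error(y, p):
    # The quantity classical generalization tracks.
    return np.mean(y != p)

def label_distribution(p):
    # Fraction of each predicted label: one coarse statistic of the
    # output distribution that distributional generalization compares.
    return np.bincount(p, minlength=2) / len(p)

print("avg error A:", avg_error(y_test, pred_a))
print("avg error B:", avg_error(y_test, pred_b))
print("pred dist A:", label_distribution(pred_a))
print("pred dist B:", label_distribution(pred_b))
```

Both classifiers have roughly 10% average error, but classifier B's predicted-label distribution shifts noticeably toward class 1, a difference visible only when outputs are compared as distributions.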
Papers Using This Method
Learning Counterfactual Distributions via Kernel Nearest Neighbors (2024-10-17)
What You See is What You Get: Principled Deep Learning via Distributional Generalization (2022-04-07)
A Distributional Perspective on Actor-Critic Framework (2021-01-01)
Distributional Generalization: A New Kind of Generalization (2020-09-17)