Domain Generalization Vs. ID-OOD Generalization Vs. Domain Adaptation Vs. Robustness Vs. Open-Set Recognition

--

Model reliability in production is getting attention from AI leaders like Google and Facebook, but different terminologies are used to describe a model’s ability to generalize.

In-Distribution (ID) Generalization

We start with the simplest and most popular notion, In-Distribution (ID) Generalization: a dataset is divided into train/validation/test sets, and the performance measured on the held-out test set is taken as the in-distribution generalization.
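As a minimal sketch of that evaluation protocol (a hypothetical toy dataset and a hand-rolled threshold classifier in numpy, not any model from the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: a 1-D feature whose label is its sign.
X = rng.normal(size=1000)
y = (X > 0).astype(int)

# Shuffle, then split 70/15/15 into train/validation/test.
idx = rng.permutation(len(X))
n_train, n_val = int(0.7 * len(X)), int(0.15 * len(X))
train, val, test = np.split(idx, [n_train, n_train + n_val])

# "Train" a threshold classifier: cut at the midpoint of the class means.
threshold = 0.5 * (X[train][y[train] == 1].mean() + X[train][y[train] == 0].mean())

def predict(x):
    return (x >= threshold).astype(int)

# In-distribution generalization = accuracy on the held-out test split,
# which was drawn from the same distribution as the training data.
id_accuracy = (predict(X[test]) == y[test]).mean()
print(f"in-distribution accuracy: {id_accuracy:.2f}")
```

The validation split would normally drive model selection; the test split is touched once, and its score is the ID generalization estimate.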

Domain Generalization and OOD Generalization

Google in [2] describes Out-Of-Distribution generalization as possibly covering two of the four shift types listed under Robust Generalization in [1]: covariate shift and sub-population shift.

“In contrast to OOD generalization where the test example belongs to the same in-distribution training classes.” and also “In the real world, we care not only about metrics on new data obtained from the same distribution the model was trained on (i.i.d.), but also about robustness, as measured by metrics on data under out-of-distribution shifts such as covariate or subpopulation shift.” [2]

Facebook in [3] describes Domain Generalization as a classifier’s ability to accurately predict samples from the same class but from a different data source, as shown in the image below from [3]. This is similar to Google’s description of Out-Of-Distribution Generalization, especially for covariate shift. In fact, Google in [1] gives an example of covariate shift (training on dog images and predicting on dog drawings) which is similar in nature to the datasets Facebook used in [3].

Facebook’s Domain Generalization datasets from [3]: each row is a combination of datasets, and each dataset is considered a domain.

“Covariate shift refers to scenarios where the distribution of inputs changes while the conditional distribution of outputs is unchanged (Sugiyama and Kawanabe, 2012). For example, the training set may include natural dog images and the new input is a drawing of a dog. We use the same metrics as those used for assessing in-distribution generalization.” [1]
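The quoted definition can be sketched numerically: keep the labeling rule P(y|x) fixed, shift only the input distribution, and compute the same accuracy metric on both. Everything here (the labeling rule, the two Gaussians, the single-threshold model) is a hypothetical toy construction, not data from [1]:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed conditional P(y|x): the label is 1 whenever |x| > 1.
label = lambda x: (np.abs(x) > 1).astype(int)

# Covariate shift: training inputs from N(-2, 0.5), new inputs from N(+2, 0.5).
x_train = rng.normal(-2.0, 0.5, size=4000)
x_shift = rng.normal(+2.0, 0.5, size=4000)
y_train = label(x_train)

# Fit the best single-threshold rule "y = 1 iff x < t" on the training data.
candidates = np.linspace(x_train.min(), x_train.max(), 200)
accs = [((x_train < t).astype(int) == y_train).mean() for t in candidates]
t_best = candidates[int(np.argmax(accs))]
predict = lambda x: (x < t_best).astype(int)

# Same metric (accuracy), computed in-distribution and under covariate shift.
id_acc = (predict(x_train) == y_train).mean()
ood_acc = (predict(x_shift) == label(x_shift)).mean()
print(f"ID accuracy: {id_acc:.2f}, covariate-shift accuracy: {ood_acc:.2f}")
```

The model's rule is only locally right (it learned "x < -1" instead of "|x| > 1"), so accuracy collapses on the shifted inputs even though P(y|x) never changed.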

All in all, the Domain Generalization term used by Facebook in [3] and the Out-of-Distribution Generalization term used by Google in [1] and [2] describe the same model quality. Perhaps the only difference is that Google dissects it into covariate and sub-population shifts, whereas Facebook’s Domain Generalization most likely means generalization on covariate-shifted data.

Robustness

Google in [1] defines Out-of-Distribution (OOD) Generalization in terms of four types of shift and describes a model’s ability to perform well on all four as “Robust Generalization”.

“Robust Generalization involves an estimate or forecast about an unseen event. We investigate four types of out-of-distribution data: covariate shift (when the input distribution changes between training and application and the output distribution is unchanged), semantic (or class) shift, label uncertainty, and subpopulation shift.” [1]

Furthermore, Facebook in [4] defines Robustness as a model’s resilience to adversarial attacks, and Google in [5] presents the same concept under “Robust Generalization” as well.
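As a toy illustration of what an adversarial attack does, here is a minimal FGSM-style sketch on a hand-set linear classifier (the weights, input, and step size are all hypothetical; real attacks like those in [4] compute input gradients through a neural network):

```python
import numpy as np

# A toy linear classifier: positive score -> class 1. Weights are hand-set.
w = np.array([1.5, -2.0])
b = 0.1
score = lambda x: x @ w + b

x = np.array([0.8, 0.2])  # a correctly classified input of true class 1
y = 1
clean_pred = int(score(x) > 0)

# FGSM-style step: for logistic loss on a class-1 example, the sign of the
# gradient of the loss w.r.t. the input is -sign(w), so step epsilon that way.
eps = 0.5
x_adv = x - eps * np.sign(w)

adv_pred = int(score(x_adv) > 0)
print(f"clean prediction: {clean_pred}, adversarial prediction: {adv_pred}")
```

A small, targeted perturbation (here, 0.5 per coordinate) flips the prediction, which is exactly the brittleness that robustness benchmarks measure.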

Google in [5] presented the ViT-Plex model announced in [1] and demonstrated its Robust Generalization in the context of covariate shift.

All in all, Robustness seems to be agreeably defined as a model’s resilience to adversarial attacks. Google presented adversarial attacks in the context of covariate shift, which implies that a model resilient to adversarial attacks will most likely also have good OOD generalization on covariate-shifted data. However, while Google included uncertainty in [1] as part of Robust Generalization, in [5] they gave it a separate section. Moreover, Google in [2] (the research paper associated with [1] and [5]) describes Robustness as “Robustness to Spurious Correlations”. So, all in all, if a model is resilient to adversarial attacks, it will be robust to spurious correlations and it will have good OOD/Domain Generalization on covariate shifts.

Domain Adaptation

Google in [1] defines Domain Adaptation as including active learning, one-shot learning, and zero-shot performance.

Facebook in [6] describes it as:

“Domain generalization differs from unsupervised domain adaptation. In the latter, it is assumed that unlabeled data from the test domain is available during training” [6].
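The quoted distinction is about what data is available at training time. A minimal sketch of why it matters, using hypothetical toy domains in numpy (the domains, labels, and mean/std alignment step are illustrative assumptions, not the method of [6]):

```python
import numpy as np

rng = np.random.default_rng(3)

# Source domain: labeled. Target domain: inputs only at training time.
x_src = rng.normal(0.0, 1.0, size=3000)
y_src = (x_src > 0.0).astype(int)
x_tgt = rng.normal(5.0, 2.0, size=3000)   # unlabeled during training
y_tgt = (x_tgt > 5.0).astype(int)         # held out, used only for evaluation

cutoff = x_src.mean()                      # decision boundary fit on source
predict = lambda x: (x > cutoff).astype(int)

# Domain generalization setting: apply the source model to the target as-is.
dg_acc = (predict(x_tgt) == y_tgt).mean()

# Unsupervised domain adaptation (toy version): align the target inputs'
# mean/std to the source statistics before predicting. This is possible
# only because unlabeled target data is available during training.
x_aligned = (x_tgt - x_tgt.mean()) / x_tgt.std() * x_src.std() + x_src.mean()
uda_acc = (predict(x_aligned) == y_tgt).mean()
print(f"no adaptation: {dg_acc:.2f}, after alignment: {uda_acc:.2f}")
```

Even this crude marginal alignment rescues accuracy on the toy target domain, while the pure domain-generalization model, which never saw target inputs, cannot exploit that information.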

Conclusion

All in all, the Domain Generalization term used by Facebook in [3] and the Out-of-Distribution Generalization term used by Google in [1] and [2] describe the same model quality. Perhaps the only difference is that Google dissects it into covariate and sub-population shifts, whereas Facebook’s Domain Generalization most likely means generalization on covariate-shifted data. Also, if a model is resilient to adversarial attacks, it will be robust to spurious correlations and it will have good OOD/Domain Generalization on covariate shifts.

Google’s KPI for reliability [1]

[1] https://ai.googleblog.com/2022/07/towards-reliability-in-deep-learning.html

[2] https://arxiv.org/pdf/2207.07411.pdf

[3] https://github.com/facebookresearch/DomainBed

[4] https://captum.ai/tutorials/CIFAR_Captum_Robustness

[5] https://colab.research.google.com/github/google/uncertainty-baselines/blob/main/experimental/plex/plex_vit_demo.ipynb

[6] https://arxiv.org/pdf/2007.01434.pdf

--


Emad Ezzeldin, Sr. Data Scientist @ UnitedHealthGroup


Data Scientist with five years of experience and an MSc in Data Analytics from George Mason University. I enjoy experimenting with data science tools. emad.ezzeldin4@gmail.com