Abstract: Does the dominant approach to learning representations (as a side effect of optimizing an expected cost for a single training distribution) remain a good approach when we are dealing with multiple distributions? Our thesis is that such scenarios are better served by representations that are richer than those obtained with a single optimization episode. We support this thesis with simple theoretical arguments and with experiments utilizing an apparently naïve ensembling technique: concatenating the representations obtained from multiple training episodes using the same data, model, algorithm, and hyper-parameters, but different random seeds. These independently trained networks perform similarly. Yet, in a number of scenarios involving new distributions, the concatenated representation performs substantially better than an equivalently sized network trained with a single training run. This proves that the representations constructed by multiple training episodes are in fact different. Although their concatenation carries little additional information about the training task under the training distribution, it becomes substantially more informative when tasks or distributions change. Meanwhile, a single training episode is unlikely to yield such a redundant representation because the optimization process has no reason to accumulate features that do not incrementally improve the training performance.
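The following is a minimal sketch of the concatenation technique the abstract describes: train several identical feature extractors that differ only in their random seed, then concatenate their representations for reuse on new tasks or distributions. It is not the paper's released code; the toy data, architecture, training loop, and PyTorch framework choice are illustrative assumptions.

import torch
import torch.nn as nn

def train_extractor(x, y, seed, feat_dim=16, epochs=200):
    # Same data, model, algorithm, and hyper-parameters for every episode;
    # only the random seed changes.
    torch.manual_seed(seed)
    feat = nn.Sequential(nn.Linear(x.shape[1], 32), nn.ReLU(), nn.Linear(32, feat_dim))
    head = nn.Linear(feat_dim, 2)  # training-task classifier, discarded afterwards
    opt = torch.optim.Adam(list(feat.parameters()) + list(head.parameters()), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(head(feat(x)), y)
        loss.backward()
        opt.step()
    return feat

# Toy training set (stand-in for the real training distribution).
torch.manual_seed(123)
x = torch.randn(256, 8)
y = (x[:, 0] + 0.1 * torch.randn(256) > 0).long()

# Independently trained networks that perform similarly on the training task.
extractors = [train_extractor(x, y, seed) for seed in range(4)]

with torch.no_grad():
    # The concatenated representation has dimension K * feat_dim and would be
    # reused, e.g. through a linear probe, when the task or distribution shifts.
    rich_features = torch.cat([f(x) for f in extractors], dim=1)
print(rich_features.shape)  # torch.Size([256, 64])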
@inproceedings{zhang-2023,
  title = {Learning useful representations for shifting tasks and distributions},
  author = {Zhang, Jianyu and Bottou, L{\'e}on},
  booktitle = {International Conference on Machine Learning},
  pages = {40830--40850},
  year = {2023},
  organization = {PMLR},
  url = {http://leon.bottou.org/papers/zhang-2023},
}