UCL IRIS
Institutional Research Information Service
Publication Detail
Non-vacuous generalization bounds at the ImageNet scale: A PAC-Bayesian compression approach
  • Publication Type:
    Conference
  • Authors:
    Zhou W, Veitch V, Austern M, Adams RP, Orbanz P
  • Publication date:
    09/05/2019
  • Published proceedings:
    7th International Conference on Learning Representations, ICLR 2019
  • Status:
    Published
  • Name of conference:
    ICLR 2019 - International Conference on Learning Representations
  • Conference place:
    New Orleans, Louisiana, United States
  • Conference start date:
    06/05/2019
  • Conference finish date:
    09/05/2019
Abstract
Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be “compressed” to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on the compressed size that, combined with off-the-shelf compression algorithms, leads to state-of-the-art generalization guarantees. In particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. Additionally, we show that compressibility of models that tend to overfit is limited. Empirical results show that an increase in overfitting increases the number of bits required to describe a trained network.
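For context, the shape of a compression-based guarantee can be sketched with a standard Occam-style PAC argument. This simplified finite-description form is an illustration only, not the paper's exact theorem (the paper uses a PAC-Bayes relaxation over stochastic networks): if a fixed coding scheme describes a trained hypothesis h in c(h) bits, then with probability at least 1 - \delta over an i.i.d. training sample of size n, simultaneously for all codable h,

    L(h) \le \hat{L}(h) + \sqrt{ \frac{c(h)\,\ln 2 + \ln(1/\delta)}{2n} }

where L(h) is the true risk and \hat{L}(h) the empirical risk. The dependence on c(h) is why aggressive compression can turn an otherwise vacuous guarantee non-vacuous. A minimal numeric sketch of the slack term in Python follows; the function name and all numbers are hypothetical, not figures from the paper:

    import math

    def occam_bound(emp_risk, bits, n, delta=0.05):
        """Occam-style bound: empirical risk plus a slack term that
        grows with the description length in bits and shrinks with
        the sample size n."""
        slack = math.sqrt((bits * math.log(2) + math.log(1 / delta)) / (2 * n))
        return emp_risk + slack

    # Hypothetical numbers: a network compressed to ~500 kilobits,
    # evaluated with n = 1,200,000 training examples.
    print(occam_bound(emp_risk=0.1, bits=5e5, n=1_200_000))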
Publication data is maintained in RPS. Visit https://rps.ucl.ac.uk
UCL Researchers
Author
Gatsby Computational Neuroscience Unit