UCL IRIS
Institutional Research Information Service
Publication Detail
Two-stage Sampled Learning Theory on Distributions
  • Publication Type:
    Conference
  • Authors:
Szabó Z, Gretton A, Póczos B, Sriperumbudur B
  • Pagination:
    948–957
  • Status:
    Published
  • Name of conference:
    International Conference on Artificial Intelligence and Statistics (AISTATS)
  • Conference place:
    San Diego, California, USA
  • Conference start date:
    09/05/2015
  • Conference finish date:
    12/05/2015
  • Keywords:
    consistency, convergence rate, distribution regression, mean embedding, set kernel, two-stage sampling
  • Notes:
    Online proceedings: http://jmlr.org/proceedings/papers/v38/szabo15.pdf, http://jmlr.org/proceedings/papers/v38/szabo15-supp.pdf
Abstract
We focus on the distribution regression problem: regressing to a real-valued response from a probability distribution. Although there exist a large number of similarity measures between distributions, very little is known about their generalization performance in specific learning tasks. Learning problems formulated on distributions have an inherent two-stage sampled difficulty: in practice only samples from sampled distributions are observable, and one has to build an estimate on similarities computed between sets of points. To the best of our knowledge, the only existing method with consistency guarantees for distribution regression requires kernel density estimation as an intermediate step (which suffers from slow convergence issues in high dimensions), and the domain of the distributions to be compact Euclidean. In this paper, we provide theoretical guarantees for a remarkably simple algorithmic solution to the distribution regression problem: embed the distributions to a reproducing kernel Hilbert space, and learn a ridge regressor from the embeddings to the outputs. Our main contribution is to prove the consistency of this technique in the two-stage sampled setting under mild conditions (on separable, topological domains endowed with kernels). As a special case, we establish the consistency of the classical set kernel [Haussler, 1999; Gärtner et al., 2002] in regression (a 15-year-old open question), and cover more recent kernels on distributions, including those due to [Christmann and Steinwart, 2010].
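As a rough illustration of the technique analysed in the abstract, the sketch below (our own, not the authors' code) embeds each bag of samples via its empirical kernel mean embedding and fits a kernel ridge regressor on the resulting set kernel. The Gaussian base kernel, bandwidth sigma, ridge parameter lam, and the toy data are illustrative assumptions, not choices taken from the paper.

```python
# Minimal sketch of two-stage sampled distribution regression:
# mean embedding + kernel ridge regression (illustrative parameters).
import numpy as np

def base_kernel(X, Y, sigma=1.0):
    """Gaussian kernel matrix between sample sets X (m, d) and Y (n, d)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

def set_kernel(bags_a, bags_b, sigma=1.0):
    """K[i, j] = mean base-kernel value between bag i and bag j,
    i.e. the inner product of the two empirical mean embeddings."""
    K = np.empty((len(bags_a), len(bags_b)))
    for i, A in enumerate(bags_a):
        for j, B in enumerate(bags_b):
            K[i, j] = base_kernel(A, B, sigma).mean()
    return K

def fit_ridge(train_bags, y, lam=1e-3, sigma=1.0):
    """Kernel ridge regression on the embeddings: alpha = (K + lam*l*I)^{-1} y."""
    K = set_kernel(train_bags, train_bags, sigma)
    l = len(train_bags)
    return np.linalg.solve(K + lam * l * np.eye(l), y)

def predict(test_bags, train_bags, alpha, sigma=1.0):
    """Predicted responses for new bags: k(test, train) @ alpha."""
    return set_kernel(test_bags, train_bags, sigma) @ alpha

# Toy usage: regress each bag's Gaussian scale parameter from its samples.
rng = np.random.default_rng(0)
scales = rng.uniform(0.5, 2.0, size=30)
bags = [rng.normal(0.0, s, size=(50, 2)) for s in scales]   # two-stage sampling
alpha = fit_ridge(bags[:25], scales[:25])
print(predict(bags[25:], bags[:25], alpha))                  # compare with scales[25:]
```

The two-stage structure shows up in the data generation: each distribution is itself only observed through a finite bag of samples, and the set kernel is computed from those bags rather than from the true distributions.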
Publication data is maintained in RPS. Visit https://rps.ucl.ac.uk
UCL Researchers
Author
Gatsby Computational Neuroscience Unit