Publication Detail
Scalable transformed additive signal decomposition by non-conjugate Gaussian process inference
Publication Type: Conference
Authors: Adam V, Hensman J, Sahani M
Publisher: IEEE
Publication date: 10/11/2016
Published proceedings: 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP)
Series: IEEE International Workshop on Machine Learning for Signal Processing
Status: Published
Name of conference: 26th IEEE International Workshop on Machine Learning for Signal Processing (MLSP)
Conference place: Salerno, Italy
Conference start date: 13/09/2016
Conference finish date: 16/09/2016
Print ISSN: 2161-0363
Language: English
Keywords: Science & Technology, Technology, Engineering, Electrical & Electronic, Engineering
Abstract
Many functions and signals of interest are formed by the addition of multiple underlying components, often nonlinearly transformed and modified by noise. Examples may be found in the literature on Generalized Additive Models [1] and Underdetermined Source Separation [2] or other mode decomposition techniques. Recovery of the underlying component processes often depends on finding and exploiting statistical regularities within them. Gaussian Processes (GPs) [3] have become the dominant way to model statistical expectations over functions. Recent advances make inference of the GP posterior efficient for large scale datasets and arbitrary likelihoods [4,5]. Here we extend these methods to the additive GP case [6, 7], thus achieving scalable marginal posterior inference over each latent function in settings such as those above.
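To make the setting concrete, the sketch below (not taken from the paper) simulates the kind of generative model the abstract describes: two latent functions drawn from Gaussian processes with different kernels are added, passed through a nonlinear transformation, and observed under noise. The squared-exponential kernels, the softplus link, and all hyperparameter values are illustrative assumptions rather than choices made by the authors, and the inference method itself is not shown here.

# Illustrative sketch (not the authors' code): a signal formed by adding two
# latent GP components, transforming the sum nonlinearly, and adding noise.
# Kernels, the softplus link, and hyperparameters are assumptions.
import numpy as np

def rbf_kernel(x, lengthscale, variance):
    """Squared-exponential covariance matrix for 1-D inputs x."""
    d = x[:, None] - x[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)

# Two latent components with different statistical regularities:
# a slowly varying trend and a faster-varying component.
jitter = 1e-6 * np.eye(len(x))
K_slow = rbf_kernel(x, lengthscale=3.0, variance=1.0)
K_fast = rbf_kernel(x, lengthscale=0.3, variance=0.5)
f_slow = rng.multivariate_normal(np.zeros(len(x)), K_slow + jitter)
f_fast = rng.multivariate_normal(np.zeros(len(x)), K_fast + jitter)

# Nonlinear transformation of the additive latent signal, then observation noise.
softplus = lambda a: np.log1p(np.exp(a))
y = softplus(f_slow + f_fast) + 0.05 * rng.standard_normal(len(x))

Recovering f_slow and f_fast from y alone is the marginal posterior inference problem the paper addresses at scale.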