A conditional one-output likelihood formulation for multitask Gaussian processes

Authors

Abstract

Multitask Gaussian processes (MTGP) are the Gaussian process (GP) framework's solution for multioutput regression problems in which the T elements of the regressors cannot be considered conditionally independent given the observations. Standard MTGP models assume that there exist both a multitask covariance matrix, defined as a function of an intertask matrix, and a noise covariance matrix. These matrices need to be approximated by a low-rank simplification of order P in order to reduce the number of parameters to be learnt from T² to TP. Here we introduce a novel approach that simplifies the multitask learning by reducing it to a set of conditioned univariate GPs without any low-rank approximations, therefore completely eliminating the need to select an adequate value for the hyperparameter P. At the same time, by extending this approach with a hierarchical and an approximate model, the proposed extensions are capable of recovering the multitask covariance and noise matrices after learning only 2T parameters, avoiding the validation of any model hyperparameter and reducing the overall complexity of the model as well as the risk of overfitting. Experimental results over synthetic and real problems confirm the advantages of this inference approach in its ability to accurately recover the original noise and signal matrices, as well as the achieved performance improvement in comparison with other state-of-the-art approaches. We have also integrated the model with standard GP toolboxes, showing that it is computationally competitive with state-of-the-art options.
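The core idea of conditioning univariate GPs on one another can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the chain factorisation p(y_1, ..., y_T | X) = Π_t p(y_t | X, y_1, ..., y_{t-1}) and uses scikit-learn's generic `GaussianProcessRegressor`; the toy data and kernel choices are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy multitask data: T = 3 correlated outputs of a shared latent function.
X = rng.uniform(-3, 3, size=(80, 1))
latent = np.sin(X).ravel()
Y = np.stack(
    [latent + 0.5 * t * latent + 0.1 * (t + 1) * rng.normal(size=80)
     for t in range(3)],
    axis=1,
)

# Chain of conditioned univariate GPs: task t is regressed on the inputs
# plus the preceding tasks' outputs, following the assumed factorisation
# p(y_1, ..., y_T | X) = prod_t p(y_t | X, y_1, ..., y_{t-1}).
models = []
for t in range(Y.shape[1]):
    feats = np.hstack([X, Y[:, :t]])  # inputs + previous tasks' outputs
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(feats, Y[:, t])
    models.append(gp)

# Sequential prediction: feed each task's posterior mean into the
# feature set of the next task in the chain.
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
preds = []
for gp in models:
    feats = np.hstack([X_test] + [p.reshape(-1, 1) for p in preds])
    preds.append(gp.predict(feats))
```

Each univariate GP here has O(1) task-level hyperparameters, so the chain avoids learning a full T × T intertask matrix, which is the spirit of the reduction described in the abstract.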


Related articles

Collaborative Multi-output Gaussian Processes

We introduce the collaborative multi-output Gaussian process (GP) model for learning dependent tasks with very large datasets. The model fosters task correlations by mixing sparse processes and sharing multiple sets of inducing points. This facilitates the application of variational inference and the derivation of an evidence lower bound that decomposes across inputs and outputs. We learn all t...

The Rate of Entropy for Gaussian Processes

In this paper, we show that in order to obtain the Tsallis entropy rate for stochastic processes, we can use the limit of conditional entropy, as was done for the Shannon and Renyi entropy rates. Using this, we obtain the Tsallis entropy rate for stationary Gaussian processes. Finally, we derive the relation between the Renyi, Shannon and Tsallis entropy rates for stationary Gaussian proc...

Maximum-likelihood Identification of Sampled Gaussian Processes

This work considers sampled data of continuous-domain Gaussian processes. We derive a maximum-likelihood estimator for identifying autoregressive moving average parameters while incorporating the sampling process into the problem formulation. The proposed identification approach introduces exponential models for both the continuous and the sampled processes. We construct a likelihood function f...

Computationally Efficient Convolved Multiple Output Gaussian Processes

Recently there has been an increasing interest in regression methods that deal with multiple outputs. This has been motivated partly by frameworks like multitask learning, multisensor networks or structured output data. From a Gaussian processes perspective, the problem reduces to specifying an appropriate covariance function that, whilst being positive semi-definite, captures the dependencies ...

Empirical Likelihood Approach for Non-Gaussian Locally Stationary Processes

An application of the empirical likelihood method to non-Gaussian locally stationary processes is presented. Based on the central limit theorem for locally stationary processes, we calculate the asymptotic distribution of the empirical likelihood ratio statistics. It is shown that the empirical likelihood method enables us to make inferences on various important indices in time series analysis. Furthermore,...

Journal

Journal: Neurocomputing

Year: 2022

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2022.08.064