# The role of noise in GPs

We show the difference between "noisy" and "noiseless" predictions in terms of the posterior predictive covariance matrix. When a noise model is learned ($\sigma_n^2 > 0$, e.g. using a `WhiteKernel` component in sklearn), there are two flavors of that covariance matrix. Borrowing the GPy library's naming scheme, we have

- `predict_noiseless`: $\mathrm{cov}(f)$

- `predict`: $\mathrm{cov}(f) + \sigma_n^2\,I$

where $\mathrm{cov}(f)$ is the posterior predictive covariance matrix ([RW06] eq. 2.24). These lead to different uncertainty estimates and posterior samples, as will be shown.
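As a minimal sketch, both flavors can be computed directly from [RW06] eq. 2.24 with NumPy. The RBF kernel, the toy data, and the fixed noise variance `sigma_n2` below are illustrative assumptions (in practice $\sigma_n^2$ would be learned, e.g. via a `WhiteKernel`):

```python
import numpy as np

def rbf(X1, X2, length_scale=1.0):
    # Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(0)
sigma_n2 = 0.1  # assumed noise variance (would normally be learned)

# Toy training data: noisy observations of a sine function
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, np.sqrt(sigma_n2), size=20)
Xs = np.linspace(-3, 3, 50)[:, None]  # test inputs

# Posterior predictive (eq. 2.24), using a Cholesky solve for stability
K = rbf(X, X) + sigma_n2 * np.eye(len(X))  # K(X, X) + sigma_n^2 I
Ks = rbf(X, Xs)
Kss = rbf(Xs, Xs)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks.T @ alpha
v = np.linalg.solve(L, Ks)

cov_f = Kss - v.T @ v                        # "predict_noiseless": cov(f)
cov_y = cov_f + sigma_n2 * np.eye(len(Xs))   # "predict": cov(f) + sigma_n^2 I
```

The two matrices differ only on the diagonal, so noiseless and noisy predictions share the same mean but `cov_y` yields wider error bars and rougher posterior samples.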
