Implementations

Latent([mean_func, cov_func])

Latent Gaussian process.

Marginal([mean_func, cov_func])

Marginal Gaussian process.

LatentKron([mean_func, cov_funcs])

Latent Gaussian process whose covariance is a tensor product kernel.

MarginalKron([mean_func, cov_funcs])

Marginal Gaussian process whose covariance is a tensor product kernel.

MarginalSparse([mean_func, cov_func, approx])

Approximate marginal Gaussian process.

TP([mean_func, cov_func, nu])

Student’s T process prior.

class pymc3.gp.gp.Latent(mean_func=<pymc3.gp.mean.Zero object>, cov_func=<pymc3.gp.cov.Constant object>)

Latent Gaussian process.

The gp.Latent class is a direct implementation of a GP. No additive noise is assumed. It is called “Latent” because the underlying function values are treated as latent variables. It has a prior method and a conditional method. Given a mean and covariance function, the function \(f(x)\) is modeled as,

\[f(x) \sim \mathcal{GP}\left(\mu(x), k(x, x')\right)\]

Use the prior and conditional methods to actually construct random variables representing the unknown, or latent, function whose distribution is the GP prior or GP conditional. This GP implementation can be used to implement regression on data that is not normally distributed. For more information on the prior and conditional methods, see their docstrings.
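Because the function values are explicit latent variables, they can feed any link function and likelihood. Below is a minimal sketch of Poisson regression on a latent GP; the inputs and counts are hypothetical toy data.

import numpy as np
import pymc3 as pm

# Hypothetical toy data for a non-normal (Poisson) regression.
X = np.linspace(0, 1, 10)[:, None]
y = np.random.poisson(2.0, size=10)

with pm.Model() as model:
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)
    gp = pm.gp.Latent(cov_func=cov_func)
    f = gp.prior("f", X=X)
    # An exponential link maps the latent function to a positive Poisson rate.
    y_ = pm.Poisson("y", mu=pm.math.exp(f), observed=y)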

Parameters
cov_func: None, 2D array, or instance of Covariance

The covariance function. Defaults to zero.

mean_func: None, instance of Mean

The mean function. Defaults to zero.

Examples

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]

with pm.Model() as model:
    # Specify the covariance function.
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.Latent(cov_func=cov_func)

    # Place a GP prior over the function f.
    f = gp.prior("f", X=X)

...

# After fitting or sampling, specify the distribution
# at new points with .conditional
Xnew = np.linspace(-1, 2, 50)[:, None]

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
conditional(name, Xnew, given=None, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew.

Given a set of function values f that the GP prior was placed over, the conditional distribution over a set of new points, f_*, is

\[f_* \mid f, X, X_* \sim \mathcal{GP}\left( K(X_*, X) K(X, X)^{-1} f \,, K(X_*, X_*) - K(X_*, X) K(X, X)^{-1} K(X, X_*) \right)\]
Parameters
name: string

Name of the random variable

Xnew: array-like

Function input values.

given: dict

Can optionally take as key-value pairs: X, f, and gp. See the section in the documentation on additive GP models in PyMC3 for more information.

**kwargs

Extra keyword arguments that are passed to MvNormal distribution constructor.

prior(name, X, reparameterize=True, **kwargs)

Returns the GP prior distribution evaluated over the input locations X.

This is the prior probability over the space of functions described by its mean and covariance function.

\[f \mid X \sim \text{MvNormal}\left( \mu(X), k(X, X') \right)\]
Parameters
name: string

Name of the random variable

X: array-like

Function input values.

reparameterize: bool

Reparameterize the distribution by rotating the random variable by the Cholesky factor of the covariance matrix; a sketch of this draw follows the parameter list.

**kwargs

Extra keyword arguments that are passed to distribution constructor.
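When reparameterize=True, the prior is drawn in non-centered form: iid standard normals are rotated by the Cholesky factor of the covariance matrix, which often improves sampling. A numpy sketch, assuming the mean vector mu_X and kernel matrix K_XX have already been evaluated:

import numpy as np

def draw_f(mu_X, K_XX, rng=np.random.default_rng(0)):
    # f = mu(X) + L v, with v ~ N(0, I) and L the Cholesky factor of K(X, X).
    L = np.linalg.cholesky(K_XX + 1e-8 * np.eye(len(K_XX)))  # jitter for stability
    v = rng.standard_normal(len(K_XX))
    return mu_X + L @ v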

class pymc3.gp.gp.LatentKron(mean_func=<pymc3.gp.mean.Zero object>, cov_funcs=<pymc3.gp.cov.Constant object>)

Latent Gaussian process whose covariance is a tensor product kernel.

The gp.LatentKron class is a direct implementation of a GP with a Kronecker structured covariance, without reference to any noise or specific likelihood. The GP is constructed with the prior method, and the conditional GP over new input locations is constructed with the conditional method. For more information on these methods, see their docstrings. This GP implementation can be used to model a Gaussian process whose inputs cover evenly spaced grids on more than one dimension. LatentKron relies on the KroneckerNormal distribution; see its docstring for more information.
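The tensor product structure means the covariance over the full grid cartesian(X1, X2) is the Kronecker product of the per-input kernel matrices. A numpy sketch with a toy squared-exponential kernel standing in for pm.gp.cov.ExpQuad:

import numpy as np

def expquad(X, ls):
    # Toy one-dimensional squared-exponential kernel matrix.
    d = X - X.T
    return np.exp(-0.5 * (d / ls) ** 2)

X1 = np.linspace(0, 1, 10)[:, None]
X2 = np.linspace(0, 2, 5)[:, None]
K = np.kron(expquad(X1, 0.1), expquad(X2, 0.3))  # (50, 50), covers cartesian(X1, X2)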

Parameters
cov_funcs: list of Covariance objects

The covariance functions that compose the tensor (Kronecker) product. Defaults to [zero].

mean_func: None, instance of Mean

The mean function. Defaults to zero.

Examples

# One dimensional column vectors of inputs
X1 = np.linspace(0, 1, 10)[:, None]
X2 = np.linspace(0, 2, 5)[:, None]
Xs = [X1, X2]
with pm.Model() as model:
    # Specify the covariance functions for each Xi
    cov_func1 = pm.gp.cov.ExpQuad(1, ls=0.1)  # Must accept X1 without error
    cov_func2 = pm.gp.cov.ExpQuad(1, ls=0.3)  # Must accept X2 without error

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.LatentKron(cov_funcs=[cov_func1, cov_func2])

    # Place a GP prior over the function f.
    f = gp.prior("f", Xs=Xs)

    # ...

# After fitting or sampling, specify the distribution
# at new points with .conditional
# Xnew need not be on a full grid
Xnew1 = np.linspace(-1, 2, 10)[:, None]
Xnew2 = np.linspace(0, 3, 10)[:, None]
Xnew = np.concatenate((Xnew1, Xnew2), axis=1)  # Not full grid, works
Xnew = pm.math.cartesian(Xnew1, Xnew2)  # Full grid, also works

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
conditional(name, Xnew, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew.

Xnew will be split by columns and fed to the relevant covariance functions based on their input_dim. For example, if cov_func1, cov_func2, and cov_func3 have input_dim of 2, 1, and 4, respectively, then Xnew must have 7 columns, and the covariance between the prediction points is

cov_func(Xnew) = cov_func1(Xnew[:, :2]) * cov_func2(Xnew[:, 2:3]) * cov_func3(Xnew[:, 3:])

The distribution returned by conditional does not have a Kronecker structure regardless of whether the input points lie on a full grid. Therefore, Xnew does not need to have grid structure.

Parameters
name: string

Name of the random variable

Xnew: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

**kwargs

Extra keyword arguments that are passed to MvNormal distribution constructor.

prior(name, Xs, **kwargs)

Returns the prior distribution evaluated over the input locations Xs.

Parameters
name: string

Name of the random variable

Xs: list of array-like

Function input values for each covariance function. Each entry must be passable to its respective covariance without error. The total covariance function is measured on the full grid cartesian(*Xs).

**kwargs

Extra keyword arguments that are passed to the KroneckerNormal distribution constructor.

class pymc3.gp.gp.Marginal(mean_func=<pymc3.gp.mean.Zero object>, cov_func=<pymc3.gp.cov.Constant object>)

Marginal Gaussian process.

The gp.Marginal class is an implementation of the sum of a GP prior and additive noise. It has marginal_likelihood, conditional and predict methods. This GP implementation can be used to implement regression on data that is normally distributed. For more information on the marginal_likelihood and conditional methods, see their docstrings.

Parameters
cov_func: None, 2D array, or instance of Covariance

The covariance function. Defaults to zero.

mean_func: None, instance of Mean

The mean function. Defaults to zero.

Examples

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]
y = np.random.randn(10)  # toy data

with pm.Model() as model:
    # Specify the covariance function.
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.Marginal(cov_func=cov_func)

    # The marginal likelihood of the data, given the GP prior and noise.
    sigma = pm.HalfCauchy("sigma", beta=3)
    y_ = gp.marginal_likelihood("y", X=X, y=y, noise=sigma)

...

# After fitting or sampling, specify the distribution
# at new points with .conditional
Xnew = np.linspace(-1, 2, 50)[:, None]

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
conditional(name, Xnew, pred_noise=False, given=None, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew.

Given a set of function values f that the GP prior was placed over, the conditional distribution over a set of new points, f_*, is:

\[f_* \mid f, X, X_* \sim \mathcal{GP}\left( K(X_*, X) [K(X, X) + K_{n}(X, X)]^{-1} f \,, K(X_*, X_*) - K(X_*, X) [K(X, X) + K_{n}(X, X)]^{-1} K(X, X_*) \right)\]
Parameters
name: string

Name of the random variable

Xnew: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

pred_noise: bool

Whether or not observation noise is included in the conditional. Default is False.

given: dict

Can optionally take as key-value pairs: X, y, noise, and gp. See the section in the documentation on additive GP models in PyMC3 for more information.

**kwargs

Extra keyword arguments that are passed to MvNormal distribution constructor.

marginal_likelihood(name, X, y, noise, is_observed=True, **kwargs)

Returns the marginal likelihood distribution, given the input locations X and the data y.

This is the integral over the product of the GP prior and a normal likelihood.

\[y \mid X,\theta \sim \int p(y \mid f,\, X,\, \theta) \, p(f \mid X,\, \theta) \, df\]
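For white noise, this integral has the closed form \(y \mid X, \theta \sim \text{MvNormal}\left(\mu(X), K(X, X) + \sigma^2 I\right)\). A numpy/scipy sketch of that marginal log-likelihood, assuming the mean vector mu, kernel matrix K, and noise standard deviation sigma are already evaluated:

import numpy as np
from scipy import stats

def marginal_logp(y, mu, K, sigma):
    # Closed-form marginal of a GP plus white noise: MvNormal(mu, K + sigma^2 I).
    n = len(y)
    return stats.multivariate_normal.logpdf(y, mean=mu, cov=K + sigma**2 * np.eye(n))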
Parameters
name: string

Name of the random variable

X: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

y: array-like

Data that is the sum of the function with the GP prior and Gaussian noise. Must have shape (n, ).

noise: scalar, Variable, or Covariance

Standard deviation of the Gaussian noise. Can also be a Covariance for non-white noise.

is_observed: bool

Whether to set y as an observed variable in the model. Default is True.

**kwargs

Extra keyword arguments that are passed to MvNormal distribution constructor.

predict(Xnew, point=None, diag=False, pred_noise=False, given=None)

Return the mean vector and covariance matrix of the conditional distribution as numpy arrays, given a point such as the MAP estimate or a sample from a trace; see the usage sketch after the parameter list.

Parameters
Xnew: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

point: pymc3.model.Point

A specific point to condition on.

diag: bool

If True, return the diagonal instead of the full covariance matrix. Default is False.

pred_noise: bool

Whether or not observation noise is included in the conditional. Default is False.

given: dict

Same as conditional method.
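A typical use is conditioning on the MAP estimate; a sketch, reusing model, gp, and Xnew from the class example above:

with model:
    mp = pm.find_MAP()
    # Full predictive mean and covariance at Xnew...
    mu, cov = gp.predict(Xnew, point=mp)
    # ...or just the pointwise predictive variances.
    mu, var = gp.predict(Xnew, point=mp, diag=True)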

predictt(Xnew, diag=False, pred_noise=False, given=None)

Return the mean vector and covariance matrix of the conditional distribution as symbolic variables.

Parameters
Xnew: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

diag: bool

If True, return the diagonal instead of the full covariance matrix. Default is False.

pred_noise: bool

Whether or not observation noise is included in the conditional. Default is False.

given: dict

Same as conditional method.

class pymc3.gp.gp.MarginalKron(mean_func=<pymc3.gp.mean.Zero object>, cov_funcs=<pymc3.gp.cov.Constant object>)

Marginal Gaussian process whose covariance is a tensor product kernel.

The gp.MarginalKron class is an implementation of the sum of a Kronecker GP prior and additive white noise. It has marginal_likelihood, conditional and predict methods. This GP implementation can be used to efficiently implement regression on data that are normally distributed with a tensor product kernel and are measured on a full grid of inputs: cartesian(*Xs). MarginalKron is based on the KroneckerNormal distribution; see its docstring for more information. For more information on the marginal_likelihood and conditional methods, see their docstrings.
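The data y must be measured on the full grid of inputs, which pm.math.cartesian constructs. A short sketch:

import numpy as np
import pymc3 as pm

X1 = np.linspace(0, 1, 10)[:, None]
X2 = np.linspace(0, 2, 5)[:, None]
grid = pm.math.cartesian(X1, X2)  # shape (50, 2); y must then have shape (50,)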

Parameters
cov_funcs: list of Covariance objects

The covariance functions that compose the tensor (Kronecker) product. Defaults to [zero].

mean_func: None, instance of Mean

The mean function. Defaults to zero.

Examples

# One dimensional column vectors of inputs
X1 = np.linspace(0, 1, 10)[:, None]
X2 = np.linspace(0, 2, 5)[:, None]
Xs = [X1, X2]
y = np.random.randn(len(X1)*len(X2))  # toy data
with pm.Model() as model:
    # Specify the covariance functions for each Xi
    cov_func1 = pm.gp.cov.ExpQuad(1, ls=0.1)  # Must accept X1 without error
    cov_func2 = pm.gp.cov.ExpQuad(1, ls=0.3)  # Must accept X2 without error

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.MarginalKron(cov_funcs=[cov_func1, cov_func2])

    # The marginal likelihood of the gridded data, given the GP prior and noise.
    sigma = pm.HalfCauchy("sigma", beta=3)
    y_ = gp.marginal_likelihood("y", Xs=Xs, y=y, sigma=sigma)

    # ...

# After fitting or sampling, specify the distribution
# at new points with .conditional
# Xnew need not be on a full grid
Xnew1 = np.linspace(-1, 2, 10)[:, None]
Xnew2 = np.linspace(0, 3, 10)[:, None]
Xnew = np.concatenate((Xnew1, Xnew2), axis=1)  # Not full grid, works
Xnew = pm.math.cartesian(Xnew1, Xnew2)  # Full grid, also works

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
conditional(name, Xnew, pred_noise=False, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew, just as in Marginal.

Xnew will be split by columns and fed to the relevant covariance functions based on their input_dim. For example, if cov_func1, cov_func2, and cov_func3 have input_dim of 2, 1, and 4, respectively, then Xnew must have 7 columns, and the covariance between the prediction points is

cov_func(Xnew) = cov_func1(Xnew[:, :2]) * cov_func2(Xnew[:, 2:3]) * cov_func3(Xnew[:, 3:])

The distribution returned by conditional does not have a Kronecker structure regardless of whether the input points lie on a full grid. Therefore, Xnew does not need to have grid structure.

Parameters
name: string

Name of the random variable

Xnew: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

pred_noise: bool

Whether or not observation noise is included in the conditional. Default is False.

**kwargs

Extra keyword arguments that are passed to MvNormal distribution constructor.

marginal_likelihood(name, Xs, y, sigma, is_observed=True, **kwargs)

Returns the marginal likelihood distribution, given the input locations cartesian(*Xs) and the data y.

Parameters
name: string

Name of the random variable

Xs: list of array-like

Function input values for each covariance function. Each entry must be passable to its respective covariance without error. The total covariance function is measured on the full grid cartesian(*Xs).

y: array-like

Data that is the sum of the function with the GP prior and Gaussian noise. Must have shape (n, ).

sigma: scalar, Variable

Standard deviation of the white Gaussian noise.

is_observed: bool

Whether to set y as an observed variable in the model. Default is True.

**kwargs

Extra keyword arguments that are passed to KroneckerNormal distribution constructor.

predict(Xnew, point=None, diag=False, pred_noise=False)

Return the mean vector and covariance matrix of the conditional distribution as numpy arrays, given a point, such as the MAP estimate or a sample from a trace.

Parameters
Xnew: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

point: pymc3.model.Point

A specific point to condition on.

diag: bool

If True, return the diagonal instead of the full covariance matrix. Default is False.

pred_noise: bool

Whether or not observation noise is included in the conditional. Default is False.

predictt(Xnew, diag=False, pred_noise=False)

Return the mean vector and covariance matrix of the conditional distribution as symbolic variables.

Parameters
Xnew: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

diag: bool

If True, return the diagonal instead of the full covariance matrix. Default is False.

pred_noise: bool

Whether or not observation noise is included in the conditional. Default is False.

class pymc3.gp.gp.MarginalSparse(mean_func=<pymc3.gp.mean.Zero object>, cov_func=<pymc3.gp.cov.Constant object>, approx='FITC')

Approximate marginal Gaussian process.

The gp.MarginalSparse class implements an approximation to the sum of a GP prior and additive white noise, based on a smaller set of inducing points. It has marginal_likelihood, conditional and predict methods. This GP implementation can be used to implement regression on data that is normally distributed. The available approximations, which share the same interface (see the sketch after this list), are:

  • DTC: Deterministic Training Conditional

  • FITC: Fully Independent Training Conditional

  • VFE: Variational Free Energy
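A minimal sketch showing that only the approx string changes between the three:

import pymc3 as pm

cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)
gp_dtc = pm.gp.MarginalSparse(cov_func=cov_func, approx="DTC")
gp_fitc = pm.gp.MarginalSparse(cov_func=cov_func, approx="FITC")
gp_vfe = pm.gp.MarginalSparse(cov_func=cov_func, approx="VFE")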

Parameters
cov_func: None, 2D array, or instance of Covariance

The covariance function. Defaults to zero.

mean_func: None, instance of Mean

The mean function. Defaults to zero.

approx: string

The approximation to use. Must be one of VFE, FITC or DTC.

References

  • Quinonero-Candela, J., and Rasmussen, C. (2005). A Unifying View of Sparse Approximate Gaussian Process Regression.

  • Titsias, M. (2009). Variational Learning of Inducing Variables in Sparse Gaussian Processes.

Examples

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]
y = np.random.randn(10)  # toy data

# A smaller set of inducing inputs
Xu = np.linspace(0, 1, 5)[:, None]

with pm.Model() as model:
    # Specify the covariance function.
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.MarginalSparse(cov_func=cov_func, approx="FITC")

    # The marginal likelihood of the data, given the GP prior and noise.
    sigma = pm.HalfCauchy("sigma", beta=3)
    y_ = gp.marginal_likelihood("y", X=X, Xu=Xu, y=y, noise=sigma)

...

# After fitting or sampling, specify the distribution
# at new points with .conditional
Xnew = np.linspace(-1, 2, 50)[:, None]

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
conditional(name, Xnew, pred_noise=False, given=None, **kwargs)

Returns the approximate conditional distribution of the GP evaluated over new input locations Xnew.

Parameters
name: string

Name of the random variable

Xnew: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

pred_noise: bool

Whether or not observation noise is included in the conditional. Default is False.

given: dict

Can optionally take as key-value pairs: X, Xu, y, noise, and gp. See the section in the documentation on additive GP models in PyMC3 for more information.

**kwargs

Extra keyword arguments that are passed to MvNormal distribution constructor.

marginal_likelihood(name, X, Xu, y, noise=None, is_observed=True, **kwargs)

Returns the approximate marginal likelihood distribution, given the input locations X, inducing point locations Xu, data y, and white noise standard deviation noise. A sketch of initializing Xu follows the parameter list.

Parameters
name: string

Name of the random variable

X: array-like

Function input values. If one-dimensional, must be a column vector with shape (n, 1).

Xu: array-like

The inducing points. Must have the same number of columns as X.

y: array-like

Data that is the sum of the function with the GP prior and Gaussian noise. Must have shape (n, ).

noise: scalar, Variable

Standard deviation of the Gaussian noise.

is_observed: bool

Whether to set y as an observed variable in the model. Default is True.

**kwargs

Extra keyword arguments that are passed to MvNormal distribution constructor.
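Inducing point locations are commonly initialized by running k-means on the inputs. A sketch using PyMC3's helper (here with 10 inducing points on hypothetical inputs):

import numpy as np
import pymc3 as pm

X = np.random.rand(100, 1)
Xu = pm.gp.util.kmeans_inducing_points(10, X)  # (10, 1) initial inducing point locations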

class pymc3.gp.gp.TP(mean_func=<pymc3.gp.mean.Zero object>, cov_func=<pymc3.gp.cov.Constant object>, nu=None)

Student’s T process prior.

The usage is nearly identical to that of gp.Latent. The differences are that it must be initialized with a degrees of freedom parameter, and TP is not additive. Given a mean and covariance function, and a degrees of freedom parameter, the function \(f(x)\) is modeled as,

\[f(X) \sim \mathcal{TP}\left( \mu(X), k(X, X'), \nu \right)\]
Parameters
cov_func: None, 2D array, or instance of Covariance

The covariance function. Defaults to zero.

mean_func: None, instance of Mean

The mean function. Defaults to zero.

nu: float

The degrees of freedom.

References

  • Shah, A., Wilson, A. G., and Ghahramani, Z. (2014). Student-t Processes as Alternatives to Gaussian Processes. arXiv preprint arXiv:1402.4306.
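Usage mirrors gp.Latent, with the extra degrees-of-freedom argument nu; a minimal sketch:

import numpy as np
import pymc3 as pm

X = np.linspace(0, 1, 10)[:, None]

with pm.Model() as model:
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)
    tp = pm.gp.TP(cov_func=cov_func, nu=3)
    f = tp.prior("f", X=X)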

conditional(name, Xnew, **kwargs)

Returns the conditional distribution evaluated over new input locations Xnew.

Given a set of function values f that the TP prior was placed over, conditional constructs the conditional distribution over a set of new points, f_*.

Parameters
name: string

Name of the random variable

Xnew: array-like

Function input values.

**kwargs

Extra keyword arguments that are passed to MvNormal distribution constructor.

prior(name, X, reparameterize=True, **kwargs)

Returns the TP prior distribution evaluated over the input locations X.

This is the prior probability over the space of functions described by its mean and covariance function.

Parameters
name: string

Name of the random variable

X: array-like

Function input values.

reparameterize: bool

Reparameterize the distribution by rotating the random variable by the Cholesky factor of the covariance matrix.

**kwargs

Extra keyword arguments that are passed to distribution constructor.