## Passing a random covariance matrix to pm.MvNormal() in PyMC3

I tried using PyMC3 to fit a simple 2D Gaussian model to the observed data.

```
import numpy as np
import pymc3 as pm

n = 10000
np.random.seed(0)
X = np.random.multivariate_normal([0, 0], [[1, 0], [0, 1]], n)

with pm.Model() as model:
    # PRIORS
    mu = [pm.Uniform('mux', lower=-1, upper=1),
          pm.Uniform('muy', lower=-1, upper=1)]
    cov = np.array([[pm.Uniform('a11', lower=0.1, upper=2), 0],
                    [0, pm.Uniform('a22', lower=0.1, upper=2)]])
    # LIKELIHOOD
    likelihood = pm.MvNormal('likelihood', mu=mu, cov=cov, observed=X)

with model:
    trace = pm.sample(draws=1000, chains=2, tune=1000)
```

I know I can do this in one dimension by passing `sd` to `pm.Normal`, but I ran into trouble passing the covariance matrix to `pm.MvNormal`.

Where am I going wrong?

### Solution

**PyMC3 distribution objects are not simple numeric objects or numpy arrays.** Instead, they are nodes in a Theano computational graph, and manipulating them often requires operations from `pymc3.math` or `theano.tensor`. Also, putting PyMC3 objects inside a numpy array is unnecessary, since they are already multidimensional.
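To see why wrapping random variables in `np.array` causes problems, here is a minimal numpy-only sketch. The `Node` class is a hypothetical stand-in for a PyMC3 random variable (a graph node): mixing such objects with plain numbers produces a `dtype=object` array that numeric routines cannot work with.

```
import numpy as np

class Node:
    """Hypothetical stand-in for a PyMC3 random variable (a graph node)."""
    pass

# Mixing graph nodes with plain numbers forces dtype=object
cov = np.array([[Node(), 0], [0, Node()]])
print(cov.dtype)  # object

# Numeric routines cannot coerce graph nodes to numbers
try:
    cov.astype(float)
except TypeError:
    print("graph nodes cannot be coerced to floats")
```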

## Original model

In keeping with the intent of the code, the working version should look like this

```
import numpy as np
import pymc3 as pm
import theano.tensor as tt

N = 10000
np.random.seed(0)
X = np.random.multivariate_normal(np.zeros(2), np.eye(2), size=N)

with pm.Model() as model:
    # use the `shape` argument to define tensor dimensions
    mu = pm.Uniform('mu', lower=-1, upper=1, shape=2)

    # diagonal values of the covariance matrix
    a = pm.Uniform('a', lower=0.1, upper=2, shape=2)

    # convert the vector to a 2x2 matrix with `a` on the diagonal
    cov = tt.diag(a)

    likelihood = pm.MvNormal('likelihood', mu=mu, cov=cov, observed=X)
```
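`tt.diag` behaves like numpy's `np.diag`: given a length-2 vector, it returns a 2x2 matrix with that vector on the diagonal and zeros elsewhere. A quick numpy-only check of the shape logic (the values are arbitrary stand-ins for draws of `a`):

```
import numpy as np

a = np.array([0.8, 1.2])  # arbitrary stand-in for the length-2 vector `a`
cov = np.diag(a)          # 2x2 matrix with `a` on the diagonal

print(cov.shape)  # (2, 2)
```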

## Alternative models

I assume the example you provided is just a toy for asking the question. But just in case, I will mention that the main advantage of a multivariate normal distribution (modeling covariances between dimensions) is lost when the covariance matrix is restricted to a diagonal. In addition, the theory of priors on covariance matrices is very well developed, so it is worth spending time on existing solutions. In particular, there is a PyMC3 example using the LKJ prior for covariance matrices.

Here is a simple application of that example in this context:

```
with pm.Model() as model_lkj:
    # use the `shape` argument to define tensor dimensions
    mu = pm.Uniform('mu', lower=-1, upper=1, shape=2)

    # LKJ prior for the covariance matrix (see the PyMC3 example)
    packed_L = pm.LKJCholeskyCov('packed_L', n=2, eta=2.,
                                 sd_dist=pm.HalfCauchy.dist(2.5))

    # convert the packed vector to a (2, 2) lower-triangular matrix
    L = pm.expand_packed_triangular(2, packed_L)

    likelihood = pm.MvNormal('likelihood', mu=mu, chol=L, observed=X)
```
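For intuition on what `expand_packed_triangular` does, here is a numpy-only sketch: a packed vector holds the lower-triangular entries (assuming row-major order, which matches `np.tril_indices`), and the covariance implied by the Cholesky factor `L` is `L @ L.T`. The numbers are arbitrary illustrative values, not output from the model above.

```
import numpy as np

# Packed lower-triangular entries for n=2, in the order (0,0), (1,0), (1,1)
packed = np.array([1.0, 0.3, 0.9])  # arbitrary illustrative values

L = np.zeros((2, 2))
L[np.tril_indices(2)] = packed      # unpack into a lower-triangular matrix

cov = L @ L.T                       # covariance implied by the Cholesky factor
print(cov)
```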