Empirical economic work often involves simulations and drawing random numbers. Drawing random numbers from a multivariate distribution has become standard functionality in a variety of software packages. Stata, for example, has the drawnorm command, which samples from a multivariate normal distribution. SAS has the PROC SIMNORMAL procedure, as well as the RANDNORMAL function in SAS/IML. The R statistical platform has the mvrnorm function. Many of these packages cover other types of distributions in addition to the normal distribution.

These built-in functions are highly effective tools. Once in a while, however, it may be useful to draw from a bivariate normal distribution without relying on one of these packages. There are two quick ways of doing this. The first method uses the conditional distribution of a random variable \(X_2\) given \(X_1\): if \((X_1,X_2)\) is bivariate normal, then \(X_2\mid X_1=x_1\) is itself normal with mean \(\mu_2+\rho(\sigma_2/\sigma_1)(x_1-\mu_1)\) and standard deviation \(\sigma_2\sqrt{1-\rho^2}\). A bivariate normal pair can therefore be simulated by drawing a first random variable from the marginal normal distribution and then drawing a second random variable from the conditional normal distribution. The C++11 code fragment below shows how to do this, using the standard library's random number generator for normal distributions. We start with parameters \(\mu_1\) and \(\sigma_1\) for the mean and standard deviation of variable 1, parameters \(\mu_2\) and \(\sigma_2\) for the mean and standard deviation of variable 2, and \(\rho\in[-1,+1]\) as the correlation coefficient. The code fragment populates vectors v1 and v2.
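A minimal sketch of such a fragment follows. The function wrapper, its name, and the fixed seed are illustrative choices, not the original code; only the parameter names and the vectors v1 and v2 come from the text.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Sketch of the conditional-distribution method: draw X1 from its marginal
// N(mu1, sigma1^2), then X2 from the conditional distribution
// N(mu2 + lambda*(x1 - mu1), sigma2^2*(1 - rho^2)), lambda = rho*sigma2/sigma1.
// Function name and seed parameter are illustrative, not from the original post.
void draw_bivariate_conditional(std::size_t n,
                                double mu1, double sigma1,
                                double mu2, double sigma2, double rho,
                                std::vector<double>& v1, std::vector<double>& v2,
                                unsigned long seed = 42) {
    std::mt19937_64 rng(seed);
    std::normal_distribution<double> z(0.0, 1.0);          // standard normal draws
    const double lambda = rho * sigma2 / sigma1;           // conditional-mean slope
    const double nu = sigma2 * std::sqrt(1.0 - rho * rho); // conditional std. dev.
    v1.resize(n);
    v2.resize(n);
    for (std::size_t i = 0; i < n; ++i) {
        v1[i] = mu1 + sigma1 * z(rng);                     // marginal draw for X1
        v2[i] = mu2 + lambda * (v1[i] - mu1) + nu * z(rng); // conditional draw for X2
    }
}
```

Note that the conditional variance \(\sigma_2^2(1-\rho^2)\) does not depend on the realized value of \(X_1\), so both scale factors can be computed once outside the loop.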

The code above is simple and quick. There is an alternative approach, which underlies the method used for general multivariate normal distributions. It requires the spectral decomposition of the variance-covariance matrix \[V=\left[\begin{array}{cc}\sigma_1^2&\rho\sigma_1\sigma_2\\ \rho\sigma_1\sigma_2&\sigma_2^2\end{array}\right]=U \Lambda U^\top \] where \(U\) is the matrix of normalized eigenvectors corresponding to the diagonal matrix of eigenvalues \(\Lambda\). To generate random bivariate normal variables \(\{x_1,x_2\}\), we first generate two random standard normal variables \(\{z_1,z_2\}\) and transform them through the matrix \(\Phi\equiv U\Lambda^{1/2}\) to yield \[\left[\begin{array}{c}x_1\\x_2\end{array}\right]= \left[\begin{array}{c}\mu_1\\ \mu_2\end{array}\right]+ \left[\begin{array}{cc} \phi_{11}&\phi_{12}\\ \phi_{21}&\phi_{22} \end{array}\right]\cdot \left[\begin{array}{c}z_1\\z_2\end{array}\right] \] The four elements of the \(\Phi\) matrix can be calculated explicitly, as the eigenvalues and normalized eigenvectors of a two-by-two matrix are not overly complicated. (N.B.: eigenvectors computed by mathematical software are not always normalized, that is, scaled to unit length, by default. Normalization is a crucial step in deriving the code fragment below.) The result is a code fragment that requires the calculation of six auxiliary variables from the elements of the variance-covariance matrix. These six auxiliary variables (G, H, K, J, U, V) are then used to compute the elements of the transformation matrix \(\Phi\). The loop that populates vectors v1 and v2 is then straightforward.
The eigenvalues \(\lambda_1\) and \(\lambda_2\) of the variance-covariance matrix are \[ \begin{array}{lll} \lambda_1=(G+H)/2& \quad G=\sigma_1^2+\sigma_2^2& \quad K=\rho\sigma_1\sigma_2\\ \lambda_2=(G-H)/2& \quad J=\sigma_2^2-\sigma_1^2& \quad H=\sqrt{J^2+4K^2} \end{array}\] Prior to normalization, the eigenvectors are given by \[\left[\begin{array}{cc}2K/(J+H)&2K/(J-H)\\ 1&1\end{array}\right]\] Normalizing each column in this matrix to length 1 and post-multiplying the resulting matrix by the diagonal matrix containing the square roots of the two eigenvalues then delivers the code below. It has a few more lines than the first version of the code, but is perhaps slightly more elegant.
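The post's original fragment is not reproduced here; the sketch below implements the construction just described, with the auxiliary variables G, H, K, J, U, V defined as above. The function wrapper and seed are again illustrative choices.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Sketch of the spectral-decomposition method. Auxiliary variables G, H, K,
// J, U, V follow the text's definitions. Assumes 0 < |rho| < 1: at rho = 0
// one of the denominators J+H or J-H vanishes, and at rho = +/-1 rounding
// can push lambda_2 = (G-H)/2 slightly below zero.
void draw_bivariate_spectral(std::size_t n,
                             double mu1, double sigma1,
                             double mu2, double sigma2, double rho,
                             std::vector<double>& v1, std::vector<double>& v2,
                             unsigned long seed = 42) {
    const double G = sigma1 * sigma1 + sigma2 * sigma2;
    const double J = sigma2 * sigma2 - sigma1 * sigma1;
    const double K = rho * sigma1 * sigma2;
    const double H = std::sqrt(J * J + 4.0 * K * K);
    const double U = 2.0 * K / (J + H);       // unnormalized eigenvector 1: (U, 1)
    const double V = 2.0 * K / (J - H);       // unnormalized eigenvector 2: (V, 1)
    const double l1 = (G + H) / 2.0;          // eigenvalue lambda_1
    const double l2 = (G - H) / 2.0;          // eigenvalue lambda_2
    // Phi = (normalized eigenvectors) * diag(sqrt(lambda_1), sqrt(lambda_2))
    const double a1 = std::sqrt(l1 / (U * U + 1.0));
    const double a2 = std::sqrt(l2 / (V * V + 1.0));
    const double phi11 = U * a1, phi12 = V * a2;
    const double phi21 = a1,     phi22 = a2;
    std::mt19937_64 rng(seed);
    std::normal_distribution<double> z(0.0, 1.0);
    v1.resize(n);
    v2.resize(n);
    for (std::size_t i = 0; i < n; ++i) {
        const double z1 = z(rng), z2 = z(rng); // two independent standard normals
        v1[i] = mu1 + phi11 * z1 + phi12 * z2;
        v2[i] = mu2 + phi21 * z1 + phi22 * z2;
    }
}
```

Because \(\Phi\Phi^\top=U\Lambda U^\top=V\), the pairs (v1[i], v2[i]) have the desired variance-covariance matrix.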

So is there any benefit in using the more complicated version? Other than the fun of solving the spectral decomposition explicitly for the bivariate case, that version actually calls for some caution. Three special cases need to be filtered out: perfect positive correlation, perfect negative correlation, and zero correlation. In all three cases we have problems. When \(\rho=0\), then \(K=0\) and thus \(H=|J|\), so one of the denominators \(J+H\) or \(J-H\) is zero and the corresponding auxiliary variable (U or V) is undefined. Other problems arise at the boundaries as \(\rho\) approaches -1 or +1: the smaller eigenvalue \(\lambda_2\) approaches zero, and rounding errors can push its computed value slightly negative, which may lead to numerical instability in the calculations. So all in all, the simpler algorithm is the better choice. One question then remains: does it matter in which order one computes the two random variables? If one favours numerical stability, it makes sense to order the variables so that \(\sigma_1>\sigma_2\), which keeps the conditional-mean slope \(\lambda\equiv\rho\sigma_2/\sigma_1\) small. This prevents small errors in either standard deviation from being magnified through \(\lambda\). A slightly refined version of the above script would swap the variables so that the ordering of the standard deviations is optimal.
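Such a refinement might be sketched as follows. The function name and the swapping logic are hypothetical; the idea is simply to relabel the variables so that the marginal draw uses the larger standard deviation, then restore the caller's ordering.

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// Hypothetical refinement of the conditional method: if sigma2 > sigma1,
// swap the roles of the two variables so the marginal draw uses the larger
// standard deviation, keeping |lambda| = |rho|*sigma2/sigma1 <= |rho|.
void draw_bivariate_ordered(std::size_t n,
                            double mu1, double sigma1,
                            double mu2, double sigma2, double rho,
                            std::vector<double>& v1, std::vector<double>& v2,
                            unsigned long seed = 42) {
    const bool swapped = sigma2 > sigma1;
    if (swapped) {
        std::swap(mu1, mu2);
        std::swap(sigma1, sigma2);     // rho is symmetric, so it stays as-is
    }
    std::mt19937_64 rng(seed);
    std::normal_distribution<double> z(0.0, 1.0);
    const double lambda = rho * sigma2 / sigma1;           // now |lambda| <= |rho|
    const double nu = sigma2 * std::sqrt(1.0 - rho * rho); // conditional std. dev.
    v1.resize(n);
    v2.resize(n);
    for (std::size_t i = 0; i < n; ++i) {
        v1[i] = mu1 + sigma1 * z(rng);
        v2[i] = mu2 + lambda * (v1[i] - mu1) + nu * z(rng);
    }
    if (swapped) v1.swap(v2);          // restore the caller's variable ordering
}
```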

Updated on Sunday, March 10, 2019