18.338J/16.394J: The Mathematics of Infinite Random Matrices
Numerical Methods in Random Matrices
Per-Olof Persson
Handout #7, Tuesday, September 28, 2004
1 Largest Eigenvalue Distributions
In this section, the distributions of the largest eigenvalue of matrices in the β-ensembles are studied. Histograms are created first by simulation, then by solving the Painlevé II nonlinear differential equation.
1.1 Simulation
The Gaussian Unitary Ensemble (GUE) is defined as the Hermitian n×n matrices A, where the diagonal elements x_jj and the upper-triangular elements x_jk = u_jk + iv_jk are independent Gaussians with zero mean, and

Var(x_jj) = 1,                 1 ≤ j ≤ n,
Var(u_jk) = Var(v_jk) = 1/2,   1 ≤ j < k ≤ n.    (1)
Since a sum of Gaussians is a new Gaussian, an instance of these matrices can be created conveniently in
MATLAB:
A=randn(n)+i*randn(n);
A=(A+A')/2;
The largest eigenvalue of this matrix is about 2√n. To get a distribution that converges as n → ∞, the shifted and scaled largest eigenvalue λ′_max is calculated as

λ′_max = n^{1/6} (λ_max − 2√n).    (2)

It is now straightforward to compute the distribution of λ′_max by simulation:
trials=10000; % Example parameter choices (not specified in the original)
n=100;
ls=zeros(1,trials);
for ii=1:trials
A=randn(n)+i*randn(n);
A=(A+A')/2;
lmax=max(eig(A));
lmaxscaled=n^(1/6)*(lmax-2*sqrt(n));
ls(ii)=lmaxscaled; % Store lmax
end
hist(ls,50) % Create and plot histogram
The problem with this technique is that the computational and memory requirements grow quickly with n, which should be as large as possible to give a good approximation of infinity. Just storing the matrix A requires n² double-precision numbers, so on most computers today n has to be less than 10^4. Furthermore, computing all the eigenvalues of a full Hermitian matrix requires a computing time proportional to n³; at n = 10^4, for example, that is on the order of 10^12 floating-point operations per matrix. This means that it will take many days to create a smooth histogram by simulation, even for relatively small values of n.
To improve upon this situation, another matrix can be studied that has the same eigenvalue distribution
as A above. In [1], it was shown that this is true for the following symmetric matrix when β = 2:
Hβ ~ (1/√2) ·
⎛ N(0,2)     χ_(n−1)β                               ⎞
⎜ χ_(n−1)β   N(0,2)     χ_(n−2)β                    ⎟
⎜             ⋱           ⋱          ⋱              ⎟ .    (3)
⎜                        χ_2β       N(0,2)    χ_β   ⎟
⎝                                   χ_β     N(0,2)  ⎠
Here, N(0,2) is a zero-mean Gaussian with variance 2, and χ_d is the square root of a χ²-distributed number with d degrees of freedom. Note that the matrix is symmetric, so the subdiagonal and the superdiagonal are always equal.
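To make the construction concrete, here is a minimal sketch (not from the original handout) that builds one instance of (3) as a full matrix; it assumes the chi2rnd function from the Statistics Toolbox is available:

n=5; beta=2;
d=sqrt(2)*randn(n,1);                     % N(0,2) diagonal entries
c=sqrt(chi2rnd(((n-1:-1:1)*beta)'));      % chi_{(n-k)beta} off-diagonal entries
H=(diag(d)+diag(c,1)+diag(c,-1))/sqrt(2); % one instance of H_beta in (3)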
This matrix has a tridiagonal sparsity structure, and only 2n double-precision numbers are required to store an instance of it. The time for computing the largest eigenvalue is proportional to n, using either Krylov subspace methods or the method of bisection [2].
In MATLAB, the built-in function eigs can be used, although that requires dealing with the sparse
matrix structure. There is also a large amount of overhead in this function, which results in relatively poor performance. Instead, the function maxeig is used below to compute the eigenvalues. This is not a built-in
function in MATLAB, but it can be downloaded from http://www.mit.edu/~persson/mltrid. It is based
on the method of bisection, and requires just two ordinary MATLAB vectors as input, corresponding to the
diagonal and the subdiagonal.
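For comparison, a hedged sketch of the built-in route, applying eigs to a sparse version of the matrix H from the sketch above (the 'la' option requests the largest algebraic eigenvalue):

T=sparse(H);         % the tridiagonal sparsity structure is preserved
lmax=eigs(T,1,'la'); % largest (algebraic) eigenvalue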
It also turns out that only the first 10n^{1/3} components of the eigenvector corresponding to the largest eigenvalue are significantly greater than zero. Therefore, the upper-left n_cutoff by n_cutoff submatrix has the same largest eigenvalue (or at least one very close to it), where

n_cutoff ≈ 10n^{1/3}.    (4)

Matrices of size n = 10^12 can then easily be used, since the computations are done on a matrix of size only 10n^{1/3} = 10^5. Also, for these large values of n the approximation χ²_n ≈ n is accurate, since χ²_n has mean n and standard deviation only √(2n).
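The quality of this approximation is easy to check numerically; the following sketch (again assuming chi2rnd is available) compares the mean and spread of χ_k samples with √k for a large k:

k=1e6;
x=sqrt(chi2rnd(k,1e4,1)); % 10^4 samples of chi_k
[mean(x) std(x) sqrt(k)]  % mean is ~sqrt(k), spread only ~1/sqrt(2)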
A histogram of the distribution for n = 10^9 can now be created using the code below.
n=1e9;
nrep=1e4;
beta=2;
cutoff=round(10*n^(1/3));
d1=sqrt(n-1:-1:n+1-cutoff)'/2/sqrt(n); % Scaled off-diagonal, using chi_k ~ sqrt(k)
ls=zeros(1,nrep);
for ii=1:nrep
d0=randn(cutoff,1)/sqrt(n*beta); % Scaled diagonal
ls(ii)=maxeig(d0,d1);
end
ls=(ls-1)*n^(2/3)*2; % Shift and scale as in (2)
histdistr(ls,-7:0.2:3)
where the function histdistr, listed below, is used to histogram the data. It assumes that the histogram bins are equidistant.
function [xmid,H]=histdistr(ls,x)
dx=x(2)-x(1);
H=histc(ls,x);
H=H(1:end-1);
H=H/sum(H)/dx;
xmid=(x(1:end-1)+x(2:end))/2;
bar(xmid,H)
grid on

Figure 1: Probability distribution of the scaled largest eigenvalue (10^5 repetitions, n = 10^9); panels show β = 1, β = 2, and β = 4, plotting Probability against the Normalized and Scaled Largest Eigenvalue.
The resulting distribution is shown in Figure 1, together with distributions for β = 1 and β = 4. The
plots also contain solid curves representing the “true solutions” (see next section).
1.2 Painlevé II
Instead of using simulation to plot the distributions of the largest eigenvalues, they can be computed from the solution of the Painlevé II nonlinear differential equation [3]:
q′′ = sq + 2q³    (5)
with the boundary condition
q(s) ~ Ai(s), as s →∞. (6)
The probability distribution f2(s), corresponding to β = 2, is then given by

f2(s) = d/ds F2(s),    (7)

where

F2(s) = exp(−∫_s^∞ (x − s) q(x)² dx).    (8)
The distributions for β = 1 and β = 4 are the derivatives of

F1(s)² = F2(s) e^{−∫_s^∞ q(x) dx}    (9)

and

F4(s/2^{2/3})² = F2(s) [ (e^{(1/2)∫_s^∞ q(x) dx} + e^{−(1/2)∫_s^∞ q(x) dx}) / 2 ]².    (10)
To solve this numerically using MATLAB, first rewrite (5) as a first-order system:
d/ds (q, q′)^T = (q′, sq + 2q³)^T.    (11)
This can be solved as an initial-value problem, starting at a sufficiently large positive number s = s0 and integrating along the negative s-axis. The boundary condition (6) then becomes the initial values

q(s0) = Ai(s0),
q′(s0) = Ai′(s0).    (12)
Although the distributions can be computed from q(s) as a post-processing step, it is most convenient to add a few variables and equations to the ODE system. When computing F2(s), the quantity I(s) = ∫_s^∞ (x − s) q(x)² dx is required. Differentiating this twice gives

I′(s) = −∫_s^∞ q(x)² dx    (13)

and

I′′(s) = q(s)².    (14)

Adding these equations and the variables I(s), I′(s) to the ODE system lets the solver do the integration. This is not only easier and requires less code, it also gives a much more accurate solution, since the same tolerance requirements are imposed on I(s) as on the solution q(s).
In a similar way, the quantity J(s) = ∫_s^∞ q(x) dx is needed when computing F1(s) and F4(s). This is handled by adding the variable J(s) and the equation J′(s) = −q(s).
The final system now has the form
d/ds (q, q′, I, I′, J)^T = (q′, sq + 2q³, I′, q², −q)^T    (15)
with the initial condition
(q(s0), q′(s0), I(s0), I′(s0), J(s0))^T = (Ai(s0), Ai′(s0), ∫_{s0}^∞ (x − s0) Ai(x)² dx, Ai(s0)², ∫_{s0}^∞ Ai(x) dx)^T.    (16)
This problem can be solved in just a few lines of MATLAB code using the built-in Runge-Kutta based ODE
solver ode45. First define the system of equations as an inline function
deq=inline('[y(2); s*y(1)+2*y(1)^3; y(4); y(1)^2; -y(1)]','s','y');
Next specify the integration interval and the desired output times.
s0=5;
sn=-8;
sspan=linspace(s0,sn,1000);
The initial values can be computed as
y0=[airy(s0); airy(1,s0); ...
quadl(inline('(x-s0).*airy(x).^2','x','s0'),s0,20,1e-25,0,s0); ...
airy(s0)^2; quadl(inline('airy(x)'),s0,20,1e-18)];
where the quadl function is used to numerically approximate the integrals in (16). Now, the integration
tolerances can be set and the system integrated:
opts=odeset('reltol',1e-13,'abstol',1e-15);
[s,y]=ode45(deq,sspan,y0,opts);
The five dependent variables are now in the columns of the MATLAB variable y. Using these, F2(s), F1(s), and F4(s) become
F2=exp(-y(:,3));
F1=sqrt(F2.*exp(-y(:,5)));
F4=sqrt(F2).*(exp(y(:,5)/2)+exp(-y(:,5)/2))/2;
s4=s/2^(2/3);
The probability distributions f2(s), f1(s), and f4(s) could be computed by numerical differentiation:
f2=gradient(F2,s);
f1=gradient(F1,s);
f4=gradient(F4,s4);
but it is more accurate to first do the differentiation symbolically:
f2(s) = −I′(s) F2(s)    (17)

f1(s) = (1/(2F1(s))) (f2(s) + q(s) F2(s)) e^{−J(s)}    (18)

f4(s) = (1/(4·2^{1/3} F4(s))) ( f2(s) (2 + e^{J(s)} + e^{−J(s)}) + F2(s) q(s) (e^{−J(s)} − e^{J(s)}) )    (19)
and evaluate these expressions:
f2=-y(:,4).*F2;
f1=1/2./F1.*(f2+y(:,1).*F2).*exp(-y(:,5));
f4=1/2^(1/3)/4./F4.*(f2.*(2+exp(y(:,5))+exp(-y(:,5)))+ ...
F2.*y(:,1).*(exp(-y(:,5))-exp(y(:,5))));
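As a quick sanity check (an added sketch, not part of the original code), each computed density should integrate to approximately 1 over the solution interval; since s runs from s0 down to sn, the sign of the trapezoidal sums is flipped:

abs(trapz(s,f2))  % should be close to 1 (s is decreasing, hence abs)
abs(trapz(s,f1))
abs(trapz(s4,f4))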
Finally, plot the curves:
plot(s,f1,s,f2,s4,f4)
legend('\beta=1','\beta=2','\beta=4')
xlabel('s')
ylabel('f_\beta(s)','rotation',0)
grid on
The result can be seen in Figure 2.
Figure 2: The probability distributions f1(s), f2(s), and f4(s), computed using the Painlevé II solution.
2 Eigenvalue Spacing Distributions
Another quantity with an interesting probability distribution is the spacing of the eigenvalues of random matrices. It turns out that the eigenvalues are almost uniformly distributed, which means that every random matrix gives a large number of spacings. The distributions can then be efficiently computed by simulation.
Two other methods are used to compute the spacing distribution: the solution of the Painlevé V nonlinear differential equation, and the eigenvalues of the Prolate matrix. Finally, the results are compared with the spacings of the zeros of the Riemann zeta function along the critical line.
2.1 Simulation
As before, the simulations are made with matrices from the Gaussian Unitary Ensemble. The normalized
spacings of the eigenvalues λ1 ≤ λ2 ≤ … ≤ λn are computed according to

δ′_k = (λ_{k+1} − λ_k)/(πβ) · √(2βn − λ_k²),    k ≈ n/2,    (20)
where β = 2 for the GUE. The distribution of the eigenvalues is almost uniform, with a slight deviation
at the two ends of the spectrum. Therefore, only half of the eigenvalues are used, and one quarter of the
eigenvalues at each end is discarded.
Again, to allow for a more efficient simulation, the tridiagonal matrix (3) is used instead of the full
Hermitian matrix. In this case, all the eigenvalues are computed, which can be done in a time proportional to n². While this could in principle be done using the MATLAB sparse matrix structure and the eigs
function, the more efficient trideig function is used below to compute all the eigenvalues of a symmetric
tridiagonal matrix. It can be downloaded from http://www.mit.edu/~persson/mltrid.
The histogram can now be computed by simulation with the following lines of code. Note that the
function chi2rnd from the Statistics Toolbox is required.
n=1000;
nrep=1000;
beta=2;
ds=zeros(1,nrep*n/2);
for ii=1:nrep
l=trideig(randn(n,1),sqrt(chi2rnd((n-1:-1:1)'*beta)/2));
d=diff(l(n/4:3*n/4))/beta/pi.*sqrt(2*beta*n-l(n/4:3*n/4-1).^2);
ds((ii-1)*n/2+1:ii*n/2)=d;
end
histdistr(ds,0:0.05:5);
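As a simple check of the normalization in (20) (an added sketch, not in the original), the mean of the normalized spacings should be close to 1:

mean(ds) % should be approximately 1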
The resulting histogram can be found in Figure 3. The figure also shows the expected curve as a solid line.
2.2 Painlevé V
The probability distribution p(s) for the eigenvalue spacings when β = 2 can be computed from the solution of the Painlevé V nonlinear differential equation (see [4] for details):

(tσ′′)² + 4(tσ′ − σ)(tσ′ − σ + (σ′)²) = 0    (21)

with the boundary condition

σ(t) ≈ −t/π − (t/π)²,    as t → 0+.    (22)
Then p(s) is given by
p(s) = (d²/ds²) E(s)    (23)
Figure 3: Probability distribution of consecutive spacings of random matrix eigenvalues (1000 repetitions, n = 1000).
where
E(s) = exp(∫_0^{πs} (σ(t)/t) dt).    (24)
Explicit differentiation gives
p(s) = (1/s²) (πs σ′(πs) − σ(πs) + σ(πs)²) E(s).    (25)
The second-order differential equation (21) can be written as a first-order system of differential equations:
d/dt (σ, σ′)^T = (σ′, −(2/t) √((σ − tσ′)(tσ′ − σ + (σ′)²)))^T.    (26)
This is solved as an initial-value problem, starting at a very small positive number t = t0. The value t = 0 has to be avoided because of the division by t in the system of equations. This is not a problem, since the boundary condition (22) provides an accurate value for σ(t0) (as well as for E(t0/π)). The boundary conditions for the system (26) then become

σ(t0) = −t0/π − (t0/π)²,
σ′(t0) = −1/π − 2t0/π².    (27)
To be able to compute E(s) using (24), the variable

I(t) = ∫_0^t (σ(t′)/t′) dt′    (28)

is added to the system, as well as the equation dI/dt = σ/t. The corresponding initial value is
I(t0) ≈ ∫_0^{t0} (−1/π − t/π²) dt = −t0/π − t0²/(2π²).    (29)
Putting it all together, the final system is
d/dt (σ, σ′, I)^T = (σ′, −(2/t) √((σ − tσ′)(tσ′ − σ + (σ′)²)), σ/t)^T    (30)
with boundary condition
(σ(t0), σ′(t0), I(t0))^T = (−t0/π − (t0/π)², −1/π − 2t0/π², −t0/π − t0²/(2π²))^T.    (31)
This system is defined as an inline function in MATLAB:
desig=inline(['[y(2); -2/t*sqrt((y(1)-t*y(2))*' ...
'(t*y(2)-y(1)+y(2)^2)); y(1)/t]'],'t','y');
Specify the integration interval and the desired output times:
t0=1e-12;
tn=16;
tspan=linspace(t0,tn,1000);
Set the initial condition:
y0=[-t0/pi-(t0/pi)^2; -1/pi-2*t0/pi^2; -t0/pi-t0^2/2/pi^2];
Figure 4: Painlevé V solution σ(t) (left); E(s) and p(s) (right).
Finally, set the integration tolerances and call the solver:
opts=odeset('reltol',1e-13,'abstol',1e-14);
[t,y]=ode45(desig,tspan,y0,opts);
The solution components are now in the columns of y. Use these to evaluate E(s) and p(s):
s=t/pi;
E=exp(y(:,3));
p=1./s.^2.*E.*(t.*y(:,2)-y(:,1)+y(:,1).^2);
p(1)=2*s(1); % Fix due to cancellation
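As a sanity check (an added sketch, not part of the original code), p(s) should behave like a probability density with unit mean spacing:

trapz(s,p)    % total probability, should be close to 1
trapz(s,s.*p) % mean spacing, should also be close to 1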
A plot of p(s) can be made with the command
plot(s,p)
grid on
and it can be seen in Figure 4. Plots are also shown of E(s) and σ(t).
2.3 The Prolate Matrix
Another method to calculate the distribution of the eigenvalue spacings is to compute the eigenvalues λi of
the operator
f(y) → ∫_{−1}^{1} Q(x,y) f(y) dy,    Q(x,y) = sin((x − y)πt) / ((x − y)π).    (32)
Then E(2t) = ∏_i (1 − λ_i), and p(s) can be computed as before. To do this, first define the infinite symmetric Prolate matrix:

A∞ =
⎛ a0   a1   ⋯ ⎞
⎜ a1   a0   ⋯ ⎟    (33)
⎝ ⋮    ⋮    ⋱ ⎠

with a0 = 2w, a_k = sin(2πwk)/(πk) for k = 1, 2, …, and 0 < w < 1/2. A discretization of Q(x,y) is achieved by setting w = t/n and extracting the upper-left n×n submatrix An of A∞.
Below, the full matrix An is used, and all the eigenvalues are computed in time proportional to n³ using the eig function. However, An commutes with the following symmetric tridiagonal matrix [5], and therefore has the same eigenvectors:
Tn =
⎛ α1    β1                                ⎞
⎜ β1    α2    β2                          ⎟
⎜        ⋱     ⋱       ⋱                  ⎟    (34)
⎜             β_(n−2)  α_(n−1)  β_(n−1)   ⎟
⎝                      β_(n−1)  α_n       ⎠

where

α_k = ((n+1)/2 − k)² cos(2πw),
β_k = (1/2) k(n − k).    (35)
It is then in principle possible to use the new techniques described in [6] to compute all the eigenvalues and eigenvectors of Tn in n² time, and then get the eigenvalues of An by dot products. This is not done in this example.
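As an aside, the commuting property is easy to verify numerically. The following sketch (not in the original handout) builds Tn from (34)-(35) for a modest n and checks that it commutes with An up to rounding error:

n=50; w=0.2;
A=gallery('prolate',n,w);           % upper-left n-by-n Prolate submatrix
k=(1:n)';
alpha=((n+1)/2-k).^2*cos(2*pi*w);   % diagonal of Tn, from (35)
bet=(1:n-1)'.*(n-1:-1:1)'/2;        % off-diagonal of Tn: k(n-k)/2
T=diag(alpha)+diag(bet,1)+diag(bet,-1);
norm(A*T-T*A)                       % should be tiny (rounding error only)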
The code for computing E(s) is shown below. This time, p(s) is evaluated by numerical differentiation
since no information about the derivative of E(s) is available.
s=0:0.01:5;
n=100;
E0=zeros(size(s));
for ii=1:length(s)
Q=gallery('prolate',n,s(ii)/2/n);
E0(ii)=prod(1-eig(Q));
end
p0=gradient(gradient(E0,s),s);
To improve the accuracy in E(s), Richardson extrapolation can be used. This is done as follows, where the
values are assumed to converge as 1/n²:
% ... Compute s and E using Painleve V in previous section
Es=zeros(length(t),0);
E1=zeros(size(s));
for n=20*2.^(0:3)
for ii=1:length(s)
Q=gallery('prolate',n,s(ii)/2/n);
E1(ii)=prod(1-eig(Q));
end
Es=[Es,E1];
end
for ii=1:3
max(abs(Es-E(:,ones(1,size(Es,2)))))
Es=Es(:,2:end)+diff(Es,1,2)/(2^(ii+1)-1);
end
max(abs(Es-E))
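To spell out the first extrapolation step: if E_n(s) = E(s) + c/n² + higher-order terms, then the solutions for n and 2n can be combined as E(s) ≈ E_{2n}(s) + (E_{2n}(s) − E_n(s))/3, which is what the line Es=Es(:,2:end)+diff(Es,1,2)/(2^(ii+1)-1); implements (with divisors 3, 7, and 15 at levels ii = 1, 2, 3).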
The errors max_{0≤s≤5} |E1(s) − E(s)| are shown in Table 1, for n = 20, 40, 80, and 160. The error after all extrapolations is of the same order as that of the “exact solution” using Painlevé V.
n      Error 0        Error 1        Error 2        Error 3
20     0.2244·10⁻³
40     0.0561·10⁻³    0.7701·10⁻⁷
80     0.0140·10⁻³    0.0483·10⁻⁷    0.5486·10⁻⁸
160    0.0035·10⁻³    0.0032·10⁻⁷    0.0323·10⁻⁸    2.2673·10⁻¹¹

Table 1: Difference between the Prolate solution E1(s) and the Painlevé V solution E(s) after 0, 1, 2, and 3 Richardson extrapolations.

2.4 Riemann Zeta Zeros

It has been observed that the zeros of the Riemann zeta function along the critical line Re(z) = 1/2 behave similarly to the eigenvalues of random matrices in the GUE. Here, the distribution of the scaled spacings of the zeros is compared to the corresponding distribution for eigenvalues, computed using the Painlevé V equation from the previous sections.
Define the nth zero γ_n as

ζ(1/2 + iγ_n) = 0,    0 < γ_1 < γ_2 < ⋯    (36)
Compute a normalized spacing:

γ̃_n = γ_n / (average spacing near γ_n) = γ_n · [ log(γ_n / 2π) / (2π) ].    (37)
Zeros of the Riemann zeta function can be downloaded from [7]. Assuming that the MATLAB variable gamma contains the zeros and the variable offset their offset, these two lines compute the consecutive spacings γ̃_{n+1} − γ̃_n and plot the histogram:
delta=diff(gamma)/2/pi.*log((gamma(1:end-1)+offset)/2/pi);
histdistr(delta,0:0.05:5.0);
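For reference, a sketch of how these variables might be prepared beforehand; the filename zeros1 is hypothetical, standing in for one of the files available at [7], with one zero per line:

gamma=load('zeros1'); % hypothetical file: one zero (or zero offset) per line
offset=0;             % base height for the block of zeros (0 for the first zeros)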
The result can be found in Figure 5, along with the Painlevé V distribution. The curves are indeed in good agreement, although the number of samples here is a little too low to get a perfect match.
References
[1] Ioana Dumitriu and Alan Edelman. Matrix models for beta ensembles. Journal of Mathematical Physics,
2002.
[2] Lloyd N. Trefethen and David Bau. Numerical Linear Algebra. SIAM, 1997.
[3] Craig A. Tracy and Harold Widom. The distribution of the largest eigenvalue in the Gaussian ensembles.
In “Calogero-Moser-Sutherland Models” (Ed. J.F. van Diejen, L. Vinet), CRM Ser. in Math. Phys. (4),
Springer-Verlag, New York, 2000, 461–472.
[4] Craig A. Tracy and Harold Widom. Introduction to random matrices. In “Geometric and Quantum Aspects of Integrable Systems” (Ed. G. F. Helminck), Lecture Notes in Physics 424 (1993), 103–130, Springer-Verlag.
[5] D. Slepian. Prolate spheroidal wave functions, Fourier analysis, and uncertainty V: The discrete case. Bell System Tech. J., 57, 1978.
[6] I. Dhillon. A new algorithm for the symmetric tridiagonal eigenvalue/eigenvector problem. PhD thesis, University of California, Berkeley, 1997.
[7] Andrew Odlyzko. Tables of zeros of the Riemann zeta function.
Figure 5: Probability distribution of consecutive spacings of Riemann zeta zeros (30,000 zeros, n ≈ 10^12, 10^21, 10^22).