Chapter 6. Electronic Structures
Electrons are the "glue" that holds the nuclei together in the chemical bonds of molecules and ions. Of course, it is the nuclei's positive charges that bind the electrons to the nuclei. The competition among Coulomb repulsions and attractions, as well as the existence of non-zero electronic and nuclear kinetic energies, makes the treatment of the full electronic-nuclear Schrödinger equation an extremely difficult problem. Electronic structure theory deals with the quantum states of the electrons, usually within the Born-Oppenheimer approximation (i.e., with the nuclei held fixed). It also addresses the forces that the electrons' presence creates on the nuclei; it is these forces that determine the geometries and energies of the various stable structures of the molecule as well as the transition states connecting these stable structures. Because there are ground and excited electronic states, each of which has different electronic properties, there are different stable-structure and transition-state geometries for each such electronic state. Electronic structure theory deals with all of these states, their nuclear structures, and the spectroscopies (e.g., electronic, vibrational, rotational) connecting them.
I. Theoretical Treatment of Electronic Structure: Atomic and Molecular Orbital
Theory
In Chapter 5's discussion of molecular structure, I introduced you to the strategies that theory uses to interpret experimental data relating to such matters, and how and why theory can also be used to simulate the behavior of molecules. In carrying out simulations, the Born-Oppenheimer electronic energy E(R) as a function of the 3N coordinates of the N atoms in the molecule plays a central role. It is on this landscape that one searches for stable isomers and transition states, and it is the second-derivative (Hessian) matrix of this function that provides the harmonic vibrational frequencies of such isomers. In the present Chapter, I want to provide you with an introduction to the tools that we use to solve the electronic Schrödinger equation to generate E(R) and the electronic wave function Ψ(r|R). In essence, this treatment will focus on orbitals of atoms and molecules and how we obtain and interpret them.
For an atom, one can approximate the orbitals by using the solutions of the hydrogenic Schrödinger equation discussed in the Background Material. Although such functions are not proper solutions to the actual N-electron Schrödinger equation (believe it or not, no one has ever solved exactly any such equation for N > 1) of any atom, they can be used as perturbation or variational starting-point approximations when one may be satisfied with qualitatively accurate answers. In particular, the solutions of this one-electron hydrogenic problem form the qualitative basis for much of atomic and molecular orbital theory. As discussed in detail in the Background Material, these orbitals are labeled by n, l, and m quantum numbers for the bound states and by l and m quantum numbers and the energy E for the continuum states.
Much as the particle-in-a-box orbitals are used to qualitatively describe the π electrons in conjugated polyenes or electronic bands in solids, these so-called hydrogen-like orbitals provide qualitative descriptions of the orbitals of atoms with more than a single electron.
electron. By introducing the concept of screening as a way to represent the repulsive
3
interactions among the electrons of an atom, an effective nuclear charge Zeff can be used
in place of Z in the hydrogenic ¦×n,l,m and En,l formulas of the Background Material to
generate approximate atomic orbitals to be filled by electrons in a many-electron atom.
For example, in the crudest approximation of a carbon atom, the two 1s electrons experience the full nuclear attraction, so Zeff = 6 for them, whereas the 2s and 2p electrons are screened by the two 1s electrons, so Zeff = 4 for them. Within this approximation, one then occupies two 1s orbitals with Z = 6, two 2s orbitals with Z = 4, and two 2p orbitals with Z = 4 in forming the full six-electron product wave function of the lowest-energy state of carbon:

Ψ(1, 2, …, 6) = ψ1s α(1) ψ1s β(2) ψ2s α(3) … ψ2p0 β(6).
However, such approximate orbitals are not sufficiently accurate to be of use in
quantitative simulations of atomic and molecular structure. In particular, their energies do
not properly follow the trends in atomic orbital (AO) energies that are taught in
introductory chemistry classes and that are shown pictorially in Fig. 6.1.
Figure 6.1 Energies of Atomic Orbitals as Functions of Nuclear Charge for Neutral
Atoms
For example, the relative energies of the 3d and 4s orbitals are not adequately described
in a model that treats electron repulsion effects in terms of a simple screening factor. So,
now it is time to examine how we can move beyond the screening model and take the electron repulsion effects, which cause the inter-electronic couplings that render the Schrödinger equation insoluble, into account in a more reliable manner.
A. Orbitals
1. The Hartree Description
The energies and wave functions within the most commonly used theories of atomic structure are assumed to arise as solutions of a Schrödinger equation whose Hamiltonian he(r) possesses three kinds of energies:
1. Kinetic energy, whose average value is computed by taking the expectation value of the kinetic energy operator – ħ²/2m ∇² with respect to any particular solution φJ(r) to the Schrödinger equation: KE = <φJ| – ħ²/2m ∇² |φJ>;
2. Coulombic attraction energy with the nucleus of charge Z: <φJ| –Ze²/r |φJ>;
3. And Coulomb repulsion energies with all of the N–1 other electrons, which are assumed to occupy other atomic orbitals (AOs) denoted φK, with this energy computed as
ΣK <φJ(r) φK(r') |(e²/|r–r'|)| φJ(r) φK(r')>.
The so-called Dirac notation <φJ(r) φK(r') |(e²/|r–r'|)| φJ(r) φK(r')> is used to represent the six-dimensional Coulomb integral JJ,K = ∫ |φJ(r)|² |φK(r')|² (e²/|r–r'|) dr dr' that describes the Coulomb repulsion between the charge density |φJ(r)|² for the electron in φJ and the charge density |φK(r')|² for the electron in φK. Of course, the sum over K must be limited to exclude K = J to avoid counting a "self-interaction" of the electron in orbital φJ with itself.
The total energy εJ of the orbital φJ is the sum of the above three contributions:

εJ = <φJ| – ħ²/2m ∇² |φJ> + <φJ| –Ze²/r |φJ> + ΣK <φJ(r) φK(r') |(e²/|r–r'|)| φJ(r) φK(r')>.
This treatment of the electrons and their orbitals is referred to as the Hartree level of theory. As stated above, when screened hydrogenic AOs are used to approximate the φJ and φK orbitals, the resultant εJ values do not produce accurate predictions. For example, the negative of εJ should approximate the ionization energy for removal of an electron from the AO φJ. Such ionization potentials (IPs) can be measured, and the measured values do not agree well with the theoretical values when a crude screening approximation is made for the AOs.
2. The LCAO Expansion
To improve upon the use of screened hydrogenic AOs, it is most common to approximate each of the Hartree AOs {φK} as a linear combination of so-called basis AOs {χμ}:

φJ = Σμ CJ,μ χμ,

using what is termed the linear-combination-of-atomic-orbitals (LCAO) expansion. In this equation, the expansion coefficients {CJ,μ} are the variables that are to be determined by solving the Schrödinger equation

he φJ = εJ φJ.

After substituting the LCAO expansion for φJ into this Schrödinger equation, multiplying on the left by one of the basis AOs χν, and then integrating over the coordinates of the electron in φJ, one obtains

Σμ <χν| he |χμ> CJ,μ = εJ Σμ <χν|χμ> CJ,μ.
This is a matrix eigenvalue equation in which the εJ and {CJ,μ} appear as eigenvalues and eigenvectors. The matrices <χν| he |χμ> and <χν|χμ> are called the Hamiltonian and overlap matrices, respectively. An explicit expression for the former is obtained by introducing the earlier definition of he:

<χν| he |χμ> = <χν| – ħ²/2m ∇² |χμ> + <χν| –Ze²/r |χμ> + ΣK,η,γ CK,η CK,γ <χν(r) χη(r') |(e²/|r–r'|)| χμ(r) χγ(r')>.
An important thing to notice about the form of the matrix Hartree equations is that to compute the Hamiltonian matrix, one must know the LCAO coefficients {CK,γ} of the orbitals that the electrons occupy. On the other hand, these LCAO coefficients are supposed to be found by solving the Hartree matrix eigenvalue equations. This paradox leads to the need to solve these equations iteratively in a so-called self-consistent field (SCF) procedure. In the SCF process, one inputs an initial approximation to the {CK,γ} coefficients. This then allows one to form the Hamiltonian matrix defined above. The Hartree matrix equations Σμ <χν| he |χμ> CJ,μ = εJ Σμ <χν|χμ> CJ,μ are then solved for "new" {CK,γ} coefficients and for the orbital energies {εK}. The new LCAO coefficients of those orbitals that are occupied are then used to form a "new" Hamiltonian matrix, after which the Hartree equations are again solved for another generation of LCAO coefficients and orbital energies. This process is continued until the orbital energies and LCAO coefficients obtained in successive iterations do not differ appreciably. Upon such convergence, one says that a self-consistent field has been realized, because the {CK,γ} coefficients are used to form the Coulomb field potential that describes the electron-electron interactions.
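The iterative SCF cycle just described can be sketched in a few lines of Python. This is a toy illustration only: the two-function basis and the numerical values of the core-Hamiltonian, overlap, and two-electron integrals below are invented, and a single occupied orbital is assumed.

```python
import numpy as np
from scipy.linalg import eigh

# Toy model: 2 basis functions, 1 occupied orbital. All numbers
# below are made up purely to illustrate the iteration.
h_core = np.array([[-1.5, -0.2],
                   [-0.2, -0.8]])    # kinetic + nuclear-attraction matrix
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])           # overlap matrix <chi_nu|chi_mu>
eri = 0.3 * np.ones((2, 2, 2, 2))    # schematic <chi_nu chi_eta|g|chi_mu chi_gam>

D = np.zeros((2, 2))                 # density D_eg = sum_K C_K,e C_K,g (guess: 0)
for it in range(100):
    # Mean-field repulsion: sum_{eta,gam} D_{eta,gam} <nu eta|g|mu gam>
    G = np.einsum('eg,nemg->nm', D, eri)
    F = h_core + G                   # the Hamiltonian matrix of the text
    eps, C = eigh(F, S)              # generalized eigenvalue problem
    C_occ = C[:, :1]                 # occupy the lowest orbital
    D_new = C_occ @ C_occ.T
    if np.max(np.abs(D_new - D)) < 1e-10:   # self-consistency reached
        break
    D = D_new
```

At convergence, the density used to build the Hamiltonian matrix equals the density produced by diagonalizing it, which is precisely the self-consistent field condition described above.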
3. AO Basis Sets
a. STOs and GTOs
As noted above, it is possible to use the screened hydrogenic orbitals as the {χμ}. However, much effort has been expended in developing alternative sets of functions to use as basis orbitals. The result of this effort has been to produce two kinds of functions that currently are widely used.
The basis orbitals commonly used in the LCAO process fall into two primary classes:
1. Slater-type orbitals (STOs) χn,l,m(r,θ,φ) = Nn,l,m,ζ Yl,m(θ,φ) r^(n-1) e^(–ζr) are characterized by quantum numbers n, l, and m and by exponents ζ, which characterize the orbital's radial 'size'. The symbol Nn,l,m,ζ denotes the normalization constant.
2. Cartesian Gaussian-type orbitals (GTOs) χa,b,c(r,θ,φ) = N'a,b,c,α x^a y^b z^c exp(–αr²) are characterized by quantum numbers a, b, and c, which detail the angular shape and direction of the orbital, and by exponents α, which govern the radial 'size'.
For both types of AOs, the coordinates r, θ, and φ refer to the position of the electron relative to a set of axes attached to the nucleus on which the basis orbital is located. Note that Slater-type orbitals (STOs) are similar to hydrogenic orbitals in the region close to the nucleus. Specifically, they have a non-zero slope near the nucleus (i.e., d/dr(exp(–ζr))|r=0 = –ζ). In contrast, GTOs have zero slope near r = 0 because d/dr(exp(–αr²))|r=0 = 0. We say that STOs display a "cusp" at r = 0 that is characteristic of the hydrogenic solutions, whereas GTOs do not.
Although STOs have the proper 'cusp' behavior near nuclei, they are used primarily for atomic and linear-molecule calculations because the multi-center integrals <χμ(1) χκ(2)|e²/|r1–r2|| χν(1) χγ(2)> that arise in polyatomic-molecule calculations (we will discuss these integrals later in this Chapter) cannot be evaluated efficiently when STOs are employed. In contrast, such integrals can routinely be computed when GTOs are used. This fundamental advantage of GTOs has led to the dominance of these functions in molecular quantum chemistry.
To overcome the primary weakness of GTO functions (i.e., their radial derivatives
vanish at the nucleus), it is common to combine two, three, or more GTOs, with
combination coefficients which are fixed and not treated as LCAO parameters, into new
functions called contracted GTOs or CGTOs. Typically, a series of radially tight,
medium, and loose GTOs are multiplied by contraction coefficients and summed to
produce a CGTO which approximates the proper 'cusp' at the nuclear center (although no
such combination of GTOs can exactly produce such a cusp because each GTO has zero
slope at r = 0).
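The contraction idea can be made concrete with a short Python sketch comparing a ζ = 1 Slater 1s function to a three-Gaussian contraction. The exponent/coefficient pairs below are the commonly quoted STO-3G-type fit for this case; treat them as illustrative values rather than authoritative tabulated data.

```python
import numpy as np

zeta = 1.0
def sto(r):
    """Normalized 1s Slater function: (zeta^3/pi)^(1/2) exp(-zeta r)."""
    return (zeta**3 / np.pi) ** 0.5 * np.exp(-zeta * r)

# (exponent alpha, contraction coefficient) pairs: a loose, a medium,
# and a tight Gaussian, in the spirit of an STO-3G fit.
prims = [(0.109818, 0.444635),   # loose
         (0.405771, 0.535328),   # medium
         (2.227660, 0.154329)]   # tight

def cgto(r):
    """Contracted GTO: fixed coefficients times normalized s primitives."""
    return sum(c * (2 * a / np.pi) ** 0.75 * np.exp(-a * r**2)
               for a, c in prims)

# The CGTO tracks the STO well at moderate r, but only the STO has a
# cusp: its radial slope at r = 0 is -zeta, while the CGTO is flat there.
h = 1e-4
slope_sto = (sto(h) - sto(0.0)) / h     # close to -zeta * sto(0)
slope_cgto = (cgto(h) - cgto(0.0)) / h  # close to zero
```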
Although most calculations on molecules are now performed using Gaussian orbitals, it should be noted that other basis sets can be used as long as they span enough of the regions of space (radial and angular) where significant electron density resides. In fact, it is possible to use plane-wave orbitals of the form χ(r,θ,φ) = N exp[i(kx r sinθ cosφ + ky r sinθ sinφ + kz r cosθ)], where N is a normalization constant and kx, ky, and kz are quantum numbers detailing the momenta of the orbital along the x, y, and z Cartesian directions. The advantage of using such "simple" orbitals is that the integrals one must perform are much easier to handle with such functions. The disadvantage is that one must use many such functions to accurately describe sharply peaked charge distributions of, for example, inner-shell core orbitals.
Much effort has been devoted to developing and tabulating in widely available
locations sets of STO or GTO basis orbitals for main-group elements and transition
metals. This ongoing effort is aimed at providing standard basis set libraries which:
1. Yield predictable chemical accuracy in the resultant energies.
2. Are cost effective to use in practical calculations.
3. Are relatively transferable so that a given atom's basis is flexible enough to be used for
that atom in various bonding environments (e.g., hybridization and degree of ionization).
b. The Fundamental Core and Valence Basis
In constructing an atomic orbital basis, one can choose from among several
classes of functions. First, the size and nature of the primary core and valence basis must
be specified. Within this category, the following choices are common:
1. A minimal basis in which the number of CGTO orbitals is equal to the number of core
and valence atomic orbitals in the atom.
2. A double-zeta (DZ) basis in which twice as many CGTOs are used as there are core and valence atomic orbitals. The use of more basis functions is motivated by a desire to provide additional variational flexibility so the LCAO process can generate molecular orbitals of variable diffuseness as the local electronegativity of the atom varies.
3. A triple-zeta (TZ) basis in which three times as many CGTOs are used as the number
of core and valence atomic orbitals (of course, there are quadruple-zeta and higher-zeta
bases also).
Optimization of the orbital exponents (ζ's or α's) and the GTO-to-CGTO contraction coefficients for the kinds of bases described above has undergone explosive growth in recent years. The theory group at the Pacific Northwest National Labs (PNNL) offers a world wide web site from which one can find (and even download, in a form prepared for input to any of several commonly used electronic structure codes) a wide variety of Gaussian atomic basis sets. This site can be accessed at http://www.emsl.pnl.gov:2080/forms/basisform.html.
c. Polarization Functions
One usually enhances any core and valence basis set with a set of so-called polarization functions. They are functions of one higher angular momentum than appears in the atom's valence orbital space (e.g., d-functions for C, N, and O and p-functions for H), and they have exponents (ζ or α) that cause their radial sizes to be similar to the sizes of the valence orbitals (i.e., the polarization p orbitals of the H atom are similar in size to the 1s orbital). Thus, they are not orbitals that describe the atom's valence orbitals with one higher l-value; such higher-l valence orbitals would be radially more diffuse.
The primary purpose of polarization functions is to give additional angular flexibility to the LCAO process in forming bonding orbitals between pairs of valence atomic orbitals. This is illustrated in Fig. 6.2, where polarization dπ orbitals on C and O are seen to contribute to the formation of the bonding π orbital of a carbonyl group by allowing polarization of the carbon atom's pπ orbital toward the right and of the oxygen atom's pπ orbital toward the left.
Figure 6.2 Oxygen and Carbon Form a π Bond That Uses the Polarization Functions on Each Atom
Polarization functions are essential in strained ring compounds because they provide the
angular flexibility needed to direct the electron density into regions between bonded
atoms, but they are also important in unstrained compounds when high accuracy is
required.
[Figure 6.2 panels: carbon pπ and dπ orbitals combining to form a bent π orbital; oxygen pπ and dπ orbitals combining to form a bent π orbital; and the π bond formed from the C and O bent (polarized) AOs.]
d. Diffuse Functions
When dealing with anions or Rydberg states, one must further augment the AO
basis set by adding so-called diffuse basis orbitals. The valence and polarization
functions described above do not provide enough radial flexibility to adequately describe
either of these cases. The PNNL web site data base cited above offers a good source for
obtaining diffuse functions appropriate to a variety of atoms.
Once one has specified an atomic orbital basis for each atom in the molecule, the LCAO-MO procedure can be used to determine the Cμ,i coefficients that describe the occupied and virtual (i.e., unoccupied) orbitals. It is important to keep in mind that the basis orbitals are not themselves the SCF orbitals of the isolated atoms; even the proper atomic orbitals are combinations (with atomic values for the Cμ,i coefficients) of the basis functions. The LCAO-MO-SCF process itself determines the magnitudes and signs of the Cν,i. In particular, it is alternations in the signs of these coefficients that allow radial nodes to form.
4. The Hartree-Fock Approximation
Unfortunately, the Hartree approximation discussed above ignores an important property of electronic wave functions: their permutational antisymmetry. The full Hamiltonian

H = Σj {– ħ²/2m ∇j² – Ze²/rj} + 1/2 Σj,k e²/|rj–rk|
is invariant (i.e., is left unchanged) under the operation Pi,j in which a pair of electrons have their labels (i, j) permuted. We say that H commutes with the permutation operator Pi,j. This fact implies that any solution Ψ to HΨ = EΨ must also be an eigenfunction of Pi,j. Because applying a pairwise permutation twice returns the identity, P P = 1, it can be seen that the eigenvalues of P must be either +1 or –1. That is, if PΨ = cΨ, then P P Ψ = cc Ψ, but PP = 1 means that cc = 1, so c = +1 or –1.
As a result of H commuting with the electron permutation operators and of the fact that P P = 1, the eigenfunctions Ψ must be either odd or even under the application of any such permutation. Particles whose wave functions are even under P are called Bose particles or Bosons; those for which Ψ is odd are called Fermions. Electrons belong to the latter class of particles.
The simple spin-orbital product function used in Hartree theory

Ψ = Πk=1,N φk

does not have the proper permutational symmetry. For example, the Be atom function Ψ = 1sα(1) 1sβ(2) 2sα(3) 2sβ(4) is not odd under the interchange of the labels of electrons 3 and 4; instead one obtains 1sα(1) 1sβ(2) 2sα(4) 2sβ(3). However, such products of spin-orbitals (i.e., orbitals multiplied by α or β spin functions) can be made into properly antisymmetric functions by forming the determinant of an N×N matrix whose row index labels the spin-orbitals and whose column index labels the electrons. For example, the Be atom function 1sα(1) 1sβ(2) 2sα(3) 2sβ(4) produces the 4×4 matrix whose determinant is shown below:

| 1sα(1)  1sα(2)  1sα(3)  1sα(4) |
| 1sβ(1)  1sβ(2)  1sβ(3)  1sβ(4) |
| 2sα(1)  2sα(2)  2sα(3)  2sα(4) |
| 2sβ(1)  2sβ(2)  2sβ(3)  2sβ(4) |

Clearly, if one were to interchange any columns of this determinant, one changes the sign of the function. Moreover, if a determinant contains two or more rows that are identical (i.e., if one attempts to form such a function having two or more spin-orbitals equal), it vanishes. This is how such antisymmetric wave functions embody the Pauli exclusion principle.
A convenient way to write such a determinant is as follows:

ΣP (–1)^p φP1(1) φP2(2) … φPN(N),

where the sum is over all N! permutations of the N spin-orbitals and the notation (–1)^p means that a –1 is affixed to any permutation that involves an odd number of pairwise interchanges of spin-orbitals and a +1 sign is given to any that involves an even number. To properly normalize such a determinantal wave function, one must multiply it by (N!)^(–1/2). So, the final result is that wave functions of the form

Ψ = (N!)^(–1/2) ΣP (–1)^p φP1(1) φP2(2) … φPN(N)
have the proper permutational antisymmetry. Note that such functions consist of a sum of N! factors, all of which have exactly the same number of electrons occupying the same number of spin-orbitals; the only difference among the N! terms involves which electron occupies which spin-orbital. For example, in the 1sα2sα function appropriate to the excited state of He, one has

Ψ = (2)^(–1/2) {1sα(1) 2sα(2) – 2sα(1) 1sα(2)}.

This function is clearly odd under the interchange of the labels of the two electrons, yet each of its two components has one electron in a 1sα spin-orbital and another electron in a 2sα spin-orbital.
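This sign change is easy to verify numerically. The sketch below uses the hydrogenic 1s and 2s radial functions (Z = 1, atomic units) for the spatial parts; since both electrons carry α spin, the common spin factor is suppressed, and only the radial coordinates are sampled.

```python
import numpy as np

def phi_1s(r):
    # Hydrogenic 1s orbital: pi^(-1/2) exp(-r), atomic units
    return np.pi ** -0.5 * np.exp(-r)

def phi_2s(r):
    # Hydrogenic 2s orbital: (32 pi)^(-1/2) (2 - r) exp(-r/2)
    return (32 * np.pi) ** -0.5 * (2 - r) * np.exp(-r / 2)

def psi(r1, r2):
    """Spatial part of (2)^(-1/2){1s(1)2s(2) - 2s(1)1s(2)}."""
    return (phi_1s(r1) * phi_2s(r2) - phi_2s(r1) * phi_1s(r2)) / np.sqrt(2)

# Interchanging the two electron labels flips the sign:
#   psi(r1, r2) = -psi(r2, r1),
# and if both spin-orbitals were the same (replace 2s by 1s), the two
# terms would cancel and psi would vanish identically -- the Pauli principle.
```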
Although having to make Ψ antisymmetric appears to complicate matters significantly, it turns out that the Schrödinger equation appropriate to the spin-orbitals in such an antisymmetrized product wave function is nearly the same as the Hartree Schrödinger equation treated earlier. In fact, the resultant equation is

he φJ = {– ħ²/2m ∇² – Ze²/r + ΣK <φK(r') |(e²/|r–r'|)| φK(r')>} φJ(r) – ΣK <φK(r') |(e²/|r–r'|)| φJ(r')> φK(r) = εJ φJ(r).
In this expression, which is known as the Hartree-Fock equation, the same kinetic and nuclear attraction potentials occur as in the Hartree equation. Moreover, the same Coulomb potential

ΣK ∫ φK(r') e²/|r–r'| φK(r') dr' = ΣK <φK(r')|e²/|r–r'||φK(r')> = ΣK JK(r)

appears. However, one also finds a so-called exchange contribution to the Hartree-Fock potential that is equal to ΣL <φL(r') |(e²/|r–r'|)| φJ(r')> φL(r) and is often written in shorthand notation as ΣL KL φJ(r). Notice that the Coulomb and exchange terms cancel for the L = J case; this causes the artificial self-interaction term JL φL(r) that can appear in the Hartree equations (unless one explicitly eliminates it) to automatically cancel with the exchange term KL φL(r) in the Hartree-Fock equations.
When the LCAO expansion of each Hartree-Fock (HF) spin-orbital is substituted into the above HF Schrödinger equation, a matrix equation is again obtained:

Σμ <χν|he|χμ> CJ,μ = εJ Σμ <χν|χμ> CJ,μ,

where the overlap integral <χν|χμ> is as defined earlier, and the he matrix element is

<χν| he |χμ> = <χν| – ħ²/2m ∇² |χμ> + <χν| –Ze²/r |χμ> + ΣK,η,γ CK,η CK,γ [<χν(r) χη(r') |(e²/|r–r'|)| χμ(r) χγ(r')> – <χν(r) χη(r') |(e²/|r–r'|)| χγ(r) χμ(r')>].
Clearly, the only difference between this expression and the corresponding result of Hartree theory is the presence of the last term, the exchange integral. The SCF iterative procedure used to solve the Hartree equations is again used to solve the HF equations.
Next, I think it is useful to reflect on the physical meaning of the Coulomb and exchange interactions between pairs of orbitals. For example, the Coulomb integral J1,2 = ∫ |φ1(r)|² e²/|r–r'| |φ2(r')|² dr dr' appropriate to the two orbitals shown in Fig. 6.3 represents the Coulombic repulsion energy e²/|r–r'| of two charge densities, |φ1|² and |φ2|², integrated over all locations r and r' of the two electrons.
Figure 6.3 An s and a p Orbital and Their Overlap Region
In contrast, the exchange integral K1,2 = ∫ φ1(r) φ2(r') e²/|r–r'| φ2(r) φ1(r') dr dr' can be thought of as the Coulombic repulsion between two electrons whose coordinates r and r' are both distributed throughout the "overlap region" φ1 φ2. This overlap region is where both φ1 and φ2 have appreciable magnitude, so exchange integrals tend to be significant in magnitude only when the two orbitals involved have substantial regions of overlap.
Finally, a few words are in order about one of the most computer time-consuming parts of any Hartree-Fock calculation (or of those discussed later): the task of evaluating and transforming the two-electron integrals <χν(r) χη(r') |(e²/|r–r'|)| χμ(r) χγ(r')>. When M GTOs are used as basis functions, the evaluation of M⁴/8 of these integrals poses a major hurdle. For example, with 500 basis orbitals, there will be of the order of 7.8 × 10⁹ such integrals. With each integral requiring 2 words of disk storage, this would require at least 1.5 × 10⁴ Mwords of disk storage. Even in the era of modern computers that possess 100 GByte disks, this is a significant requirement. One of the more important technical advances under much current development is the efficient calculation of such integrals when the product functions χν(r) χμ(r) and χγ(r') χη(r') that display the dependence on the two electrons' coordinates r and r' are spatially distant. In particular, multipolar expansions of these product functions are used to obtain more efficient approximations to their integrals when these functions are far apart. Moreover, such expansions offer a reliable way to "ignore" (i.e., approximate as zero) many integrals whose product functions are sufficiently distant. Such approaches show considerable promise for reducing the M⁴/8 two-electron integral list to one whose size scales much less strongly with the size of the AO basis.
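The storage arithmetic quoted above is easy to reproduce:

```python
# Two-electron integral count and disk-storage estimate for M = 500
# basis functions, reproducing the numbers quoted in the text.
M = 500
n_ints = M**4 // 8     # unique integrals after 8-fold index symmetry
words = 2 * n_ints     # 2 words of disk storage per integral
mwords = words / 1e6   # megawords of storage required
# n_ints is about 7.8e9 and mwords about 1.56e4, as stated above.
```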
a. Koopmans' Theorem
The HF-SCF equations he φi = εi φi imply that the orbital energies εi can be written as:

εi = <φi| he |φi> = <φi| T + V |φi> + Σj(occupied) <φi| Jj – Kj |φi>
= <φi| T + V |φi> + Σj(occupied) [Ji,j – Ki,j],

where T + V represents the kinetic (T) and nuclear attraction (V) energies, respectively. Thus, εi is the average value of the kinetic energy plus the Coulombic attraction to the nuclei for an electron in φi, plus the sum over all of the spin-orbitals occupied in Ψ of Coulomb minus exchange interactions.
If φi is an occupied spin-orbital, the j = i term [Ji,i – Ki,i] disappears in the above sum, and the remaining terms represent the Coulomb minus exchange interaction of φi with all of the N–1 other occupied spin-orbitals. If φi is a virtual spin-orbital, this cancellation does not occur because the sum over j does not include j = i. So, one obtains the Coulomb minus exchange interaction of φi with all N of the occupied spin-orbitals in Ψ. Hence the energies of occupied orbitals pertain to interactions appropriate to a total of N electrons, while the energies of virtual orbitals pertain to a system with N+1 electrons.
Let us consider the following model of the detachment or attachment of an electron in an N-electron system.
1. In this model, both the parent molecule and the species generated by adding or removing an electron are treated at the single-determinant level.
2. The Hartree-Fock orbitals of the parent molecule are used to describe both species. It is said that such a model neglects 'orbital relaxation' (i.e., the reoptimization of the spin-orbitals to allow them to become appropriate to the daughter species).
Within this model, the energy difference between the daughter and the parent can be written as follows (φk represents the particular spin-orbital that is added or removed):
for electron detachment:

EN-1 – EN = – εk;

and for electron attachment:

EN – EN+1 = – εk.

So, within the limitations of the HF, frozen-orbital model, the ionization potentials (IPs) and electron affinities (EAs) are given as the negative of the occupied and virtual spin-orbital energies, respectively. This statement is referred to as Koopmans' theorem; it is used extensively in quantum chemical calculations as a means of estimating IPs and EAs and often yields results that are qualitatively correct (i.e., ± 0.5 eV).
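As a worked example, consider the hydrogen atom, for which the 1s orbital energy is exactly –0.5 hartree (with a single electron there are no other occupied spin-orbitals to relax), so the frozen-orbital IP reproduces the familiar 13.6 eV:

```python
HARTREE_TO_EV = 27.2114  # standard hartree-to-eV conversion factor

def koopmans_ip(eps_occupied):
    """Frozen-orbital ionization potential: IP = -eps(occupied), in eV."""
    return -eps_occupied * HARTREE_TO_EV

ip = koopmans_ip(-0.5)   # H-atom 1s orbital energy of -0.5 hartree
# ip comes out near 13.6 eV, the hydrogen ionization energy
```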
b. Orbital Energies and the Total Energy
The total HF-SCF electronic energy can be written as:

E = Σi(occupied) <φi| T + V |φi> + Σi>j(occupied) [Ji,j – Ki,j],

and the sum of the orbital energies of the occupied spin-orbitals is given by:

Σi(occupied) εi = Σi(occupied) <φi| T + V |φi> + Σi,j(occupied) [Ji,j – Ki,j].

These two expressions differ in a very important way: the sum of occupied orbital energies double counts the Coulomb minus exchange interaction energies. Thus, within the Hartree-Fock approximation, the sum of the occupied orbital energies is not equal to the total energy. This finding teaches us that we cannot think of the total electronic energy of a given orbital occupation in terms of the orbital energies alone. We also need to keep track of the inter-electron Coulomb and exchange energies.
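A toy numerical check makes the double counting explicit. The one-electron energies and the Ji,j – Ki,j values below are invented numbers for a three-spin-orbital occupancy:

```python
import numpy as np

h = np.array([-2.0, -1.5, -1.0])    # <phi_i| T + V |phi_i>, invented values
JK = np.array([[0.0, 0.4, 0.3],     # J_ij - K_ij; the diagonal vanishes
               [0.4, 0.0, 0.2],     # because J_ii = K_ii
               [0.3, 0.2, 0.0]])

eps = h + JK.sum(axis=1)            # orbital energies eps_i
E_total = h.sum() + 0.5 * JK.sum()  # sum over i > j = half the full double sum

# sum(eps) exceeds E_total by exactly the pair-interaction energy
# 0.5 * JK.sum(): every Coulomb-minus-exchange pair is counted twice.
```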
5. Molecular Orbitals
Before moving on to discuss methods that go beyond the HF model, it is
appropriate to examine some of the computational effort that goes into carrying out an
SCF calculation on molecules. The primary differences that appear when molecules
rather than atoms are considered are
i. The electronic Hamiltonian he contains not just one nuclear-attraction Coulomb potential Σj Ze²/rj, but a sum of such terms, one for each nucleus in the molecule: Σa Σj Za e²/|rj–Ra|, where the nuclear locations are denoted Ra.
ii. One has AO basis functions of the type discussed above located on each nucleus of the molecule. These functions are still denoted χμ(r–Ra), but their radial and angular dependences involve the distance and orientation of the electron relative to the particular nucleus on which the AO is located.
Other than these two changes, performing an SCF calculation on a molecule (or molecular ion) proceeds just as in the atomic case detailed earlier. Let us briefly review how this iterative process occurs.
Once atomic basis sets have been chosen for each atom, the one- and two-electron integrals appearing in the he and overlap matrices must be evaluated. There are numerous highly efficient computer codes that allow such integrals to be computed for s, p, d, f, and even g, h, and i basis functions. After executing one of these 'integral packages' for a basis with a total of M functions, one has available (usually on the computer's hard disk) of the order of M²/2 one-electron (<χμ| he |χν> and <χμ|χν>) and M⁴/8 two-electron (<χμ χδ | χν χκ>) integrals. When treating extremely large atomic orbital basis sets (e.g., 500 or more basis functions), modern computer programs calculate the requisite integrals but never store them on the disk. Instead, their contributions to the <χμ|he|χν> matrix elements are accumulated 'on the fly', after which the integrals are discarded.
a. Shapes, Sizes, and Energies of Orbitals
Each molecular spin-orbital (MO) that results from solving the HF SCF equations for a molecule or molecular ion consists of a sum of components involving all of the basis AOs:

φj = Σμ Cj,μ χμ.

In this expression, the Cj,μ are referred to as LCAO-MO coefficients because they tell us how to linearly combine AOs to form the MOs. Because the AOs have various angular shapes (e.g., s, p, or d shapes) and radial extents (i.e., different orbital exponents), the MOs constructed from them can be of different shapes and radial sizes. Let's look at a few examples to see what I mean.
The first example arises when two H atoms combine to form the H2 molecule. The valence AOs on each H atom are the 1s AOs; they combine to form the two valence MOs (σ and σ*) depicted in Fig. 6.4.
Figure 6.4 Two 1s Hydrogen Atomic Orbitals Combine to Form a Bonding and
Antibonding Molecular Orbital
The bonding MO labeled σ has LCAO-MO coefficients of equal sign for the two 1s AOs,
as a result of which this MO has the same sign near the left H nucleus (A) as near the
right H nucleus (B). In contrast, the antibonding MO labeled σ* has LCAO-MO
coefficients of different sign for the A and B 1s AOs. As was the case in the Hückel or
tight-binding model outlined in the Background Material, the energy splitting between
the two MOs depends on the overlap <χ1sA|χ1sB> between the two AOs.
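This overlap dependence can be made concrete with a minimal two-orbital model. In the sketch below (illustrative numbers, not a real H2 calculation), two equivalent 1s AOs with site energy alpha, coupling beta, and overlap S give the familiar Hückel-with-overlap energies ε± = (α ± β)/(1 ± S); tying beta to S by a Wolfsberg-Helmholz-like proportionality is an assumption made purely for illustration.

```python
# Two equivalent AOs with site energy alpha < 0, coupling beta, overlap S.
# Solving the 2x2 generalized eigenvalue problem H c = eps S c gives
# eps_± = (alpha ± beta)/(1 ± S): a bonding (nodeless) and an antibonding
# (one-node) combination whose splitting grows with the overlap.
def sigma_energies(alpha, beta, S):
    bonding = (alpha + beta) / (1.0 + S)       # sigma: no node between nuclei
    antibonding = (alpha - beta) / (1.0 - S)   # sigma*: one node
    return bonding, antibonding

alpha = -13.6                                  # eV-scale placeholder
for S in (0.2, 0.4, 0.6):
    beta = 1.75 * S * alpha                    # assumed coupling-overlap link
    sig, sig_star = sigma_energies(alpha, beta, S)
    print(S, round(sig, 2), round(sig_star, 2), round(sig_star - sig, 2))
```

Running this shows the bonding level sinking below alpha and the antibonding level rising above it, with the σ-σ* gap widening as S increases.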
An analogous pair of bonding and antibonding MOs arises when two p orbitals
overlap "sideways" as in ethylene to form the π and π* MOs illustrated in Fig. 6.5.
Figure 6.5 Two pπ Atomic Orbitals on Carbon Atoms Combine to Form a Bonding and
Antibonding Molecular Orbital
The shapes of these MOs clearly are dictated by the shapes of the AOs that comprise
them and by the relative signs of the LCAO-MO coefficients that relate the MOs to the
AOs. For the π MO, these coefficients have the same sign on the left and right atoms; for
the π* MO, they have opposite signs.
I should stress that the signs and magnitudes of the LCAO-MO coefficients arise as
eigenvectors of the HF SCF matrix eigenvalue equation:

Σμ <χν|he|χμ> Cj,μ = εj Σμ <χν|χμ> Cj,μ.
It is a characteristic of such eigenvalue problems that the lower-energy eigenfunctions
have fewer nodes than the higher-energy solutions, as we learned from several examples
that we solved in the Background Material.
Another thing to note about the MOs shown above is that they will differ in their
quantitative details, but not in their overall shapes, when various functional groups are
attached to the ethylene molecule's C atoms. For example, if electron-withdrawing
groups such as Cl, OH, or Br are attached to one of the C atoms, the attractive potential
experienced by a π electron near that C atom will be enhanced. As a result, the bonding
MO will have larger LCAO-MO coefficients Ck,μ belonging to the "tighter" basis AOs χμ
on this C atom. This will make the bonding π MO more radially compact in this region of
space, although its nodal character and gross shape will not change. Alternatively, an
electron-donating group such as H3C- or t-butyl attached to one of the C centers will
cause the π MO to be more diffuse (by making its LCAO-MO coefficients for the more
diffuse basis AOs larger).
In addition to MOs formed primarily of AOs of one type (i.e., for H2 it is primarily s-
type orbitals that form the σ and σ* MOs; for ethylene's π bond, it is primarily the C 2p
AOs that contribute), there are bonding and antibonding MOs formed by combining
several AOs. For example, the four equivalent C-H bonding MOs in CH4 shown in Fig.
6.6 each involve C 2s and 2p as well as H 1s basis AOs.
Figure 6.6 The Four C-H Bonds in Methane
The energies of the MOs depend on two primary factors: the energies of the AOs
from which the MOs are constructed and the overlap between these AOs. The pattern in
energies for valence MOs formed by combining pairs of first-row atoms to form
homonuclear diatomic molecules is shown in Fig. 6.7.
Figure 6.7 Energies of the Valence Molecular Orbitals in Homonuclear Diatomics
Involving First-Row Atoms
In this figure, the core MOs formed from the 1s AOs are not shown; only those MOs
formed from the 2s and 2p AOs appear. The clear trend toward lower orbital energies as
one moves from left to right is due primarily to the trends in orbital energies of the
constituent AOs. That is, F, being more electronegative than N, has a lower-energy 2p
orbital than does N.
b. Bonding, Anti-bonding, Non-bonding, and Rydberg Orbitals
As noted above, when valence AOs combine to form MOs, the relative signs of the
combination coefficients determine, along with the AO overlap magnitudes, the MO's
energy and nodal properties. In addition to the bonding and antibonding MOs discussed
and illustrated earlier, two other kinds of MOs are important to know about.
Non-bonding MOs arise, for example, when an orbital on one atom is not directed
toward and overlapping with an orbital on a neighboring atom. For example, the lone-pair
orbitals on H2O or on the oxygen atom of H2C=O are non-bonding orbitals. They still are
described in the LCAO-MO manner, but their Cμ,i coefficients do not contain dominant
contributions from more than one atomic center.
Finally, there is a type of orbital that all molecules possess but that is ignored in
most elementary discussions of electronic structure. All molecules have so-called
Rydberg orbitals. These orbitals can be thought of as large, diffuse orbitals that describe
the regions of space an electron would occupy if it were in the presence of the
corresponding closed-shell molecular cation. Two examples of such Rydberg orbitals are
shown in Fig. 6.8. On the left, we see the Rydberg orbital of NH4 and on the right, that of
H3N-CH3. The former species can be thought of as a closed-shell ammonium cation NH4+
around which a Rydberg orbital resides. The latter is protonated methyl amine with its
Rydberg orbital.
Figure 6.8 Rydberg Orbitals of NH4+ and of Protonated Methyl Amine
B. Deficiencies in the Single Determinant Model
To achieve reasonable chemical accuracy (e.g., ±5 kcal/mole) in electronic structure
calculations, one cannot describe the wave function Ψ in terms of a single determinant.
The reason such a wave function is inadequate is that the spatial probability density
functions are not correlated. This means the probability of finding one electron at position
r is independent of where the other electrons are, which is absurd because the electrons'
mutual Coulomb repulsion causes them to "avoid" one another. This mutual avoidance is
what we call electron correlation because the electrons' motions, as reflected in their
spatial probability densities, are correlated (i.e., inter-related). Let us consider a simple
example to illustrate this problem with single-determinant functions. The |1sα(r) 1sβ(r')|
determinant, when written as

|1sα(r) 1sβ(r')| = 2^(-1/2) {1sα(r) 1sβ(r') - 1sα(r') 1sβ(r)},

can be multiplied by itself to produce the 2-electron spin- and spatial-probability density:

P(r, r') = 1/2 {[1sα(r) 1sβ(r')]² + [1sα(r') 1sβ(r)]² - 1sα(r) 1sβ(r') 1sα(r') 1sβ(r)
- 1sα(r') 1sβ(r) 1sα(r) 1sβ(r')}.
If we now integrate over the spins of the two electrons and make use of

<α|α> = <β|β> = 1, and <α|β> = <β|α> = 0,

we obtain the following spatial (i.e., with spin absent) probability density:

P(r,r') = |1s(r)|² |1s(r')|².
This probability, being a product of the probability density for finding one electron at r
times the density of finding another electron at r', clearly has no correlation in it. That is,
the probability of finding one electron at r does not depend on where (r') the other
electron is. This product form for P(r,r') is a direct result of the single-determinant form
for Ψ, so this form must be wrong if electron correlation is to be accounted for.
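A one-line numerical check makes the point: for the product form, the conditional density of electron 1 given any position of electron 2 is just the one-electron density. A sketch, using a hydrogenic 1s density as the example:

```python
# For the single-determinant result P(r, r') = |1s(r)|^2 |1s(r')|^2, the
# conditional density P(r | r') = P(r, r') / |1s(r')|^2 is independent of r':
# electron 1 does not "know" where electron 2 is.
from math import exp, pi

def rho_1s(r):                       # hydrogenic 1s density |1s(r)|^2, a.u.
    return exp(-2.0 * r) / pi

def P(r, rp):                        # uncorrelated pair density
    return rho_1s(r) * rho_1s(rp)

r = 0.7
for rp in (0.1, 1.0, 5.0):           # move electron 2 around...
    conditional = P(r, rp) / rho_1s(rp)
    assert abs(conditional - rho_1s(r)) < 1e-15   # ...electron 1 is unmoved
```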
1. Electron Correlation
Now, we need to ask how Ψ should be written if electron correlation effects are to
be taken into account. As we now demonstrate, it turns out that one can account for
electron avoidance by taking Ψ to be a combination of two or more determinants that
differ by the promotion of two electrons from one orbital to another orbital. For example,
in describing the π² bonding electron pair of an olefin or the ns² electron pair in alkaline
earth atoms, one mixes in doubly excited determinants of the form (π*)² or np²,
respectively.
Briefly, the physical importance of such doubly-excited determinants can be made
clear by using the following identity involving determinants:

C1 |..φα φβ..| - C2 |..φ'α φ'β..|
= C1/2 { |..(φ - xφ')α (φ + xφ')β..| - |..(φ - xφ')β (φ + xφ')α..| },

where

x = (C2/C1)^1/2.
This allows one to interpret the combination of two determinants that differ from one
another by a double promotion from one orbital (φ) to another (φ') as equivalent to a
singlet coupling (i.e., having αβ-βα spin function) of two different orbitals (φ - xφ') and
(φ + xφ') that comprise what are called polarized orbital pairs. In the simplest
embodiment of such a configuration interaction (CI) description of electron correlation,
each electron pair in the atom or molecule is correlated by mixing in a configuration state
function (CSF) in which that electron pair is "doubly excited" to a correlating orbital.
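Because the identity above is just bilinearity of the determinant in its two orbital columns, it can be checked numerically. The sketch below evaluates both sides of the spatial part of the identity at arbitrary sample values of φ and φ' (made-up numbers, not real orbitals) and confirms they agree:

```python
# Check the polarized-orbital-pair identity: with x = (C2/C1)**0.5, the
# symmetric (singlet) spatial function built from phi - x*phi' and
# phi + x*phi' reproduces C1*phi*phi - C2*phi'*phi'. The orbital values
# below are arbitrary numbers standing in for phi, phi' at two points.
from math import sqrt, isclose

C1, C2 = 0.95, 0.17
x = sqrt(C2 / C1)

phi = (0.62, -0.31)      # (phi at r1, phi at r2)
phip = (0.44, 0.85)      # (phi' at r1, phi' at r2)

lhs = C1 * phi[0] * phi[1] - C2 * phip[0] * phip[1]

minus = tuple(p - x * q for p, q in zip(phi, phip))   # phi - x phi'
plus = tuple(p + x * q for p, q in zip(phi, phip))    # phi + x phi'
rhs = 0.5 * C1 * (minus[0] * plus[1] + plus[0] * minus[1])

assert isclose(lhs, rhs)
```

The cross terms in (φ - xφ')(φ + xφ') cancel in the symmetric combination, leaving 2φφ - 2x²φ'φ', which is exactly why the choice x = (C2/C1)^1/2 works.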
In the olefin example mentioned above, the two non-orthogonal polarized orbital
pairs involve mixing the π and π* orbitals to produce two left-right polarized orbitals,
π - xπ* and π + xπ*, as depicted in Fig. 6.9:
[Figure: the left-polarized orbital π - xπ* and the right-polarized orbital π + xπ*]
Figure 6.9 Left and Right Polarized Orbitals of an Olefin
In this case, one says that the π² electron pair undergoes left-right correlation when the
(π*)² determinant is mixed into the CI wave function.
In the alkaline earth atom case, the polarized orbital pairs are formed by mixing
the ns and np orbitals (actually, one must mix in equal amounts of px, py, and pz orbitals
to preserve overall ¹S symmetry in this case), and they give rise to angular correlation of
the electron pair. Such a pair of polarized orbitals is shown in Fig. 6.10.
[Figure: the 2s and 2pz orbitals combine into the polarized pair 2s + a 2pz and 2s - a 2pz]
Figure 6.10 Angularly Polarized Orbital Pairs
More specifically, the following four determinants are found to have the largest
amplitudes in Ψ:

Ψ ≅ C1 |1s²2s²| - C2 [|1s²2px²| + |1s²2py²| + |1s²2pz²|].
The fact that the latter three terms possess the same amplitude C2 is a result of the
requirement that a state of ¹S symmetry is desired. It can be shown that this function is
equivalent to:

Ψ ≅ 1/6 C1 |1sα1sβ{[(2s-a2px)α(2s+a2px)β - (2s-a2px)β(2s+a2px)α]
+ [(2s-a2py)α(2s+a2py)β - (2s-a2py)β(2s+a2py)α]
+ [(2s-a2pz)α(2s+a2pz)β - (2s-a2pz)β(2s+a2pz)α]}|,

where a = (3C2/C1)^1/2.
Here two electrons occupy the 1s orbital (with opposite, α and β, spins), and are
thus not being treated in a correlated manner, while the other pair resides in 2s/2p
polarized orbitals in a manner that instantaneously correlates their motions. These
polarized orbital pairs (2s ± a 2px, y, or z) are formed by combining the 2s orbital with
the 2px, y, or z orbital in a ratio determined by C2/C1.
This ratio C2/C1 can be shown, using perturbation theory, to be proportional to the
magnitude of the coupling <1s²2s²|H|1s²2p²> between the two configurations
involved and inversely proportional to the energy difference [<1s²2s²|H|1s²2s²> -
<1s²2p²|H|1s²2p²>] between these configurations. In general, configurations that have
similar Hamiltonian expectation values and that are coupled strongly give rise to strongly
mixed (i.e., with large |C2/C1| ratios) polarized orbital pairs.
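A small numerical experiment shows how good this perturbative estimate of the mixing ratio is. The configuration energies and coupling below are hypothetical numbers chosen only to put the system in the weak-mixing regime; the exact 2x2 ground state is compared with the first-order estimate V/(E1 - E2):

```python
# Two-configuration model: diagonal energies E1 < E2 and coupling V.
# The exact ground state of [[E1, V], [V, E2]] has amplitude ratio
# C2/C1 = (E_minus - E1)/V; first-order perturbation theory predicts
# C2/C1 ~ V/(E1 - E2).
from math import sqrt

E1, E2, V = -14.60, -14.10, 0.04   # hypothetical energies and coupling

avg, half_gap = (E1 + E2) / 2, (E2 - E1) / 2
E_minus = avg - sqrt(half_gap**2 + V**2)   # exact lower eigenvalue
exact_ratio = (E_minus - E1) / V           # from (E1 - E)c1 + V c2 = 0
pt_ratio = V / (E1 - E2)

print(exact_ratio, pt_ratio)   # nearly equal when |V| << E2 - E1
```

With these numbers the two ratios agree to better than one percent; as |V| grows toward the gap, the perturbative estimate degrades and the polarized pair becomes strongly mixed.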
In each of the three equivalent terms in the alkaline earth wave function, one of
the valence electrons moves in a 2s+a2p orbital polarized in one direction while the other
valence electron moves in the 2s-a2p orbital polarized in the opposite direction. For
example, the first term [(2s-a2px)α(2s+a2px)β - (2s-a2px)β(2s+a2px)α] describes one
electron occupying a 2s-a2px polarized orbital while the other electron occupies the
2s+a2px orbital. The electrons thus reduce their Coulomb repulsion by occupying
different regions of space; in the SCF picture 1s²2s², both electrons reside in the same 2s
region of space. In this particular example, the electrons undergo angular correlation to
'avoid' one another.
The use of doubly excited determinants is thus seen as a mechanism by which Ψ
can place electron pairs, which in the single-configuration picture occupy the same
orbital, into different regions of space (i.e., each one into a different member of the
polarized orbital pair), thereby lowering their mutual Coulombic repulsion. Such electron
correlation effects are extremely important to include if one expects to achieve
chemically meaningful accuracy (i.e., ±5 kcal/mole).
2. Essential Configuration Interaction
There are occasions in which the inclusion of two or more determinants in Ψ is
essential to obtaining even a qualitatively correct description of the molecule's electronic
structure. In such cases, we say that we are including essential correlation effects. To
illustrate, let us consider the description of the two electrons in a single covalent bond
between two atoms or fragments that we label X and Y. The fragment orbitals from
which the bonding σ and antibonding σ* MOs are formed we will label sX and sY,
respectively.
Several spin- and spatial-symmetry-adapted 2-electron determinants can be
formed by placing two electrons into the σ and σ* orbitals. For example, to describe the
singlet determinant corresponding to the closed-shell σ² orbital occupancy, a single
Slater determinant

¹Σ(0) = |σα σβ| = (2)^(-1/2) { σα(1) σβ(2) - σβ(1) σα(2) }

suffices. An analogous expression for the (σ*)² determinant is given by

¹Σ**(0) = |σ*α σ*β| = (2)^(-1/2) { σ*α(1) σ*β(2) - σ*α(2) σ*β(1) }.

Also, the MS = 1 component of the triplet state having σσ* orbital occupancy can be
written as a single Slater determinant:

³Σ*(1) = |σα σ*α| = (2)^(-1/2) { σα(1) σ*α(2) - σ*α(1) σα(2) },

as can the MS = -1 component of the triplet state

³Σ*(-1) = |σβ σ*β| = (2)^(-1/2) { σβ(1) σ*β(2) - σ*β(1) σβ(2) }.
However, to describe the singlet and MS = 0 triplet states belonging to the σσ*
occupancy, two determinants are needed:

¹Σ*(0) = (1/√2) [ |σα σ*β| - |σβ σ*α| ]

is the singlet and

³Σ*(0) = (1/√2) [ |σα σ*β| + |σβ σ*α| ]

is the triplet. In each case, the spin quantum number S, its z-axis projection MS, and the
Λ quantum number are given in the conventional 2S+1Λ(MS) term symbol notation.
As the distance R between the X and Y fragments is changed from near its
equilibrium value of Re toward infinity, the energies of the σ and σ* orbitals vary in a
manner well known to chemists, as depicted in Fig. 6.11 if X and Y are identical.
[Figure: E vs. R (Re marked); the σ (σg) and σ* (σu) MO energies correlate at large R with the degenerate fragment orbitals sX and sY]
Figure 6.11 Orbital Correlation Diagram Showing Two σ-Type Orbitals Combining to
Form a Bonding and an Antibonding Molecular Orbital.
If X and Y are not identical, the sX and sY orbitals still combine to form a bonding
σ and an antibonding σ* orbital. The energies of these orbitals, for R values ranging
from near Re to R → ∞, are depicted in Fig. 6.12 for the case in which X is more
electronegative than Y.
[Figure: E vs. R (Re marked); the σ and σ* MO energies correlate at large R with the fragment orbitals sX (lower) and sY (higher)]
Figure 6.12 Orbital Correlation Diagram for σ-Type Orbitals in the Heteronuclear Case
The variation in these orbital energies gives rise to variations in the
energies of the six determinants listed above. As R → ∞, the determinants' energies are
difficult to "intuit" because the σ and σ* orbitals become degenerate (in the homonuclear
case) or nearly so (in the X ≠ Y case). To pursue this point and arrive at an energy
ordering for the determinants that is appropriate to the R → ∞ region, it is useful to
express each such function in terms of the fragment orbitals sX and sY that comprise σ
and σ*. To do so, the LCAO-MO expressions for σ and σ*,

σ = C [sX + z sY]

and

σ* = C* [z sX - sY],
are substituted into the Slater determinant definitions given above. Here C and C* are the
normalization constants. The parameter z is 1.0 in the homonuclear case and deviates
from 1.0 in relation to the sX and sY orbital energy difference (if sX lies below sY, then z <
1.0; if sX lies above sY, z > 1.0).
Let us examine the X=Y case to keep the analysis as simple as possible. The
process of substituting the above expressions for σ and σ* into the Slater determinants
that define the singlet and triplet functions can be illustrated as follows for the ¹Σ(0) case:

¹Σ(0) = |σα σβ| = C² |(sX + sY)α (sX + sY)β|
= C² [|sXα sXβ| + |sYα sYβ| + |sXα sYβ| + |sYα sXβ|].
The first two of these atomic-orbital-based Slater determinants (|sXα sXβ|
and |sYα sYβ|) are called "ionic" because they describe atomic orbital occupancies,
appropriate to the R → ∞ region, that correspond to X:⁻ + X⁺ and X⁺ + X:⁻
valence-bond structures, while |sXα sYβ| and |sYα sXβ| are called "covalent" because
they correspond to X• + X• structures.
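This expansion is easy to verify numerically because a Slater determinant is linear in each of its spin-orbital columns. In the sketch below (homonuclear z = 1 case, overlap neglected, arbitrary sample amplitudes standing in for sX and sY at two grid points), the MO-based determinant evaluated at one pair of electron coordinates matches C² times the sum of the four AO-based determinants:

```python
# Verify |sigma a, sigma b| = C^2 [ |sXa sXb| + |sYa sYb| + |sXa sYb|
# + |sYa sXb| ] for the z = 1 case, by evaluating both sides at sample
# electron coordinates. The sX, sY values are made-up numbers.
from math import sqrt, isclose

sX = {1: 0.81, 2: 0.12}          # sX evaluated at grid points 1 and 2
sY = {1: 0.33, 2: 0.64}

def spin_orb(spatial, spin):
    return lambda point, s: spatial[point] if s == spin else 0.0

def slater(orb1, orb2, x1, x2):  # normalized 2-electron Slater determinant
    return (orb1(*x1) * orb2(*x2) - orb2(*x1) * orb1(*x2)) / sqrt(2.0)

C = 1.0 / sqrt(2.0)              # sigma = C (sX + sY), overlap neglected
sigma = {p: C * (sX[p] + sY[p]) for p in (1, 2)}

x1, x2 = (1, 'a'), (2, 'b')      # electron 1: point 1, spin alpha; etc.
lhs = slater(spin_orb(sigma, 'a'), spin_orb(sigma, 'b'), x1, x2)

rhs = C**2 * sum(
    slater(spin_orb(p, 'a'), spin_orb(q, 'b'), x1, x2)
    for p in (sX, sY) for q in (sX, sY))

assert isclose(lhs, rhs)
```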
In similar fashion, the remaining five determinant functions may be expressed in
terms of fragment-orbital-based Slater determinants. In so doing, use is made of the
antisymmetry of the Slater determinants, |φ1 φ2 φ3| = - |φ1 φ3 φ2|, which implies that
any determinant in which two or more spin-orbitals are identical vanishes: |φ1 φ2 φ2| =
- |φ1 φ2 φ2| = 0. The result of decomposing the MO-based determinants into their
fragment-orbital components is as follows:
¹Σ**(0) = |σ*α σ*β|
= C*² [ |sXα sXβ| + |sYα sYβ| - |sXα sYβ| - |sYα sXβ| ]

¹Σ*(0) = (1/√2) [ |σα σ*β| - |σβ σ*α| ]
= √2 CC* [ |sXα sXβ| - |sYα sYβ| ]

³Σ*(1) = |σα σ*α|
= 2 CC* |sYα sXα|

³Σ*(0) = (1/√2) [ |σα σ*β| + |σβ σ*α| ]
= √2 CC* [ |sYα sXβ| - |sXα sYβ| ]

³Σ*(-1) = |σβ σ*β|
= 2 CC* |sYβ sXβ|
These decompositions of the six valence determinants into fragment-orbital or
valence-bond components allow the R = ∞ energies of these states to be specified. For
example, the fact that both ¹Σ and ¹Σ** contain 50% ionic and 50% covalent structures
implies that, as R → ∞, both of their energies will approach the average of the covalent
and ionic atomic energies 1/2 [E(X•) + E(X•) + E(X⁺) + E(X:⁻)]. The ¹Σ* energy
approaches the purely ionic value E(X⁺) + E(X:⁻) as R → ∞. The energies of ³Σ*(0),
³Σ*(1), and ³Σ*(-1) all approach the purely covalent value E(X•) + E(X•) as R → ∞.
The behaviors of the energies of the six valence determinants as R varies are
depicted in Fig. 6.13 for situations in which the homolytic bond cleavage is energetically
favored (i.e., for which E(X•) + E(X•) < E(X⁺) + E(X:⁻)).
[Figure: E vs. R (Re marked); curves for the ¹Σ, ¹Σ*, ¹Σ**, and ³Σ* determinants, with asymptotes at E(X•) + E(Y•), 1/2 [E(X•) + E(Y•) + E(Y⁺) + E(X:⁻)], and E(Y⁺) + E(X:⁻)]
Figure 6.13 Configuration Correlation Diagram Showing How the Determinants'
Energies Vary With R
It is essential to realize that the energies of the determinants do not represent the
energies of the true electronic states. For R-values at which the determinant energies are
widely separated, the true state energies are rather well approximated by the individual
determinant energies; such is the case near Re.
However, at large R, the situation is very different, and it is in such cases that
what we term essential configuration interaction occurs. Specifically, for the X=Y
example, the ¹Σ and ¹Σ** determinants undergo essential CI coupling to form a pair of
states of ¹Σ symmetry (the ¹Σ* CSF cannot partake in this CI mixing because it is of
ungerade symmetry; the ³Σ* states cannot mix because they are of triplet spin
symmetry). The CI mixing of the ¹Σ and ¹Σ** determinants is described in terms of a 2×2
secular problem:
| <¹Σ|H|¹Σ>     <¹Σ|H|¹Σ**>   | |A|       |A|
| <¹Σ**|H|¹Σ>   <¹Σ**|H|¹Σ**> | |B|  =  E |B|
The diagonal entries are the determinants' energies depicted in Fig. 6.13. The off-
diagonal coupling matrix elements can be expressed in terms of an exchange integral
between the σ and σ* orbitals:

<¹Σ|H|¹Σ**> = <|σα σβ| H |σ*α σ*β|> = <σσ| 1/r12 |σ*σ*> = K σσ*.
At R → ∞, where the ¹Σ and ¹Σ** determinants are degenerate, the two solutions to the
above CI matrix eigenvalue problem are:

E± = 1/2 [ E(X•) + E(X•) + E(X⁺) + E(X:⁻) ] ± <σσ| 1/r12 |σ*σ*>,

with respective amplitudes for the ¹Σ and ¹Σ** CSFs given by

A± = 1/√2 ; B± = ± 1/√2.
The lower-energy solution thus has

Ψ− = (1/√2) [ |σα σβ| - |σ*α σ*β| ],

which, when decomposed into atomic-orbital components, yields

Ψ− = (1/√2) [ |sXα sYβ| - |sXβ sYα| ].
The other root has

Ψ+ = (1/√2) [ |σα σβ| + |σ*α σ*β| ]
= (1/√2) [ |sXα sXβ| + |sYα sYβ| ].
So, we see that ¹Σ and ¹Σ**, which both contain 50% ionic and 50% covalent parts,
combine to produce Ψ−, which is purely covalent, and Ψ+, which is purely ionic.
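The degenerate 2×2 problem can be solved numerically to confirm these amplitudes. In the sketch below, the common energy E0 and the exchange coupling K are arbitrary placeholder values:

```python
# Degenerate 2x2 CI problem: H = [[E0, K], [K, E0]] has eigenvalues E0 ± K
# with eigenvectors (1, ±1)/sqrt(2): the purely ionic (+) and purely
# covalent (-) combinations of the 1-Sigma and 1-Sigma** determinants.
from math import sqrt, isclose

E0, K = -0.9, 0.15        # arbitrary degenerate energy and exchange integral

for sign in (+1, -1):
    E = E0 + sign * K
    c = (1.0 / sqrt(2.0), sign / sqrt(2.0))
    # check H c = E c, row by row
    assert isclose(E0 * c[0] + K * c[1], E * c[0])
    assert isclose(K * c[0] + E0 * c[1], E * c[1])
```

With K > 0, the minus (covalent) combination lies below the plus (ionic) combination, consistent with the homolytic-cleavage case drawn in Fig. 6.13.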
The above essential CI mixing of ¹Σ and ¹Σ** as R → ∞ qualitatively alters the
energy diagrams shown above. Descriptions of the resulting valence singlet and triplet Σ
states are given in Fig. 6.14 for homonuclear situations in which the covalent products lie
below the ionic fragments.
[Figure: E vs. R; the two ¹Σ states and the ³Σ state, with asymptotes at E(X•) + E(Y•) and E(Y⁺) + E(X:⁻)]
Figure 6.14 State Correlation Diagram Showing How the Energies of the States,
Comprised of Combinations of Determinants, Vary With R
3. Various Approaches to Electron Correlation
There are numerous procedures currently in use for determining the 'best' wave
function, which is usually expressed in the form:

Ψ = ΣI CI ΦI,

where ΦI is a spin- and space-symmetry-adapted configuration state function (CSF) that
consists of one or more determinants |φI1 φI2 φI3 ... φIN| combined to produce the
desired symmetry. In all such wave functions, there are two kinds of parameters that need
to be determined: the CI coefficients and the LCAO-MO coefficients describing the φIk
in terms of the AO basis functions. The most commonly employed methods used to
determine these parameters include:
a. The CI Method
In this approach, the LCAO-MO coefficients are determined first, usually via a
single-configuration SCF calculation. The CI coefficients are subsequently determined by
making the expectation value <Ψ|H|Ψ> / <Ψ|Ψ> variationally stationary.
The CI wave function is most commonly constructed from spin- and spatial-
symmetry-adapted combinations of determinants called configuration state functions
(CSFs) ΦJ that include:
1. The so-called reference CSF that is the SCF wave function used to generate the
molecular orbitals φi.
2. CSFs generated by carrying out single, double, triple, etc. level 'excitations' (i.e.,
orbital replacements) relative to the reference CSF. CI wave functions limited to include
contributions through various levels of excitation are denoted S (singly), D (doubly),
SD (singly and doubly), or SDT (singly, doubly, and triply) excited.
The orbitals from which electrons are removed can be restricted to focus attention
on correlations among certain orbitals. For example, if excitations out of core orbitals are
excluded, one computes a total energy that contains no core correlation energy. The
number of CSFs included in the CI calculation can be large. CI wave functions including
5,000 to 50,000 CSFs are routine, and functions with one to several billion CSFs are
within the realm of practicality.
The need for such large CSF expansions can be appreciated by considering (i) that
each electron pair requires at least two CSFs to form polarized orbital pairs, (ii) that there
are of the order of N(N-1)/2 = X electron pairs for a molecule containing N electrons, and
hence (iii) that the number of terms in the CI wave function scales as 2^X. For a molecule
containing ten electrons, there could be 2⁴⁵ ≈ 3.5 × 10¹³ terms in the CI expansion. This
may be an overestimate of the number of CSFs needed, but it demonstrates how rapidly
the number of CSFs can grow with the number of electrons.
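The arithmetic in this estimate is immediate; as a sketch:

```python
# Crude CSF-count estimate: N(N-1)/2 electron pairs, and ~2 CSFs per pair
# multiply out to 2**pairs terms.
N = 10                         # number of electrons
pairs = N * (N - 1) // 2       # 45 electron pairs
print(pairs, 2 ** pairs)       # 45, 35184372088832 (about 3.5e13)
```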
The Hamiltonian matrix elements HI,J between pairs of CSFs are, in practice,
evaluated in terms of one- and two-electron integrals over the molecular orbitals. Prior to
forming the HI,J matrix elements, the one- and two-electron integrals, which can be
computed only for the atomic (e.g., STO or GTO) basis, must be transformed to the
molecular orbital basis. This transformation step requires computer resources
proportional to the fifth power of the number of basis functions, and thus is one of the
more troublesome steps in most configuration interaction calculations. Further details of
such calculations are beyond the scope of this text, but are treated in my QMIC text.
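The fifth-power scaling can be seen in a toy version of the transformation: done naively, each of the M⁴ MO integrals costs an M⁴ sum (O(M⁸) total), but transforming one index at a time gives four O(M⁵) steps. The sketch below uses random numbers standing in for the AO integrals and checks one transformed element against the naive formula:

```python
# Sketch: AO -> MO two-electron integral transformation done as four
# sequential one-index ("quarter") transformations, each O(M^5), versus
# O(M^8) for a single-shot transform. Random placeholder "integrals".
import itertools, random

M = 3                                   # number of basis functions (tiny demo)
random.seed(1)
ao = {idx: random.random() for idx in itertools.product(range(M), repeat=4)}
C = [[random.random() for _ in range(M)] for _ in range(M)]  # LCAO-MO matrix

def quarter_transform(integrals, C, pos):
    """Transform the index at position `pos` from the AO to the MO basis."""
    out = {}
    for idx in itertools.product(range(M), repeat=4):
        total = 0.0
        for mu in range(M):             # contract one AO index with C
            ao_idx = idx[:pos] + (mu,) + idx[pos + 1:]
            total += C[mu][idx[pos]] * integrals[ao_idx]
        out[idx] = total
    return out

mo = ao
for pos in range(4):                    # four quarter transformations
    mo = quarter_transform(mo, C, pos)

# Compare one element against the naive O(M^8) transform
p, q, r, s = 0, 1, 2, 0
naive = sum(C[a][p] * C[b][q] * C[c][r] * C[d][s] * ao[(a, b, c, d)]
            for a, b, c, d in itertools.product(range(M), repeat=4))
assert abs(mo[(p, q, r, s)] - naive) < 1e-10
```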
b. Perturbation Theory
This method uses the single-configuration SCF process to determine a set of
orbitals {φi}. Then, with a zeroth-order Hamiltonian equal to the sum of the N electrons'
Fock operators, H0 = Σi=1,N he(i), perturbation theory is used to determine the CI
amplitudes for the other CSFs. The Møller-Plesset perturbation theory (MPPT) procedure
is a special case in which the above sum of Fock operators is used to define H0. The
amplitude for the reference CSF is taken as unity and the other CSFs' amplitudes are
determined by using H - H0 as the perturbation.
In the MPPT method, once the reference CSF is chosen and the SCF orbitals
belonging to this CSF are determined, the wave function Ψ and energy E are determined
in an order-by-order manner. The perturbation equations determine what CSFs to include
through any particular order. This is one of the primary strengths of this technique; it
does not require one to make further choices, in contrast to the CI treatment where one
needs to choose which CSFs to include.
For example, the first-order wave function correction Ψ1 is:

Ψ1 = - Σ_{i<j, m<n} [ <i,j|1/r12|m,n> - <i,j|1/r12|n,m> ] [ εm - εi + εn - εj ]^(-1) |Φi,j^m,n>,

where the SCF orbital energies are denoted εk and Φi,j^m,n represents a CSF that is
doubly excited (φi and φj are replaced by φm and φn) relative to the SCF wave function
Φ. Only doubly excited CSFs contribute to the first-order wave function; the fact that the
contributions from singly excited configurations vanish in Ψ1 is known as the Brillouin
theorem.
The energy E is given through second order as:

E = ESCF - Σ_{i<j, m<n} | <i,j|1/r12|m,n> - <i,j|1/r12|n,m> |² / [ εm - εi + εn - εj ].

Both Ψ and E are expressed in terms of two-electron integrals <i,j|1/r12|m,n> (sometimes
denoted <i,j|k,l>) coupling the virtual spin-orbitals φm and φn to the spin-orbitals from
which electrons were excited, φi and φj, as well as the orbital energy differences
[ εm - εi + εn - εj ] accompanying such excitations. Clearly, major contributions to the
correlation energy are made by double excitations into virtual orbitals φm φn with
large <i,j|1/r12|m,n> integrals and small orbital energy gaps [ εm - εi + εn - εj ]. In higher-
order corrections, contributions from CSFs that are singly, triply, etc. excited relative to
Φ appear, and additional contributions from the doubly excited CSFs also enter. The
various orders of MPPT are usually denoted MPn (e.g., MP2 means second-order
MPPT).
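The second-order sum is simple enough to code directly. The sketch below evaluates it for a deliberately tiny model; the integral function g and the orbital energies are made-up placeholders, not values for any real molecule:

```python
# Minimal sketch of the second-order (MP2-like) energy sum above. The
# spin-orbital integrals g and the orbital energies eps are hypothetical
# placeholders chosen only to exercise the formula.
occ = [0, 1]                               # occupied spin-orbitals i, j
virt = [2, 3]                              # virtual spin-orbitals m, n
eps = {0: -1.0, 1: -0.8, 2: 0.3, 3: 0.5}   # assumed SCF orbital energies

def g(i, j, m, n):
    """Toy stand-in for the two-electron integral <i,j|1/r12|m,n>."""
    return 0.01 * (2 * i + 3 * j + 5 * m + 7 * n + 1)

e2 = 0.0
for a, i in enumerate(occ):
    for j in occ[a + 1:]:                  # i < j
        for b, m in enumerate(virt):
            for n in virt[b + 1:]:         # m < n
                num = (g(i, j, m, n) - g(i, j, n, m)) ** 2
                e2 -= num / (eps[m] - eps[i] + eps[n] - eps[j])
print(e2)   # negative: the correction lowers the energy
```

Note that each term is manifestly negative (a squared numerator over a positive occupied-to-virtual gap), which is why the second-order correction always lowers the SCF energy.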
c. The Coupled-Cluster Method
As noted above, when the Hartree-Fock wave function Ψ0 is used as the zeroth-
order starting point in a perturbation expansion, the first (and presumably most
important) corrections to this function are the doubly-excited determinants. In early
studies of CI treatments of electron correlation, it was also observed that double
excitations had the largest CJ coefficients after the SCF wave function, which has the
very largest CJ. Moreover, in CI studies that included single, double, triple, and quadruple
level excitations relative to the dominant SCF determinant, it was observed that
quadruple excitations had the next largest CJ amplitudes after the double excitations. And,
very importantly, it was observed that the amplitudes C_abcd^mnpq of the quadruply excited
CSFs Φ_abcd^mnpq could be very closely approximated as products of the amplitudes C_ab^mn
C_cd^pq of the doubly excited CSFs Φ_ab^mn and Φ_cd^pq. This observation prompted workers to
suggest that a more compact and efficient expansion of the correlated wave function
might be realized by writing Ψ as:

Ψ = exp(T) Φ,
where Φ is the SCF determinant and the operator T appearing in the exponential is taken
to be a sum of operators

T = T1 + T2 + T3 + … + TN

that create single (T1), double (T2), etc. level excited CSFs when acting on Φ. This way of
writing Ψ is called the coupled-cluster (CC) form for Ψ.
In any practical calculation, this sum of Tn operators would be truncated to keep
the calculation practical. For example, if excitation operators higher than T3 were
neglected, then one would use T ≈ T1 + T2 + T3. However, even when T is so truncated,
the resultant Ψ would contain excitations of higher order. For example, using the
truncation just introduced, we would have

Ψ = (1 + T1 + T2 + T3 + 1/2 (T1 + T2 + T3)(T1 + T2 + T3)
+ 1/6 (T1 + T2 + T3)(T1 + T2 + T3)(T1 + T2 + T3) + …) Φ.

This function contains single excitations (in T1Φ), double excitations (in T2Φ and in
T1T1Φ), triple excitations (in T3Φ, T2T1Φ, T1T2Φ, and T1T1T1Φ), and quadruple
excitations in a variety of terms including T3T1Φ and T2T2Φ, as well as even higher-level
excitations. By the design of this wave function, the quadruple excitations T2T2Φ will
have amplitudes given as products of the amplitudes of the double excitations T2Φ, just as
were found by earlier CI workers to be most important. Hence, in CC theory, we say that
quadruple excitations include "unlinked" products of double excitations arising from the
T2T2 product; the quadruple excitations arising from T4Φ would involve linked terms and
would have amplitudes that are not products of double-excitation amplitudes.
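The counting structure behind this claim can be sketched numerically by treating each Tn as carrying a single formal amplitude tn (a severe simplification: real Tn operators carry one amplitude per orbital replacement, so this tracks only the excitation-level bookkeeping, with hypothetical amplitude values):

```python
# Expand exp(T1 + T2 + T3) as a power series in the excitation level,
# treating each Tn as a scalar amplitude tn. The level-4 (quadruple) weight
# is then dominated by the unlinked (1/2) t2*t2 product.
from math import factorial

t = {1: 0.05, 2: 0.20, 3: 0.01}   # hypothetical cluster amplitudes

def poly_mul(p, q):
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            out[a + b] = out.get(a + b, 0.0) + ca * cb
    return out

# level-by-level amplitudes of exp(T) = sum_k T^k / k!
wave = {0: 1.0}
power = {0: 1.0}
for k in range(1, 5):             # k up to 4 is exact for levels <= 4
    power = poly_mul(power, t)
    for level, c in power.items():
        wave[level] = wave.get(level, 0.0) + c / factorial(k)

print(wave[4], 0.5 * t[2] ** 2)   # quadruple weight vs. the 1/2 t2^2 term
```

With these numbers the quadruple-level weight is about 0.0208, of which 0.02 comes from the (1/2) t2² product, illustrating why unlinked double-excitation products dominate the quadruples.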
After writing Ψ in terms of an exponential operator, one is faced with determining
the amplitudes of the various single, double, etc. excitations generated by the T operator
acting on Φ. This is done by writing the Schrödinger equation as:

H exp(T) Φ = E exp(T) Φ,

and then multiplying on the left by exp(-T) to obtain:

exp(-T) H exp(T) Φ = E Φ.

The CC energy is then calculated by multiplying this equation on the left by Φ* and
integrating over the coordinates of all the electrons:

<Φ| exp(-T) H exp(T) |Φ> = E.
In practice, the combination of operators appearing in this expression is rewritten and
dealt with as follows:

E = <Φ| H + [H,T] + 1/2 [[H,T],T] + 1/6 [[[H,T],T],T] + 1/24 [[[[H,T],T],T],T] |Φ>;

this so-called Baker-Campbell-Hausdorff expansion of the exponential operators can be
shown to truncate exactly after the fourth-power term shown here. So, once the various
operators and their amplitudes that comprise T are known, E is computed using the above
expression that involves various powers of the T operators.
The equations used to find the amplitudes (e.g., those of the T2 operator Σ_a,b,m,n
t_ab^mn T_ab^mn, where the t_ab^mn are the amplitudes and the T_ab^mn are the excitation
operators) of the various excitation levels are obtained by multiplying the above
Schrödinger equation on the left by an excited determinant of that level and integrating.
For example, the equation for the double excitations is:

0 = <Φ_ab^mn| H + [H,T] + 1/2 [[H,T],T] + 1/6 [[[H,T],T],T] + 1/24 [[[[H,T],T],T],T] |Φ>.

The zero arises from the fact that <Φ_ab^mn|Φ> = 0; that is, the determinants are
orthonormal. The number of such equations is equal to the number of doubly excited
determinants Φ_ab^mn, which is equal to the number of unknown t_ab^mn amplitudes. So, the
above quartic equations must be solved to determine the amplitudes appearing in the
various TJ operators. Then, as noted above, once these amplitudes are known, the energy
E can be computed using the earlier quartic equation.
Clearly, the CC method contains additional complexity as a result of the
exponential expansion form of the wave function Ψ. However, it is this way of writing Ψ
that allows us to automatically build in the fact that products of double excitations are the
dominant contributors to quadruple excitations (and T2T2T2 is the dominant component
of six-fold excitations, not T6). In fact, the CC method is today the most accurate tool
we have for calculating molecular electronic energies and wave functions.
d. The Density Functional Method
These approaches provide alternatives to the conventional tools of quantum
chemistry, which move beyond the single-configuration picture by adding to the wave
function more configurations whose amplitudes they each determine in their own way.
As noted earlier, these conventional approaches can lead to a very large number of CSFs
in the correlated wave function and, as a result, a need for extraordinary computer
resources.
The density functional approaches are different. Here one solves a set of orbital-
level equations

[− ħ²/2me ∇² − Σa Zae²/|r−Ra| + ∫ ρ(r′)e²/|r−r′| dr′ + U(r)] φi = εi φi

in which the orbitals {φi} 'feel' potentials due to the nuclear centers (having charges Za),
Coulombic interaction with the total electron density ρ(r′), and a so-called exchange-
correlation potential denoted U(r). The particular electronic state for which the
calculation is being performed is specified by forming a corresponding density ρ(r).
Before going further in describing how DFT calculations are carried out, let us examine
the origins underlying this theory.
The so-called Hohenberg-Kohn theorem states that the ground-state electron
density ρ(r) describing an N-electron system uniquely determines the potential V(r) in
the molecule's electronic Hamiltonian

H = Σj {− ħ²/2me ∇j² + V(rj) + e²/2 Σk≠j 1/rj,k},

and, because H determines the ground-state energy and wave function of the system, the
ground-state density ρ(r) therefore determines the ground-state properties of the system.
The fact that ρ(r) determines V(r) is important because it is V(r) that specifies where the
nuclei are located.
The proof of this theorem proceeds as follows:
a. ρ(r) determines the number of electrons N because ∫ ρ(r) d³r = N.
b. Assume that there are two distinct potentials (aside from an additive constant that
simply shifts the zero of total energy) V(r) and V′(r) which, when used in H and H′,
respectively, to solve for a ground state produce E0, Ψ(r) and E0′, Ψ′(r) that have the
same one-electron density: ∫ |Ψ|² dr2 dr3 ... drN = ρ(r) = ∫ |Ψ′|² dr2 dr3 ... drN.
c. If we think of Ψ′ as a trial variational wave function for the Hamiltonian H, we know
that

E0 < <Ψ′|H|Ψ′> = <Ψ′|H′|Ψ′> + ∫ ρ(r) [V(r) − V′(r)] d³r = E0′ + ∫ ρ(r) [V(r) − V′(r)] d³r.

d. Similarly, taking Ψ as a trial function for the H′ Hamiltonian, one finds that

E0′ < E0 + ∫ ρ(r) [V′(r) − V(r)] d³r.

e. Adding the inequalities in c and d gives

E0 + E0′ < E0 + E0′,

a clear contradiction unless the electronic state of interest is degenerate.
Hence, there cannot be two distinct potentials V and V′ that give the same non-
degenerate ground-state ρ(r). So, the ground-state density ρ(r) uniquely determines N
and V, and thus H, and therefore Ψ and E0. Furthermore, because Ψ determines all
properties of the ground state, then ρ(r), in principle, determines all such properties. This
means that even the kinetic energy and the electron-electron interaction energy of the
ground state are determined by ρ(r). It is easy to see that ∫ ρ(r) V(r) d³r = V[ρ] gives the
average value of the electron-nuclear (plus any additional one-electron additive potential)
interaction in terms of the ground-state density ρ(r). However, how are the kinetic energy
T[ρ] and the electron-electron interaction energy Vee[ρ] expressed in terms of ρ?
The main difficulty with DFT is that the Hohenberg-Kohn theorem shows that the
ground-state values of T, Vee, V, etc. are all unique functionals of the ground-state ρ (i.e.,
that they can, in principle, be determined once ρ is given), but it does not tell us what
these functional relations are.
To see how it might make sense that a property such as the kinetic energy, whose
operator −ħ²/2me ∇² involves derivatives, can be related to the electron density, consider a
simple system of N non-interacting electrons moving in a three-dimensional cubic "box"
potential. The energy states of such electrons are known to be

E = (h²/8meL²) (nx² + ny² + nz²),

where L is the length of the box along the three axes, and nx, ny, and nz are the quantum
numbers describing the state. We can view nx² + ny² + nz² = R² as defining the squared
radius of a sphere in three dimensions, and we realize that the density of quantum states
in this space is one state per unit volume in the nx, ny, nz space. Because nx, ny, and nz
must be positive integers, the volume covering all states with energy less than or equal to
a specified energy E = (h²/8meL²) R² is 1/8 the volume of the sphere of radius R:

Φ(E) = 1/8 (4π/3) R³ = (π/6) (8meL²E/h²)^(3/2).

Since there is one state per unit of such volume, Φ(E) is also the number of states with
energy less than or equal to E, and is called the integrated density of states. The number
of states g(E) dE with energy between E and E+dE, the density of states, is the derivative
of Φ:

g(E) = dΦ/dE = (π/4) (8meL²/h²)^(3/2) E^(1/2).
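The counting argument behind Φ(E) is easy to check numerically. The short Python sketch below (dimensions chosen arbitrarily for illustration) counts the integer triples (nx, ny, nz ≥ 1) with nx² + ny² + nz² ≤ R² by brute force and compares the count with (π/6)R³; the two agree to within the expected surface corrections, which shrink as R grows.

```python
import math

def count_states(R):
    """Count box states (integers nx, ny, nz >= 1) with nx^2+ny^2+nz^2 <= R^2."""
    R2 = R * R
    count = 0
    for nx in range(1, R + 1):
        for ny in range(1, R + 1):
            rem = R2 - nx * nx - ny * ny
            if rem >= 1:
                count += math.isqrt(rem)   # number of nz with nz^2 <= rem
    return count

R = 40
exact = count_states(R)
approx = (math.pi / 6) * R**3   # Phi(E) evaluated at E = (h^2/8 me L^2) R^2
print(exact, approx)
```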
If we calculate the total energy for N electrons that doubly occupy all of the states
having energies up to the so-called Fermi energy EF (i.e., the energy of the highest
occupied molecular orbital, or HOMO), we obtain the ground-state energy:

E0 = 2 ∫0^EF g(E) E dE = (8π/5) (2me/h²)^(3/2) L³ EF^(5/2).

The total number of electrons N can be expressed as

N = 2 ∫0^EF g(E) dE = (8π/3) (2me/h²)^(3/2) L³ EF^(3/2),
which can be solved for EF in terms of N to then express E0 in terms of N instead of in
terms of EF:

E0 = (3h²/10me) (3/8π)^(2/3) L³ (N/L³)^(5/3).

This gives the total energy, which is also the kinetic energy in this case because the
potential energy is zero within the "box", in terms of the electron density ρ(x,y,z) =
(N/L³). It therefore may be plausible to express kinetic energies in terms of electron
densities ρ(r), but it is by no means clear how to do so for "real" atoms and molecules
with electron-nuclear and electron-electron interactions operative.
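The N^(5/3) scaling can also be verified directly: fill the lowest box levels with N electrons, two per level, and compare the summed energy with the formula just derived. The Python sketch below works in units where h = me = L = 1 (so each level energy is (nx² + ny² + nz²)/8); the agreement is only good to within finite-size corrections, which die off slowly as N grows.

```python
import math

# Box levels in units where h = me = L = 1: E = (nx^2 + ny^2 + nz^2)/8
nmax = 40
levels = sorted((nx * nx + ny * ny + nz * nz) / 8.0
                for nx in range(1, nmax + 1)
                for ny in range(1, nmax + 1)
                for nz in range(1, nmax + 1))

N = 4000  # electrons, doubly occupying the N/2 lowest levels
E_sum = 2.0 * sum(levels[: N // 2])
E_formula = (3.0 / 10.0) * (3.0 / (8.0 * math.pi)) ** (2.0 / 3.0) * N ** (5.0 / 3.0)
print(E_sum, E_formula)
```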
In one of the earliest DFT models, the Thomas-Fermi theory, the kinetic energy of
an atom or molecule is approximated using the above kind of treatment on a "local" level.
That is, for each volume element in r space, one assumes the expression given above to
be valid, and then one integrates over all r to compute the total kinetic energy:

TTF[ρ] = ∫ (3h²/10me) (3/8π)^(2/3) [ρ(r)]^(5/3) d³r = CF ∫ [ρ(r)]^(5/3) d³r,

where the last equality simply defines the CF constant. Ignoring the correlation and
exchange contributions to the total energy, this T is combined with the electron-nuclear V
and Coulombic electron-electron potential energies to give the Thomas-Fermi total
energy:

E0,TF[ρ] = CF ∫ [ρ(r)]^(5/3) d³r + ∫ V(r) ρ(r) d³r + e²/2 ∫∫ ρ(r) ρ(r′)/|r−r′| d³r d³r′.
This expression is an example of how E0 is given as a local density functional
approximation (LDA). The term local means that the energy is given as a functional (i.e.,
a function of ρ) that depends on ρ(r) only point-by-point; it does not involve ρ(r) at more
than one point in space or any spatial derivatives of ρ(r).
Unfortunately, the Thomas-Fermi energy functional does not produce results that
are of sufficiently high accuracy to be of great use in chemistry. What is missing in this
theory are (a) the exchange energy and (b) the electron correlation energy. Moreover, the
kinetic energy is treated only in the approximate manner described above.
Dirac was able to address the exchange energy for the 'uniform electron gas' (N
Coulomb-interacting electrons moving in a uniform positive background charge whose
magnitude balances the charge of the N electrons). If the exact expression for the
exchange energy of the uniform electron gas is applied on a local level, one obtains the
commonly used Dirac local density approximation to the exchange energy:

Eex,Dirac[ρ] = − Cx ∫ [ρ(r)]^(4/3) d³r,

with Cx = (3/4) (3/π)^(1/3). Adding this exchange energy to the Thomas-Fermi total energy
E0,TF[ρ] gives the so-called Thomas-Fermi-Dirac (TFD) energy functional.
Because electron densities vary rather strongly spatially near the nuclei,
corrections to the above approximations to T[ρ] and Eex,Dirac are needed. One of the more
commonly used so-called gradient-corrected approximations is that invented by Becke,
and referred to as the Becke88 exchange functional:

Eex(Becke88) = Eex,Dirac[ρ] − γ ∫ x² ρ^(4/3) (1 + 6γ x sinh⁻¹(x))⁻¹ dr,

where x = ρ^(−4/3) |∇ρ|, and γ is a parameter chosen so that the above exchange energy can
best reproduce the known exchange energies of specific electronic states of the inert gas
atoms (Becke finds γ to equal 0.0042). A common gradient correction to the earlier T[ρ]
is called the Weizsäcker correction and is given by

δTWeizsäcker = (1/72) (ħ²/me) ∫ |∇ρ(r)|²/ρ(r) dr.
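To get a feel for these functionals, one can evaluate them for a density that is known exactly. The Python sketch below (my own illustration, in atomic units, where CF = (3/10)(3π²)^(2/3) and Cx = (3/4)(3/π)^(1/3)) applies the Thomas-Fermi, Dirac, and Weizsäcker expressions to the hydrogen-atom 1s density ρ(r) = e^(−2r)/π on a radial grid. For this density the Weizsäcker integral can be done by hand (|∇ρ| = 2ρ, so ∫ |∇ρ|²/ρ d³r = 4), giving δTW = 1/18, which the grid calculation reproduces.

```python
import numpy as np

r = np.linspace(1e-6, 30.0, 200_000)      # radial grid, atomic units
dr = r[1] - r[0]
rho = np.exp(-2.0 * r) / np.pi            # hydrogen 1s density
w = 4.0 * np.pi * r**2                    # spherical volume element

CF = (3.0 / 10.0) * (3.0 * np.pi**2) ** (2.0 / 3.0)   # Thomas-Fermi constant
Cx = (3.0 / 4.0) * (3.0 / np.pi) ** (1.0 / 3.0)       # Dirac exchange constant

integrate = lambda f: float(np.sum(f) * dr)           # simple quadrature

N    = integrate(rho * w)                             # should be 1 electron
T_TF = CF * integrate(rho ** (5.0 / 3.0) * w)
E_x  = -Cx * integrate(rho ** (4.0 / 3.0) * w)
dT_W = (1.0 / 72.0) * integrate(np.gradient(rho, r) ** 2 / rho * w)

print(N, T_TF, E_x, dT_W)
```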
Although the above discussion suggests how one might compute the ground-state
energy once the ground-state density ρ(r) is given, one still needs to know how to obtain
ρ. Kohn and Sham (KS) introduced a set of so-called KS orbitals obeying the following
equation:

{− ħ²/2me ∇² + V(r) + e² ∫ ρ(r′)/|r−r′| dr′ + Uxc(r)} φj = εj φj,

where the so-called exchange-correlation potential Uxc(r) = δExc[ρ]/δρ(r) could be
obtained by functional differentiation if the exchange-correlation energy functional Exc[ρ]
were known. KS also showed that the KS orbitals {φj} could be used to compute the
density ρ by simply adding up the orbital densities multiplied by orbital occupancies nj:

ρ(r) = Σj nj |φj(r)|²

(here nj = 0, 1, or 2 is the occupation number of the orbital φj in the state being studied)
and that the kinetic energy should be calculated as

T = Σj nj <φj(r)| − ħ²/2me ∇² |φj(r)>.
The same investigations of the idealized 'uniform electron gas' that identified the
Dirac exchange functional found that the correlation energy (per electron) could also be
written exactly as a function of the electron density ρ of the system, but only in two
limiting cases: the high-density limit (large ρ) and the low-density limit. There still exists
no exact expression for the correlation energy, even for the uniform electron gas, that is
valid at arbitrary values of ρ. Therefore, much work has been devoted to creating
efficient and accurate interpolation formulas connecting the low- and high-density limits
of the uniform electron gas. One such expression is

EC[ρ] = ∫ ρ(r) εc(ρ) dr,

where

εc(ρ) = A/2 {ln(x²/X) + 2b/Q tan⁻¹(Q/(2x+b)) − b x0/X0 [ln((x−x0)²/X) + 2(b+2x0)/Q tan⁻¹(Q/(2x+b))]}

is the correlation energy per electron. Here x = rs^(1/2), X = x² + bx + c, X0 = x0² + bx0 + c,
and Q = (4c − b²)^(1/2), with A = 0.0621814, x0 = −0.409286, b = 13.0720, and c = 42.7198. The
parameter rs is how the density ρ enters, since (4/3)πrs³ is equal to 1/ρ; that is, rs is the radius
of a sphere whose volume is the effective volume occupied by one electron.
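A direct implementation of this interpolation formula is only a few lines. The Python sketch below uses the constants quoted above (energies in Hartrees); for typical metallic densities, rs of roughly 1 to 10 bohr, it yields correlation energies per electron of a few hundredths of a Hartree, negative as they must be, and smaller in magnitude as the gas becomes more dilute.

```python
import math

# Interpolated correlation energy per electron, eps_c(rs), using the
# constants quoted in the text (energies in Hartrees).
A, x0, b, c = 0.0621814, -0.409286, 13.0720, 42.7198

def eps_c(rs):
    x = math.sqrt(rs)
    X = x * x + b * x + c
    X0 = x0 * x0 + b * x0 + c
    Q = math.sqrt(4.0 * c - b * b)
    atn = math.atan(Q / (2.0 * x + b))
    return (A / 2.0) * (
        math.log(x * x / X) + 2.0 * b / Q * atn
        - (b * x0 / X0) * (math.log((x - x0) ** 2 / X)
                           + 2.0 * (b + 2.0 * x0) / Q * atn))

print(eps_c(1.0), eps_c(10.0))
```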
A reasonable approximation to the full Exc[ρ] would contain the Dirac (and perhaps
gradient-corrected) exchange functional plus the above EC[ρ], but there are many
alternative approximations to the exchange-correlation energy functional. Currently,
many workers are doing their best to "cook up" functionals for the correlation and
exchange energies, but no one has yet invented functionals that are so reliable that most
workers agree to use them.
To summarize, in implementing any DFT, one usually proceeds as follows:
1. An atomic orbital basis is chosen in terms of which the KS orbitals are to be expanded.
2. Some initial guess is made for the LCAO-KS expansion coefficients Cj,a: φj = Σa Cj,a χa.
3. The density is computed as ρ(r) = Σj nj |φj(r)|². Often, ρ(r) itself is expanded in an
atomic orbital basis, which need not be the same as the basis used for the φj, and the
expansion coefficients of ρ are computed in terms of those of the φj. It is also common to
use an atomic orbital basis to expand ρ^(1/3)(r), which, together with ρ, is needed to evaluate
the exchange-correlation functional's contribution to E0.
4. The current iteration's density is used in the KS equations to determine the
Hamiltonian {− ħ²/2me ∇² + V(r) + e² ∫ ρ(r′)/|r−r′| dr′ + Uxc(r)}, whose "new"
eigenfunctions {φj} and eigenvalues {εj} are found by solving the KS equations.
5. These new φj are used to compute a new density, which, in turn, is used to solve a new
set of KS equations. This process is continued until convergence is reached (i.e., until the
φj used to determine the current iteration's ρ are the same φj that arise as solutions on the
next iteration).
6. Once the converged ρ(r) is determined, the energy can be computed using the earlier
expression

E[ρ] = Σj nj <φj(r)| − ħ²/2me ∇² |φj(r)> + ∫ V(r) ρ(r) dr + e²/2 ∫∫ ρ(r) ρ(r′)/|r−r′| dr dr′ + Exc[ρ].
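The iterative cycle in steps 2 through 5 can be sketched in a few lines. The Python toy below is emphatically not a real DFT code: the "basis", the core Hamiltonian, the model Coulomb kernel K, and the crude diagonal stand-in for Uxc are all invented solely to exhibit the self-consistency loop (guess a density, build the KS matrix, diagonalize, rebuild the density, repeat until nothing changes).

```python
import numpy as np

rng = np.random.default_rng(0)
nbf, nocc = 4, 2                      # basis functions, doubly occupied orbitals

h0 = rng.normal(size=(nbf, nbf)); h0 = (h0 + h0.T) / 2        # core part
K = 0.1 * np.abs(rng.normal(size=(nbf, nbf))); K = (K + K.T) / 2  # model kernel
cx = 0.05                             # strength of the toy "Uxc"

def ks_matrix(d):
    # model KS operator: core + Hartree-like term + crude density-dependent Uxc
    return h0 + np.diag(K @ d) - cx * np.diag(np.cbrt(d))

d = np.full(nbf, 2.0 * nocc / nbf)    # step 2: initial guess for the density
for it in range(500):
    eps, C = np.linalg.eigh(ks_matrix(d))           # step 4: solve KS equations
    d_new = 2.0 * np.sum(C[:, :nocc] ** 2, axis=1)  # steps 3/5: rebuild density
    err = np.abs(d_new - d).max()
    d = 0.5 * (d + d_new)             # damped update for numerical stability
    if err < 1e-12:                   # step 5: converged?
        break

print(it, err, d)
```

The damping factor of 0.5 is a common practical trick; production codes use more sophisticated convergence accelerators (e.g., DIIS).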
e. Energy Difference Methods
In addition to the methods discussed above for treating the energies and wave
functions as solutions to the electronic Schr?dinger equation, there exists a family of
tools that allow one to compute energy differences ¡°directly¡± rather than by finding the
energies of pairs of states and subsequently subtracting them. Various energy differences
can be so computed: differences between two electronic states of the same molecule (i.e.,
electronic excitation energies ?E), differences between energy states of a molecule and
the cation or anion formed by removing or adding an electron (i.e., ionization potentials
(IPs) and electron affinities (EAs)).
Because of space limitations, we will not be able to elaborate much further on
these methods. However, it is important to stress that:
1. These so-called Green's function or propagator methods utilize essentially the same
input information (e.g., atomic orbital basis sets) and perform many of the same
computational steps (e.g., evaluation of one- and two-electron integrals, formation of a
set of mean-field molecular orbitals, transformation of integrals to the MO basis, etc.) as
do the other techniques discussed earlier.
2. These methods are now rather routinely used when ΔE, IP, or EA information is
sought.
The basic ideas underlying most if not all of the energy-difference methods are:
1. One forms a reference wave function Ψ (this can be of the SCF, MPn, CI, CC, DFT,
etc. variety); the energy differences are computed relative to the energy of this function.
2. One expresses the final-state wave function Ψ′ (i.e., that describing the excited, cation,
or anion state) in terms of an operator Ω acting on the reference Ψ: Ψ′ = Ω Ψ. Clearly,
the Ω operator must be one that removes or adds an electron when one is attempting to
compute IPs or EAs, respectively.
3. One writes equations that Ψ and Ψ′ are expected to obey. For example, in the early
development of these methods, the Schrödinger equation itself was assumed to be
obeyed, so HΨ = EΨ and HΨ′ = E′Ψ′ are the two equations.
4. One combines ΩΨ = Ψ′ with the equations that Ψ and Ψ′ obey to obtain an equation
that Ω must obey. In the above example, one (a) uses ΩΨ = Ψ′ in the Schrödinger
equation for Ψ′, (b) allows Ω to act from the left on the Schrödinger equation for Ψ, and
(c) subtracts the resulting two equations to achieve (HΩ − ΩH)Ψ = (E′ − E)ΩΨ, or, in
commutator form, [H,Ω]Ψ = ΔE ΩΨ.
5. One can, for example, express Ψ in terms of a superposition of configurations Ψ = ΣJ
CJ ΦJ whose amplitudes CJ have been determined from a CI or MPn calculation and
express Ω in terms of operators {OK} that cause single-, double-, etc. level excitations
(for the IP (EA) cases, Ω is given in terms of operators that remove (add), remove and
singly excite (add and singly excite), etc. electrons): Ω = ΣK DK OK.
6. Substituting the expansions for Ψ and for Ω into the equation of motion (EOM) [H,Ω]
Ψ = ΔE ΩΨ, and then projecting the resulting equation on the left against a set of
functions (e.g., {OK′ |Ψ>}), gives a matrix eigenvalue-eigenvector equation

ΣK <OK′Ψ| [H,OK] Ψ> DK = ΔE ΣK <OK′Ψ|OKΨ> DK

to be solved for the DK operator coefficients and the excitation (or IP or EA) energies
ΔE. Such are the working equations of the EOM (or Green's function or propagator)
methods.
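The content of the commutator equation of motion is easy to verify in a finite-dimensional model. In the Python sketch below, a random symmetric matrix stands in for H, and the operator Ω = |Ψf><Ψ0| carries the reference state onto a chosen final state; [H,Ω]Ψ0 = ΔE ΩΨ0 then holds to machine precision.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
H = rng.normal(size=(n, n)); H = (H + H.T) / 2   # model Hamiltonian
E, U = np.linalg.eigh(H)
psi0, psif = U[:, 0], U[:, 3]          # "reference" and "final" eigenstates

Omega = np.outer(psif, psi0)           # operator mapping psi0 onto psif
lhs = (H @ Omega - Omega @ H) @ psi0   # [H, Omega] psi0
rhs = (E[3] - E[0]) * (Omega @ psi0)   # Delta-E times Omega psi0
print(np.abs(lhs - rhs).max())
```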
In recent years, these methods have been greatly expanded and have reached a
degree of reliability where they now offer some of the most accurate tools for studying
excited and ionized states. In particular, the use of time-dependent variational principles
has allowed a much more rigorous development of equations for energy differences and
non-linear response properties. In addition, the extension of the EOM theory to include
coupled-cluster reference functions now allows one to compute excitation and ionization
energies using some of the most accurate ab initio tools.
f. The Slater-Condon Rules
68
To form Hamiltonian matrix elements HK,L between any pair of Slater
determinants, one uses the so-called Slater-Condon rules. These rules express all non-
vanishing matrix elements involving either one- or two- electron operators. One-electron
operators are additive and appear as
F = ¦²i f(i);
two-electron operators are pairwise additive and appear as
G = ¦²ij g(i,j)).
The Slater-Condon rules give the matrix elements between two determinants

| > = |φ1φ2φ3... φN|

and

| '> = |φ′1φ′2φ′3...φ′N|

for any quantum mechanical operator that is a sum of one- and two-electron operators (F
+ G). They express these matrix elements in terms of one- and two-electron integrals
involving the spin-orbitals that appear in | > and | '> and the operators f and g.
As a first step in applying these rules, one must examine | > and | '> and determine
by how many (if any) spin-orbitals | > and | '> differ. In so doing, one may have to
reorder the spin-orbitals in one of the determinants to achieve maximal coincidence with
those in the other determinant; it is essential to keep track of the number of permutations
(Np) that one makes in achieving maximal coincidence. The results of the Slater-Condon
rules given below are then multiplied by (−1)^Np to obtain the matrix elements between
the original | > and | '>. The final result does not depend on whether one chooses to
permute | > or | '>.
The Hamiltonian is, of course, a specific example of such an operator; the electric
dipole operator Σi eri and the electronic kinetic energy − ħ²/2me Σi ∇i² are examples of
one-electron operators (for which one takes g = 0); the electron-electron Coulomb
interaction Σi>j e²/rij is a two-electron operator (for which one takes f = 0).
Once maximal coincidence has been achieved, the Slater-Condon (SC) rules
provide the following prescriptions for evaluating the matrix elements of any operator F
+ G containing a one-electron part F = Σi f(i) and a two-electron part G = Σi>j g(i,j):
(i) If | > and | '> are identical, then

< | F + G | > = Σi <φi| f |φi> + Σi>j [<φiφj| g |φiφj> − <φiφj| g |φjφi>],

where the sums over i and j run over all spin-orbitals in | >;
(ii) If | > and | '> differ by a single spin-orbital mismatch (φp ≠ φ′p),

< | F + G | '> = <φp| f |φ′p> + Σj [<φpφj| g |φ′pφj> − <φpφj| g |φjφ′p>],

where the sum over j runs over all spin-orbitals in | > except φp;
(iii) If | > and | '> differ by two spin-orbitals (φp ≠ φ′p and φq ≠ φ′q),

< | F + G | '> = <φpφq| g |φ′pφ′q> − <φpφq| g |φ′qφ′p>

(note that the F contribution vanishes in this case);
(iv) If | > and | '> differ by three or more spin-orbitals, then

< | F + G | '> = 0;

(v) For the identity operator I, the matrix elements < | I | '> = 0 if | > and | '> differ by one
or more spin-orbitals (i.e., the Slater determinants are orthonormal if their spin-orbitals
are).
Recall that each of these results is subject to multiplication by a factor of (−1)^Np to
account for possible ordering differences in the spin-orbitals in | > and | '>.
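For two electrons, the rules can be checked against a brute-force calculation. In the Python sketch below (a toy model of my own: the "spin-orbitals" are orthonormal vectors on a 6-point grid, f is a local one-electron potential, and g is a symmetric two-point kernel, all invented purely for the test), cases (i) through (iii) of the rules reproduce the matrix elements computed explicitly from the antisymmetrized two-electron wave functions.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 6
phi = np.linalg.qr(rng.normal(size=(m, m)))[0]    # orthonormal "spin-orbitals"
fdiag = rng.normal(size=m)                        # local one-electron operator f
g = rng.normal(size=(m, m)); g = (g + g.T) / 2    # symmetric kernel g(r, r')

def one(i, j):                                    # <phi_i | f | phi_j>
    return float(np.sum(phi[:, i] * fdiag * phi[:, j]))

def two(i, j, k, l):                              # <i j | g | k l>, Dirac notation
    return float((phi[:, i] * phi[:, k]) @ g @ (phi[:, j] * phi[:, l]))

def det2(a, b):                                   # normalized 2-electron determinant
    return (np.outer(phi[:, a], phi[:, b])
            - np.outer(phi[:, b], phi[:, a])) / np.sqrt(2.0)

op = fdiag[:, None] + fdiag[None, :] + g          # (F + G)(r, r'), multiplicative

def brute(D1, D2):                                # explicit <D1 | F + G | D2>
    return float(np.sum(D1 * op * D2))

# case (i): identical determinants |0 1| and |0 1|
sc_same = one(0, 0) + one(1, 1) + two(0, 1, 0, 1) - two(0, 1, 1, 0)
# case (ii): one mismatch, |0 1| versus |0 2|
sc_one = one(1, 2) + two(1, 0, 2, 0) - two(1, 0, 0, 2)
# case (iii): two mismatches, |0 1| versus |2 3|
sc_two = two(0, 1, 2, 3) - two(0, 1, 3, 2)

print(sc_same - brute(det2(0, 1), det2(0, 1)),
      sc_one - brute(det2(0, 1), det2(0, 2)),
      sc_two - brute(det2(0, 1), det2(2, 3)))
```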
In these expressions,

<φi| f |φj>

is used to denote the one-electron integral

∫ φ*i(r) f(r) φj(r) dr

and

<φiφj| g |φkφl>

(or, in shorthand notation, <i j| k l>) represents the two-electron integral

∫ φ*i(r) φ*j(r′) g(r,r′) φk(r) φl(r′) dr dr′.

The notation <i j| k l> introduced above gives the two-electron integrals for the
g(r,r′) operator in the so-called Dirac notation, in which the i and k indices label the spin-
orbitals that refer to the coordinates r and the j and l indices label the spin-orbitals
referring to coordinates r′. The r and r′ denote r,θ,φ,σ and r′,θ′,φ′,σ′ (with σ and σ′ being
the α or β spin functions).
If the operators f and g do not contain any electron spin operators, then the spin
integrations implicit in these integrals (all of the φi are spin-orbitals, so each φ is
accompanied by an α or β spin function and each φ* involves the adjoint of one of the α
or β spin functions) can be carried out as <α|α> = 1, <α|β> = 0, <β|α> = 0, <β|β> = 1,
thereby yielding integrals over spatial orbitals.
g. Atomic Units

The electronic Hamiltonian that appears throughout this text is commonly
expressed in the literature and in other texts in so-called atomic units (a.u.). In that form,
it is written as follows:

He = Σj {(−1/2) ∇j² − Σa Za/rj,a} + Σj<k 1/rj,k.

Atomic units are introduced to remove all of the ħ, e, and me factors from the
Schrödinger equation.
To effect the unit transformation that results in the Hamiltonian appearing as
above, one notes that the kinetic energy operator scales as rj⁻² whereas the Coulomb
potentials scale as rj⁻¹ and as rj,k⁻¹. So, if each of the Cartesian coordinates of the
electrons and nuclei were expressed as a unit of length a0 multiplied by a dimensionless
length factor, the kinetic energy operator would involve terms of the form
(− ħ²/2(a0)²me) ∇j², and the Coulomb potentials would appear as Zae²/(a0)rj,a and
e²/(a0)rj,k, with the rj,a and rj,k factors now referring to the dimensionless coordinates. A
factor of e²/a0 (which has units of energy since a0 has units of length) can then be
removed from the Coulomb and kinetic energies, after which the kinetic energy terms
appear as (− ħ²/2(e²a0)me) ∇j² and the potential energies appear as Za/rj,a and 1/rj,k.
Then, choosing a0 = ħ²/e²me changes the kinetic energy terms into −1/2 ∇j²; as a result,
the entire electronic Hamiltonian takes the form given above in which no e², me, or ħ
factors appear. The value of the so-called Bohr radius a0 = ħ²/e²me turns out to be 0.529
Å, and the so-called Hartree energy unit e²/a0, which factors out of He, is 27.21 eV or
627.51 kcal/mol.
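These conversion factors can be recovered from the SI values of the fundamental constants (in SI units the Gaussian-units e² above becomes e²/4πε0). A quick Python check:

```python
import math

# CODATA SI values of the fundamental constants
hbar = 1.054571817e-34     # J s
me   = 9.1093837015e-31    # kg
e    = 1.602176634e-19     # C
eps0 = 8.8541878128e-12    # F / m

a0 = 4.0 * math.pi * eps0 * hbar**2 / (me * e**2)   # Bohr radius, m
hartree = hbar**2 / (me * a0**2)                    # = e^2/(4 pi eps0 a0), J

a0_angstrom = a0 * 1e10
hartree_ev = hartree / e
hartree_kcal_mol = hartree * 6.02214076e23 / 4184.0
print(a0_angstrom, hartree_ev, hartree_kcal_mol)
```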
C. Molecules Embedded in Condensed Media

Often one wants to model the behavior of a molecule or ion that is not isolated as
it might be in a gas-phase experiment. When one attempts to describe a system that is
embedded, for example, in a crystal lattice, in a liquid, or in a glass, one has to have some
way to treat both the effects of the surrounding "medium" on the molecule of interest and
the motions of the medium's constituents. In so-called quantum mechanics-molecular
mechanics (QM-MM) approaches to this problem, one treats the molecule or ion of
interest using the electronic structure methods outlined earlier in this Chapter, but with
one modification. The one-electron component of the Hamiltonian, which contains the
electron-nuclei Coulomb potential Σa,i (−Zae²/|ri − Ra|), is modified to also contain a term
that describes the potential energy of interaction of the electrons and nuclei with the
surrounding medium. In the simplest such models, this solvation potential depends only
on the dielectric constant of the surroundings. In more sophisticated models, the
surroundings are represented by a collection of (fractional) point charges that may also be
given local dipole moments and polarizabilities that allow them to respond to
changes in the internal charge distribution of the molecule or ion. The locations of such
partial charges and the magnitudes of their dipoles and polarizabilities are determined to
make the resultant solvation potential reproduce known (from experiment or other
simulations) solvation characteristics (e.g., solvation energy, radial distribution functions)
in a variety of calibration cases.
In addition to describing how the surroundings affect the Hamiltonian of the
molecule or ion of interest, one needs to describe the motions or spatial distributions of
the medium's constituent atoms or molecules. This is usually done within a purely
classical treatment of these degrees of freedom. That is, if equilibrium properties of the
solvated system are to be simulated, then Monte-Carlo (MC) sampling (this subject is
treated in Chapter 7) of the surrounding medium's coordinates is used. Within such a MC
sampling, the potential energy of the entire system is calculated as a sum of two parts:
i. the electronic energy of the solute molecule or ion, which contains the interaction
energy of the molecule's electrons and nuclei with the surrounding medium, plus
ii. the intra-medium potential energy, which is taken to be of a simple molecular
mechanics (MM) force field character (i.e., to depend on inter-atomic distances and
internal angles in an analytical and easily computed manner).
If, alternatively, dynamical characteristics of the solvated species are to be simulated, a
classical molecular dynamics (MD) treatment is used. In this approach, the solute-
medium and internal-medium potential energies are handled in the same way as in the
MC case, but the time evolution of the medium's coordinates is computed using the MD
techniques discussed in Chapter 7.
D. High-End Methods for Treating Electron Correlation

Although their detailed treatment is beyond the scope of this text, it is important
to appreciate that new approaches are always under development in all areas of
theoretical chemistry. In this Section, I want to introduce you to two tools that are
proving to offer the highest precision in the treatment of electron correlation energies.
These are the so-called quantum Monte-Carlo and r1,2 approaches to this problem.
1. Quantum Monte-Carlo

In this method, one first re-writes the time-dependent Schrödinger equation

iħ dΨ/dt = − ħ²/2me Σj ∇j² Ψ + V Ψ

for negative imaginary values of the time variable t (i.e., one simply replaces t by −iτ).
This gives

dΨ/dτ = ħ/2me Σj ∇j² Ψ − (V/ħ) Ψ,

which is analogous to the well-known diffusion equation

dC/dt = D ∇²C + S C.

The re-written Schrödinger equation can be viewed as a diffusion equation in the 3N
spatial coordinates of the N electrons with a diffusion coefficient D that is related to the
electrons' mass me by

D = ħ/2me.

The so-called source and sink term S in the diffusion equation is related to the electron-
nuclear and electron-electron Coulomb potential energies denoted V:

S = − V/ħ.
In regions of space where the potential is highly attractive (i.e., where V is large and
negative), S is large and positive. This causes the concentration C of the diffusing
material to accumulate in such regions. Likewise, where V is positive, C will decrease.
Clearly, by recognizing Ψ as the "concentration" variable in this analogy, one
understands that Ψ will accumulate where V is negative and will decay where V is
positive, as one expects.
So far, we see that the "trick" of taking t to be negative and imaginary causes the
electronic Schrödinger equation to look like a 3N-dimensional diffusion equation. Why is
this useful and why does this trick "work"? It is useful because, as we see in Chapter 7,
Monte-Carlo methods are highly efficient tools for solving certain equations; it turns out
that the diffusion equation is one such case. So, the Monte-Carlo approach can be used to
solve the imaginary-time-dependent Schrödinger equation even for systems containing
many electrons. But, what does this imaginary time mean?
To understand the imaginary time trick, let us recall that any wave function Φ
(e.g., the trial wave function with which one begins to use Monte-Carlo methods to
propagate the diffusing Ψ function) can be written in terms of the exact eigenfunctions
{ΨK} of the Hamiltonian

H = − ħ²/2me Σj ∇j² + V

as follows:

Φ = ΣK CK ΨK.

If the Monte-Carlo method can, in fact, be used to propagate forward in time such a
function but with t = −iτ, then it will, in principle, generate the following function at such
an imaginary time:

Φ = ΣK CK ΨK exp(−iEKt/ħ) = ΣK CK ΨK exp(−EKτ/ħ).

As τ increases, the relative amplitudes {CK exp(−EKτ/ħ)} of all states but the lowest state
(i.e., that with smallest EK) will decay compared to the amplitude C0 exp(−E0τ/ħ) of the
lowest state. So, the time-propagated wave function will, at long enough τ, be dominated
by its lowest-energy component. In this way, the quantum Monte-Carlo propagation
method can generate a wave function in 3N dimensions that approaches the ground-state
wave function.
This approach, which tackles the N-electron correlation problem "head-on", has
proven to yield highly accurate energies and wave functions that display the proper cusps
near nuclei as well as the proper cusp behavior whenever two electrons' coordinates
approach one another. Finally, it turns out that by using a "starting function" Φ of a given
symmetry and radial nodal structure, this method can be extended to converge to the
lowest-energy state of the chosen symmetry and nodal structure. So, the method can be
used on excited states also. In the next Chapter, you will learn how the Monte-Carlo tools
can be used to simulate the behavior of many-body systems (e.g., the N-electron system
we just discussed) in a highly efficient and easily parallelized manner.
2. The r1,2 Method

In this approach to electron correlation, one employs a trial variational wave
function that contains components that depend on the inter-electron distances ri,j
explicitly. By so doing, one does not rely on the polarized orbital-pair approach
introduced earlier in this Chapter to represent all of the correlations among the electrons.
An example of such an explicitly correlated wave function is:

ψ = |φ1 φ2 φ3 ... φN| (1 + a Σi<j ri,j),

which consists of an antisymmetrized product of N spin-orbitals multiplied by a factor
that is symmetric under interchange of any pair of electrons and contains the electron-
electron distances in addition to a single variational parameter a. Such a trial function is
said to contain linear-r1,2 correlation factors. Of course, it is possible to write many other
forms for such an explicitly correlated trial function. For example, one could use:

ψ = |φ1 φ2 φ3 ... φN| exp(−a Σi<j ri,j)

as a trial function. Both the linear and the exponential forms have been used in
developing this tool of quantum chemistry. Because the integrals that must be evaluated
when one computes the Hamiltonian expectation value <ψ|H|ψ> are most
computationally feasible (albeit still very taxing) when the linear form is used, this
particular parameterization is currently the most widely used.
Both the r1,2 and quantum Monte-Carlo methods currently are used when one
wishes to obtain the absolute highest precision in an electronic structure calculation. The
computational requirements of both of these methods are very high, so, at present, they
can only be used on species containing fewer than ca. 100 electrons. However, with the
power and speed of computers growing as fast as they are, it is likely that these high-end
methods will be more and more widely used as time goes by.
II. Experimental Probes of Electronic Structure
A. Visible and Ultraviolet Spectroscopy
Visible and ultraviolet spectroscopies are used to study transitions between states
of a molecule or ion in which the electrons' orbital occupancy changes. We call these
electronic transitions, and they usually require light in the 5000 cm⁻¹ to 100,000
cm⁻¹ regime. When such transitions occur, the initial and final states generally differ in
their electronic, vibrational, and rotational energies because any change to the electrons'
orbital occupancy will induce changes in the vibrational and rotational character.
Excitations of inner-shell and core-orbital electrons may require even higher-energy
photons, as would excitations that eject an electron. The interpretation of all such
spectroscopic data relies heavily on theory, as this Section is designed to illustrate.
1. The Electronic Transition Dipole and Use of Point Group Symmetry
The interaction of electromagnetic radiation with a molecule's electrons and nuclei
can be treated using perturbation theory. Because this is not a text specializing in
spectroscopy, we will not go into this derivation here. If you are interested in seeing this
treatment, my QMIC text covers it in some detail as do most books on molecular
spectroscopy. The result is a standard expression
Ri,f = (2π/h²) g(ωf,i) | E0 · <Ψf | μ | Ψi> |²
for the rate of photon absorption between initial Ψi and final Ψf states. In this equation,
g(ω) is the intensity of the photon source at the frequency ω, ωf,i is the frequency
corresponding to the transition under study, and E0 is the electric field vector of the
photon field. The vector μ is the electric dipole moment of the electrons and nuclei in the
molecule.
Because each of these wave functions is a product of an electronic ψe, a
vibrational and a rotational function, we realize that the electronic integral appearing in
this rate expression involves
<ψef | μ | ψei> = μf,i (R),
a transition dipole matrix element between the initial ψei and final ψef electronic wave
functions. This element is a function of the internal vibrational coordinates of the
molecule, and is a vector locked to the molecule's internal axis frame.
Molecular point-group symmetry can often be used to determine whether a
particular transition's dipole matrix element will vanish and, as a result, the electronic
transition will be "forbidden" and thus predicted to have zero intensity. If the direct
product of the symmetries of the initial and final electronic states ψei and ψef does not
match the symmetry of the electric dipole operator (which has the symmetry of its x, y,
and z components; these symmetries can be read off the rightmost column of the
character tables), the matrix element will vanish.
For example, the formaldehyde molecule H2CO has a ground electronic state that
has 1A1 symmetry in the C2v point group. Its π ==> π* singlet excited state also has 1A1
symmetry because both the π and π* orbitals are of b1 symmetry. In contrast, the lowest
n ==> π* (these orbitals are shown in Fig. 6.15) singlet excited state is of 1A2 symmetry
because the highest energy oxygen-centered non-bonding orbital is of b2 symmetry and
the π* orbital is of b1 symmetry, so the Slater determinant in which both the n and π*
orbitals are singly occupied has its symmetry dictated by the b2 x b1 direct product,
which is A2.
Figure 6.15 Electronic Transition From the Non-bonding n Orbital to the Antibonding π*
Orbital of Formaldehyde
The π ==> π* transition thus involves ground (1A1) and excited (1A1) states
whose direct product (A1 x A1) is of A1 symmetry. This transition thus requires that the
electric dipole operator possess a component of A1 symmetry. A glance at the C2v point
group's character table shows that the molecular z-axis is of A1 symmetry. Thus, if the
light's electric field has a non-zero component along the C2 symmetry axis (the
molecule's z-axis), the π ==> π* transition is predicted to be allowed. Light polarized
along either of the molecule's other two axes cannot induce this transition.
In contrast, the n ==> π* transition has a ground-excited state direct product of B2
x B1 = A2 symmetry. The C2v point group's character table shows that the electric dipole
operator (i.e., its x, y, and z components in the molecule-fixed frame) has no component
of A2 symmetry; thus, light of no electric field orientation can induce this n ==> π*
transition. We thus say that the n ==> π* transition is forbidden.
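This symmetry bookkeeping is easy to automate. The following Python sketch (the table is the standard C2v character table; the function names are my own) checks which dipole components, if any, make a transition between two 1-D irreps allowed:

```python
import numpy as np

# C2v character table rows over the operations (E, C2, sigma_v(xz), sigma_v'(yz))
irreps = {
    "A1": np.array([1,  1,  1,  1]),
    "A2": np.array([1,  1, -1, -1]),
    "B1": np.array([1, -1,  1, -1]),
    "B2": np.array([1, -1, -1,  1]),
}
dipole = {"x": "B1", "y": "B2", "z": "A1"}   # symmetries of the dipole components

def product(*labels):
    """Direct product of 1-D irreps = elementwise product of their characters."""
    chars = np.prod([irreps[l] for l in labels], axis=0)
    return next(name for name, row in irreps.items() if np.array_equal(row, chars))

def allowed(initial, final):
    """Dipole components for which <final| mu |initial> can be non-zero."""
    return [axis for axis, sym in dipole.items()
            if product(initial, sym, final) == "A1"]

print(allowed("A1", "A1"))   # pi ==> pi* : ['z'], z-polarized light only
print(allowed("A1", "A2"))   # n ==> pi*  : [], forbidden for any polarization
```

The b2 x b1 = A2 product quoted above for the n ==> π* determinant can be verified the same way with product("B2", "B1").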
The above examples illustrate one of the most important applications of visible-
UV spectroscopy. The information gained in such experiments can be used to infer the
symmetries of the electronic states and hence of the orbitals occupied in these states. It is
in this manner that this kind of experiment probes electronic structures.
2. The Franck-Condon Factors
Beyond such electronic symmetry analysis, it is also possible to derive vibrational
selection rules for electronic transitions that are allowed. It is conventional to expand
μf,i (R) in a power series about the equilibrium geometry of the initial electronic state
(since this geometry is characteristic of the molecular structure prior to photon
absorption):
μf,i(R) = μf,i(Re) + Σa (∂μf,i/∂Ra) (Ra - Ra,e) + ....
The first term in this expansion, when substituted into the integral over the vibrational
coordinates, gives μf,i(Re) <χvf | χvi>, which has the form of the electronic transition
dipole multiplied by the "overlap integral" between the initial and final vibrational wave
functions. The μf,i(Re) factor was discussed above; it is the electronic transition integral
evaluated at the equilibrium geometry of the absorbing state. Symmetry can often be used
to determine whether this integral vanishes, as a result of which the transition will be
"forbidden".
The vibrational overlap integrals <χvf | χvi> do not necessarily vanish because
χvf and χvi are eigenfunctions of different vibrational Hamiltonians. χvf is an
eigenfunction whose potential energy is the final electronic state's energy surface; χvi has
the initial electronic state's energy surface as its potential. The squares of these <χvf | χvi>
integrals, which are what eventually enter into the transition rate expression Ri,f =
(2π/h²) g(ωf,i) | E0 · <Ψf | μ | Ψi> |², are called "Franck-Condon factors". Their relative
magnitudes play strong roles in determining the relative intensities of various vibrational
"bands" (i.e., peaks) within a particular electronic transition's spectrum. In Fig. 6.16, I
show two potential energy curves and illustrate the kinds of absorption (and emission)
transitions that can occur when the two electronic states have significantly different
geometries.
Figure 6.16 Absorption From One Initial State to One Final State Followed by Relaxation
and Then Emission From the Lowest State of the Upper Surface.
Whenever an electronic transition causes a large change in the geometry (bond
lengths or angles) of the molecule, the Franck-Condon factors tend to display the
characteristic "broad progression" shown in Fig. 6.17 when considered for one initial-
state vibrational level vi and various final-state vibrational levels vf:
Figure 6.17 Broad Franck-Condon Progression Characteristic of Large Geometry Change
(|<χvi | χvf>|² plotted against the final-state vibrational energy Evf for vf = 0-6)
Notice that as one moves to higher vf values, the energy spacing between the states (Evf -
Evf-1) decreases; this, of course, reflects the anharmonicity in the excited-state vibrational
potential. For the above example, the transition to the vf = 2 state has the largest Franck-
Condon factor. This means that the overlap of the initial state's vibrational wave function
χvi is largest for the final state's χvf function with vf = 2.
As a qualitative rule of thumb, the larger the geometry difference between the
initial- and final- state potentials, the broader will be the Franck-Condon profile (as
shown in Fig. 6.17) and the larger the vf value for which this profile peaks. Differences in
harmonic frequencies between the two states can also broaden the Franck-Condon
profile.
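If one assumes, for illustration, that the two potentials are harmonic with equal frequencies and differ only by a displacement of their minima, the Franck-Condon factors out of vi = 0 take a simple closed (Poisson) form in the Huang-Rhys parameter S = d²/2, d being the dimensionless displacement. A short Python sketch (the S values are assumed examples):

```python
import math

def franck_condon_0_to_v(S, vmax=7):
    """|<chi_vi=0 | chi_vf>|^2 for two equal-frequency harmonic oscillators whose
    minima are displaced; S is the Huang-Rhys factor. The factors follow a
    Poisson distribution in vf (a standard textbook result)."""
    return [math.exp(-S) * S**vf / math.factorial(vf) for vf in range(vmax)]

small = franck_condon_0_to_v(S=0.2)   # small geometry change
large = franck_condon_0_to_v(S=2.5)   # larger geometry change

# Small displacement: profile peaks at vf = 0; larger displacement: a broad
# progression peaking near vf ~ S, as in Fig. 6.17
print(max(range(7), key=lambda v: small[v]))   # 0
print(max(range(7), key=lambda v: large[v]))   # 2
```

The broadening with increasing S mirrors the rule of thumb stated above: a larger geometry difference pushes the peak of the profile to higher vf.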
If the initial and final states have very similar geometries and frequencies along
the mode that is excited when the particular electronic excitation is realized, the type of
Franck-Condon profile shown in Fig. 6.18 may result:
Figure 6.18 Franck-Condon Profile Characteristic of Small Geometry Change
(|<χvi | χvf>|² plotted against the final-state vibrational energy Evf for vf = 0-6)
Another feature that is important to emphasize is the relation between absorption
and emission when the two states' energy surfaces have different equilibrium geometries
or frequencies. Subsequent to photon absorption to form an excited electronic state but
prior to photon emission, the molecule usually undergoes collisions with other nearby
molecules. This, of course, is especially true in condensed-phase experiments. These
collisions cause the excited molecule to lose much of its vibrational and rotational
energy, thereby "relaxing" it to lower levels on the excited electronic surface. This
relaxation process is illustrated in Fig. 6.19.
Figure 6.19 Absorption Followed by Relaxation to Lower Vibrational Levels of the
Upper State.
Subsequently, the electronically excited molecule can undergo photon emission (also
called fluorescence) to return to its ground electronic state as shown in Fig. 6.20.
Figure 6.20 Fluorescence From Lower Levels of the Upper Surface
The Franck-Condon principle discussed earlier also governs the relative intensities of the
various vibrational transitions arising in such emission processes. Thus, one again
observes a set of peaks in the emission spectrum as shown in Fig. 6.21.
Figure 6.21 Absorption and Emission Spectra With the Latter Red Shifted
There are two differences between the lines that occur in emission and in absorption.
First, the emission lines are shifted to the red (i.e., to lower energy or longer wavelength)
because they occur at transition energies connecting the lowest vibrational level of the
upper electronic state to various levels of the lower state. In contrast, the absorption lines
connect the lowest vibrational level of the ground state to various levels of the upper
state. These relationships are shown in Figure 6.22.
Figure 6.22 Absorption to High States on the Upper Surface, Relaxation, and Emission
From Lower States of the Upper Surface
The second difference relates to the spacings among the vibrational lines. In emission,
these spacings reflect the energy spacings between vibrational levels of the ground state,
whereas in absorption they reflect spacings between vibrational levels of the upper state.
The above examples illustrate how vibrationally resolved visible-UV absorption
and emission spectra can be used to gain valuable information about
a. the vibrational energy level spacings of the upper and ground electronic states (these
spacings, in turn, reflect the strengths of the bonds existing in these states),
b. the change in geometry accompanying the ground-to-excited state electronic
transition as reflected in the breadth of the Franck-Condon profiles (these changes
also tell us about the bonding changes that occur as the electronic transition occurs).
So, again we see how visible-UV spectroscopy can be used to learn about the electronic
structure of molecules in various electronic states.
3. Time Correlation Function Expressions for Transition Rates
The above so-called "golden rule" expression for the rates of photon-induced
transitions is written in terms of the initial and final electronic/vibrational/rotational
states of the molecule. There are situations in which these states simply cannot be
reliably known. For example, the higher vibrational states of a large polyatomic molecule
or the states of a molecule that strongly interacts with surrounding solvent molecules are
such cases. In such circumstances, it is possible to recast the golden rule formula into a
form that is more amenable to introducing specific physical models that lead to additional
insights.
Specifically, by using so-called equilibrium averaged time correlation functions, it
is possible to obtain rate expressions appropriate to a large number of molecules that exist
in a distribution of initial states (e.g., for molecules that occupy many possible rotational
and perhaps several vibrational levels at room temperature). As we will soon see, taking
this route to expressing spectroscopic transition rates also allows us to avoid having to
know each vibrational-rotational wave function of the two electronic states involved; this
is especially useful for large molecules or molecules in condensed media where such
knowledge is likely not available.
To begin re-expressing the spectroscopic transition rates, the expression obtained
earlier
Ri,f = (2π/h²) g(ωf,i) | E0 · <Ψf | μ | Ψi> |² ,
appropriate to transitions between a particular initial state Ψi and a specific final state Ψf,
is rewritten as
Ri,f = (2π/h²) ∫ g(ω) | E0 · <Ψf | μ | Ψi> |² δ(ωf,i - ω) dω .
Here, the δ(ωf,i - ω) function is used to specifically enforce the "resonance condition"
which states that the photons' frequency ω must be resonant with the transition frequency
ωf,i. The following integral identity can be used to replace the δ-function:
δ(ωf,i - ω) = (1/2π) ∫-∞∞ exp[i(ωf,i - ω)t] dt
by a form that is more amenable to further development. Then, the state-to-state rate of
transition becomes:
Ri,f = (1/h²) ∫ g(ω) | E0 · <Ψf | μ | Ψi> |² ∫-∞∞ exp[i(ωf,i - ω)t] dt dω .
If this expression is then multiplied by the equilibrium probability ρi that the
molecule is found in the state Ψi and summed over all such initial states and summed
over all final states Ψf that can be reached from Ψi with photons of energy hω, the
equilibrium averaged rate of photon absorption by the molecular sample is obtained:
Req.ave. = (1/h²) Σf,i ρi ∫ g(ω) | E0 · <Ψf | μ | Ψi> |² ∫-∞∞ exp[i(ωf,i - ω)t] dt dω .
This expression is appropriate for an ensemble of molecules that can be in various initial
states Ψi with probabilities ρi. The corresponding result for transitions that originate in a
particular state (Ψi) but end up in any of the "allowed" (by energy and selection rules)
final states reads:
Ri = (1/h²) Σf ρi ∫ g(ω) | E0 · <Ψf | μ | Ψi> |² ∫-∞∞ exp[i(ωf,i - ω)t] dt dω .
As we discuss in Chapter 7, for an ensemble in which the number of molecules, the
temperature, and the system volume are specified, ρi takes the form:
ρi = gi exp(-Ei0/kT)/Q
where Q is the partition function of the molecules and gi is the degeneracy of the state Ψi
whose energy is Ei0. If you are unfamiliar with partition functions and do not want to
simply "trust me" in the analysis of time correlation functions that we are about to
undertake, I suggest you interrupt your study of Chapter 6 and read up through Section
I.C of Chapter 7 at this time.
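As a concrete feel for such ρi factors, the following Python sketch computes rigid-rotor level populations gJ exp(-EJ/kT)/Q for a diatomic; the B value is roughly that of CO, and the numbers are assumed example inputs:

```python
import math

def rotational_populations(B_cm, T, Jmax=40):
    """rho_J = (2J+1) exp(-B J(J+1)/kT) / Q for a rigid rotor, with B in cm^-1
    and kT expressed in the same units (k/hc ~ 0.6950 cm^-1 per kelvin)."""
    kT = 0.6950 * T
    boltz = [(2*J + 1) * math.exp(-B_cm * J * (J + 1) / kT) for J in range(Jmax)]
    Q = sum(boltz)                       # the rotational partition function
    return [b / Q for b in boltz]

# A CO-like rotor (B ~ 1.93 cm^-1) at 300 K: because of the (2J+1) degeneracy,
# the population peaks at J ~ sqrt(kT/2B) - 1/2, not at J = 0
pops = rotational_populations(B_cm=1.93, T=300.0)
print(max(range(40), key=lambda J: pops[J]))   # 7
```

The point to notice is the competition between the degeneracy gi and the Boltzmann factor; it is this distribution over initial states that the equilibrium average in Req.ave. performs.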
In the above expression for Req.ave., a double sum occurs. Writing out the
elements that appear in this sum in detail, one finds:
Σi,f ρi E0 · <Ψi | μ | Ψf> E0 · <Ψf | μ | Ψi> exp(iωf,it).
In situations in which one is interested in developing an expression for the intensity
arising from transitions to all allowed final states, the sum over these final states can be
carried out explicitly by first writing
<Ψf | μ | Ψi> exp(iωf,it) = <Ψf | exp(iHt/h) μ exp(-iHt/h) | Ψi>
and then using the fact that the set of states {Ψk} is complete and hence obeys
Σk |Ψk><Ψk| = 1.
The result of using these identities as well as the Heisenberg definition of the time-
dependence of the dipole operator
μ(t) = exp(iHt/h) μ exp(-iHt/h),
is:
Σi ρi <Ψi | E0 · μ E0 · μ(t) | Ψi> .
In this form, one says that the time dependence has been reduced to that of an equilibrium
averaged (i.e., as reflected in the Σi ρi <Ψi | | Ψi> expression) time correlation function
involving the component of the dipole operator along the external electric field at t = 0,
(E0 · μ) and this component at a different time t, (E0 · μ(t)).
If ωf,i is positive (i.e., in the photon absorption case), the above expression will
yield a non-zero contribution when multiplied by exp(-iωt) and integrated over positive
ω-values. If ωf,i is negative (as for stimulated photon emission), this expression will
contribute, when multiplied by exp(-iωt), for negative ω-values. In the latter situation, ρi
is the equilibrium probability of finding the molecule in the (excited) state from which
emission will occur; this probability can be related to that of the lower state ρf by
ρexcited = ρlower exp[-(E0excited - E0lower)/kT]
= ρlower exp[-hω/kT].
The absorption and emission cases can be combined into a single expression for
the net rate of photon absorption by recognizing that the latter process leads to photon
production, and thus must be entered with a negative sign. The resultant expression for
the net rate of decrease of photons is:
Req.ave.net = (1/h²) Σi ρi ∫∫ g(ω) <Ψi | (E0 · μ) E0 · μ(t) | Ψi> (1 - exp(-hω/kT)) exp(-iωt) dω dt.
It is convention to introduce the so-called "line shape" function I(ω):
I(ω) = Σi ρi ∫ <Ψi | (E0 · μ) E0 · μ(t) | Ψi> exp(-iωt) dt
in terms of which the net photon absorption rate is
Req.ave.net = (1/h²) (1 - exp(-hω/kT)) ∫ g(ω) I(ω) dω .
The function
C(t) = Σi ρi <Ψi | (E0 · μ) E0 · μ(t) | Ψi>
is called the equilibrium averaged time correlation function of the component of the
electric dipole operator along the direction of the external electric field E0. Its Fourier
transform is I(ω), the spectral line shape function. The convolution of I(ω) with the light
source's g(ω) function, multiplied by (1 - exp(-hω/kT)), the correction for stimulated
photon emission, gives the net rate of photon absorption.
Although the correlation function expression for the photon absorption rate is
equivalent to the state-to-state expression from which it was derived, we notice that
a. C(t) does not contain explicit reference to the final-state wave functions Ψf; instead,
b. C(t) requires us to describe how the dipole operator changes with time.
That is, in the time correlation framework, one is allowed to use models of the time
evolution of the system to describe the spectra. This is especially appealing for large
complex molecules and molecules in condensed media because, for such systems, it
would be hopeless to attempt to find the final-state wave functions, but it is reasonable
(albeit challenging) to model the system's time evolution. It turns out that a very wide
variety of spectroscopic and thermodynamic properties (e.g., light scattering intensities,
diffusion coefficients, and thermal conductivity) can be expressed in terms of molecular
time correlation functions. The Statistical Mechanics text by McQuarrie has a good
treatment of many of these cases. Let's now examine how such time evolution issues are
used within the correlation function framework for the specific photon absorption case.
4. Line Broadening Mechanisms
If the rotational motion of the system's molecules is assumed to be entirely
unhindered (e.g., by any environment or by collisions with other molecules), it is
appropriate to express the time dependence of each of the dipole time correlation
functions listed above in terms of a "free rotation" model. For example, when dealing
with diatomic molecules, the electronic-vibrational-rotational C(t) appropriate to a
specific electronic-vibrational transition becomes:
C(t) = (qr qv qe qt)⁻¹ ΣJ (2J+1) exp(-h²J(J+1)/(8π²IkT)) exp(-hνvib vi/kT)
gie <φJ | E0 · μi,f(Re) E0 · μi,f(Re,t) | φJ> |<χiv | χfv>|²
exp(i[hνvib]t + iΔEi,f t/h).
Here,
qr = (8π²IkT/h²)
is the rotational partition function (I being the molecule's moment of inertia
I = μRe², and h²J(J+1)/(8π²I) the molecule's rotational energy for the state with quantum
number J and degeneracy 2J+1),
qv = exp(-hνvib/2kT) (1 - exp(-hνvib/kT))⁻¹
is the vibrational partition function (νvib being the vibrational frequency), gie is the
degeneracy of the initial electronic state,
qt = (2πmkT/h²)3/2 V
is the translational partition function for the molecules of mass m moving in volume V,
and ΔEi,f is the adiabatic electronic energy spacing. The origins of such partition
functions are treated in Chapter 7.
The functions <φJ | E0 · μi,f(Re) E0 · μi,f(Re,t) | φJ> describe the time evolution of
the electronic transition dipole vector for the rotational state J. In a "free-rotation" model,
this function is taken to be of the form:
<φJ | E0 · μi,f(Re) E0 · μi,f(Re,t) | φJ>
= <φJ | E0 · μi,f(Re) E0 · μi,f(Re) | φJ> cos(ωJt),
where ωJ is the rotational frequency (in cycles per second) for rotation of the molecule in
the state labeled by J. This oscillatory time dependence, combined with the exp(i[hνvib]t
+ iΔEi,f t/h) time dependence arising from the electronic and vibrational factors, produces,
when this C(t) function is Fourier transformed to generate I(ω), a series of δ-function
"peaks". The intensities of these peaks are governed by the
(qr qv qe qt)⁻¹ ΣJ (2J+1) exp(-h²J(J+1)/(8π²IkT)) exp(-hνvib vi/kT) gie
Boltzmann population factors as well as by the |<χiv | χfv>|² Franck-Condon factors and
the <φJ | E0 · μi,f(Re) E0 · μi,f(Re,0) | φJ> terms.
This same analysis can be applied to the pure rotation and vibration-rotation C(t)
time dependences with analogous results. In the former, δ-function peaks are predicted to
occur at
ω = ± ωJ
and in the latter at
ω = ωfv,iv ± ωJ,
with the intensities governed by the time independent factors in the corresponding
expressions for C(t).
In experimental measurements, such sharp δ-function peaks are, of course, not
observed. Even when very narrow bandwidth laser light sources are used (i.e., for which
g(ω) is an extremely narrowly peaked function), spectral lines are found to possess finite
widths. Let us now discuss several sources of line broadening, some of which will relate
to deviations from the "unhindered" rotational motion model introduced above.
a. Doppler Broadening
In the above expressions for C(t), the averaging over initial rotational, vibrational,
and electronic states is explicitly shown. There is also an average over the translational
motion implicit in all of these expressions. Its role has not (yet) been emphasized because
the molecular energy levels, whose spacings yield the characteristic frequencies at which
light can be absorbed or emitted, do not depend on translational motion. However, the
frequency of the electromagnetic field experienced by moving molecules does depend on
the velocities of the molecules, so this issue must now be addressed.
Elementary physics classes express the so-called Doppler shift of a wave's
frequency induced by relative movement of the light source and the molecule as follows:
ωobserved = ωnominal (1 + vz/c)⁻¹ ≈ ωnominal (1 - vz/c + ...).
Here, ωnominal is the frequency of the unmoving light source seen by unmoving
molecules, vz is the velocity of relative motion of the light source and molecules, c is the
speed of light, and ωobserved is the Doppler shifted frequency (i.e., the frequency seen by
the molecules). The second identity is obtained by expanding, in a power series, the (1 +
vz/c)⁻¹ factor, and is valid in truncated form when the molecules are moving with speeds
significantly below the speed of light.
For all of the cases considered earlier, a C(t) function is subjected to Fourier
transformation to obtain a spectral lineshape function I(ω), which then provides the
essential ingredient for computing the net rate of photon absorption. In this Fourier
transform process, the variable ω is assumed to be the frequency of the electromagnetic
field experienced by the molecules. The above considerations of Doppler shifting then
lead one to realize that the correct functional form to use in converting C(t) to I(ω) is:
I(ω) = ∫ C(t) exp(-itω(1 - vz/c)) dt ,
where ω is the nominal frequency of the light source.
As stated earlier, within C(t) there is also an equilibrium average over
translational motion of the molecules. For a gas-phase sample undergoing random
collisions and at thermal equilibrium, this average is characterized by the well-known
Maxwell-Boltzmann velocity distribution:
(m/2πkT)3/2 exp(-m(vx² + vy² + vz²)/2kT) dvx dvy dvz.
Here m is the mass of the molecules and vx, vy, and vz label the velocities along the lab-
fixed Cartesian coordinates.
Defining the z-axis as the direction of propagation of the light's photons and
carrying out the averaging of the Doppler factor over such a velocity distribution, one
obtains:
∫-∞∞ exp(-itω(1 - vz/c)) (m/2πkT)3/2 exp(-m(vx² + vy² + vz²)/2kT) dvx dvy dvz
= exp(-iωt) ∫-∞∞ (m/2πkT)1/2 exp(iωtvz/c) exp(-mvz²/2kT) dvz
= exp(-iωt) exp(-ω²t²kT/(2mc²)).
This result, when substituted into the expressions for C(t), yields expressions identical to
those given for the three cases treated above but with one modification. The translational
motion average need no longer be considered in each C(t); instead, the earlier expressions
for C(t) must each be multiplied by a factor exp(-ω²t²kT/(2mc²)) that embodies the
translationally averaged Doppler shift. The spectral line shape function I(ω) can then be
obtained for each C(t) by simply Fourier transforming:
I(ω) = ∫-∞∞ exp(-iωt) C(t) dt .
When applied to the rotation, vibration-rotation, or electronic-vibration-rotation
cases within the "unhindered" rotation model treated earlier, the Fourier transform
involves integrals of the form:
∫-∞∞ exp(-iωt) exp(-ω²t²kT/(2mc²)) exp(i(ωfv,iv + ΔEi,f/h ± ωJ)t) dt .
This integral would arise in the electronic-vibration-rotation case; the other two cases
would involve integrals of the same form but with the ΔEi,f/h absent in the vibration-
rotation situation and with ωfv,iv + ΔEi,f/h missing for pure rotation transitions. All such
integrals can be carried out analytically and yield:
√(2πmc²/(ω²kT)) exp[-(ω - ωfv,iv - ΔEi,f/h ± ωJ)² mc²/(2ω²kT)].
The result is a series of Gaussian "peaks" in ω-space, centered at:
ω = ωfv,iv + ΔEi,f/h ± ωJ
with widths (σ) determined by
σ² = ω²kT/(mc²),
given the temperature T and the mass of the molecules m. The hotter the sample, the
faster the molecules are moving on average, and the broader is the distribution of Doppler
shifted frequencies experienced by these molecules. The net result then of the Doppler
effect is to produce a line shape function that is similar to the "unhindered" rotation
model's series of δ-functions but with each δ-function peak broadened into a Gaussian
shape.
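To get a feel for magnitudes, the width formula σ² = ω²kT/(mc²) implies a fractional width σ/ω = √(kT/mc²). A quick Python estimate (the 28 amu mass and 300 K are assumed example values):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
c   = 2.99792458e8      # speed of light, m/s
amu = 1.66053907e-27    # atomic mass unit, kg

def doppler_sigma_over_omega(mass_amu, T):
    """Relative Gaussian width sigma/omega from sigma^2 = omega^2 kT/(mc^2)."""
    return math.sqrt(k_B * T / (mass_amu * amu * c**2))

# A CO-like molecule (28 amu) at 300 K: fractional width of order 1e-6
rel = doppler_sigma_over_omega(28.0, 300.0)
print(f"{rel:.2e}")                                   # roughly 1e-6
print(f"{2000 * rel:.4f} cm-1 on a 2000 cm-1 line")   # ~0.002 cm-1
```

Note the √T scaling: quadrupling the temperature only doubles the Doppler width, which is why such widths serve as a workable but not highly sensitive thermometer.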
If spectra can be obtained to accuracy sufficient to determine the Doppler width
of the spectral lines, such knowledge can be used to estimate the temperature of the
system. This can be useful when dealing with systems that cannot be subjected to
alternative temperature measurements. For example, the temperatures of stars can be
estimated (if their velocity relative to the earth is known) by determining the Doppler
shifts of emission lines from them. Alternatively, the relative speed of a star from the
earth may be determined if its temperature is known. As another example, the
temperature of hot gases produced in an explosion can be probed by measuring Doppler
widths of absorption or emission lines arising from molecules in these gases.
b. Pressure Broadening
To include the effects of collisions on the rotational motion part of any of the
above C(t) functions, one must introduce a model for how such collisions change the
dipole-related vectors that enter into C(t). The most elementary model used to address
collisions applies to gaseous samples which are assumed to undergo unhindered
rotational motion until struck by another molecule at which time a "kick" is applied to the
dipole vector and after which the molecule returns to its unhindered rotational movement.
The effects of such infrequent collision-induced kicks are treated within the so-
called pressure broadening (sometimes called collisional broadening) model by
modifying the free-rotation correlation function through the introduction of an
exponential damping factor exp(-|t|/τ):
<φJ | E0 · μi,f(Re) E0 · μi,f(Re,0) | φJ> cos(hJ(J+1)t/(4πI))
→ <φJ | E0 · μi,f(Re) E0 · μi,f(Re,0) | φJ> cos(hJ(J+1)t/(4πI)) exp(-|t|/τ).
This damping function's time scale parameter τ is assumed to characterize the average
time between collisions and thus should be inversely proportional to the collision
frequency. Its magnitude is also related to the effectiveness with which collisions cause
the dipole function to deviate from its unhindered rotational motion (i.e., related to the
collision strength). In effect, the exponential damping causes the time correlation
function <φJ | E0 · μi,f(Re) E0 · μi,f(Re,t) | φJ> to "lose its memory" and to decay to zero.
This "memory" point of view is based on viewing <φJ | E0 · μi,f(Re) E0 · μi,f(Re,t) | φJ>
as the projection of E0 · μi,f(Re,t) along its t = 0 value E0 · μi,f(Re,0) as a function of
time t.
Introducing this additional exp(-|t|/τ) time dependence into C(t) produces, when
C(t) is Fourier transformed to generate I(ω), integrals of the form
∫-∞∞ exp(-iωt) exp(-|t|/τ) exp(-ω²t²kT/(2mc²)) exp(i(ωfv,iv + ΔEi,f/h ± ωJ)t) dt .
In the limit of very small Doppler broadening, the (ω²t²kT/(2mc²)) factor can be ignored
(i.e., exp(-ω²t²kT/(2mc²)) set equal to unity), and
∫-∞∞ exp(-iωt) exp(-|t|/τ) exp(i(ωfv,iv + ΔEi,f/h ± ωJ)t) dt
results. This integral can be performed analytically and generates:
(1/4π) { (1/τ)/[(1/τ)² + (ω - ωfv,iv - ΔEi,f/h ± ωJ)²]
+ (1/τ)/[(1/τ)² + (ω + ωfv,iv + ΔEi,f/h ± ωJ)²] },
a pair of Lorentzian peaks in ω-space centered again at
ω = ± [ωfv,iv + ΔEi,f/h ± ωJ].
The full width at half height of these Lorentzian peaks is 2/τ. One says that the individual
peaks have been pressure or collisionally broadened.
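Extracting τ from a measured width is then a one-line matter. A small Python sketch (all numbers are made-up examples) illustrating the FWHM = 2/τ relation and the half-height property of the Lorentzian:

```python
def tau_from_fwhm(fwhm):
    """Collision time tau from a Lorentzian full width at half height of 2/tau
    (angular-frequency units assumed)."""
    return 2.0 / fwhm

def lorentzian(omega, omega0, tau):
    """Unnormalized Lorentzian shape (1/tau)/((1/tau)^2 + (omega - omega0)^2)."""
    return (1.0 / tau) / ((1.0 / tau) ** 2 + (omega - omega0) ** 2)

tau = tau_from_fwhm(0.5)      # a line 0.5 (rad/s) wide -> tau = 4 s between kicks
ratio = lorentzian(10.25, 10.0, tau) / lorentzian(10.0, 10.0, tau)
print(tau)                    # 4.0
print(ratio)                  # 0.5: half height one half-width (1/tau) off center
```

Repeating the extraction at several gas densities separates the density-proportional collision frequency from the per-collision strength, as described below.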
When the Doppler broadening cannot be neglected relative to the collisional
broadening, the above integral
∫-∞∞ exp(-iωt) exp(-|t|/τ) exp(-ω²t²kT/(2mc²)) exp(i(ωfv,iv + ΔEi,f/h ± ωJ)t) dt
is more difficult to perform. Nevertheless, it can be carried out and again produces a
series of peaks centered at
ω = ωfv,iv + ΔEi,f/h ± ωJ
but whose widths are determined both by Doppler and pressure broadening effects. The
resultant line shapes are thus no longer purely Lorentzian nor Gaussian (which are
compared in Fig. 6.23 for both functions having the same full width at half height and the
same integrated area), but have a shape that is called a Voigt shape.
Figure 6.23 Typical Forms of Gaussian (Doppler) and Lorentzian Peaks (intensity
plotted against ω)
Experimental measurements of line widths that allow one to extract widths
originating from collisional broadening provide information (through τ) on the frequency
of collisions and the "strength" of these collisions. By determining τ at a series of gas
densities, one can separate the collision-frequency dependence and determine the strength
of the individual collisions (meaning how effective each collision is in reorienting the
molecule's dipole vector).
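The Voigt shape can be generated numerically as the convolution of the Doppler Gaussian with the pressure-broadening Lorentzian. A Python sketch (the unit widths are arbitrary choices):

```python
import numpy as np

omega = np.linspace(-20.0, 20.0, 4001)
d = omega[1] - omega[0]

sigma, gamma = 1.0, 1.0     # Gaussian sigma (Doppler), Lorentzian HWHM (pressure)
G = np.exp(-omega**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
L = (gamma / np.pi) / (gamma**2 + omega**2)

# Voigt profile = convolution of the Gaussian with the Lorentzian on the grid
V = np.convolve(G, L, mode="same") * d

print(round(np.trapz(V, omega), 1))   # 1.0: area is (approximately) preserved
print(V.max() < G.max())              # True: the Voigt peak is lower, its wings fatter
```

Near line center the Gaussian dominates the Voigt shape, while far out in the wings the slowly decaying Lorentzian tails win; this crossover is what line-shape fits exploit to separate the two contributions.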
c. Rotational Diffusion Broadening
Molecules in liquids and very dense gases undergo such frequent collisions with
the other molecules that the mean time between collisions is short compared to the
rotational period for their unhindered rotation. As a result, the time dependence of the
dipole-related correlation functions can no longer be modeled in terms of free rotation
that is interrupted by (infrequent) collisions and Doppler shifted. Instead, a model that
describes the incessant buffeting of the molecule's dipole by surrounding molecules
becomes appropriate. For liquid samples in which these frequent collisions cause the
dipole to undergo angular motions that cover all angles (i.e., in contrast to a frozen glass
or solid in which the molecule's dipole would undergo strongly perturbed pendular
motion about some favored orientation), the so-called rotational diffusion model is often
used.
In this picture, the rotation-dependent part of C(t) is expressed as:
<φJ | E0 · μi,f(Re) E0 · μi,f(Re,t) | φJ>
= <φJ | E0 · μi,f(Re) E0 · μi,f(Re,0) | φJ> exp(-2Drot|t|),
where Drot is the rotational diffusion constant whose magnitude details the time
decay in the averaged value of E0 · μi,f(Re,t) at time t with respect to its value at time t =
0; the larger Drot, the faster is this decay.
As with pressure broadening, this exponential time dependence, when subjected
to Fourier transformation, yields:
∫-∞∞ exp(-iωt) exp(-2Drot|t|) exp(-ω²t²kT/(2mc²)) exp(i(ωfv,iv + ΔEi,f/h ± ωJ)t) dt .
Again, in the limit of very small Doppler broadening, the (ω²t²kT/(2mc²)) factor can be
ignored (i.e., exp(-ω²t²kT/(2mc²)) set equal to unity), and
∫-∞∞ exp(-iωt) exp(-2Drot|t|) exp(i(ωfv,iv + ΔEi,f/h ± ωJ)t) dt
results. This integral can be evaluated analytically and generates:
1/(4pi) { 2Drot/[(2Drot)² + (ω - ωfv,iv - ΔEi,f/h ± ωJ)²]
+ 2Drot/[(2Drot)² + (ω + ωfv,iv + ΔEi,f/h ± ωJ)²] },
a pair of Lorentzian peaks in ω-space centered again at
ω = ±[ωfv,iv + ΔEi,f/h ± ωJ].
The full width at half height of these Lorentzian peaks is 4Drot. In this case, one says that
the individual peaks have been broadened via rotational diffusion. In such cases,
experimental measurements of line widths yield valuable information about how fast the
molecule is rotationally diffusing in its condensed environment.
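This Fourier-transform relationship is easy to check numerically. The sketch below (parameter values are arbitrary, not from the text) transforms the exp(-2Drot|t|) decay on a discrete grid and confirms that the resulting Lorentzian has a full width at half maximum of 4Drot:

```python
import numpy as np

# Minimal numerical check (arbitrary units): Fourier-transforming the
# rotational-diffusion decay exp(-2*Drot*|t|) should give a Lorentzian
# whose full width at half maximum is 4*Drot.
Drot = 0.5                                   # rotational diffusion constant
t = np.linspace(-50.0, 50.0, 100001)         # time grid spanning the full decay
dt = t[1] - t[0]
C = np.exp(-2.0 * Drot * np.abs(t))          # decay factor appended to C(t)

omega = np.linspace(-10.0, 10.0, 400)        # frequency grid (line center at 0)
# I(w) = Re ∫ exp(-i*w*t) C(t) dt, evaluated as a discrete Riemann sum
I = np.array([np.sum(np.cos(w * t) * C) * dt for w in omega])

analytic = 4.0 * Drot / ((2.0 * Drot) ** 2 + omega ** 2)  # exact transform
assert np.allclose(I, analytic, atol=1e-4)

# locate the full width at half maximum; it should be close to 4*Drot = 2.0
half = I.max() / 2.0
above = omega[I >= half]
fwhm = above[-1] - above[0]
assert abs(fwhm - 4.0 * Drot) < 0.1
```

The cosine form of the transform suffices here because C(t) is even in t, so the imaginary (sine) part integrates to zero.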
d. Lifetime or Heisenberg Homogeneous Broadening
Whenever the absorbing species undergoes one or more processes that deplete its
numbers, we say that it has a finite lifetime. For example, a species that undergoes
unimolecular dissociation has a finite lifetime, as does an excited state of a molecule that
decays by spontaneous emission of a photon. Any process that depletes the absorbing
species contributes another source of time dependence for the dipole time correlation
functions C(t) discussed above. This time dependence is usually modeled by appending,
in a multiplicative manner, a factor exp(-|t|/τ). This, in turn, modifies the line shape
function I(ω) in a manner much like that discussed when treating the rotational diffusion
case:
∫-∞∞ exp(-iωt) exp(-|t|/τ) exp(-ω²t²kT/(2mc²)) exp(i(ωfv,iv + ΔEi,f/h ± ωJ)t) dt .
Not surprisingly, when the Doppler contribution is small, one obtains:
1/(4pi) { (1/τ)/[(1/τ)² + (ω - ωfv,iv - ΔEi,f/h ± ωJ)²]
+ (1/τ)/[(1/τ)² + (ω + ωfv,iv + ΔEi,f/h ± ωJ)²] }.
In these Lorentzian lines, the parameter τ describes the kinetic decay lifetime of the
molecule. One says that the spectral lines have been lifetime or Heisenberg broadened by
an amount proportional to 1/τ. The latter terminology arises because the finite lifetime of
the molecular states can be viewed as producing, via the Heisenberg uncertainty relation
ΔEΔt > h, states whose energy is "uncertain" to within an amount ΔE.
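To get a feel for the magnitudes involved, the following sketch (the 1-ps lifetime is an invented illustration, not a value from the text) converts a lifetime τ into the Lorentzian width it produces. From the line shape above, the full width at half maximum is 2/τ in angular frequency, i.e. 1/(pi τ) in ordinary frequency:

```python
import math

# Hedged numerical illustration (values invented): the Lorentzian lifetime
# line shape has FWHM = 2/tau in angular frequency, which is 1/(pi*tau) in
# ordinary frequency (Hz); dividing by c in cm/s converts to wavenumbers.
c_cm = 2.99792458e10                      # speed of light in cm/s

def lifetime_width_cm(tau_s):
    """FWHM in cm^-1 for a state whose lifetime is tau_s seconds."""
    dnu_hz = 1.0 / (math.pi * tau_s)      # FWHM in Hz
    return dnu_hz / c_cm                  # Hz -> cm^-1

# a 1-ps lifetime broadens a line by about 10.6 cm^-1
print(round(lifetime_width_cm(1.0e-12), 1))   # → 10.6
```

This makes the Heisenberg argument quantitative: picosecond-scale lifetimes produce widths of several wavenumbers, while nanosecond lifetimes give widths a thousand times narrower.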
e. Site Inhomogeneous Broadening
Among the above line broadening mechanisms, the pressure, rotational diffusion,
and lifetime broadenings are all of the homogeneous variety. This means that each and
every molecule in the sample is affected in exactly the same manner by the broadening
process. For example, one does not find some molecules with short lifetimes and others
with long lifetimes in the Heisenberg case; the entire ensemble of molecules is
characterized by a single lifetime.
In contrast, Doppler broadening is inhomogeneous in nature because each
molecule experiences a broadening that is characteristic of its particular velocity vz. That
is, the fast molecules have their lines broadened more than do the slower molecules.
Another important example of inhomogeneous broadening is provided by so-called site
broadening. Molecules embedded in a liquid, solid, or glass do not, at the instant of their
photon absorption, all experience exactly the same interactions with their surroundings.
The distribution of instantaneous "solvation" environments may be rather "narrow" (e.g.,
in a highly ordered solid matrix) or quite "broad" (e.g., in a liquid at high temperature or
in a supercritical fluid). Different environments produce different energy level
splittings ω = ωfv,iv + ΔEi,f/h ± ωJ (because the initial and final states are "solvated"
differently by the surroundings) and thus different frequencies at which photon
absorption can occur. The distribution of energy level splittings causes the sample to
absorb at a range of frequencies as illustrated in Fig. 6.24 where homogeneous and
inhomogeneous line shapes are compared.
Homogeneous (a) and inhomogeneous (b) band shapes having
inhomogeneous width ΔνINH and homogeneous width ΔνH.
Figure 6.24 Illustration of Homogeneous Band Showing Absorption at Several
Concentrations and of Inhomogeneous Band Showing Absorption at One Concentration
by Numerous Sub-populations
The spectral line shape function I(ω) is therefore further broadened when site
inhomogeneity is present and significant. These effects can be modeled by convolving
the kind of I(ω) function that results from Doppler, lifetime, rotational diffusion, and
pressure broadening with a Gaussian distribution P(ΔE) that describes the
inhomogeneous distribution of energy level splittings:
I(ω) = ∫ I0(ω;ΔE) P(ΔE) dΔE .
Here I0(ω;ΔE) is a line shape function such as those described earlier, each of which
contains a set of frequencies (e.g., ω = ωfv,iv + ΔEi,f/h ± ωJ + ΔE/h) at which
absorption or emission occurs, and P(ΔE) is a Gaussian probability function describing the
inhomogeneous broadening of the energy splitting ΔE.
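This convolution can be sketched numerically. In the illustration below (all widths are invented, not taken from any experiment in the text), a homogeneous Lorentzian is convolved with a Gaussian distribution of site-dependent level shifts, producing the broader band described above:

```python
import numpy as np

# Hedged sketch (all widths invented): broaden a homogeneous Lorentzian line
# by a Gaussian distribution P(dE) of site-dependent level shifts, following
# I(w) = ∫ I0(w; dE) P(dE) d(dE).
gamma = 0.5                # homogeneous (Lorentzian) half-width
sigma = 2.0                # inhomogeneous (Gaussian) standard deviation
w0 = 0.0                   # unshifted line center

w = np.linspace(-15.0, 15.0, 3001)        # frequency grid
dE = np.linspace(-12.0, 12.0, 2401)       # site-shift grid (6 sigma each way)
ddE = dE[1] - dE[0]

P = np.exp(-dE ** 2 / (2.0 * sigma ** 2))
P /= P.sum() * ddE                        # normalize so ∫ P d(dE) = 1

def lorentzian(w, center, gamma):
    return (gamma / np.pi) / ((w - center) ** 2 + gamma ** 2)

# each site shift dE displaces the homogeneous line center to w0 + dE
I = sum(lorentzian(w, w0 + e, gamma) * p * ddE for e, p in zip(dE, P))

def fwhm(x, y):
    above = x[y >= y.max() / 2.0]
    return above[-1] - above[0]

# the convolved band is broader and flatter than the bare homogeneous line
assert fwhm(w, I) > fwhm(w, lorentzian(w, w0, gamma))
assert I.max() < lorentzian(w, w0, gamma).max()
```

When the homogeneous line is Lorentzian and P is Gaussian, this convolution is exactly the Voigt shape mentioned at the start of this discussion.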
A common experimental test to determine whether inhomogeneous broadening is
significant involves hole burning. In such experiments, an intense light source (often a
laser) is tuned to a frequency ωburn that lies within the spectral line being probed for
inhomogeneous broadening. Then, with the intense light source constantly turned on, a
second tunable light source is used to scan through the profile of the spectral line, and an
absorption spectrum is recorded. Given an absorption profile as shown in Fig. 6.25 in the
absence of the intense burning light source:
Figure 6.25 Absorption Profile in the Absence of Hole Burning
one expects to see a profile such as that shown in Fig. 6.26 if inhomogeneous broadening
is operative.
Figure 6.26 Absorption Profile With Laser Turned On to Burn a Hole
The interpretation of the change in the absorption profile caused by the bright
light source proceeds as follows:
(i) In the ensemble of molecules contained in the sample, some molecules will absorb at
or near the frequency of the bright light source ωburn; other molecules (those whose
environments do not produce energy level splittings that match ωburn) will not absorb at
this frequency.
(ii) Those molecules that do absorb at ωburn will have their transition saturated by the
intense light source, thereby rendering this frequency region of the line profile
transparent to further absorption.
(iii) When the "probe" light source is scanned over the line profile, it will induce
absorptions for those molecules whose local environments did not allow them to be
saturated by the ωburn light. The absorption profile recorded by this probe light source's
detector thus will match that of the original line profile, until
(iv) the probe light source's frequency matches ωburn, upon which no absorption of the
probe source's photons will be recorded because molecules that absorb in this frequency
regime have had their transition saturated.
(v) Hence, a "hole" will appear in the absorption spectrum recorded by the probe light
source's detector in the region of ωburn.
Unfortunately, the technique of hole burning does not provide a fully reliable
method for identifying inhomogeneously broadened lines. If a hole is observed in such a
burning experiment, this provides clear evidence of inhomogeneous broadening, but if
one is not seen, the result is not definitive. In the latter case, the transition may not be
strong enough (i.e., may not have a large enough "rate of photon absorption") for the
intense light source to saturate the transition to the extent needed to form a hole.
B. Photoelectron Spectroscopy
Photoelectron spectroscopy (PES) is a special kind of electronic spectroscopy. It
uses visible or UV light to excite a molecule or ion to a final state in which an electron is
ejected. In effect, it induces transitions to final states in which an electron has been
promoted to an unbound or so-called continuum orbital. Most PES experiments are
carried out using a fixed-frequency light source (usually a laser). This source's photons,
when absorbed, eject electrons whose intensity and kinetic energies KE are then
measured. Subtracting the electrons' KE from the photon's energy hν gives the binding
energy BE of the electron:
BE = hν - KE.
If the sample subjected to the PES experiment has molecules in a variety of initial states
(e.g., two electronic states or various vibrational-rotational levels of the ground electronic
state) having various binding energies BEk, one will observe a series of "peaks"
corresponding to electrons ejected with a variety of kinetic energies KEk, as Fig. 6.27
illustrates and as the energy-balance condition requires:
BEk = hν - KEk.
The peak of electrons detected with the highest kinetic energy came from the highest-
lying state of the parent, while those with low kinetic energy came from the lowest-
energy state of the parent.
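This bookkeeping can be made concrete with a short sketch; the photon energy and binding energies below are hypothetical values chosen for illustration, not data from Fig. 6.27:

```python
# Hedged sketch of the PES energy balance BEk = h*nu - KEk; the photon
# energy and binding energies are hypothetical values in eV.
photon_ev = 5.0                        # fixed-frequency source energy h*nu
binding_ev = [1.25, 1.5, 2.0, 2.5]     # assumed state binding energies BEk

kinetic_ev = [photon_ev - be for be in binding_ev]   # KEk = h*nu - BEk
print(kinetic_ev)                      # → [3.75, 3.5, 3.0, 2.5]

# the smallest binding energy (highest-lying parent level) gives the
# fastest photoelectrons, as described above
assert max(kinetic_ev) == photon_ev - min(binding_ev)
```

Reading a measured spectrum simply runs this arithmetic in reverse: each observed KEk is subtracted from the known hν to recover a BEk.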
Figure 6.27 Photoelectron Spectrum Showing Absorption From Two States of the Parent
By examining the spacings between these peaks, one learns about the spacings between
the energy levels of the parent species that has been subjected to electron loss.
Alternatively, if the parent species exists primarily in its lowest state but the
daughter species produced when an electron is removed from the parent has excited
(electronic, vibration-rotation) states, one can observe a different progression of peaks. In
this case, the electrons with highest kinetic energy arise from transitions leading to the
lowest-energy state of the daughter as Fig. 6.28 illustrates. In that figure, the lower
energy surface belongs to the parent and the upper curve to the daughter.
Figure 6.28 Photoelectron Events Showing Detachment From One State of the Parent to
Several States of the Daughter
An example of experimental photodetachment data is provided in Fig. 6.29 showing the
intensity of electrons detected when Cu2- loses an electron vs. the kinetic energy of the
ejected electrons.
Figure 6.29 Photoelectron Spectrum of Cu2-. The Peaks Belong to a Franck-Condon
Vibrational Progression of Neutral Cu2
The peak at a kinetic energy of ca. 1.54 eV, corresponding to a binding energy of 1.0 eV,
arises from Cu2- in v=0 losing an electron to produce Cu2 in v=0. The most intense peak
corresponds to a v=0 to v=4 transition. As in the visible-UV spectroscopy case, Franck-
Condon factors involving the overlap of the Cu2- and Cu2 vibrational wave functions
govern the relative intensities of the PES peaks.
Another example is given in Fig. 6.30 where the photodetachment spectrum of
H2C=C- (the anion of the carbene vinylidene) appears.
Figure 6.30 Photoelectron Spectrum of H2C=C- Showing Detachments to Two Electronic
States of the Neutral
In this spectrum, the peaks having electron binding energies near 0.5 eV correspond to
transitions in which ground-state H2C=C- in v=0 is detached to produce ground-state
(1A1) H2C=C in various v levels. The spacings between this group of peaks relate to the
spacings in vibrational states of this 1A1 electronic state. The series of peaks with binding
energies near 2.5 eV correspond to transitions in which H2C=C- is detached to produce
H2C=C in its 3B2 excited electronic state. The spacings between peaks in this range relate
to spacings in vibrational states of this 3B2 state. The spacing between the peaks near 0.5
eV and those near 2.5 eV relates to the energy difference between the 3B2 and 1A1
electronic states of the neutral H2C=C.
Because PES offers a direct way to measure energy differences between anion and
neutral or neutral and cation state energies, it is a powerful and widely used means of
determining molecular electron affinities (EAs) and ionization potentials (IPs). Because
IPs and EAs relate, via Koopmans' theorem, to orbital energies, PES is thus seen to be a
way to measure orbital energies. Its vibrational envelopes also offer a good way to probe
vibrational energy level spacings, and hence bond strengths.
C. Probing Continuum Orbitals
There is another type of spectroscopy that can be used to directly probe the orbitals
of a molecule that lie in the continuum (i.e., at energies higher than that of the parent
neutral). I ask that you reflect back on our discussion of tunneling and of resonance states
that can occur when an electron experiences both attractive and repulsive potentials. In
such cases, there exists a special energy at which the electron can be trapped by the
attractive potential and must tunnel through the repulsive barrier to eventually escape.
It is these kinds of situations that this spectroscopy probes.
This experiment is called electron-transmission spectroscopy (ETS). In such an
experiment a beam of electrons having a known intensity I0 and narrowly defined range
of kinetic energies E is allowed to pass through a sample (usually gaseous) of thickness
L. The intensity I of electrons observed to pass through the sample and arrive at a
detector lying along the incident beam's direction is monitored, as are the kinetic energies
of these electrons E'. Such an experiment is described in qualitative form in Fig. 6.31.
Figure 6.31 Prototypical Electron Transmission Spectrum Setup
If the molecules in the sample have a resonance orbital whose energy is close to the
kinetic energy E of the colliding electrons, it is possible for an electron from the beam to
be captured into such an orbital and to exist in this orbital for a considerable time. Of
course, in the absence of any collisions or other processes to carry away excess energy,
this anion will re-emit an electron at a later time. Hence, such anions are called
metastable and their electronic states are called resonance states. If the captured electron
remains in this orbital for a length of time comparable to or longer than the time it takes
for the nascent molecular anion to undergo vibrational or rotational motion, various
events can take place before the electron is re-emitted:
i. some bond lengths or angles can change (this will happen if the orbital occupied by
the beam's electron has bonding or antibonding character) so, when the electron is
subsequently emitted, the neutral molecule is left with a change in vibrational
energy;
ii. the molecule may rotate, so when the electron is ejected, it is not emitted in the same
direction as the incident beam.
In the former case, one observes electrons emitted with energies E' that differ from that
of the incident beam by amounts related to the internal vibrational energy levels of the
anion. In the latter, one sees a reduction in the intensity of the beam that is transmitted
directly through the sample and electrons that are scattered away from this direction.
Such an ETS spectrum is shown in Fig. 6.32 for a gaseous sample of CO2 molecules.
In this spectrum, the energy of the transmitted beam's electrons is plotted on the
horizontal axis and the derivative of the intensity of the transmitted beam is plotted on the
vertical axis. It is common to plot such derivatives in ETS-type experiments to allow the
variation of the signal with energy to be more clearly identified.
Figure 6.32 ETS Spectrum (plotted in derivative form) of CO2-
The energy at which the signal passes through zero then represents the energy at which a
"peak" in the spectrum would be observed; that is, the energy of the virtual orbital. In this
ETS spectrum of CO2, the oscillations that appear within the one spectral feature
displayed correspond to stretching and bending vibrational levels of the metastable CO2-
anion. It is the bending vibration that is primarily excited because the beam electron
enters the LUMO of CO2, which is an orbital of the form shown in Fig. 6.33.
Figure 6.33 Antibonding pi* Orbital of CO2 Holding the Excess Electron in CO2-
Occupancy of this antibonding pi* orbital causes both C-O bonds to lengthen and the
O-C-O angle to bend away from 180 degrees. The bending allows the antibonding nature of
this orbital to be reduced.
Other examples of ETS spectra are shown in Fig. 6.34.
Figure 6.34 ETS Spectra of Several Molecules
Here, again a derivative spectrum is shown, and the vertical lines have been added to
show where the derivative passes through zero, which is where the ETS signal would
have a "peak". These maxima correspond to electrons entering various virtual pi* orbitals
of the uracil and DNA base molecules. It is by finding these peaks in the ETS spectrum
that one can determine the energies of such continuum orbitals.
Before closing this section, it is important to describe how one uses theory to
simulate the metastable states that arise in such ETS experiments. Such calculations are
not at all straightforward, and require the introduction of special tools designed to
properly model the resonant continuum orbital.
For metastable anions, it is difficult to approximate the potential experienced by
the excess electron. For example, singly charged anions in which the excess electron
occupies a molecular orbital φ that possesses non-zero angular momentum have effective
potentials as shown in Fig. 6.35, which depend on the angular momentum L value of the
orbital.
129
0.15 0.35 0.55 0.75 0.95 1.15 1.35 1.55 1.75 1.95 2.15 2.35 2.55 2.75 2.95
-30
-25
-20
-15
-10
-5
0
5
10
15
Potential Energy
r in Angstroms
Veff(r) = V(r) + L(L+1)/2mer
2
Large L
Small L
Shape resonance energy
level for the two L values
Figure 6.35 Radial Potentials and Shape Resonance Energy Levels for Two L Values
For example, the pi* orbital of N2- shown in Fig. 6.36 produces two counteracting
contributions to the effective radial potential Veff(r) experienced by an electron occupying
it.
NN
Figure 6.36 Antibonding pi* Orbital of N2- Showing its L = 2 Character
First, the two nitrogen centers exert attractive potentials on the electron in this orbital.
These attractions are strongest when the excess electron is near the nuclei but decay
rapidly at larger distances because the other electrons' Coulomb repulsions screen the
nuclear attractions. Second, because the pi* molecular orbital is composed of atomic
basis functions of ppi, dpi, etc. symmetry, it possesses non-zero angular momentum.
Because the pi* orbital has gerade symmetry, its large-r character is dominated by L = 2
angular momentum. As a result, the excess electron experiences a centrifugal radial potential
L(L+1)/(2me r²) derived largely from its L = 2 character.
The attractive short-range valence potentials V(r) and the centrifugal potential
combine to produce a net effective potential as illustrated in Fig. 6.35. The energy of an
electron experiencing such a potential may or may not lie below the r ¡ú ¡Þ asymptote. If
the attractive potential is sufficiently strong, as it is for O2-1, the electron in the pi* orbital
will be bound and its energy will lie below this asymptote. On the other hand, if the
attractive potential is not as strong, as is the case for the less-electronegative nitrogen
atoms in N2-1, the energy of the pi* orbital can lie above the asymptote. In the latter cases,
we speak of metastable shape-resonance states. They are metastable because their
energies lie above the asymptote so they can decay by tunneling through the centrifugal
barrier. They are called shape-resonances because their metastability arises from the
shape of their repulsive centrifugal barrier.
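The interplay of the two terms can be illustrated with a simple numerical model; the well depth, width, and grid below are invented for illustration and do not represent a fitted N2 or O2 potential:

```python
import numpy as np

# Hedged model (illustrative parameters, atomic units): add the centrifugal
# term L(L+1)/(2*m*r**2) to a short-range attractive well V(r) and locate
# the repulsive barrier through which a shape-resonance electron tunnels.
m = 1.0                                   # electron mass (atomic units)
r = np.linspace(0.5, 12.0, 4000)          # radial grid (bohr)

def v_eff(r, L, depth=5.0, width=1.5):
    v_short = -depth * np.exp(-(r / width) ** 2)   # model valence attraction
    return v_short + L * (L + 1) / (2.0 * m * r ** 2)

def barrier_height(L):
    v = v_eff(r, L)
    i_min = np.argmin(v)                  # bottom of the attractive well
    return v[i_min:].max()                # top of the centrifugal barrier

# larger L raises the centrifugal barrier, as Fig. 6.35 indicates
assert barrier_height(2) > barrier_height(1) > 0.0
```

Because the barrier top lies above the r → ∞ asymptote while the well bottom lies below it, a state trapped in the well at positive energy can only escape by tunneling, which is precisely the shape-resonance situation described above.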
If one had in hand a reasonable approximation to the attractive short-range
potential V(r) and if one knew the L-symmetry of the orbital occupied by the excess
electron, one could form Veff(r) as above. However, to compute the lifetime of the shape
resonance, one has to know the energy E of this state.
The most common and powerful tool for studying such metastable states
theoretically is the stabilization method (SM). This method involves embedding the
system of interest (e.g., the N2-1 anion) within a finite radial ¡°box¡± in order to convert the
continuum of states corresponding, for example, to N2 + e-, into discrete states that can be
handled using more conventional methods. By then varying the size of the box, one can
vary the energies of the discrete states that correspond to N2 + e- (i.e., one varies the
kinetic energy KE of the orbital containing the excess electron). As the box size is varied,
one eventually notices (e.g., by plotting the orbitals) that one of the N2 + e- states
possesses a significant amount of valence (i.e., short-range) character. That is, one such
state has significant amplitude not only at large-r but also in the region of the two
nitrogen centers. It is this state that corresponds to the metastable shape-resonance state,
and it is the energy E where significant valence components develop that provides the
stabilization estimate of the state energy.
Let us continue using N2-1 as an example for how the SM would be employed,
especially how one usually varies the box within which the anion is constrained. One
would use a conventional atomic orbital basis set that would likely include s and p
functions on each N atom, perhaps some polarization d functions and some conventional
diffuse s and p orbitals on each N atom. These basis orbitals serve primarily to describe
the motions of the electrons within the usual valence regions of space.
To this basis, one would append an extra set of diffuse pi-symmetry orbitals.
These orbitals could be ppi (and maybe dpi) functions centered on each nitrogen atom, or
they could be ppi (and maybe dpi) orbitals centered at the midpoint of the N-N bond. One
usually would not add just one such function; rather, several such functions, each with an
orbital exponent αJ that characterizes its radial extent, would be used. Let us assume, for
example, that K such pi functions have been used.
Next, using the conventional atomic orbital basis as well as the K extra pi basis
functions, one carries out a calculation (most often a variational calculation in which one
computes many energy levels) on the N2-1 anion. In this calculation, one tabulates the
energies of many (say M) of the electronic states of N2-1. Of course, because a finite
atomic orbital basis set must be used, one finds a discrete "spectrum" of orbital energies
and thus of electronic state energies. There are occupied orbitals having negative energies
that represent, via Koopmans' theorem, the bound states of the N2-1 anion. There are also
so-called virtual orbitals (i.e., orbitals that are not occupied) whose energies lie above
zero (i.e., that do not describe bound states). The latter orbitals offer a discrete approximation
to the continuum within which the resonance state of interest lies.
One then scales the orbital exponents {αJ} of the K extra pi basis orbitals by a
factor η: αJ → η αJ and repeats the calculation of the energies of the M lowest energies
of N2-1. This scaling causes the extra pi basis orbitals to contract radially (if η > 1) or to
expand radially (if η < 1). It is this basis orbital expansion and contraction that produces
expansion and contraction of the "box" discussed above. That is, one does not employ a
box directly; instead, one varies the radial extent of the most diffuse basis orbitals to
simulate the box variation.
If the conventional orbital basis is adequate, one finds that the extra pi orbitals,
whose exponents are being scaled, do not affect appreciably the energy of the neutral N2
molecule. This can be probed by plotting the N2 energy as a function of the scaling
parameter η; if the energy varies little with η, the conventional basis is adequate.
In contrast to plots of the neutral N2 energy vs. η, plots of the energies of the M
N2-1 states show significant η-dependence as Fig. 6.37 illustrates.
Figure 6.37 Typical Stabilization Plot Showing Several Levels of the Metastable
Anion and their Avoided Crossings
What does such a stabilization plot tell us and what do the various branches of the
plot mean? First, one should notice that each of the plots of the energy of an anion state
(relative to the neutral molecule's energy, which is independent of η) grows with
increasing η. This η-dependence arises from the η-scaling of the extra diffuse pi basis
orbitals. Because most of the amplitude of such basis orbitals lies outside the valence
region, the kinetic energy is the dominant contributor to such orbitals' energy. Because η
enters into each orbital as exp(-η α r²), and because the kinetic energy operator involves
the second derivative with respect to r, the kinetic energies of orbitals dominated by the
diffuse pi basis functions vary as η².
For small η, all of the pi diffuse basis functions have their amplitudes concentrated
at large r and have low kinetic energy. As η grows, these functions become more radially
compact and their kinetic energies grow. For example, note the three lowest energies
shown above increasing from near zero as η grows.
As η further increases, one reaches a point at which the third and fourth anion-
state energies undergo an avoided crossing. At this η value, if one examines the nature of
the two wave functions whose energies avoid one another, one finds that one of them
contains substantial amounts of both valence and extra diffuse pi function character. Just
to the left of the avoided crossing, the lower-energy state (the third state for small η)
contains predominantly extra diffuse pi orbital character, while the higher-energy state
(the fourth state) contains largely valence pi* orbital character.
However, at the special value of η where these two states nearly cross, the kinetic
energy of the third state (as well as its radial size and de Broglie wavelength) are
appropriate to connect properly with the fourth state. By connect properly we mean that
the two states have wave function amplitudes, phases, and slopes that match. So, at this
special η value, one can achieve a description of the shape-resonance state that correctly
describes this state both in the valence region and in the large-r region. Only by tuning
the energy of the large-r states using the η scaling can one obtain this proper boundary
condition matching.
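The logic of a stabilization graph can be caricatured with a hedged two-state model (every number below is invented): one diabatic state represents the valence-trapped resonance, with energy roughly independent of η, while the other represents a diffuse continuum-like state whose kinetic energy scales as η². Diagonalizing their 2x2 Hamiltonian at each η reproduces the avoided crossing seen in Fig. 6.37:

```python
import numpy as np

# Hedged two-state caricature of a stabilization plot (parameters invented):
# a valence diabat at fixed energy ER couples to a diffuse diabat whose
# energy grows as c*eta**2; the avoided crossing marks the resonance.
ER = 2.0          # diabatic resonance energy (eV, illustrative)
c  = 1.0          # eta**2 scaling coefficient of the diffuse state (eV)
V  = 0.1          # valence-diffuse coupling (eV); sets the minimum gap

etas = np.linspace(0.5, 2.5, 2001)
lowers, uppers, gaps = [], [], []
for eta in etas:
    H = np.array([[ER, V], [V, c * eta ** 2]])
    e_lo, e_hi = np.linalg.eigvalsh(H)        # ascending eigenvalues
    lowers.append(e_lo); uppers.append(e_hi); gaps.append(e_hi - e_lo)

gaps = np.array(gaps)
i = gaps.argmin()                             # avoided crossing = minimum gap
eta_c = etas[i]
E_res = 0.5 * (uppers[i] + lowers[i])         # stabilization estimate of E

# the diabats cross where c*eta**2 = ER, and the minimum gap there is 2V
assert abs(eta_c - np.sqrt(ER / c)) < 0.01
assert abs(gaps[i] - 2.0 * V) < 1e-6
assert abs(E_res - ER) < 0.01
```

In this caricature the minimum splitting equals 2V, the valence-continuum coupling; a larger coupling gives a larger splitting and, as the text notes, a shorter-lived resonance.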
In summary, by carrying out a series of anion-state energy calculations for several
states and plotting them vs. η, one obtains a stabilization graph. By examining this graph
and looking for avoided crossings, one can identify the energies at which metastable
resonances occur. It is also possible to use the shapes (i.e., the magnitude of the energy
splitting between the two states and the slopes of the two avoiding curves) of the avoided
crossings in a stabilization graph to compute the lifetimes of the metastable states.
Basically, the larger the avoided-crossing energy splitting between the two states, the
shorter is the lifetime of the resonance state. So, the ETS and PES experiments offer
wonderful probes of the bound and continuum states of molecules and ions that tell us a
lot about the electronic nature and chemical bonding of these species. The theoretical
study of these phenomena is complicated by the need to properly identify and describe
any continuum orbitals and states that are involved. The stabilization technique allows us
to achieve a good approximation to resonance states that lie in such continua.