Parks, H.G., Needham, W., Rajaram, S., Rafferty, C. “Semiconductor Manufacturing”
The Electrical Engineering Handbook
Ed. Richard C. Dorf
Boca Raton: CRC Press LLC, 2000
23 Semiconductor Manufacturing

Harold G. Parks, The University of Arizona, Tucson
Wayne Needham, Intel Corporation
S. Rajaram, Lucent Technologies
Conor Rafferty, Bell Laboratories, Lucent Technologies
23.1 Processes
Thermal Oxidation • Diffusion • Ion Implantation • Deposition • Lithography and Pattern Transfer
23.2 Testing
Built-In Self-Test • Scan • Direct Access Testing • Joint Test Action Group • Pattern Generation for Functional Test Using Unit Delay • Pattern Generation for Timing • Temperature, Voltage, and Processing Effects • Fault Grading • Test Program Flow
23.3 Electrical Characterization of Interconnections
Interconnection Metrics • Interconnection Electrical Parameters
23.4 Process Modeling and Simulation
Ion Implantation • Diffusion • Oxidation • Etching • Deposition • Lithography • Summary and Future Trends
23.1 Processes
Harold G. Parks
Integrated circuit (IC) fabrication consists of a sequence of processing steps referred to as unit step processes
that result in the devices contained on today’s microchips. These unit step processes provide the methodology
for introducing and transporting dopants to change the conductivity of the semiconductor substrate, growing
thermal oxides for inter- and intra-level isolation, depositing insulating and conducting films, and patterning
and etching the various layers in the formation of the IC. Many of these unit steps have essentially remained
the same since discrete component processing, whereas many have originated and grown with the integrated
circuit evolution from small-scale integration (SSI) with less than 50 components per chip through very large
scale integration (VLSI) with up to one million devices per chip. As the ultra large scale integration (ULSI)
era, with more than a million devices per chip, proceeds to billion-device chips shortly after the turn of the
century, new processes and further modification of the current unit step processes will be required. In this
section the unit step processes for silicon IC processing as they exist today with an eye toward the future are
presented. How they are combined to form the actual IC process will be discussed in a later section. Due to
space limitations only silicon processes are discussed. This author does not feel this is a major limitation, as
many of the steps are used in processing other types of semiconductors, and perhaps more than 98% of all ICs
today and in the near future are and will be silicon. Furthermore, only the highlights of the unit steps can be
presented in this space, with ample references provided for a more thorough presentation. Specifically, the
referenced processing textbooks provide detailed discussion of all processes.
Thermal Oxidation
Silicon dioxide (SiO₂) layers are important in integrated circuit technology for surface passivation, as a diffusion
barrier and as a surface dielectric. The fact that silicon readily forms a high-quality, dense, natural oxide is the
major reason it is the dominant integrated circuit technology today. If a silicon wafer is exposed to air, it will
grow a thin (≈45 Å) oxide in a relatively short time. To achieve the thicknesses of SiO₂ used in integrated circuit
technology (100 Å to 2 μm), alternative steps must be taken. Thermal oxidation is an extension of the natural
oxide growth at an elevated temperature (800 to 1200°C). The temperature is usually selected out of compro-
mise, i.e., it must be high enough to grow the oxide in a reasonable time and it must be as low as practical to
minimize crystal damage and unwanted diffusion of dopants already in the wafer.
The Oxidation Process
Thermal oxidation is usually accomplished by placing wafers in a slotted quartz carrier which is inserted into
a quartz furnace tube. The tube is surrounded by a resistance heater and has provisions for controlled flow of
an inert gas such as nitrogen and the oxidant. A vented cap is placed over the input end of the tube. The gas
flows in the back end of the tube, over the wafers, and is exhausted through the vented cap. The wafer zone
has a flat temperature profile to within 1/2°C and can handle up to 50 parallel stacked wafers. Modern furnaces
are computer controlled and programmable. Wafers are usually loaded, in an inert environment, ramped to
temperature, and switched to the oxidant for a programmed time. When the oxidation is complete the gas is
switched back to the inert gas and the temperature is ramped down to the unload temperature. All these
complications in the process are to minimize thermal stress damage to the wafers and the procedures can vary
considerably. Detailed discussions of the equipment and procedures can be found in references [Sze, 1983].
The two most common oxidizing environments are dry and wet. As the name implies, dry oxides are grown
in dry O₂ gas following the reaction:

Si + O₂ → SiO₂    (23.1)
Wet oxides were originally grown by bubbling the dry oxygen gas through water at 95°C. Most “wet” oxides
today are accomplished by the pyrogenic reaction of H₂ and O₂ gas to form steam, and are referred to as steam
oxidations. In either case the reaction is essentially the same at the wafer:

Si + 2H₂O → SiO₂ + 2H₂    (23.2)
The oxidation process can be modeled as shown in Fig. 23.1. The position X₀ represents the Si/SiO₂ interface,
which is a moving boundary. The volume density of oxidizing species in the bulk gas, N_G, is depleted at the
oxide surface, N_S, due to an amount, N₀, being incorporated in the oxide layer. The oxidizing species then
diffuses across the growing oxide layer where it reacts with the silicon at the moving interface to form SiO₂.
F_G represents the flux of oxidant transported by diffusion from the bulk gas to the oxide surface. The oxidizing
species that enters the SiO₂ diffuses across the growing SiO₂ layer with a flux, F_ox. A reaction takes place at
the Si/SiO₂ interface that consumes some or all of the oxidizing species, as represented by the flux, F_I.

FIGURE 23.1 Model of the oxidation process.

In steady state these three flux terms are equal and can be used to solve for the concentrations N_I and N₀ in
terms of the reaction rate and diffusion coefficient of the oxidizing species. This in turn specifies the flux terms,
which can be used in the solution of the differential equation:

$$\frac{dx}{dt} = \frac{F}{N_{ox}} \qquad (23.3)$$

for the oxide growth, x. In this equation N_ox is the number of oxidant molecules per unit volume of oxide. An
excellent derivation of the growth equation is given in Grove [1967]. Here we give the result, which can be
represented by:
$$x_{ox} = \frac{A}{2}\left[\sqrt{1 + \frac{4B(t + \tau)}{A^2}} - 1\right] \qquad (23.4)$$

where x_ox is the oxide thickness, B is the parabolic rate constant, B/A is the linear rate constant, t is the oxidation
time, and τ is a time offset that accounts for any initial oxide thickness.
Referring to Eq. (23.4) we see there are two regimes of oxide growth. For thin oxides or short times, i.e., the
initial phase of the oxidation process, the equation reduces to:
$$x_{ox} = \frac{B}{A}(t + \tau) \qquad (23.5)$$

and the growth is a linear function of time, limited by the surface reaction at the Si/SiO₂ interface.
For thicker oxides and longer times the reaction is limited by the diffusion of the oxidizing species across
the growing oxide layer, and the limiting form of Eq. (23.4) is
$$x_{ox} = \sqrt{Bt} \qquad (23.6)$$
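As a quick numerical illustration of Eq. (23.4), the short Python sketch below evaluates the growth expression for assumed rate constants; the values of B, B/A, and τ here are placeholders chosen only for illustration, not tabulated data.

```python
import math

def oxide_thickness(t_hr, B, B_over_A, tau_hr=0.0):
    """Eq. (23.4): oxide thickness (um) after t_hr hours of oxidation.

    B        -- parabolic rate constant, um^2/hr
    B_over_A -- linear rate constant, um/hr
    tau_hr   -- time offset accounting for any initial oxide
    """
    A = B / B_over_A
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t_hr + tau_hr) / A**2) - 1.0)

# Assumed, illustration-only rate constants (roughly wet oxidation near 1000 C)
B, B_over_A = 0.30, 0.50                     # um^2/hr, um/hr
for t in (0.1, 0.5, 1.0, 4.0):
    print(f"t = {t:4.1f} hr  ->  x_ox = {oxide_thickness(t, B, B_over_A):.3f} um")
```

For short times the computed thickness follows the linear form of Eq. (23.5), and for long times it approaches the parabolic form of Eq. (23.6).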
Oxidation Rate Dependencies
Typical oxidation curves showing oxide thickness as a function of time with temperature as a parameter for
wet and dry oxidation of <100> silicon are shown in Fig. 23.2. This type of curve is qualitatively similar for all
oxidations. The oxidation rates are strongly temperature dependent as both the linear and parabolic rate
constants show an Arrhenius relationship with temperature. The linear rate is dominated by the temperature
dependence of the interfacial growth reaction and the parabolic rate is dominated by the temperature
dependence of the diffusion coefficient of the oxidizing species in SiO₂.
Wet oxides grow faster than dry oxides. Both the linear and parabolic rate constants are proportional to the
equilibrium concentration of the oxidant in the oxide. The solubility of H₂O in SiO₂ is greater than that of O₂,
and hence the oxidation rate is enhanced for wet oxides.
FIGURE 23.2 Thermal silicon dioxide growth on <100> silicon for wet and dry oxides (oxide thickness in μm versus oxidation time in hours, for temperatures of 900 to 1200°C).
Oxidation rate depends on substrate orientation [Ghandhi, 1968]. This effect is related to the surface atom
density of the substrate, i.e., the higher the density, the faster the oxidation rate. Oxidation rate also depends
on pressure. The linear and parabolic rates are dependent on the equilibrium concentration of the oxidizing
species in the SiO₂, which is directly proportional to the partial pressure of the oxidant in the ambient.
Oxide growth rate shows a doping dependence for heavily doped substrates (>10²⁰ cm⁻³). Boron increases
the parabolic rate constant and phosphorus enhances the linear rate constant [Wolf and Tauber, 1986].
Oxide Characteristics
Dry oxides grow more slowly than wet oxides, resulting in higher density, higher breakdown field strengths,
and more controlled growth, making them ideal for metal-oxide semiconductor (MOS) gate dielectrics.
Wet oxidation is used for forming thick oxides for field isolation and masking implants and diffusions. The
slight degradation in oxide density is more than compensated for by the thickness in these applications.
<100> substrates have fewer dangling bonds at the surface, which results in lower fixed oxide charge and
interface traps and therefore higher quality MOS devices.
Conventional dopants (B, P, As, and Sb) diffuse slowly in both wet and dry oxides and hence these oxides
provide a good barrier for masking diffusions in integrated circuit fabrication.
High-pressure steam oxidations provide a means for growing relatively thick oxides in reasonable times at
low temperatures to avoid dopant diffusion. Conversely, low-pressure oxidations show promise for forming
controlled growth of ultra-thin gate oxides for ULSI technologies.
Chlorine added to gate oxides [Sze, 1988] has been shown to reduce mobile ions, reduce oxide defects,
increase breakdown voltage, reduce fixed oxide charge and interface traps, reduce oxygen-induced stacking
faults, and increase substrate minority carrier lifetime. Chlorine is introduced into dry oxidations in less than
5% concentrations as anhydrous HCl gas or by trichloroethylene (TCE) or trichloroethane (TCA).
Dopant Segregation and Redistribution
Since silicon is consumed during the oxidation process, the dopant in the substrate will redistribute due to
segregation [Wolf and Tauber, 1986]. The boundary condition across the Si/SiO₂ interface is that the chemical
potential of the dopant is the same on both sides. This results in the definition of a segregation coefficient, m,
as the ratio of the equilibrium concentration of dopant in Si to the equilibrium concentration of dopant in
SiO₂. Depending on the value of m (i.e., less than or greater than 1) and the diffusion properties of the dopant
in SiO₂, various redistributions are possible. For example, m ≈ 0.3 for boron and it is a slow diffuser in SiO₂,
so it tends to deplete from the Si surface and accumulate in the oxide at the Si/SiO₂ interface. Phosphorus, on
the other hand, has m ≈ 10, is also a slow diffuser in SiO₂, and tends to pile up in the Si at the Si/SiO₂ interface.
Antimony and arsenic behave similarly to phosphorus.
Diffusion
Diffusion was the traditional way dopants were introduced into silicon wafers to create junctions and control
the resistivity of layers. Ion implantation has now superseded diffusion for this purpose. The principles and
concepts of diffusion theory, however, remain important since they describe the movement and transport of
dopants and impurities during the high-temperature processing steps of integrated circuit manufacture.
Diffusion Mechanism
Consider a silicon wafer with a high concentration of an impurity on its surface. At any temperature there are
a certain number of vacancies in the Si lattice. If the wafer is subjected to an elevated temperature, the number
of vacancies in the silicon will increase and the impurity will enter the wafer moving from the high surface
concentration to redistribute in the bulk. The redistribution mechanism is diffusion, and depending on the
impurity type it will either be substitutional or interstitial [Ghandhi, 1982].
For substitutional diffusion the impurity atom substitutes for a silicon atom at a vacancy lattice site and then
progresses into the wafer by hopping from lattice site to lattice site via the vacancies. Clearly, the hopping can
be in a random direction; however, since the impurity is present initially in high concentration on the surface
only, there is a net flow of the impurity from the surface into the bulk.
In the case of interstitial diffusion the impurity diffuses by squeezing between the lattice atoms and taking
residence in the interstitial space between lattice sites. Since this mechanism does not require the presence of
a vacancy, it proceeds much faster than substitutional diffusion.
Conventional dopants such as B, P, As, and Sb diffuse by the substitutional method. This is beneficial in that
the diffusion process is much slower and can therefore be controlled more easily in the manufacturing process.
Many of the undesired impurities such as Fe, Cu, and other heavy metals diffuse by the interstitial mechanism
and therefore the process is extremely fast. This again is beneficial in that at the temperatures used, and in the
duration of fabrication processes, the unwanted metals can diffuse completely through the Si wafer. Gettering
creates trapping sites on the back surface of the wafer for these impurities that would otherwise remain in the
silicon and cause adverse device effects.
Regardless of the diffusion mechanism, it can be formalized mathematically in the same way by introducing
a diffusion coefficient, D (cm²/sec), that accounts for the diffusion rate. The diffusion constants follow an
Arrhenius behavior according to the equation:

$$D = D_0 \exp\left(-\frac{E_A}{kT}\right) \qquad (23.7)$$

where D₀ is the prefactor, E_A the activation energy, k Boltzmann’s constant, and T the absolute temperature.
Conventional silicon dopants (substitutional diffusers) have diffusion coefficients on the order of 10⁻¹⁴ to
10⁻¹² cm²/sec at 1100°C, whereas heavy metal interstitial diffusers (Fe, Au, and Cu) have diffusion coefficients
of 10⁻⁶ to 10⁻⁵ cm²/sec at this temperature.
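As a numerical check of Eq. (23.7), the fragment below evaluates D over temperature; the prefactor and activation energy are representative of a substitutional dopant such as boron and are assumed only for illustration.

```python
import math

K_BOLTZMANN_EV = 8.617e-5                  # eV/K

def diffusion_coefficient(T_celsius, D0, Ea_eV):
    """Eq. (23.7): D = D0 * exp(-E_A / kT); T in Celsius, D in cm^2/sec."""
    T = T_celsius + 273.15
    return D0 * math.exp(-Ea_eV / (K_BOLTZMANN_EV * T))

# Representative (assumed) values for a boron-like substitutional dopant
D0, Ea = 10.5, 3.69                        # cm^2/sec and eV; placeholders
for Tc in (900, 1000, 1100):
    print(f"{Tc} C : D = {diffusion_coefficient(Tc, D0, Ea):.2e} cm^2/sec")
```

With these assumed values the computed D at 1100°C falls within the 10⁻¹⁴ to 10⁻¹² cm²/sec range quoted above for substitutional diffusers.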
The diffusion process can be described using Fick’s Laws. Fick’s first law says that the flux of impurity, F,
crossing any plane is related to the impurity distribution, N(x,t) per cm³, by:

$$F = -D\,\frac{\partial N}{\partial x} \qquad (23.8)$$

in the one-dimensional case. Fick’s second law states that the time rate of change of the particle density in turn
is related to the divergence of the particle flux:

$$\frac{\partial N}{\partial t} = -\frac{\partial F}{\partial x} \qquad (23.9)$$

Combining these two equations gives:

$$\frac{\partial N}{\partial t} = \frac{\partial}{\partial x}\left(D\,\frac{\partial N}{\partial x}\right) = D\,\frac{\partial^2 N}{\partial x^2} \qquad (23.10)$$

in the case of a constant diffusion coefficient, as is often assumed. This partial differential equation can be solved
by separation of variables or by Laplace transform techniques for specified boundary conditions.
For a constant source diffusion the impurity concentration at the surface of the wafer is held constant
throughout the diffusion process. Solution of Eq. (23.10) under these boundary conditions, assuming a semi-
infinite wafer, results in a complementary error function diffusion profile:

$$N(x,t) = N_0\,\mathrm{erfc}\left(\frac{x}{2\sqrt{Dt}}\right) \qquad (23.11)$$
Here, N₀ is the impurity concentration at the surface of the wafer, x the distance into the wafer, and t the
diffusion time. As time progresses the impurity profile penetrates deeper into the wafer while maintaining a
constant surface concentration. The total number of impurity atoms/cm² in the wafer is the dose, Q, and
continually increases with time:

$$Q = \int_0^{\infty} N(x,t)\,dx = 2N_0\sqrt{\frac{Dt}{\pi}} \qquad (23.12)$$
For a limited source diffusion an impulse of impurity of dose Q is assumed to be deposited on the wafer surface.
Solution of Eq. (23.10) under these boundary conditions, assuming a semi-infinite wafer with no loss of
impurity, results in a Gaussian diffusion profile:
$$N(x,t) = \frac{Q}{\sqrt{\pi Dt}}\,\exp\left(-\frac{x^2}{4Dt}\right) \qquad (23.13)$$
In this case, as time progresses the impurity penetrates more deeply into the wafer and the surface concentration
falls so as to maintain a constant dose in the wafer.
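Both limiting profiles are easy to evaluate directly. The sketch below, using the standard-library error function in Python, returns the concentration at a given depth for a constant-source diffusion, Eq. (23.11), and a limited-source diffusion, Eq. (23.13); all numerical inputs are hypothetical.

```python
import math

def erfc_profile(x_cm, t_s, D, N0):
    """Constant-source diffusion, Eq. (23.11): N = N0 * erfc(x / (2*sqrt(D*t)))."""
    return N0 * math.erfc(x_cm / (2.0 * math.sqrt(D * t_s)))

def gaussian_profile(x_cm, t_s, D, Q):
    """Limited-source diffusion, Eq. (23.13): N = Q/sqrt(pi*D*t) * exp(-x^2/(4*D*t))."""
    return (Q / math.sqrt(math.pi * D * t_s)) * math.exp(-x_cm**2 / (4.0 * D * t_s))

# Hypothetical numbers: D = 3e-13 cm^2/sec, 30-minute diffusion, depth of 0.1 um
D, t, x = 3e-13, 30 * 60, 0.1e-4
print(f"erfc profile:     {erfc_profile(x, t, D, N0=1e20):.2e} cm^-3")
print(f"Gaussian profile: {gaussian_profile(x, t, D, Q=1e14):.2e} cm^-3")
```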
Practical Diffusions
Most real diffusions follow a two-step procedure, where the dopant is applied to the wafer with a short constant
source diffusion, then driven in with a limited source diffusion. The reason for this is that in order to control
the dose, a constant source diffusion must be done at the solid solubility limit of the impurity in the Si, which
is on the order of 10²⁰ cm⁻³ for most dopants. If only a constant source diffusion were done, this would result in
only very high surface concentrations. Therefore, to achieve lower concentrations, a short constant source
diffusion to get a controlled dose of impurities in a near surface layer is done first. This diffusion is known as
the predeposition or predep step. Then the source is removed and the dose is diffused into the wafer, simulating
a limited source diffusion in the subsequent drive-in step.
If the Dt product for the drive-in step is much greater than the Dt product for the predep, the resulting
profile is very close to Gaussian. In this case the dose can be calculated by Eq. (23.12) for the predep time and
diffusion coefficient. This dose is then used in the limited source Eq. (23.13) to describe the final profile based
on the time and diffusion coefficient for the drive-in. If these Dt criteria are not met, then an integral solution
exists for the evaluation of the resulting profiles [Ghandhi, 1968].
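A minimal worked example of the two-step procedure, under the assumption that the drive-in Dt product dominates: the predep dose from Eq. (23.12) is fed into the Gaussian of Eq. (23.13) to obtain the final surface concentration. The diffusion coefficients, times, and solid solubility below are hypothetical.

```python
import math

def predep_dose(N0, D_pre, t_pre):
    """Eq. (23.12): dose delivered by a constant-source predep, Q = 2*N0*sqrt(D*t/pi)."""
    return 2.0 * N0 * math.sqrt(D_pre * t_pre / math.pi)

def drive_in_surface_conc(Q, D_drv, t_drv):
    """Eq. (23.13) evaluated at x = 0 after the drive-in: N(0) = Q / sqrt(pi*D*t)."""
    return Q / math.sqrt(math.pi * D_drv * t_drv)

# Hypothetical process: 15-minute predep followed by a 2-hour drive-in
N0_solid_solubility = 1e20                   # cm^-3, assumed solid solubility limit
Q = predep_dose(N0_solid_solubility, D_pre=5e-15, t_pre=15 * 60)
Ns = drive_in_surface_conc(Q, D_drv=3e-13, t_drv=2 * 3600)
print(f"predep dose                  Q = {Q:.2e} cm^-2")
print(f"surface conc. after drive-in   = {Ns:.2e} cm^-3")
```

With these assumed numbers the drive-in Dt product is several hundred times the predep Dt product, so the Gaussian approximation for the final profile is justified.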
Further Profile Considerations
A wafer typically goes through many temperature cycles during fabrication, which can alter the impurity profile.
The effects of many thermal cycles that take place at different times and temperatures are accounted for by
calculating a total Dt product for the diffusion that is equal to the sum of the individual process Dt products:
$$(Dt)_{tot} = \sum_i D_i t_i \qquad (23.14)$$

Here D_i and t_i are the diffusion coefficient and time that pertain to the ith process step.
Many diffusions are used to form junctions by diffusing an impurity opposite in type to the substrate. At
the metallurgical junction, x_j, the impurity diffusion profile has the same concentration as the substrate. For
a junction with a surface concentration N₀ and substrate doping N_B the metallurgical junction for a Gaussian
profile is

$$x_j = 2\sqrt{Dt\,\ln\!\left(\frac{N_0}{N_B}\right)} \qquad (23.15)$$
and for a complementary error function profile is
$$x_j = 2\sqrt{Dt}\;\mathrm{erfc}^{-1}\!\left(\frac{N_B}{N_0}\right) \qquad (23.16)$$
So far we have considered just vertical diffusion. In practical IC fabrication, usually only small regions are
affected by the diffusion; this is done by using an oxide mask and making a cut in it where the diffusion is to occur.
Hence, we also have to be concerned with lateral diffusion of the dopant so as not to affect adjacent devices.
Two-dimensional numerical solutions exist for solving this problem [Jaeger, 1988]; however, a useful rule of
thumb is that the lateral junction, y_j, is 0.8x_j.
Another parameter of interest is the sheet resistance of the diffused layer. This has been numerically evaluated
for various profiles and presented as general-purpose graphs known as Irvin’s curves. For a given profile type,
such as n-type Gaussian, Irvin’s curves plot surface dopant concentration versus the product of sheet resistance
and junction depth with substrate doping as a parameter. Thus, given a calculated diffusion profile one could
estimate the sheet resistivity for the diffused layer. Alternatively, given the measured junction depth and sheet
resistance, one could estimate the surface concentration for a given profile and substrate doping. Most processing
books [e.g., Jaeger, 1988] contain Irvin’s curves.
Ion Implantation
Diffusion places severe limits on device design, such as difficult-to-control low-dose diffusions, the inability to tailor profiles,
and appreciable lateral diffusion at mask edges. Ion implantation overcomes all of these drawbacks and is an
alternative approach to diffusion used in the majority of production doping applications today. Although many
different elements can be implanted, IC manufacture is primarily interested in B, P, As, and Sb.
Ion Implant Technology
A schematic drawing of an ion implanter is shown in Fig. 23.3. The ion source operates at relatively high voltage
(≈20–25 kV) and for conventional dopants is usually a gaseous type which extracts the ions from a plasma.
The ions are mass separated with a 90-degree analyzer magnet that directs the selected species through a
resolving aperture; the beam is then focused and accelerated to the desired implant energy. At the other end of the implanter is
the target chamber where the wafer is placed in the beam path. The beam line following the final accelerator
and the target chamber are held at or near ground potential for safety reasons. After final acceleration the beam
is bent slightly off axis to trap neutrals and is asynchronously scanned in the X and Y directions over the wafer
to maintain dose uniformity. This is often accompanied by rotation and sometimes translation of the target
wafer also.
The implant parameters of interest are the ion species, implant energy, and dose. The ion species can consist
of singly ionized elements, doubly ionized elements, or ionized molecules. The molecular species are of interest
in forming shallow junctions with light ions, i.e., B, using BF₂⁺. The beam energy is

E = nqV    (23.17)
where n represents the ionization state (1 for singly and 2 for doubly ionized species), q the electronic charge,
and V the total acceleration potential (source + acceleration tube) seen by the beam. The dose, Q, from the
implanter is
$$Q = \frac{1}{nqA}\int_0^{t_I} I\,dt \qquad (23.18)$$

where I is the beam current in amperes, A the wafer area in cm², t_I the implant time in sec, and n the ionization
state.
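For a constant beam current the integral in Eq. (23.18) reduces to Q = I·t_I/(nqA). A minimal sketch of that calculation, with hypothetical implanter numbers:

```python
Q_ELECTRON = 1.602e-19                   # electronic charge, C

def implant_dose(I_amp, t_sec, area_cm2, n_charge_state=1):
    """Eq. (23.18) with constant beam current: Q = I*t / (n*q*A), in ions/cm^2."""
    return (I_amp * t_sec) / (n_charge_state * Q_ELECTRON * area_cm2)

# Hypothetical: 100 uA of singly ionized boron scanned over a 200-mm wafer for 10 s
wafer_area = 3.14159 * 10.0**2           # cm^2 (radius = 10 cm)
print(f"dose = {implant_dose(100e-6, 10.0, wafer_area):.2e} ions/cm^2")
```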
Ion Implant Profiles
Ions impinge on the surface of the wafer at a certain energy and give up that energy in a series of electronic
and nuclear interactions with the target atoms before coming to rest. As a result the ions do not travel in a
straight line but follow a zigzag path resulting in a statistical distribution of final placement. To first order the
ion distribution can be described with a Gaussian distribution:
$$N(x) = N_p\,\exp\!\left[-\frac{(x - R_p)^2}{2\,\Delta R_p^2}\right] \qquad (23.19)$$

R_p is the projected range, which is the average depth of an implanted ion. The peak concentration, N_p, occurs
at R_p and the ions are distributed about the peak with a standard deviation ΔR_p known as the straggle. Curves
for projected range and straggle taken from Lindhard, Scharff, and Schiott (LSS) theory [Gibbons et al., 1975]
are shown in Figs. 23.4 and 23.5, respectively, for the conventional dopants.
The area under the implanted distribution represents the dose as given by:
$$Q = \int_0^{\infty} N(x)\,dx = \sqrt{2\pi}\,N_p\,\Delta R_p \qquad (23.20)$$
FIGURE 23.3 Schematic drawing of an ion implanter.
FIGURE 23.4 Projected range for B, P, and As based on LSS calculations.
which can be related to the implant conditions by Eq. (23.18).
Implant doses can range from 10¹⁰ to 10¹⁸ per cm² and can be
controlled within a few percent.
The mathematical representation of the implant profile just
presented really pertains to an amorphous substrate. Silicon
wafers are crystalline and therefore present the opportunity
for the ions to travel much deeper into the substrate by a
process known as channeling. The regular arrangement of
atoms in the crystalline lattice leaves large amounts of open
space that appear as channels into the bulk when viewed from
the major orientation directions, i.e, <110>, <100>, and
<111>. Practical implants are usually done through a thin
oxide with the wafers tilted off normal by a small angle (typ-
ically 7 degrees) and rotated by 30 degrees to make the surface atoms appear more random. Implants with
these conditions agree well with the projected range curves of Fig. 23.4, indicating the wafers do appear
amorphous.
Actual implant profiles deviate from the simple Gaussian profiles described in the previous paragraphs. Light
ions tend to backscatter from target atoms and fill in the distribution on the surface side of the peak. Heavy
atoms tend to forward scatter from the target atoms and fill in the profile on the substrate side of the peak.
This behavior has been modeled with distributions such as the Pearson Type-IV distribution [Jaeger, 1988].
However, for implant energies below 200 keV and first-order calculations, the Gaussian model will more than
suffice.
Masking and Junction Formation
Usually it is desired to implant species only in selected areas of the wafer to alter or create device properties,
and hence the implant must be masked. This is done by putting a thick layer of silicon dioxide, silicon nitride,
or photoresist on the wafer and patterning and opening the layer where the implant is desired. To prevent
significant alteration of the substrate doping in the mask regions the implant concentration at the Si/mask
interface, X₀, must be less than 1/10 of the substrate doping, N_B. Under these conditions Eq. (23.19) can be
solved for the required mask thickness as:

$$X_0 = R_p + \Delta R_p\,\sqrt{2\,\ln\!\left(\frac{10\,N_p}{N_B}\right)} \qquad (23.21)$$
This implies that the range and straggle are known for the mask material being used. These are available in the
literature [Gibbons et al., 1975] but can also be reasonably approximated by making the calculations for Si.
SiO₂ is assumed to have the same stopping power as Si and thus would have the same mask thickness. Silicon
nitride has more stopping power than SiO₂ and therefore requires only 85% of the calculated mask thickness,
whereas photoresist is less effective for stopping the ions and requires 1.8 times the equivalent Si thickness.
Analogous to the mask calculations is junction formation. Here, the metallurgical junction, x_j, occurs when
the opposite-type implanted profile is equal to the substrate doping, N_B. Solving Eq. (23.19) for these conditions
gives the junction depth as:

$$x_j = R_p \pm \Delta R_p\,\sqrt{2\,\ln\!\left(\frac{N_p}{N_B}\right)} \qquad (23.22)$$
Note that both roots may be applicable depending on the depth of the implant.
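Equations (23.21) and (23.22) are simple enough to evaluate directly once the range statistics are known. The sketch below does so for assumed values of R_p, ΔR_p, and the dopings; these are placeholders for illustration, not LSS table entries.

```python
import math

def mask_thickness(Rp, dRp, Np, NB):
    """Eq. (23.21): mask thickness for which the implant tail is below NB/10."""
    return Rp + dRp * math.sqrt(2.0 * math.log(10.0 * Np / NB))

def junction_depths(Rp, dRp, Np, NB):
    """Eq. (23.22): x_j = Rp +/- dRp*sqrt(2*ln(Np/NB)); both roots may lie in the wafer."""
    d = dRp * math.sqrt(2.0 * math.log(Np / NB))
    return Rp - d, Rp + d

# Placeholder range statistics and dopings (not LSS table values)
Rp, dRp = 0.30e-4, 0.07e-4               # cm
Np, NB = 1e19, 1e16                      # peak and substrate concentrations, cm^-3
print(f"required mask thickness ~ {mask_thickness(Rp, dRp, Np, NB)*1e4:.2f} um")
x1, x2 = junction_depths(Rp, dRp, Np, NB)
print(f"junctions at {x1*1e4:.2f} um and {x2*1e4:.2f} um")
```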
FIGURE 23.5 Implant straggle for B, P, and As
based on LSS calculations.
Lattice Damage and Annealing
During ion implantation the impinging atoms can displace Si atoms in the lattice, causing damage to the crystal.
For high implant doses the damage can be severe enough to make the implanted region amorphous. Typically,
light ions, or light doses of heavy ions, will cause primary crystalline defects (interstitials and vacancies), whereas
medium to heavy doses of heavy ions will cause amorphous layers. Implant damage can be removed by annealing
the wafers in an inert gas at 800 to 1000°C for approximately 30 minutes. Annealing cycles at high temperatures
can cause appreciable diffusion which must be considered, especially in the newer technologies where shallow
junctions are required. Rapid thermal annealing (RTA) can be successfully applied to prevent undesirable
diffusion in these cases [Wolf and Tauber, 1986].
Deposition
During IC fabrication, thin films of dielectrics (SiO₂, Si₃N₄, etc.), polysilicon, and metal conductors are deposited
on the wafer surface to form devices and circuits. The techniques used for forming these thin films are physical
vapor deposition (PVD), chemical vapor deposition (CVD), and epitaxy, which is just a special case of CVD.
Physical Vapor Deposition
Vacuum evaporation and sputtering are the two methods of physical vapor deposition used in the fabrication
of integrated circuits. Both of these processes are carried out in a vacuum to prevent contamination of the
substrate and to provide a reasonable mean free path for the material being deposited. In the early days of
integrated circuits aluminum was used exclusively for IC metallization and evaporation was used for its
deposition. As IC technology matured, the need for metal alloys, alternative metals, and various insulating thin
films stimulated the development and acceptance of sputter deposition as the PVD method of choice.
When the temperature is raised high enough to melt a solid some of the atoms have enough internal energy
to break the surface and escape into the surrounding atmosphere. These evaporated atoms strike the wafer and
condense into a thin film. Typical film thicknesses used in the IC industry are in the few thousand angstroms to
1 μm range. Heat is provided by resistance heating, electron beam heating, or by rf inductive heating. Discussions
of these techniques and their historical significance can be found in most processing books [e.g., Sze, 1983;
Wolf and Tauber, 1986]. The most commonly used system employs a focussed electron beam scanned over a
crucible of metal as illustrated in Fig. 23.6(a). Contamination levels can be quite low because only electrons
come in contact with the melted metal. A high intensity electron beam, typically 15 keV, bent through a 270°
angle to shield the wafers from filament contamination, provides the heating for evaporation of the metal. The
wafers are mounted above the source on a planetary substrate holder that rotates around the source during the
deposition to insure uniform step coverage. For the planetary substrate holder, shown schematically in
Fig. 23.6(b), the growth rate is independent of substrate position. The relatively large size of the crucible provides
an ample supply of source material for the depositions. The deposition rate is controlled by changing the current
and energy of the electron beam. Dual beams with dual targets can be used to coevaporate composite films.
Device degradation due to X-ray radiation, generated by the electron beam system, is of great concern in MOS
processing. Because of this, sputtering has replaced e-beam evaporation in many process lines.
Sputtering is accomplished by bombarding a target surface with energetic ions that dislodge target atoms
from the surface by direct momentum transfer. Under proper conditions the sputtered atoms are transported
to the wafer surface where they deposit to form a thin film. Sputter deposition takes place in a chamber that
is evacuated and then backfilled with an inert gas at roughly 10 mtorr pressure. A glow discharge between two
electrodes, one of which is the target, within the gas creates a plasma that provides the source of ions for the
sputter process. Metals can be sputtered in a simple dc parallel plate reactor with the most negative electrode
being the target and the most positive electrode (usually ground) being the substrate holder. If the dc voltage
is replaced by an rf voltage, insulators as well as conductors can be sputter deposited. Magnetron systems
incorporate a magnetic field that enhances the efficiency of the discharge by trapping secondary electrons in
the plasma and lowers substrate heating by reducing the energy of electrons reaching the wafer. Circular
magnetrons (or S-guns) virtually eliminate substrate heating by electron bombardment due to a ring-shaped
cathode/anode combination with the substrate a nonparticipating system electrode [Sze, 1983]. Besides insu-
lators and conductors, sputtering can be used to deposit alloy films with the same composition as an alloy target.
Chemical Vapor Deposition
Chemical vapor deposition (CVD) is a method of
forming thin films on a substrate in which energy is
supplied for a gas phase reaction. The energy may be
supplied by heat, plasma excitation, or optical excita-
tion. Since the reaction can take place close to the
substrate, CVD can be performed at atmospheric
pressure (i.e., low mean free path) or at low pressures.
Relatively high temperatures can be used, resulting in
excellent conformal step coverage, or relatively low
temperatures can be used to passivate a low melting
temperature film such as aluminum. Substrates can
be amorphous or single crystalline. Epitaxial growth
of Si is simply CVD on a single crystalline substrate
resulting in single crystal layer. Generally, unless the
qualifying adjective epitaxial is used, the depositions
are assumed to result in amorphous or polycrystalline films. Typically all CVD depositions show a growth rate
versus temperature dependence as illustrated in Fig. 23.7. At low temperatures the deposition is surface reaction
rate limited and shows an Arrhenius type behavior. At high temperatures the rate is dominated by mass transfer
of the reactant and growth rate is essentially temperature independent. The transition temperature where the
reaction switches from reaction rate to mass flow dominated is dependent on other factors such as pressure,
reactant species, and energy source.
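The behavior sketched in Fig. 23.7 can be mimicked qualitatively by taking the effective growth rate as the slower of an Arrhenius surface-reaction term and a nearly temperature-independent mass-transfer term. The coefficients in the sketch below are invented solely to show the two regimes and the crossover between them.

```python
import math

K_EV = 8.617e-5   # Boltzmann constant, eV/K

def cvd_growth_rate(T_celsius, k0=3e9, Ea_eV=1.6, mass_transfer_rate=100.0):
    """Qualitative CVD growth rate (nm/min): the slower of an Arrhenius
    surface-reaction term and a temperature-independent mass-transfer term."""
    T = T_celsius + 273.15
    surface_limited = k0 * math.exp(-Ea_eV / (K_EV * T))
    return min(surface_limited, mass_transfer_rate)

for Tc in (500, 600, 700, 800, 900):
    print(f"{Tc} C : {cvd_growth_rate(Tc):8.2f} nm/min")
```

At the lower temperatures the printed rate rises steeply (reaction-rate limited); above the crossover it saturates at the mass-transfer value, as in the figure.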
Several types of films can be deposited by CVD. Insulators and dielectrics (such as SiO₂ and Si₃N₄), doped
glasses (PSG, BPSG), and nonstoichiometric dielectrics as well as semiconductors (such as Si, Ge, GaAs, and
GaP) can be deposited. Conductors of pure metals, such as Al, Ni, Au, Pt, Ti, etc., as well as silicides such as
WSi₂ and MoSi₂ can also be deposited.
FIGURE 23.6 (a) Electron beam evaporation source. (b) Geometry for a planetary holder.
FIGURE 23.7 Typical chemical vapor deposition growth
rate versus reciprocal temperature characteristics.
CVD systems come in a multitude of designs and configurations and are selected for the compatibility with
method, energy source, and temperature range being used. Reaction chambers can be quartz, similar to diffusion
or oxidation furnaces, or stainless steel. Wafer holders also depend on the type of reaction and can be graphite,
quartz, or stainless steel. Cold wall systems employ direct heating of the substrate or wafer holder by induction
or radiation. As such, the reaction takes place right at the wafer surface and is usually cleaner because the film
does not build up on the chamber walls. In a hot wall deposition system the reaction takes place in the gas
stream and the reaction product is deposited on every surface in the system, including the walls. Such unwanted
depositions build up and flake off of these surfaces in time. Without proper cleaning and maintenance proce-
dures they become a source of contamination.
Atmospheric deposition systems were first used in the deposition of epitaxial Si in bipolar processes (i.e.,
buried layer formation). Early systems were horizontal rf induction heated systems using graphite substrate
holders. Atmospheric pressure systems rely on flow dynamics in the chamber to produce uniform films. Since
the reactant species is being depleted from the gas stream, it is imperative that the system design (flow pattern
and rate) ensure that all wafers receive the same amount of deposit. Because of this, most atmospheric
depositions are carried out using a horizontal or near horizontal wafer holder.
Low-pressure CVD depositions are done at pressures in the 0.5 to 1 torr range and most often in a horizontal
hot wall reactor. Wafers are held vertically and the system is operated in the reaction rate limited regime. A
multi-zone furnace (typically three) is used to allow the temperature to be increased along the wafer holder to
compensate for gas depletion by the deposition along the flow path. LPCVD is typically used to deposit SiO₂
(≈925°C), Si₃N₄ (≈850°C), and polycrystalline Si (≈630°C). Due to the higher temperatures, excellent conformal
step coverage can be obtained.
Plasma-enhanced CVD uses a plasma to supply the required reaction energy. Typically, rf energy is used to
produce a glow discharge such that the gas constituents are in a highly reactive state. Because of the ability of
the plasma to impart high energy to the reaction at low temperature, these depositions maintain many of the
excellent features of LPCVD, such as step coverage, with low temperature attributes, such as reduced wafer
warpage, less impurity diffusion, and less film stress. Plasma-enhanced deposition results in nonstoichiometric
highly hydrogenated films such as SiO-H, SiN-H, and amorphous Si-H.
Lithography and Pattern Transfer
In the production of integrated circuits various thin films are fabricated in the Si wafer or grown or deposited
on the surface. Each of these layers has a functionality as either an active part of a device, a barrier or mask,
or an inter- or intra-layer isolation. To perform its intended function, each of these layers has to be located in
specific regions on the wafer surface. This is accomplished by either patterning and etching a thin film to serve
as a mask to form the desired function by a blanket process or by forming the desired layer and then patterning
and etching it to provide the device function in local areas.
Wafer Patterning
Lithography is a transfer process where the pattern on a mask is replicated in a radiation-sensitive layer on the
wafer surface. Typically, this has been accomplished with UV light as the radiation source and photoresist, or
resist in conventional terminology, which is a UV-sensitive polymer, as the mask layer. The wafers with the
layer to be patterned are cleaned and prebaked (400–800°C for 20–30 min) to drive off moisture and promote
resist adhesion. Many processes also add an adhesion promoter, such as hexamethyldisilazane (HMDS), at this
point, which functions by removing unwanted surface radicals that prevent adhesion. Then, a few drops of
resist are deposited on a wafer which is spinning at a slow rate to produce a uniform coating and the spin speed
is increased to enhance drying. The wafer with resist is softbaked at 80–90°C for 10–30 min to drive off the
remaining solvents. The wafers are then put in an exposure system and the mask pattern to be transferred is
aligned to any existing wafer patterns. Present-day exposure systems are step and repeat cameras where the
mask is a 5–10× enlargement of a single chip pattern. Earlier systems used a 1× mask for the whole wafer and
were of the contact, proximity, or 1× projection variety [Anner, 1990]. The resist is exposed through the mask
to UV radiation that changes its structure, depending on whether the resist type is positive or negative. Negative
resist becomes polymerized (i.e., cross linked) in areas exposed to the radiation, whereas in a positive resist the
polymer bonds are scissioned upon exposure. The resist is not affected in regions where the mask is opaque
in either case. After full wafer exposure the resist is developed such that the unpolymerized regions are selectively
dissolved in an appropriate solvent. The polymerized portion of the resist remains intact on the wafer surface,
replicating the opaque features of the mask in a positive resist and just the opposite for a negative resist.
As the minimum feature size approaches the wavelength of light used in optical exposure systems (≈4000
Å), resolution is lost due to diffraction limitations. Alternatives are being developed to overcome these limita-
tions [Okazaki, 1991], the simplest of which is using shorter wavelength, i.e., deep UV, radiation to reduce the
diffraction limit. A further extension of this technique introduces optical phase shifting in the photomask itself
to extend the diffraction limit even further. These techniques are eventually limited, somewhere in the range
of 2000 Å, and alternative techniques such as electron beam or X-ray radiation will have to replace the UV
radiation source. These techniques are quite analogous to UV systems but use resist specifically tailored for
sensitivity in their appropriate wavelength range.
Pattern Transfer
After the resist pattern is formed it is then transferred to the surface layer of the wafer. Sometimes this is an
invisible transfer, such as ion implantation, but more often than not it is a physical transfer of the pattern by
etching the surface layer, using the resist as a mask. This results in the desired structure or produces a more
etch-resistant mask for further pattern transfer operations.
Historically, the most common etch processes used wet chemicals. During wet etching, wafers with resist
(or a resist transferred mask) are immersed in a temperature-controlled etchant for a fixed period of time. The
etch rate is dependent on the strength of the etchant, temperature, and material being etched. Such chemical
etches are isotropic, which means the vertical and lateral etch rates are the same. Thus, the thicker the layer
being etched, the more undercutting of the mask pattern. Most wet etches are stopped with an underlying
etchstop layer that is impervious to the etchant used to remove the top layer. In order to ensure the layer is
totally removed, an over etch is allowed, which exacerbates the undercutting.
Modern high-density, small-feature processes require anisotropic etch processes; this requirement has driven
the development of dry etching techniques. Plasma etching is a dry etch technique that uses an rf plasma to
generate chemically active etchants that form volatile etch species with the substrate. Typically chlorine or
fluorine compounds, most notably being CCl₄ and CF₄, have been tailored for etching polysilicon, SiO₂, Si₃N₄,
and metals. The etch rate can be significantly enhanced by adding 5–10% O₂ to the etch gas. This, however,
increases erosion of resist masks.
Early plasma etch systems were barrel reactors which used a perforated metal cylinder to confine the plasma
in a region exterior to the wafers. In such reactors the etch species are electrically neutral so the reaction is
entirely chemical and just as isotropic as wet chemical etch. A similar result occurs for a planar parallel plate
rf reactor where the wafer is placed on the grounded electrode. However, some anisotropic etching is achieved
in this configuration because ions can reach the wafer. This is further enhanced in reactive ion etching (RIE)
where the wafer is placed on the rf electrode in a planar parallel plate reactor. Here the ions experience a
considerable acceleration to the wafer by the dc potential developed between plasma and cathode that results
in anisotropic etching. The etch processes require significant characterization and development through opti-
mization of pressure, gas flow rate, gas mixture, and power to produce the desired etch rate, anisotropy,
selectivity, uniformity and resist erosion. Nevertheless, this has become the etch process of choice in modern
ULSI processes. A less popular version uses a beam of reactive species for the etch process and is called reactive
ion beam etching (RIBE).
Finally, it should be noted that near total anisotropic dry etching can be achieved with ion etching. This is
done with an inactive species (e.g., Ar ions) either in a beam or with a parallel plate sputtering system with
the wafer on the rf electrode. This results in an etch process that is entirely physical through momentum transfer.
The etch rate is primarily controlled by the sputtering efficiency for various materials and thus does not differ
significantly from material to material. Hence, such processes suffer from poor selectivity. Resist or mask erosion
is also a problem with these techniques. Because of these limitations, virtually the only application of pure ion
etching in VLSI/ULSI is for sputter cleaning of wafers before deposition.
Defining Terms
Chemical vapor deposition: A process in which insulating or conducting films are deposited on a substrate
by use of reactant gases and an energy source to produce a gas-phase chemical reaction. The energy
source may be thermal, optical, or plasma in nature.
Deposition: An operation in which a film is placed on a wafer surface, usually without a chemical reaction
with the underlying layer.
Diffusion: A high-temperature process in which impurities on or in a wafer are redistributed within the
silicon. If the impurities are desired dopants, this technique is often used to form specific device structures.
If the impurities are undesired contaminants, diffusion often results in undesired device degradation.
Dry etching: Processes that use gas-phase reactants, inert or active ionic species, or a mixture of these to
remove unprotected layers of a substrate by chemical processes, physical processes, or a mixture of these,
respectively.
Ion implantation: A high-energy process, usually greater than 10 keV, that injects ionized species into a
semiconductor substrate. Often done for introducing dopants for device fabrication into silicon with
boron, phosphorus, or arsenic ions.
Lithography: A patterning process in which a mask pattern is transferred by a radiation source to a radiation-
sensitive coating that covers the substrate.
Physical vapor deposition: A process in which a conductive or insulating film is deposited on a wafer surface
without the assistance of a chemical reaction. Examples are vacuum evaporation and sputtering.
Thermal oxidation of silicon: A high-temperature chemical reaction, typically greater than 800°C, in which
the silicon of the wafer surface reacts with oxygen or water vapor to form silicon dioxide.
Wet etching: A process that uses liquid chemical reactions with unprotected regions of a wafer to remove
specific layers of the substrate.
Related Topics
13.1 Analog Circuit Simulation • 22.1 Physical Properties
References
G.E. Anner, Planar Processing Primer, New York: Van Nostrand Reinhold, 1990.
C.Y. Chang and S.M. Sze, ULSI Technology, New York: McGraw-Hill, 1995.
B. Ciciani, Manufacturing Yield of VLSI, IEEE Press, 1995.
S.K. Ghandhi, The Theory and Practice of Microelectronics, New York: John Wiley & Sons, 1968.
S.K. Ghandhi, VLSI Fabrication Principles—Silicon and Gallium Arsenide, New York: John Wiley & Sons, 1982.
J.F. Gibbons, W.S. Johnson, and S.M. Mylroie, Projected range statistics, in Semiconductors and Related Materials,
vol. 2, 2nd ed., Dowden, Hutchinson, and Ross, Eds., New York: Academic Press, 1975.
A.S. Grove, Physics and Technology of Semiconductor Devices, New York: John Wiley & Sons, 1967.
IEEE Symposium on Semiconductor Manufacturing, IEEE Press, 1995.
R.C. Jaeger, Introduction to Microelectronic Fabrication, vol. 5 in the Modular Series on Solid State Devices,
G.W. Neudek and R.F. Pierret, Eds., Reading, Mass.: Addison-Wesley, 1988.
S. Okazaki, “Lithographic technology for future ULSI’s,” Solid State Technology, vol. 34, no. 11, p. 77, November
1991.
S.M. Sze, Ed., VLSI Technology, 2nd ed., New York: McGraw-Hill, 1988.
P. Van Zant, Microchip Fabrication, 3rd ed., McGraw-Hill, 1996.
S. Wolf and R.N. Tauber, Silicon Processing for the VLSI ERA, vol. 1, Process Technology, Sunset Beach, Calif.:
Lattice Press, 1986.
Further Information
The references given in this section have been chosen to provide more detail than is possible to provide in the
limited space allocation for this section. Specifically, the referenced processing textbooks provide detailed
discussion of all unit step processes presented. Further details and more recent process developments can be
found in several magazines/journals, such as Semiconductor International, Solid State Technology, IEEE Trans-
actions on Semiconductor Manufacturing, IEEE Transactions on Electron Devices, IEEE Electron Device Letters,
Journal of Applied Physics, and the Journal of the Electrochemical Society.
23.2 Testing¹
Wayne Needham
The function of test of a semiconductor device is twofold. First is design debug, to understand the failing section
of the device, identify areas for changes and verify correct modes of operation. The second major area is to
simply separate good devices from bad devices in a production test environment. Data collection and infor-
mation analysis are equally important in both types of test, but for different reasons. The first case is obvious,
debug requires understanding of the part. In the second case, data collected from a test program may be used
for yield enhancement. This is done by finding marginal areas of the device or in the fabrication process and
then making improvements to raise yields and therefore lower cost.
In this section, we will look at methods of testing which can be used for data collection, analysis, and debug
of a new device. No discussion of test would be useful if the test strategy was not thought out clearly before
design implementation. Design for test (access for control and observation) is a requirement for successful
debug and testing of any semiconductor device.
The basis for all testing of complex integrated circuits is a comparison of known good patterns to the response
of a DUT (device under test). The simulation of the devices is done with input stimuli, and those same input
stimuli (vectors) are presented on the DUT. Comparisons are made cycle by cycle with an option to ignore
certain pins, times, or patterns. If the device response and the expected response are not in agreement, the
devices are usually considered defective.
This section will cover common techniques for testing. Details of generation of test programs, simulation
of devices, and tester restrictions are not covered here.
Built-In Self-Test
Self-testing (built-in self-test or BIST) is essentially the implementation of logic built into the circuitry to do
testing without the use of the tester for pattern generation and comparison purposes. A tester is still needed
to categorize failures and to separate good from bad units. In this case the test system supplies clocks to the
device and determines pass/fail from the outputs of the device. The sequential elements are run with a known
data pattern, and a signature is generated. The signature can be a simple go or no-go signal presented on one
pin of the part, or the signal may be a polynomial generated during testing. This polynomial has some
significance to actual states of the part. Figure 23.8 shows a typical self-testing technique implemented on a
large block of random logic, inside a device. The number of unique inputs to the logic block should be limited
to 20 bits so the total vectors are less than one million. This keeps test time to less than one second in most cases.
Self-test capability can be implemented on virtually any size block. It is important to understand the tradeoffs
between the extra logic added to implement self-testing and the inherent logic of the block to be tested. For
instance, adding self-testing capability to a RAM requires adding counters and read, write, and multiplexor
circuitry to allow access to the part. The access is needed not only by the self-test circuitry, but by the circuitry
that would normally access the RAM.
When implementing self-testing on blocks such as RAMs and ROMs, it is worthwhile to note the typical
failure mode of semiconductor devices. Single-bit defects can easily be detected using self-testing techniques.
Single-point defects in the manufacturing process can show up as a single transistor failure in a RAM or ROM,
or they may be somewhat more complex. If a single-point defect happens to be in the decoder section or in a
row or column within the RAM, a full section of the device may be nonfunctional.
¹Portions reproduced from W. M. Needham, Designer’s Guide to Testable ASIC Devices, New York: Van Nostrand Reinhold,
1991. With permission.
The problem with this failure mode is that RAMs and ROMs are typically laid out in a square or rectangular
array. They are usually decoded in powers of 2, such as a 32 × 64 or a 256 × 512 array. If the self-testing circuitry
is an 8-bit-wide counter or linear feedback shift register, there may be problems. There are 256 possible
combinations within the states of the counter, and this may be a multiple of the row or columns. Notice that
256 possible rows or multiples of 256 rows in the array and 256 states in the counter make for a potential error-
masking combination. If the right type of failure modes occur and a full row is nonfunctional, it may be masked.
The implementation of the counter or shift register must be done with full-column or row failure modes in mind,
or it can easily mask failures. This masking of the failure gives a false result that the device is passing the test.
Scan
Scan is a test technique that ties together some or all the latches in the part to form a path through the part
to shift data. Figure 23.9 shows the implementation of a scan latch in a circuit. Note that the latch has a dual
mode of operation. In the normal mode the latches act like normal flip-flops for data storage. In the scan mode
the latches act like a shift register connecting one element to another. This is a basic implementation of a shift
FIGURE 23.8 Example of self-test in circuit.
FIGURE 23.9 Scan test implementation.
register or scan latch in a block of logic. Data can be shifted in via the shift pin or can be clocked in from the
data pin. The CLK line is the normal system clock and SCLK is the shift clock for scan operations.
Data for testing is shifted in on the serial data in pins of the device. Patterns for the exercise of the
combinatorial logic could be generated by truth table or by random generation of patterns. These patterns are
then clocked a single time to store the results of the combinatorial logic. The latches now contain the results
of the combinatorial logic operations. Testing of the logic becomes quite easy, as the sequential depth into the
part is of no significance to the designer. Once the patterns are latched, the same serial technique is used to
shift them out for comparison purposes. At the time of outward shift, new patterns are shifted in. Figure 23.10
shows this pattern shift for a circuit using scan. This is the actual implementation of scan in a small group of
logic including the truth table associated with it and one state of testing.
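The shift/capture/shift sequence can be illustrated with a toy software model of a scan chain: each vector is shifted into the flip-flops, one system clock captures the combinational response, and that result is then available to shift out while the next vector shifts in. The combinational function and test vectors below are invented for the example.

```python
def scan_test(comb_logic, n_ffs, test_vectors):
    """Toy scan chain: shift each vector in, pulse the system clock once to
    capture the combinational response, and record the captured state."""
    chain = [0] * n_ffs
    captured = []
    for vec in test_vectors:
        # scan mode: one SCLK per bit; shifting in reverse order leaves chain == vec
        for bit in reversed(vec):
            chain = [bit] + chain[:-1]
        # normal mode: a single CLK pulse latches the combinational outputs
        chain = comb_logic(chain)
        # in hardware this result would shift out while the next vector shifts in
        captured.append(chain[:])
    return captured

# Invented 3-flip-flop example: the combinational block is a simple function of the latches
comb = lambda q: [q[0] ^ q[1], q[1] & q[2], q[0] | q[2]]
for response in scan_test(comb, n_ffs=3, test_vectors=[[1, 0, 1], [1, 1, 1], [0, 0, 1]]):
    print(response)
```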
Direct Access Testing
Direct access is a method whereby one gains access to a device logic block by bringing signals from the block
to the outside world via multiplexors. Data is then forced into the block directly and the outputs are directly
measured. This is one of the simplest methods to check devices for logic functionality. This particular method
would supplement previous testing methods by allowing the user to impose data patterns directly on large
blocks of logic. The same feature holds true for output observation. In the direct access scheme, access of a test
mode would force certain logic blocks via multiplexors to have access to the outside pins of the part. One could
then drive the data patterns to the input pins, compare output pins of the part, and measure the access, status,
and logical functionality of the block directly.
Figure 23.11 shows implementation of direct access test techniques on a block of logic. During normal
operation, the logic block B has inputs driven by the logic block A, and outputs are connected to the logic
block C. In the test mode, the two multiplexors are switched so that the input and output pins of the device
can control and observe the logic block B directly.
Joint Test Action Group (JTAG)
When implemented in a device, JTAG (Joint Test Action Group), or IEEE 1149.1, allows rapid and accurate
measurement of the direct connections from one device to another on a PC board. This specification defines
FIGURE 23.10 Scan test example and patterns.
a Test Access Port (TAP) for internal and external IC testing. Figure 23.12 shows the TAP use in a small device,
thus allowing accurate measurement and detection of solder connections and bridging on a PC board. This
technique allows the shifting of data through the input and output pins of the part to ensure correct connections
on the PC board. The four required pins are shown at the bottom of the drawing, and all the I/O ports are
modified to include 1149.1 latches. The internal logic of the part does not need to change in order to implement
1149.1 boundary scan capability.
JTAG test capability, or the IEEE 1149.1 standard Test Access Port and boundary-scan capability, allows the designer to implement features that enhance testing. Both PC board test capability and internal logic verification of the device can be facilitated. For systems where the remaining components on the board are implemented with the 1149.1 scan approach, adding a TAP to a device is a wise step to take. Scan, BIST, and
direct access can all be controlled by the 1149.1 test access port.
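The following Python sketch illustrates the board-interconnect checking idea behind 1149.1 boundary scan: drive walking patterns from one device's output boundary cells and capture them at the next device's input cells. The net names and the injected open fault are invented for illustration and are not part of the standard.

# Simplified board-interconnect check in the spirit of 1149.1 boundary scan:
# drive a pattern from one device's output boundary cells, capture it at the
# other device's input boundary cells, and compare.  The net list and the
# injected open defect below are invented for illustration.

NETS = ["net0", "net1", "net2", "net3"]          # board traces between chips
DEFECTS = {"net2": "open"}                        # hypothetical solder fault

def drive_and_capture(pattern):
    """Model of driving output cells of chip A and capturing at chip B."""
    captured = []
    for net, bit in zip(NETS, pattern):
        if DEFECTS.get(net) == "open":
            captured.append(1)                    # floating input reads high
        else:
            captured.append(bit)
    return captured

# Walking patterns help distinguish opens from bridges between adjacent nets.
for i in range(len(NETS)):
    pattern = [1 if j == i else 0 for j in range(len(NETS))]
    captured = drive_and_capture(pattern)
    for net, sent, got in zip(NETS, pattern, captured):
        if sent != got:
            print(f"{net}: drove {sent}, captured {got} -> connection fault")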
FIGURE 23.11 Direct access implementation.
FIGURE 23.12 TAP circuit implementation.
Pattern Generation for Functional Test Using Unit Delay
During simulation of the function portion of the device, patterns are captured and stored for exercise of the
system. Figure 23.13 shows typical patterns for each block and the width of the data for test. The patterns check
the functionality of the logic and ensure that the logic implemented performs the desired function in the device.
The patterns can be generated by several methods if they were not generated by self-test. The first is exercising the system with code such as assemblers, macros, and high-level inputs that exercise the intended operation; usually this is done by comparing a high-level model to the actual logic. A second method of pattern generation
is random number generation; in this case random numbers of ones and zeros are impressed on the logic and
the results compared to a high-level model. Finally, coding of ones and zeros for logic checking is an alternative,
but this is typically prohibitive in today’s technology where devices may contain millions of logic gates.
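A minimal sketch of the random-pattern approach is shown below; the two models are placeholders standing in for the high-level reference model and the gate-level logic being verified.

# Sketch of random pattern generation: random input vectors are applied to
# the gate-level logic and compared against a high-level reference model.
# Both "models" here are stand-ins; real flows compare RTL/gate simulations.
import random

def reference_model(a, b, carry_in):
    """High-level model: a 4-bit add with carry."""
    total = a + b + carry_in
    return total & 0xF, (total >> 4) & 1

def gate_level_model(a, b, carry_in):
    """Stand-in for the synthesized logic being checked."""
    total = a + b + carry_in
    return total & 0xF, (total >> 4) & 1

random.seed(0)
mismatches = 0
for _ in range(1000):
    a, b, cin = random.randrange(16), random.randrange(16), random.randrange(2)
    if reference_model(a, b, cin) != gate_level_model(a, b, cin):
        mismatches += 1
print("mismatches:", mismatches)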
Pattern Generation for Timing
After the functional patterns are complete, specific tests for timing may be generated. Timing tests are test procedures that verify correct operation against the timing specification of the device. Typical timing tests
include outputs relative to the clocks, propagation delays, set-up and hold times, access times, minimum and
maximum speed of operation, rise and fall time, and others. These tests are captured in simulation and used
in the test system to verify performance of the device. Table 23.1 shows the relationship between time-based
simulation and cycle-based test files.
Temperature, Voltage, and Processing Effects
Figure 23.14 shows the impact on speed of process variations and of voltage and temperature variations.
It is important to simulate with the total variation over the entire process, temperature, and voltage range to
ensure testability. Remember that if the system design of the logic was not done with some guardbanding, there
may be no margin. If the library used for the design did not include some amount of margin, there may be a
need to add guardbanding by optimizing logic for speed or choosing faster gates. There must be a delta placed
FIGURE 23.13 Typical patterns for function test.
between the testing parameters used for initial production testing and final quality assurance (QA) test of the
devices. This will ensure that they are electrically stable and manufacturable.
It is very important to review the critical paths, timing parameters, and the early simulations versus the
timing parameters in the final simulation. The effect of parasitic capacitance and resistance will show up now.
This is the time to look back to see whether the performance of the device will meet all the system specifications.
If the design meets all the device specifications and all the system specifications, now is the time to sign off
and accept this simulation. If the simulations do not match the system requirements, this is the time to fix
them once again.
Fault Grading
Fault grading is a measure of test coverage of a given set of stimuli and response for a given circuit. This figure
of merit is used to ensure that the device is fully testable and measurable. Once the test patterns are developed
for the device, artificial faults are put in via simulation to ensure that the part is testable and observable.
TABLE 23.1 Typical Device Simulation and Test Files

Time-based simulation file (data on pins 1-20):
Time (ns)   Pin data (1-20)
31          10111011111111011111
52          10111011011111011010
97          10111011111111011100
101         10111011111111011100
156         10111011111111011100
207         10111101101100000000
29          10111011011111000011

Cycle-based test file:
Vector   Time   Pin data (1-20)
1        0      10111011111111011111
2        50     10111011011111011011          N = NRZ
3        10     10111011111111011100          R = RZ
4        150    10111011111111011100          1 = R1
5        20     10111101101100000000
Format:         N N N N N R 1 1 1 1 N N 1 1 R N N 1 N N
FIGURE 23.14 Effects of voltage, process, and temperature on test.
Figure 23.15 shows a single stuck-at-one fault (S@1) injected into logic circuitry with data patterns for
checking the functionality. Notice in the truth table that faults on one gate show up on an output pin of the
part or test observation point. Gaining high fault coverage is a very desirable method for ensuring that the
device will function correctly in the system. Semiconductor failure modes include not only stuck-at-zeros and
-ones but several other types, including delay faults, bridging faults, and leakage faults.
It is also important to mention that although fault grading using single stuck-at-zero and single stuck-at-
one faults covers many aspects of semiconductor failures, it does not cover all of them. For example, bridging
faults are one of the more common types of failures within semiconductor devices; they may be caught with
stuck-at fault-graded patterns if the patterns are done correctly.
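The fault-grading procedure can be sketched in a few lines of Python. The circuit below is an invented five-input network (not the circuit of Figure 23.15); each node is forced to 0 and to 1, and the fraction of faults whose effect reaches the output under the pattern set is the fault coverage.

# Sketch of single stuck-at fault grading: each node is forced to 0 and to 1,
# the pattern set is re-simulated, and a fault counts as detected if any
# pattern produces an output that differs from the good circuit.
from itertools import product

NODES = ["a", "b", "c", "d", "e", "n1", "n2"]

def circuit(a, b, c, d, e, stuck=None):
    nets = {"a": a, "b": b, "c": c, "d": d, "e": e}
    if stuck and stuck[0] in nets:
        nets[stuck[0]] = stuck[1]          # stuck-at on a primary input
    nets["n1"] = nets["a"] & nets["b"]
    nets["n2"] = nets["c"] | nets["d"]
    for name in ("n1", "n2"):
        if stuck and stuck[0] == name:
            nets[name] = stuck[1]          # stuck-at on an internal node
    return nets["n1"] ^ nets["n2"] ^ nets["e"]

patterns = list(product([0, 1], repeat=5))     # exhaustive here; real designs
detected = 0                                   # use a much smaller subset
faults = [(node, v) for node in NODES for v in (0, 1)]
for fault in faults:
    if any(circuit(*p) != circuit(*p, stuck=fault) for p in patterns):
        detected += 1
print(f"fault coverage = {100.0 * detected / len(faults):.1f}%")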
Test Program Flow
Data patterns that were generated during simulation are then converted into a functional test program.
Table 23.2 shows a typical test flow used by a semiconductor vendor for the exercising of the device. It describes
the basic test done along with the portions of the specification of the device tested in each step. The flow is
from loose testing of the device to detailed checking of specifications; this basic flow is both efficient and helpful
in debugging failures encountered during test. Table 23.3 shows the approximate test time for a 100-pin device
as executed on a verification system. Notice that a large number of tests can be done in a very short time. The test times are approximate and will vary based on the type of system used, the type of test executed, the length of patterns, and the pin count.
The cost of testing is very much related to test time. In production, package handlers, wafer probe equipment,
people, and perhaps a computer network are all needed to run production tests. When comparing test time
on a small system to a large system, total cost must also be compared. In this case, total cost is the cost of the
test equipment along with the needed support equipment. Table 23.4 shows the same test time for the same
FIGURE 23.15 Stuck-at-faults test sequences.
device on a large production test system. Again the test time is approximate and will vary depending on options,
program flow, and test details. The biggest reason for the difference in test time from a verification system to
a production system is due to pattern reload time.
Regardless of the techniques that were used to generate the data patterns, it will be necessary to incorporate those patterns into a test program, similar to that shown in Table 23.5. This is the same flow
with the addition of information from the net list and vector simulations. Vectors are used for functional tests
and some portions of I/O testing. Net list description and selection of input pads and power connections are
used in the parametric tests. Although the flow may vary from one manufacturer to another, most of them
perform essentially the same kinds of tests.
Even if the device used special testing techniques such as JTAG, BIST, scan, or direct access, the flow is always
the same. It is basically always done with a setup sequence or a preconditioning of the part, and a formal
measurement of the data on the device under test. The testing may be on a cycle-by-cycle basis or as a pass or
fail conclusion at the end of a long routine. There may be a burst of data to set up the part and a burst of data
to measure it. Finally, information is interpreted by the program to categorize or bin the device as a pass or
fail rating.
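A test program of this kind reduces to a sequence of checks and a binning decision. The Python skeleton below mirrors the loose-to-tight ordering of Table 23.2; every limit and check function is a placeholder, not vendor test code.

# Placeholder test-flow skeleton: run the loose tests first, tighten the
# checks, and bin the device at the first failure.  Each check function is
# a stand-in for real tester measurements.

def run_flow(dut, flow):
    for name, check in flow:
        if not check(dut):
            return f"FAIL ({name})"        # bin on first failing step
    return "PASS"

flow = [
    ("shorts",            lambda d: d["pin_leakage_ua"] < 10),
    ("opens",             lambda d: d["protection_diode_ok"]),
    ("basic function",    lambda d: d["functional_vectors_pass"]),
    ("dc spec",           lambda d: d["voh_v"] > 2.4 and d["vol_v"] < 0.4),
    ("ac spec & margin",  lambda d: d["max_clock_mhz"] >= 100),
]

device = {"pin_leakage_ua": 2, "protection_diode_ok": True,
          "functional_vectors_pass": True, "voh_v": 2.7, "vol_v": 0.3,
          "max_clock_mhz": 95}
print(run_flow(device, flow))              # -> FAIL (ac spec & margin)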
TABLE 23.2 Typical Test Flow
Test                      Function
Shorts                    Checks if the adjacent pins are shorted
Open                      Checks to see if the p-n junction exists (pad protection device)
Basic function test       Checks functionality of the part; uses most vectors, loose timing, and nominal voltages
dc spec test              Checks the inputs and outputs compared to spec for dc levels
ac spec and margin test   Checks the vectors with timing set to spec; checks at minimum and maximum voltage
TABLE 23.3 Test Execution Time on a Verification Tester
Test                      Number of Tests   Type of Test            Time to Execute
Opens                     100               Parametric              1 s
Shorts                    100               Parametric              1 s
Basic function test       40,000            Vector pattern          0.04 s*
ac spec test              300-500           Parametric-functional   3-5 s
ac spec and margin test   100,000           Vector pattern          0.1 s*
*Execution time only; vector reload time may be between 10 and 500 s.

TABLE 23.4 Test Execution Time on a Production Tester
Test                      Number of Tests   Type of Test            Time to Execute
Opens/shorts              2                 Parametric              0.05 s
Basic function test       40,000            Vector pattern          0.04 s*
ac spec test              300-500           Parametric-functional   0.5 s
ac spec and margin test   100,000           Vector pattern          0.75 s*
*Execution time only; vector reload time may be between 10 and 500 s.
Defining Terms
Built-In Self-Test (BIST): A design process where logic is added to the part to perform testing. Advantages
include easy testing of buried logic. The biggest negative is that patterns are set in hardware and the design must be changed to improve test coverage.
DUT: Device Under Test
Fault coverage: A metric of test pattern coverage. Calculated by measuring faults tested, divided by total
possible faults in the circuits.
JTAG: A specification from the IEEE 1149.1 that defines a Test Access Port (TAP).
Scan: A method of connection of latches within an integrated circuit that allows (a) patterns to be shifted in,
(b) combinatorial logic check to be exercised, and (c) patterns to be shifted out and compared to known
simulation results.
T0: A timing reference point used in simulation and on the tester; all timing is referenced to this point T0.
Test program: A software routine consisting of patterns, flow information, voltage and timing control, and
decision processes to make certain the DUT is correct.
Tester: A piece of equipment used to verify the device, often called Automated Test Equipment (ATE).
Vector: A series of ones and zeros that describe the input and output states of the device.
Yield: A metric of good devices after test divided by total tested.
Related Topics
33.1 Introduction • 34.6 Test • 110.1 Introduction
References
J. M. Acken, Deriving Accurate Fault Models, Palo Alto, Calif.: Stanford University, Computer Systems Laboratory,
1989.
V. D. Agrawal and S.C. Seth, “Fault coverage requirements in production testing of LSI circuits,” IEEE Journal
of Solid-State Circuits, vol. SC-17, no. 1, pp. 57–61, February 1982.
E. J. McCluskey, Logic Design Principles: With Emphasis on Testable Semicustom Circuits, Englewood Cliffs, N.J.:
Prentice-Hall, 1986.
W. M. Needham, Designer’s Guide to Testable ASIC Devices, New York: Van Nostrand Reinhold, 1991.
F. Tsui, LSI/VLSI Testability Design, New York: McGraw-Hill, 1987.
Further Information
IEEE Design and Test of Computers is a bimonthly publication focusing on test of digital circuitry.
Proceedings of the IEEE Test Conference—This conference occurs each year in the late summer. The conference
is the major focal point for test equipment vendors, test technology, and a forum for advanced test papers.
TABLE 23.5 Test Data Sources
Test                      Source of Data
Opens                     Process description and net list for I/O pads
Shorts                    Process description and net list for I/O pads
Basic function test       Simulations done for test; internal library for cores selected by the user
dc spec test              Net list for types of pads; automatic place and route data for placement; simulation data for vector setup sequences
ac spec and margin test   Simulations done for test and performance; internal library for cores selected by the user
Design Automation Conference—This conference is held annually in the late spring. The focus is the design
process, but many of the papers and vendors at the conference have programs for design for test.
Vendor material is available from design and test equipment vendors.
23.3 Electrical Characterization of Interconnections
S. Rajaram
Semiconductor technology provides electronic system designers with high-speed integrated circuit (IC) devices
that operate with switching times approaching 1 ns and below. For the past 25 years, the performance and cost
of digital electronic systems have improved continuously because of the technological advances in manufac-
turing physically smaller devices. With the reduction in the feature sizes of the gates and cells that make up
these devices, there has also been a tremendous increase in the density of IC integration. These physically
smaller devices with their intrinsic faster switching speeds and lower power per gate have given rise to the very
large scale integration (VLSI) technology. We therefore have a situation today in which the IC technology
includes both high density and high speed. The unprecedented gains in the increasing scales of integration in
IC technology are shown in Fig. 23.16.
For the leading-edge technology, the maximum number of components (transistors) per chip increases by
a factor of 100 per decade. It is tempting to believe that these increases may be extended indefinitely, leading
to ever smaller devices with increasingly larger electronic functionality. However, the limiting factor for such
spectacular gains in IC technology is the electrical performance capability of its associated interconnections
and packaging (I&P). Thus, interconnections and packaging is the bottleneck to IC performance. The signal
originates from an IC device referred to as the driver and is received by another IC device, called the receiver.
The path between the driver and the receiver is the I&P medium. The various elements of I&P are shown in
Fig. 23.17.
The elements that make up I&P are the IC package (including its wirebonds inside the package and the leads
or pins external to the package), printed circuit board (PCB) with conductor (copper) traces, electrical PCB
connector, backplane or motherboard, and external cables. The driver and receiver may be on the same PCB
or on different PCBs within the same equipment or in different pieces of equipment. I&P should be designed so as to
cause minimal signal degradation from the driver to the receiver. Thus, there are several interfaces within I&P,
and signal degradation occurs at every interface. The elements of I&P between the driver and the receiver
resemble the mechanical links in a chain.
FIGURE 23.16 The history of silicon integrated circuit scale of integration and its effect on interconnection density.
The key role of I&P is to enable electronic system designers to fully exploit the advancements taking place
in device and manufacturing technologies related to ICs. I&P technology is dictated by three main external
forces. These are silicon IC technology, automated systems for assembly and testing, and the fundamental
architecture of electronic systems [Hoover et al., 1987]. We have already seen the 100-fold increase in the
component density at the IC level. Most of the electronic functions are built into these dense ICs. However,
the system functionality is contained in several ICs that must “talk” to each other. The medium connecting
various ICs in a system is generically referred to as I&P. Therefore, in order to attain the performance capability
of the IC, it is necessary to have an I&P medium to match its electrical performance. The electrical parameters
of the I&P must be tailored to meet IC performance. This becomes the crux of any high-speed electronic system
design. To optimize IC performance, therefore, it is also necessary to minimize the size of all interconnections.
However, to optimize manufacturing costs I&P must be large enough to attain reliable and high-yield assembly
operations, test access, repair, and maintenance. Thus, these requirements imply automated assembly and test
equipment for high-quality manufacture of electronic systems with optimal cost. Finally, I&P is also dictated
by system architecture requirements.
Historically, telecommunications systems have had requirements dictating either high interconnection and input/output (I/O) density at moderately low speeds, or low interconnection density and low I/O counts at moderately higher speeds. However, today the trend is towards high I/O and high speed, which increases the
interconnection density on I&P elements. These have caused a fundamental change in the IC package and
manufacturing and assembly technologies associated with electronic systems. Electrical performance require-
ments of the I&P have forced the change from dual inline package (DIP) ICs to surface mount (SM). The SM
ICs have much smaller package outlines and leads than the corresponding DIPs. While the DIPs are soldered
into holes drilled through the PCB, SM components are soldered onto pads on the surface of the PCB. While
the pitch on the leads of a DIP is 100 mils (0.100 inch), the pitch on SM IC package leads is typically 25 to 50
mils. I&P technology has therefore evolved rapidly as the driving forces behind it have advanced at a rapid
pace. It is fairly accurate to state that the ultimate quality and reliability of many complex electronic systems
are determined primarily by the reliability of the I&P of the system. The stringent electrical and density
requirements impose severe constraints and challenges on mechanical design and manufacturing capability.
Most failure mechanisms today are related to interconnections in the system.
Interconnection Metrics
In Fig. 23.16, we observed the tremendous increase in the IC scale of integration. We have also made the
statement that the increased IC component density has led to increased interconnection density. It is important
to understand the relationship between the scale of component integration inside the IC package and the
interconnection density requirements outside the IC package. Figure 23.16 also shows the evolution of the
number of pins interconnected (assembly points) on a PCB within an area of 100 square inches, and the trend
is obvious. This data is derived from CAD packages used within Bell Labs for PCB layout. IC circuits are usually
described in terms of number of gates (1 gate ≈ 4 transistors). The relationship between the number of gates of an IC (G) and the number of signal I/O pins in the package (P) is given empirically by Rent's rule [Hoover et al., 1987]. For random logic, Rent's rule states that P = KG^a. The value of K ranges from 3 to 6, and a typically
FIGURE 23.17 Chip-to-chip interconnections on same board and between boards through backplane.
ranges from 0.4 to 0.55. For a functionally complete chip, K ≈ 7 and a ≈ 0.2 [Moresco, 1990]. The empirical Rent's rule
has been found to apply remarkably well over a large range
of circuit sizes as shown in Fig. 23.18.
This simple empirical relationship is important in under-
standing fundamental trade-offs between interconnection
costs, reliability, and increased scales of integration. Let’s
assume that a certain electrical function requires the use of
100,000 gates. Let’s partition the circuitry into ICs with 100, 1000, and 10,000 gates per chip (device). Let us
apply Rent’s rule by taking K = 4, and a = 0.5, and examine the I/O pins, and therefore the interconnection
requirements for the three circuit partition choices, as shown in Table 23.6 [Hoover et al., 1987].
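As a worked check of Rent's rule for this example, the short Python sketch below evaluates P = KG^a for the three partitions; the printed pin and interconnection counts correspond to the entries of Table 23.6.

# Rent's rule, P = K * G**a, applied to the 100,000-gate partitioning example
# with K = 4 and a = 0.5 (the values used in the text).

K, a = 4, 0.5
total_gates = 100_000

for gates_per_ic in (100, 1_000, 10_000):
    n_ics = total_gates // gates_per_ic
    pins_per_ic = K * gates_per_ic ** a          # signal I/O pins per package
    external = n_ics * pins_per_ic               # interconnections on the PCB
    print(f"{gates_per_ic:>6} gates/IC: {n_ics:>5} ICs, "
          f"{pins_per_ic:.0f} pins/IC, {external:,.0f} external connections")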
The most important observation to be made from Table 23.6 is that increasing the IC scale of integration
by a factor of 100 moves 90 percent of the external interconnections into silicon. The density of interconnection
at the silicon increases, but the overall cost of interconnections for the system decreases since the relative costs
per interconnection in silicon are 1 to 2 orders of magnitude cheaper than those on PCBs and hybrid integrated
circuits (HICs). This is clearly shown in Fig. 23.19.
FIGURE 23.18 Rent's rule for LSI and VLSI chips: silicon microprocessors and random logic, silicon memories, GaAs functional chips, and GaAs memories (plotted with different symbols).
FIGURE 23.19 Interconnection cost trade-off.

TABLE 23.6 Distribution of Interconnections for a 100,000-Gate Random Logic Circuit
                     Gates per IC
Interconnections     100        1,000      10,000
On silicon           160,000    187,400    196,000
On PCB               40,000     12,600     4,000
This is the driving force for integrating as much functionality in silicon, with reduced external interconnec-
tions. However, the problem is compounded by the fact that many such dense ICs are now placed on a board,
and these have to be interconnected on a dense multilayer board. This has led to the requirement for fine-line
PCB technology with increasing number of layers. The dense traces on the board lead to high I/O (pin count)
connectors requiring as many as 400 I/Os on an 8-inch board edge. Thus, there is an increasing density of
interconnection at every level, both internal and external to silicon, as shown in Table 23.7.
Another important aspect of increasing scale of integration at the IC level is the improved overall reliability
of the system. Practical experience indicates that the FIT (Failure unIT) count of an IC is approximately constant
for a given IC technology and manufacturer and is almost independent of the number of gates per chip. A FIT
is defined as one failure in 10^9 hours. Therefore, the system FIT count due to devices should decrease inversely in
proportion to scale of integration. Once again let us consider the example of partitioning 100,000 gates into
devices with 100, 1000 and 10,000 gates per IC. Thus, from 100 gates per IC, we have increased the scale of
integration by a factor of 10 and 100 as we go to 1000 and 10,000 gates per chip, respectively. The FIT count
for an IC is taken as 50. The total system FIT count is the sum total of the IC and interconnections FIT count.
These are shown in Table 23.8 where the FIT counts for interconnections are those appropriate for the PCB
types for the levels of integration based on experience at AT&T [Hoover et al., 1987].
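The FIT arithmetic behind Table 23.8 can be reproduced directly; the short Python sketch below does so, adding a derived MTBF figure (1 FIT = one failure in 10^9 hours) for illustration.

# System FIT count for the 100,000-gate example: 50 FITs per IC plus the
# interconnection FIT counts quoted in the text (Table 23.8).

total_gates = 100_000
ic_fit = 50
interconnect_fits = {100: 4_000, 1_000: 750, 10_000: 200}   # from Table 23.8

for gates_per_ic, interconnect_fit in interconnect_fits.items():
    n_ics = total_gates // gates_per_ic
    system_fit = n_ics * ic_fit + interconnect_fit
    mtbf_hours = 1e9 / system_fit            # 1 FIT = 1 failure per 1e9 hours
    print(f"{gates_per_ic:>6} gates/IC: system FIT = {system_fit:>6,}, "
          f"MTBF = {mtbf_hours:,.0f} hours")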
The important message is that there is a tremendous improvement in system reliability (FIT count) with
increasing scales of integration in silicon. It is important to remember that the example shown in Table 23.8
is an ideal situation. In any practical situation, the circuit is partitioned into many devices with different levels
of integration, with a few VLSI, but 50 or more small scale integration devices. Typically, the total system FIT
count including devices and interconnections may be in the range of 3000 to 5000. The remarkable attribute
of IC technology is that as the scale of integration increases, the total system cost decreases, while the overall
system reliability improves considerably. It is this remarkable attribute that has led to a continuous decrease in
the cost of electronics products with increased functionality with time. However, the miniaturization and the
consequent increased scales of integration and interconnections density at the packaging level together with
increasing switching speeds have resulted in a whole range of interesting electrical problems. Although these
problems appear to be new, solutions to many of them are available from extensive work that has been well
documented in the microwave area. In the sections that follow we will discuss the significance of the important
electrical parameters and techniques to evaluate them.
TABLE 23.7 Interconnection Limits for Telecommunications Systems
Packaging Technologies                          1970    1978    1986    1990s
PCB area, square inch                           50      100     200     200
Number of layers in PCB                         2       6       8       12-14
Line width, mils                                25      7       6       4
Number of device terminal connections on PCB    100     2,000   9,000   16,000
External I/Os                                   80      300     600     >800
Logic gates                                     10^2    10^4    10^5    10^6

TABLE 23.8 Effect of Increasing Scale of Integration on System Reliability
                                        Gates per IC
                                        100       1,000     10,000
Number of ICs                           1,000     100       10
IC FIT count, total                     50,000    5,000     500
Number of external interconnections     40,000    12,600    4,000
Interconnection FIT count, total        4,000     750       200
System FIT count                        54,000    5,750     700
Interconnection Electrical Parameters
The interconnection medium is the basic path for transmitting the pulse from the driver to the receiver. As the
speed of the circuits goes beyond 10 MHz, it is necessary to use high-frequency techniques developed by RF
engineers. The fundamental electrical parameters of the circuit such as inductance L and capacitance C, behave
as lumped elements at low frequencies and as distributed parameters at high frequencies where transmission
line techniques must be employed. The transition from lumped element to transmission line behavior depends
upon the risetime of the pulse Tr and on the total delay in the pulse transmission through the interconnection, Td. In the lumped element mode, the inductance and capacitance appear to the pulse to be concentrated at a
point. On the other hand, in the transmission line mode, inductance and capacitance appear to be uniformly
distributed throughout the interconnection, and as far as the pulse is concerned, the medium is infinite in
length, and all the characteristics of wave propagation must be taken into consideration. Interestingly, similar
techniques may be used for electrical characterization of all of the interconnection elements such as wirebonds,
package leads, PCBs, connectors, and cables. The basic pulse transmission parameters of interest are propagation
delay (Td), characteristic impedance (Z0), reflection coefficient (Γ), crosstalk (X), and risetime degradation (Tdr).
Propagation Delay (Td)
For a pulse being transmitted through a medium of length l and wave velocity v, the delay Td = l/v. The speed of pulse transmission depends upon the dielectric constant of the material εr and is given by v = c0/√εr, where c0 is the speed of light in air = 3 × 10^8 m/s. Thus, propagation delay is proportional to length and also to the square
root of the dielectric constant. In order to reduce delay and reduce machine cycle time and increase speed, it
is necessary to reduce the dielectric constant of the material. There is a tremendous interest in developing low
dielectric constant materials for this reason.
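For a quick numerical feel, the following Python sketch (an illustration, not part of the original text) evaluates Td = l/v for a 10-inch trace in materials of different dielectric constants.

# Propagation delay Td = l / v with v = c0 / sqrt(er): delay of a 10-inch
# trace for a few dielectric constants (FR4 ~ 4.2, a lower-k material ~ 2.8).
import math

c0 = 3.0e8                       # speed of light in air, m/s
length_m = 10 * 0.0254           # 10-inch interconnection

for er in (4.2, 3.5, 2.8):
    v = c0 / math.sqrt(er)
    td_ns = length_m / v * 1e9
    print(f"er = {er}: v = {v/1e8:.2f}e8 m/s, Td = {td_ns:.2f} ns")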
Characteristic Impedance (Z0)
In general, a transmission line is any structure that propagates an electromagnetic wave from one point to
another. However, in the world of microwave, RF, or high-speed digital designs, the use of the term transmission
line is far more restrictive. In order for a structure to be a transmission line, the electrical length of the
(transmission) line must be much larger than the wavelength at the frequency of interest. For an interconnection
medium which behaves as a transmission line as shown in Fig. 23.20, the characteristic impedance is defined as
Z_0 = \sqrt{\frac{R + j\omega L}{G + j\omega C}} \quad (\Omega) \qquad (23.23)

where R = series resistance (Ω/m), L = series inductance (H/m), G = shunt conductance (Ω⁻¹ m⁻¹), C = shunt capacitance (F/m), and ω = 2πf is the radian frequency. In general, Z0 is complex, and the distributed per-unit-length quantities R, L, G, and C must be determined from the material and structural characteristics of the transmission line. For most applications R << ωL and G << ωC, and

Z_0 = \sqrt{\frac{L}{C}} \qquad (23.24)
FIGURE 23.20 Representation of a short section of transmission line.

The speed of wave propagation through the transmission line is given by v = 1/√(LC). Therefore we can also define Z0 = 1/(vC) = vL.
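These lossless-line relations are easy to evaluate. The Python sketch below uses per-unit-length L and C values that are representative of a roughly 50-Ω PCB trace; the numbers are assumed for illustration only.

# Lossless-line relations: v = 1/sqrt(LC), Z0 = sqrt(L/C) = 1/(vC) = vL.
import math

L = 300e-9          # series inductance, H/m (assumed)
C = 120e-12         # shunt capacitance, F/m (assumed)

v = 1.0 / math.sqrt(L * C)
z0 = math.sqrt(L / C)
print(f"v  = {v/1e8:.2f}e8 m/s")
print(f"Z0 = {z0:.1f} ohm = {1/(v*C):.1f} (1/vC) = {v*L:.1f} (vL)")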
A digital pulse propagating in a circuit consists of voltages and currents of different frequencies (ac compo-
nents). A digital pulse is therefore an electromagnetic wave of many frequencies propagating down the trans-
mission line. In any circuit with transients (ac components), the current and voltage are not in phase because
of inductance and capacitance. In a pure inductor, voltage leads current by 90 degrees, and in a pure capacitor,
voltage lags behind current by 90 degrees. The total energy (WT) of the electromagnetic wave is made up of the magnetic energy (Wm) and the electric energy (We). Magnetic energy is stored in inductance and is given by Wm = (1/2)LI^2, where I is the current. Similarly, electric energy is stored in capacitance and is given by We = (1/2)CV^2. Therefore, the total energy is

W_T = W_m + W_e = \frac{1}{2}LI^2 + \frac{1}{2}CV^2 \qquad (23.25)
In an alternating field, the total energy is continually being swapped between the magnetic and electrical
elements, one at the expense of the other. Because of the phase relationships discussed earlier, magnetic energy
is a maximum when electric energy is 0, and vice versa. Since it is the same stored energy which appears
alternately as magnetic and electrical energy, we can write
W_T = \frac{1}{2}LI_{max}^{2} = \frac{1}{2}CV_{max}^{2} \qquad (23.26)

Therefore, Z0 = Vmax/Imax = √(L/C).
Therefore, characteristic impedance Z0 gives the relationship between the maximum voltage and maximum current in a transmission line and has the units of impedance (Ω). Thus, it is important to note that the current required to drive a transmission line is determined by its characteristic impedance. Z0 is really the impedance (resistance) to energy transfer associated with electromagnetic wave propagation. In fact, characteristic impedance is not unique to electromagnetic waves alone. Characteristic impedance Z0 is an important parameter associated with propagation of waves in a medium. Some examples are given below:
• Electromagnetic waves: Z0 = √(L/C) = vL = 1/(vC)
• Transverse vibrating string: Z0 = λ0·v = √(λ0 T0), where λ0 = mass/unit length and T0 = force
• Longitudinal waves in a rod: Z0 = ρ0·v = √(ρ0 E), where ρ0 = mass/unit volume and E = Young's modulus
• Plane acoustic waves: Z0 = ρ0·v = √(ρ0 B), where ρ0 = density and B = bulk modulus
Note that Z0 is a unique characteristic of material properties and geometry alone.
The performance of an interconnect that behaves as a transmission line is measured in terms of the efficiency of energy (information) transfer from input to output, with minimal loss and dispersion effects. At high speeds, the interconnect must maintain a uniform Z0 along the length of the signal path. Any mismatch in characteristic impedance across interconnect interfaces will cause reflection of the signal at the interface, which can cause errors in digital circuits. Reflections are part of the losses in the interconnect and lead to loss of information. These reflections are sent back to the signal source; therefore, multiple reflections from interfaces can distort the input signal. The magnitude of reflection is defined by the reflection coefficient Γ, given by

\Gamma = \frac{Z_L - Z_0}{Z_L + Z_0} \qquad (23.27)

where ZL is the load impedance. Note that when the load and line impedances are matched, Z0 = ZL, the reflection coefficient Γ = 0, and the wave (energy, information) is transmitted without loss. For an open circuit, ZL = ∞ and Γ = 1; thus, the entire pulse is reflected back to the source. For a short circuit, ZL = 0 and Γ = -1; here, the pulse is reflected back to the source with the same amplitude, but with the sign reversed. For practical purposes, Z0 of PCBs may be considered to be independent of frequency up to nearly 1 GHz. In other words, PCBs may be considered lossless transmission lines for impedance calculations. Beyond 1 GHz, skin effect in
conductors, and dielectric and dispersion losses, cause signal degradation. Z0 is a parameter associated with PCBs and cables but not with connectors at lower speeds, because of the definition of a transmission line. The length of an electrical path through a connector is approximately 1 inch, which is very small compared to the wavelength at lower frequencies. However, as speeds increase, the connector also has a Z0 associated with it. Otherwise, a connector acts as a lumped L and C in the transmission path, with its associated losses such as reflection and degradation. Z0 values for rectangular and circular transmission lines with and without ground planes applicable for cables are given in Everitt [1970]. We shall now look at evaluation of Z0 for typical PCB structures.
Z0 of PCB Structures. The two commonly used PCB structures in electronic circuits are the microstrip and
stripline designs, shown in Figs. 23.21 and 23.22. In a pure microstrip, there is only one ground plane below
the conductor. The space below the conductor is filled with a dielectric material, and above the conductor it
is air. In most applications, however, the conductor traces are protected with a solder mask coating of 2–3 mils
thickness above it, as in Fig. 23.23. This solder mask has the effect of reducing Z0 of the microstrip. In a pure
microstrip, the electromagnetic wave is transmitted partially in the dielectric and partially in air. Therefore the
effective dielectric constant is a weighted average between that of air and the dielectric material. In a stripline
design, the conductor is placed symmetrically between two ground planes and the space filled with dielectric
material. Thus, in a stripline, the wave is completely transmitted in the dielectric. A variation of the stripline design
is the asymmetric stripline where the conductor is closer to one ground plane than the other, as in Fig. 23.24.
For a microstrip transmission line structure, the characteristic impedance Z0 is given by [Kaupp, 1967]

Z_0 = \frac{60}{\sqrt{0.475\epsilon_r + 0.67}}\,\ln\!\left[\frac{4h}{0.67(0.8w + t)}\right] = \frac{87}{\sqrt{\epsilon_r + 1.41}}\,\ln\!\left[\frac{5.98h}{0.8w + t}\right] \quad (\Omega) \qquad (23.28)
FIGURE 23.21 Classical microstrip.
FIGURE 23.22 Classical stripline.
The effective dielectric constant is εreff = 0.475εr + 0.67. Experimental measurements with fiberglass epoxy boards have shown that Eq. (23.28) predicts Z0 accurately for most practical applications.
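Equation (23.28) is easy to evaluate directly. The Python sketch below applies it to the microstrip geometries of Table 23.9; the printed values reproduce the Z0 column of that table.

# Kaupp's microstrip formula, Eq. (23.28):
#   Z0 = 87 / sqrt(er + 1.41) * ln(5.98 h / (0.8 w + t))
# Dimensions in mils, taken from Table 23.9.
import math

def microstrip_z0(w_mil, h_mil, t_mil=1.2, er=4.2):
    return 87.0 / math.sqrt(er + 1.41) * math.log(
        5.98 * h_mil / (0.8 * w_mil + t_mil))

for w, h in ((8, 5), (8, 10), (6, 7.5)):
    print(f"w = {w} mil, h = {h} mil: Z0 = {microstrip_z0(w, h):.1f} ohm")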
For the stripline configuration, the characteristic impedance is given by [Howe, 1974]:

For w/b ≤ 0.35,

Z_0 = \frac{60}{\sqrt{\epsilon_r}}\,\ln\!\left(\frac{4b}{\pi d}\right) \quad (\Omega) \qquad (23.29a)

d = \frac{w}{2}\left[1 + \frac{t}{\pi w}\left(1 + \ln\frac{4\pi w}{t}\right) + 0.51\pi\left(\frac{t}{w}\right)^{2}\right] \qquad (23.29b)

For w/b ≥ 0.35,

Z_0 = \frac{94.15}{\sqrt{\epsilon_r}\left[\dfrac{w/b}{1 - t/b} + \dfrac{C_f'}{\epsilon}\right]} \quad (\Omega) \qquad (23.29c)

\frac{C_f'}{\epsilon} = \frac{2}{\pi}\left[\frac{1}{1 - t/b}\,\ln\!\left(\frac{1}{1 - t/b} + 1\right) - \left(\frac{1}{1 - t/b} - 1\right)\ln\!\left(\frac{1}{(1 - t/b)^{2}} - 1\right)\right] \qquad (23.29d)
FIGURE 23.23 Covered microstrip.
FIGURE 23.24 Asymmetric stripline.
where Cf′/ε is the ratio of the static fringing capacitance per unit length between conductors to the permittivity (in the same units) of the dielectric material. This ratio is independent of the dielectric constant. Here ε = εr ε0, where ε0 is the permittivity of air, 8.854 × 10⁻¹² F/m. For the asymmetric stripline, where the conductor is closer to one ground plane than the other,
Z_0 = \frac{376.7}{\sqrt{\epsilon_r}\,(C/\epsilon)} \quad (\Omega) \qquad (23.30a)
Here C/e is the ratio of the static capacitance per unit length between the conductors to the permittivity of the
dielectric medium and is once again independent of the dielectric constant. Capacitance is made up of both
the parallel plate capacitances Cp and the fringing capacitances Cf. Subscripts 1 and 2 refer to the two ground planes, which are at different distances from the line.
\frac{C}{\epsilon} = \frac{C_{p1}}{\epsilon} + \frac{C_{p2}}{\epsilon} + \frac{2C_{f1}}{\epsilon} + \frac{2C_{f2}}{\epsilon} \qquad (23.30b)

\frac{C_{p1}}{\epsilon} = \frac{2w/(b-s)}{1 - \dfrac{t}{b-s}} \quad \text{and} \quad \frac{C_{p2}}{\epsilon} = \frac{2w/(b+s)}{1 - \dfrac{t}{b+s}} \qquad (23.30c)

\frac{C_{f1}}{\epsilon} = \frac{2}{\pi}\left[\frac{1}{1 - \dfrac{t}{b-s}}\,\ln\!\left(\frac{1}{1 - \dfrac{t}{b-s}} + 1\right) - \left(\frac{1}{1 - \dfrac{t}{b-s}} - 1\right)\ln\!\left(\frac{1}{\left(1 - \dfrac{t}{b-s}\right)^{2}} - 1\right)\right] \qquad (23.30d)

\frac{C_{f2}}{\epsilon} = \frac{2}{\pi}\left[\frac{1}{1 - \dfrac{t}{b+s}}\,\ln\!\left(\frac{1}{1 - \dfrac{t}{b+s}} + 1\right) - \left(\frac{1}{1 - \dfrac{t}{b+s}} - 1\right)\ln\!\left(\frac{1}{\left(1 - \dfrac{t}{b+s}\right)^{2}} - 1\right)\right] \qquad (23.30e)
The formulas given above are for Z0 of a single (isolated) line. As we shall see later, the presence of adjacent lines alters the value of Z0. The appropriate line widths and dielectric thicknesses required for Z0 in the range of 50 to 75 Ω in glass epoxy FR4 boards with εr = 4.2 are shown in Tables 23.9 and 23.10 for microstrip and stripline. Also shown is the maximum crosstalk noise Xmax between two conductors, in percent, for a given spacing s between conductors in the same signal layer, as in Figs. 23.27 and 23.38b.
An important observation from Tables 23.9 and 23.10 is that the thickness of the dielectric for a microstrip
is considerably less than that of the stripline for the same Z0. Thus microstrip structures lend themselves to much thinner PCBs, and this is an advantage from a manufacturing point of view, since thicker boards are more expensive to make. In practical PCBs, the traces are protected by a coat of solder mask (cover coat). This cover coat is usually a dry film or is screen printed and has an εr ≈ 4. Thus, we never have a classical microstrip with air at the top, but a covered microstrip. The effect of the cover layer is to increase the effective dielectric constant. Consequently, the capacitance of the line increases, and Z0 decreases. The reduction in Z0 with cover layer thickness is shown in Fig. 23.25.
In practical situations, the thickness of the cover layer is ≈3-4 mils, which can lead to a reduction in Z0 of ≈10%.
The covered microstrip can be considered to be a three dielectric layer problem, and the appropriate line
parameters may be evaluated by techniques given in Das and Prasad [1984]. For most practical designs today,
the line width is typically 6 or 8 mils. There are some high-speed designs where 4-mil lines have been used,
TABLE 23.9 Microstrip in Glass Epoxy, εr = 4.2, t = 1.2 mil
w (mil)   s (mil)   h (mil)   w/h    s/h    Z0 (Ω)   Xmax (%)
8         8         5         1.6    1.6    50.3     4.1
8         8         10        0.8    0.8    75.8     9.5
6         6         7.5       0.8    0.8    73.9     9.7

TABLE 23.10 Stripline in Glass Epoxy, εr = 4.2, t = 1.2 mil
w (mil)   s (mil)   b (mil)   w/b     s/b     Z0 (Ω)   Xmax (%)
8         8         22        0.363   0.363   50.5     6.8
8         8         40        0.2     0.2     67.7     14.6
6         6         18.0      0.33    0.33    51.2     7.8
6         6         35        0.17    0.17    70.6     16.9
FIGURE 23.25 Z0 reduction due to coverlay.
where the technology is migrating due to the demand for high circuit and interconnection density. There are
some innovative design techniques which enable the user to attain high interconnection (circuit) density without
increasing the board thickness by a large amount. We noted earlier that the microstrip structure lends itself to
thinner dielectrics. However, the stripline provides a pure transverse electromagnetic (EM) wave and the
protection of two ground planes. We can combine the advantages of both of these structures by using the
asymmetric stripline.
To design a stripline with Z0 = 50 Ω in FR4 material (εr ≈ 4.2) for a line width of 8 mils, we require a dielectric thickness of 22 mils. Now consider a classical stripline with w = 6 mils and b = 18 mils. In FR4, Z0 ≈ 51 Ω. If we now start moving one of the ground planes away from the line, Z0 increases up to a certain distance, beyond which the line is not influenced by the presence of this ground plane. From Fig. 23.26 we see that when b2 is greater than 20 to 25 mils, the influence of this ground plane on Z0 is negligible. We can therefore move the ground plane sufficiently far so as to have another signal plane to obtain an asymmetrical stripline as shown in Fig. 23.27.
Now for w = 6 mils and b = 22 mils, we obtain two signal layers, while with 8 mils and b = 22 mils, we can
only get one signal layer. Thus, by going to an asymmetrical stripline design, we can almost double the
interconnection (circuit) density. There is the potential therefore to reduce an 8-layer multilayer board to
FIGURE 23.26 Increase in Z0 by moving the second ground plane.
FIGURE 23.27 Two signal layers S1 and S2 in asymmetric stripline. S1 and S2 are orthogonal lines.
4 layers. The penalty that we pay is in increased crosstalk, but that can be addressed by wider interline spacing
as will be shown later.
Effect of Manufacturing Tolerances on Z0. For high-speed systems, designers require a tight control on
impedance of boards. However, there is variation in Z0 because of manufacturing tolerances associated with the line width w, thickness t, dielectric thickness h or b, and εr. A common design requirement is that the variation in Z0 not exceed 10%. In a manufacturing environment, the variation in the relevant parameters is random in nature. We may therefore express the variation in Z0 as

\Delta Z_0 \approx \left[\left(\frac{\partial Z_0}{\partial w}\,\Delta w\right)^{2} + \left(\frac{\partial Z_0}{\partial h}\,\Delta h\right)^{2} + \left(\frac{\partial Z_0}{\partial t}\,\Delta t\right)^{2} + \left(\frac{\partial Z_0}{\partial \epsilon_r}\,\Delta\epsilon_r\right)^{2}\right]^{1/2} \qquad (23.31)
The partial derivatives may be evaluated from the appropriate expressions given in Eqs. (23.28), (23.29), and (23.30). They are shown in Table 23.11 for Z0 = 50 and 75 Ω for classical microstrip and stripline. However, the worst-case tolerances, or the limits of the tolerance, are defined by a Taylor series as

\Delta Z_0 = \left|\frac{\partial Z_0}{\partial w}\,\Delta w\right| + \left|\frac{\partial Z_0}{\partial h}\,\Delta h\right| + \left|\frac{\partial Z_0}{\partial t}\,\Delta t\right| + \left|\frac{\partial Z_0}{\partial \epsilon_r}\,\Delta\epsilon_r\right| \qquad (23.32)
These tolerance limits are shown in Figs. 23.28 and 23.29 for a microstrip of Z0 = 75 Ω and for a stripline of 50 Ω. The line width is 6 mils (≈150 μm), and the values chosen for Z0 are of practical interest. The limits of variation in Z0 are shown for a change in dielectric constant Δεr = 0.1, which is very realistic for variation in material properties. Also, the dielectric constant for FR4 is not a constant, but varies with frequency as shown in Fig. 23.30, taken from measurements made by S. Mumby at Bell Laboratories.
We observe that εr decreases with frequency, and that in the range of 100 Hz to 1 GHz, εr decreases from 4.8 to 4, a change of ≈20%. From the equations for Z0, we observe that it scales approximately as 1/√εr. Thus, a change of 20% in εr will result in ≈14% change in Z0 as we go from 100 Hz to 1 GHz. This is just to point out that εr is not a constant but does vary slightly with frequency. In Figs. 23.29 and 23.30, it is clear that in addition to a change of 0.1 in εr, any variation in w (Δw) and dielectric thickness (Δh or Δb) moves us up along the contours to increase ΔZ0/Z0. The tolerance limits are shown for Δh or Δb of 12.5, 25, and 50 μm (0.5, 1, and 2 mil). In order to limit the Z0 tolerance to 10%, we need tight control over the manufacturing process. Designers should be aware of the tolerances of PCB manufacturing to determine the ability to meet their controlled-impedance requirements. These aspects are discussed in Ritz [1988].
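As a quick check of Eq. (23.31), the Python sketch below combines the Table 23.11 sensitivities for the 50-Ω, 6-mil stripline with assumed manufacturing deviations; the deviation values are illustrative only, not process specifications.

# RSS tolerance estimate of Eq. (23.31) and worst-case sum of Eq. (23.32),
# using the 50-ohm, 6-mil stripline sensitivities from Table 23.11.
import math

z0 = 50.0
sens = {"b": 1.6, "w": -3.4, "er": -6.2}    # ohm/mil, ohm/mil, ohm (Table 23.11)
delta = {"b": 1.0, "w": 0.5, "er": 0.1}     # assumed tolerances: mil, mil, -

rss = math.sqrt(sum((sens[k] * delta[k]) ** 2 for k in sens))
worst = sum(abs(sens[k] * delta[k]) for k in sens)
print(f"RSS        dZ0 = {rss:.2f} ohm ({100 * rss / z0:.1f}%)")
print(f"worst case dZ0 = {worst:.2f} ohm ({100 * worst / z0:.1f}%)")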
Crosstalk (X)
Crosstalk may be defined as noise that occurs on idle lines due to interactions with stray EM fields that originate
from active (pulsed) lines. This interaction is shown pictorially in Figs. 23.31 and 23.32.
TABLE 23.11 Variation in Z0 Due to Parameter Changes
Design       Z0 (Ω)   w (mil)   ∂Z0/∂h or ∂Z0/∂b (Ω/mil)   ∂Z0/∂w (Ω/mil)   ∂Z0/∂εr (Ω)
Microstrip   50       8         7.5                        -3.8             -4.6
Microstrip   50       6         9.5                        -4.8             -4.6
Microstrip   75       8         3.8                        -3.8             -6.9
Microstrip   75       6         4.8                        -4.8             -6.9
Stripline    50       8         1.3                        -2.7             -6.2
Stripline    50       6         1.6                        -3.4             -6.2
Stripline    50       4         2.1                        -4.6             -6.2
There are two components to crosstalk: inductive crosstalk due to the mutual inductance, Lm, and capacitive crosstalk due to the mutual capacitance, Cm. Inductive crosstalk is proportional to Lm dI/dt, and capacitive crosstalk is proportional to Cm dV/dt. In any conductor in which we have a transient current propagating perpendicular to the plane of the paper, circular magnetic field lines are generated in space as shown in Fig. 23.31. Any conductor that interacts with this magnetic field has crosstalk noise imposed on it due to the mutual inductive coupling Lm between the pulsed and the idle lines. Similarly, a transient voltage pulse in the conductor generates radial electric fields emanating from the conductor as shown in Fig. 23.32. An idle line that interacts
FIGURE 23.28 Tolerance limits for microstrip line. Z0 = 75 Ω, w = 6 mil, t = 1.4 mil, εr = 4.2 (1 mil = 25 μm).
FIGURE 23.29 Tolerance limits for stripline. Z0 = 50 Ω, w = 6 mil, t = 1.4 mil, εr = 4.2.
FIGURE 23.30 Variation of εr with frequency for FR4.
FIGURE 23.31 Schematic of inductive crosstalk coupling.
FIGURE 23.32 Schematic of capacitive crosstalk coupling.
with this electric field has crosstalk noise imposed on it due to mutual capacitance coupling between the pulsed
and idle lines. Thus, to predict crosstalk, we must be able to evaluate the mutual coupling coefficients Lm and Cm.
Consider a pulsed line and an idle line which are terminated in their characteristic impedances as shown in
Fig. 23.33.
If we generate a digital pulse on the active pulsed line, then the crosstalk noise on the idle line is given by
V_{NE} = \frac{1}{2}\left(L_m\frac{dI}{dt} + Z_0 C_m\frac{dV}{dt}\right) \quad \text{V/m} \qquad (23.33)

V_{FE} = \frac{1}{2}\left(Z_0 C_m\frac{dV}{dt} - L_m\frac{dI}{dt}\right) \quad \text{V/m} \qquad (23.34)
The section of the line near the source is called the “near end” (NE), and the end of the line away from the
source is called the "far end" (FE). Note that the near-end crosstalk noise VNE is the sum of the inductive and capacitive components, while the far-end crosstalk noise VFE is the difference between the capacitive and inductive components. The crosstalk noise given by Eqs. (23.33) and (23.34) is per unit length of the line. Therefore, to evaluate the total crosstalk, we must multiply VNE and VFE by the length of coupling. This is generally true for PCBs and cables, where Lm and Cm are expressed in nH/m and pF/m. For a pure transverse EM (TEM) wave propagation, Z0 = √(Lm/Cm). Therefore, the inductive and capacitive crosstalk noise components are equal, and VFE = 0. However, in most practical applications, we do not have a pure TEM wave, and the two components differ by a small amount. Therefore, in connectors and PCBs, where the length of coupling ranges from 0.5 to 1.0 inch for connectors and about 10 to 20 inches for PCBs and backplanes, VFE ≈ 0. However, in cables, which may run for several meters between equipment, the total far-end crosstalk VFE can become quite large, since we are multiplying a small difference between capacitive and inductive crosstalk by a large length. Thus, for connectors and PCBs we are mainly interested in the near-end crosstalk noise VNE, but for cables, we are interested in both VNE and VFE.
From Eqs. (23.33) and (23.34) it is obvious that V depends on Lm, Cm, Z0, the rates of change of current and voltage, dI/dt and dV/dt, respectively, and the length of coupling. What is of interest is to know the shape and duration of both VNE and VFE. We shall see that the answer to this question depends upon the relative magnitudes of the propagation delay in the line Td (which is proportional to length) and the risetime of the pulse Tr in digital systems, or the frequency of the pulse for analog systems. The sign of the inductive and capacitive noise
FIGURE 23.33 Crosstalk model showing near end and far end.
components may be easily understood by looking at the simple mechanical analogy of the spring-mass system.
The inductive noise given by Lm dI/dt has the effect of inserting a voltage source VL in the idle line, as shown in Fig. 23.34.
However, inductance has the effect of opposing the source current pulse. The source creates a clockwise loop
of current (action). Inductance behaves like the mechanical spring which produces a reaction –kx to the
displacement x (action). So, as a reaction to the clockwise loop of current in the source line (action), mutual
inductance induces a counterclockwise loop of current in the idle line (reaction) to oppose the source pulse.
This therefore determines the polarity of the voltage source on the idle line as shown in Fig. 23.34. It is this
voltage source that generates the counterclockwise loop of current. Therefore, we observe from Fig. 23.34 that
since the current flows from the near end to ground, it must be positive with respect to ground. Similarly, since
the direction of current is from ground to far end, it must be negative with respect to ground. Thus, we may observe from Eqs. (23.33) and (23.34) that the inductive coupling term is positive at the near end and negative at the far end. The current produced by mutual inductance is EL/2Z0. Therefore, the inductive coupling noise at the near end and far end is given by VNE(L) = -VFE(L) = EL/2, where EL = Lm dI/dt. For the source line, we may write dI/dt ≈ Vs/(Z0 Tr). Here, Vs is the magnitude of the source pulse, and Tr is the risetime of the pulse.
Capacitive coupling has the effect of inserting a mutual capacitor Cm between the source and idle lines, as shown in Fig. 23.35. The current that passes through Cm divides evenly in the idle line, with half going to ground through the near end and the other half going to ground through the far end. Thus, the capacitive crosstalk voltages at the near end and the far end are given by VNE(C) = VFE(C) = IC Z0/2. The coupled capacitive current is IC = Cm dV/dt ≈ Cm Vs/Tr. Therefore, mutual capacitance may be considered to be like mass (inertia) in the mechanical spring-mass analogy. Mass (inertia) tends to keep going in the direction of displacement. Similarly, mutual capacitance induces the same noise in both the near and far end, as can be seen in Eqs. (23.33) and (23.34), and the sign of the noise is the same as that of dV/dt (inertia). The total crosstalk noise (X) is due to inductive (XL) and capacitive (XC) noise and is usually expressed as a fraction of the input source pulse Vs as X = V/Vs. Therefore, for a coupling length of L, we may write the crosstalk noise as

X_{NE} = \frac{V_{NE}}{V_s} = \frac{1}{2}\left(Z_0 C_m + \frac{L_m}{Z_0}\right)\frac{L}{T_r} \qquad (23.35)

X_{FE} = \frac{V_{FE}}{V_s} = \frac{1}{2}\left(Z_0 C_m - \frac{L_m}{Z_0}\right)\frac{L}{T_r} \qquad (23.36)

But we know that the length L = vTd. Therefore

X_{NE} = \frac{v}{4}\left(Z_0 C_m + \frac{L_m}{Z_0}\right)\frac{2T_d}{T_r} = K_{NE}\left(\frac{2T_d}{T_r}\right) \qquad (23.37)
FIGURE 23.34 Inductive noise on idle line.
X_{FE} = \frac{V_{FE}}{V_s} = \frac{1}{2}\left(Z_0 C_m - \frac{L_m}{Z_0}\right)\frac{L}{T_r} = K_{FE}\left(\frac{L}{T_r}\right) \qquad (23.38)

K_{NE} = \frac{v}{4}\left(Z_0 C_m + \frac{L_m}{Z_0}\right) \qquad (23.39)

K_{FE} = \frac{1}{2}\left(Z_0 C_m - \frac{L_m}{Z_0}\right) \qquad (23.40)
The expressions derived here are for noise in digital systems. It can be shown analytically and experimentally [Rainal, 1979] that the near-end noise XNE increases with length of coupling, reaches a maximum value, and saturates there. Any increase in coupling length beyond a critical length will not increase the value of XNE. This critical coupling length for maximum crosstalk Xmax depends upon the delay and signal risetime and is given by

X_{NE} = K_{NE} = X_{max} \quad \text{for } \frac{2T_d}{T_r} \ge 1 \text{ (maximum crosstalk)} \qquad (23.41)

X_{NE} = K_{NE}\left(\frac{2T_d}{T_r}\right) \quad \text{for } \frac{2T_d}{T_r} < 1 \qquad (23.42)
Equation (23.41) is the condition for transmission line behavior, that is, 2Td/Tr ≥ 1. Equation (23.42) is the limit for lumped parameter analysis, that is, 2Td/Tr < 1. Note that in the transmission line condition, XNE is independent of risetime. For the transmission line condition (long coupling length, as in cables and backplanes), the shape of the near-end noise is trapezoidal, while for the lumped parameter condition (as in connectors), the shape of the near-end noise is triangular. These are clear in Figs. 23.36a and 23.36b.
Also, we observe from Fig. 23.36a that XNE reaches a maximum value and saturates there. It is instructive to determine the lengths of lines for which transmission line characteristics are valid and when maximum crosstalk occurs. If we consider FR4 glass epoxy boards, for which εr ≈ 4, the wave speed v is ≈1.5 × 10^8 m/s ≈ 6 in./ns, or the delay is 1/6 ns/in. The lengths for which 2Td = Tr are shown in Table 23.12 for FR4 boards.
Note that for a risetime of 1 ns, it only takes 3 inches of coupling for maximum crosstalk to occur. Thus, while the near-end noise XNE has a maximum limit, the far-end noise XFE given by Eq. (23.38) increases linearly with length. This is the reason that XFE is important for cables running over long distances. This is obvious from Fig. 23.36a, where the near-end and far-end noise are almost of the same magnitude. Another very important observation to be made from Fig. 23.36a is that while the active pulse (source) leaves the source line at time
FIGURE 23.35 Capacitive noise on idle line.
Td, the near-end noise has a width of 2Td. Thus, the crosstalk noise stays around for an extra time delay Td, even though the active pulse causing the noise has already left the source line. But observe that the far-end noise occurs as a single pulse at time Td. The reason for this phenomenon is that near-end coupling occurs the moment the pulse enters the active line. As the source pulse travels on the active line, the coupled noise on the idle line is sent back continuously to the near end from every point of coupling between the two lines. Therefore,
FIGURE 23.36a Typical cable crosstalk noise XNE and XFE.
FIGURE 23.36b Typical connector near-end crosstalk XNE.
TABLE 23.12 Lengths for Maximum Crosstalk in FR4 Boards
Tr (ns)   Td = Tr/2 (ns)   L (in.)
0.2       0.1              0.5
0.5       0.25             1.5
1.0       0.5              3
2.0       1.0              6
5.0       2.5              15
10        5                30
there is a delay in the noise as it travels from a point of coupling back to the near-end. As the coupled noise
travels back to the near-end, the last portion of the noise that coupled at the end of the active line (far end)
has to travel back all the way to the near end. It took the active pulse a time Td to reach the end of the line, and it takes another Td for the coupled noise to travel back to the near end. Thus, the near-end noise is continuously being sent back for a duration of time 2Td, even though the active pulse has left the line at Td. This is why near-end crosstalk is also referred to as backward crosstalk. On the other hand, far-end noise travels forward with the active pulse and appears as a single noise pulse at Td just as the active pulse leaves the line.
For this reason, far-end crosstalk is also referred to as forward crosstalk.
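The near-end and far-end estimates of Eqs. (23.37) through (23.42) are easy to evaluate numerically. The following Python sketch does so for a hypothetical coupled pair; the mutual coupling values Lm and Cm are assumed for illustration and would normally come from TDR measurement or a field solver.

# Near-end and far-end crosstalk estimate from Eqs. (23.38)-(23.42).
import math

z0 = 50.0            # ohm
er = 4.0             # FR4
v = 3.0e8 / math.sqrt(er)         # wave speed, m/s
lm = 40e-9           # mutual inductance, H/m   (assumed)
cm = 10e-12          # mutual capacitance, F/m  (assumed)
tr = 1e-9            # pulse risetime, s
length = 0.254       # 10-inch coupled run, m

k_ne = (v / 4.0) * (z0 * cm + lm / z0)                  # Eq. (23.39)
k_fe = 0.5 * (z0 * cm - lm / z0)                        # Eq. (23.40)
td = length / v
x_ne = k_ne if 2 * td >= tr else k_ne * 2 * td / tr     # Eqs. (23.41)/(23.42)
x_fe = k_fe * length / tr                               # Eq. (23.38)
print(f"Td = {td*1e9:.2f} ns, XNE = {100*x_ne:.1f}%, XFE = {100*x_fe:.1f}%")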
For a periodic signal, the crosstalk formulas are given in Rainal [1979] as

X_{NE} = 2K_{NE}\sin(2\pi f T_d) \qquad (23.43)

X_{FE} = K_{FE}\,(2\pi f L) \qquad (23.44)
Note that the maximum value of near-end crosstalk for a periodic signal given by Eq. (23.43) is twice that obtained for a digital pulse. Also, the magnitude of X_NE depends upon both the length of coupling (delay T_d) and the frequency. Since X_NE is sinusoidal, we can obtain greater crosstalk with a smaller coupling length, and vice versa. Once again, far-end crosstalk increases linearly with coupling length, and in addition it also increases linearly with frequency. However, note that for both analog and digital signals, the crosstalk noise X_NE and X_FE depend upon the mutual coupling coefficients L_m and C_m. Thus, the key to reducing crosstalk is to reduce both L_m and C_m. The formulas given so far are for crosstalk due to a single source. If there are n sources pulsing simultaneously, then the worst-case crosstalk on an idle line is given by
X_{TOTAL} = X_1 + X_2 + \cdots + X_n     (23.45a)
For most practical applications, the pulses may never be synchronized, and a more realistic estimate of crosstalk
may be given by
X_{TOTAL} = \left(X_1^2 + X_2^2 + \cdots + X_n^2\right)^{1/2}     (23.45b)
Normally, X_NE and X_FE are measured experimentally using a Time Domain Reflectometer (TDR). From these measured values, we may evaluate the capacitive crosstalk X_C and inductive crosstalk X_L as

X_C = \frac{1}{2}\left(X_{NE} + X_{FE}\right)     (23.46a)

X_L = \frac{1}{2}\left(X_{NE} - X_{FE}\right)     (23.46b)
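Given TDR-measured near- and far-end noise amplitudes, Eqs. (23.45a), (23.45b), (23.46a), and (23.46b) amount to a few lines of arithmetic; a minimal Python sketch follows, with made-up measured values:

import math

# Hypothetical TDR measurements on an idle line, as fractions of the source swing
X_NE, X_FE = 0.04, -0.02

X_C = 0.5 * (X_NE + X_FE)   # capacitive crosstalk, Eq. (23.46a)
X_L = 0.5 * (X_NE - X_FE)   # inductive crosstalk,  Eq. (23.46b)

# Worst-case sum versus statistical (RSS) estimate for several aggressors, Eqs. (23.45a,b)
aggressors = [0.04, 0.03, 0.02]      # hypothetical per-line crosstalk values
worst = sum(aggressors)
rss   = math.sqrt(sum(x * x for x in aggressors))

print(f"X_C = {X_C:.3f}, X_L = {X_L:.3f}")
print(f"worst case = {worst:.3f}, RSS estimate = {rss:.3f}")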
From these, we can evaluate the mutual coefficients L_m and C_m, which enable us to scale the crosstalk noise results for different values of T_r, T_d, and L, or of f. L_m and C_m may also be evaluated analytically for many practical situations, and these will now be considered.
Expressions for L_m and C_m. In order to analytically evaluate L_m and C_m, we use the coupled-line techniques developed by microwave engineers. For this, we consider the two coupled lines to be in the odd and even mode configurations shown in Figs. 23.37a and 23.37b for a microstrip design.
In the odd mode configuration, the potential on one line is the negative of the potential on the other line, while in the even mode, both lines have the same potential imposed upon them. Let C_a be the capacitance of an isolated line to ground. Then, the odd and even mode capacitances, inductances, and characteristic impedances are

C_{odd} = C_a + 2C_m, \quad L_{odd} = L_a - L_m, \quad Z_{0odd} = \sqrt{\frac{L_{odd}}{C_{odd}}}     (23.47a)

C_{even} = C_a, \quad L_{even} = L_a + L_m, \quad Z_{0even} = \sqrt{\frac{L_{even}}{C_{even}}}     (23.47b)
The characteristic impedance for a balanced or differential line, Z_0b, is given by

Z_{0b} = 2Z_{0odd}     (23.47c)
Expressions for C_odd and C_even, and for Z_0odd and Z_0even, are given in many references (such as Davis [1990]). From these, we can then evaluate L_odd and L_even, and we may express the mutual coefficients as

C_m = \frac{C_{odd} - C_{even}}{2}, \quad L_m = \frac{L_{even} - L_{odd}}{2}     (23.47d)

FIGURE 23.37a Odd mode microstrip.

FIGURE 23.37b Even mode microstrip.
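When the odd- and even-mode parameters are available, for example from a field solver or from closed-form expressions such as those in Davis [1990], Eqs. (23.47a) through (23.47d) reduce to a few lines of code. The mode parameters below are placeholders, not values taken from any table in this chapter.

import math

# Assumed odd/even mode results for a coupled pair (per meter of length)
C_odd, C_even = 130e-12, 110e-12   # F/m
L_odd, L_even = 280e-9, 320e-9     # H/m

# Mutual coefficients, Eq. (23.47d)
C_m = 0.5 * (C_odd - C_even)
L_m = 0.5 * (L_even - L_odd)

# Mode and differential impedances, Eqs. (23.47a)-(23.47c)
Z0_odd  = math.sqrt(L_odd / C_odd)
Z0_even = math.sqrt(L_even / C_even)
Z0_diff = 2.0 * Z0_odd

print(f"C_m = {C_m*1e12:.1f} pF/m, L_m = {L_m*1e9:.1f} nH/m")
print(f"Z0_odd = {Z0_odd:.1f} ohm, Z0_even = {Z0_even:.1f} ohm, Z0_diff = {Z0_diff:.1f} ohm")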
Mathematically, the odd and even mode impedances are the minimum and maximum values of impedance for coupled lines. Observe that this change in impedance is due to the mutual coefficients. This has important implications in high-speed and high-density designs where many lines are closely spaced. The very presence of an idle line in proximity alters Z_0 of the line. Consider a completely isolated line (alone) with its parameters C_a, L_a, and Z_a as in Fig. 23.38a. Next consider one idle neighbor line as in Fig. 23.38b.

FIGURE 23.38a Isolated line.

FIGURE 23.38b Line with one idle neighbor.

The presence of an idle line sets up a series capacitive path from the active line to ground through mutual coupling to the idle line. Thus, with one idle neighbor line, the capacitance increases from C_a to C_1i, given by

C_{1i} = C_a + \frac{C_a C_m}{C_a + C_m} \approx C_a + C_m   for C_m/C_a \ll 1     (23.48)
Thus, the presence of one idle neighbor increases the capacitance by approximately C_m. Similarly, the presence of two idle neighbors increases the capacitance by approximately 2C_m. However, in both cases the inductance remains the same as that of the isolated line, L_a. Therefore, the impedances for one and two idle neighbors are given respectively as

Z_{01i} \approx \sqrt{\frac{L_a}{C_a + C_m}}, \quad Z_{02i} \approx \sqrt{\frac{L_a}{C_a + 2C_m}}     (23.49)

Therefore, since the presence of idle lines increases capacitance, there is a reduction in characteristic impedance, which may be important for controlled-impedance lines. The decrease in impedance due to one and two idle neighbor lines on a 50-Ω stripline design in FR4 (ε_r = 4.2 and t = 1.2 mil) is shown in Fig. 23.39a for different line width and space (w/s in mils/mils) combinations. In Fig. 23.39b is shown the maximum crosstalk for the
same w/s combinations. Observe that the shape of the curves in both figures is identical. The larger the crosstalk, the greater will be the change in impedance. Thus, crosstalk and impedance are related, and when we design for low crosstalk we are automatically designing for good impedance control. Also, from Fig. 23.39b we observe that to reduce crosstalk we must increase the space s between lines. As we go from a 6/6 design to a 6/10 design, maximum crosstalk reduces from 7.5% to 3.5%, a reduction of 4 percentage points. This has relevance to Fig. 23.27, where we showed that the asymmetric stripline design lends itself to higher interconnection density. We also noted that the penalty to be paid for moving one ground plane farther away is increased crosstalk. However, by increasing s, we can reduce crosstalk and reduce this penalty. As shown in Fig. 23.40, as we change from a w/s design of 6/6 to 6/8, we get an almost uniform reduction of 2 percentage points in maximum crosstalk. Thus, we can adjust the spacing to achieve crosstalk immunity. We should always keep in mind that, in addition to manufacturing tolerances, the presence of adjacent lines in dense boards has the effect of altering Z_0.

FIGURE 23.39a Reduction in Z_0 due to presence of adjacent lines. Stripline nominal Z_0 = 50 Ω, ε_r = 4.2, t = 1.2 mil.

FIGURE 23.39b Maximum crosstalk for various w/s combinations. Stripline nominal Z_0 = 50 Ω, ε_r = 4.2, t = 1.2 mil.
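The impedance reduction described by Eqs. (23.48) and (23.49) is easy to tabulate. The sketch below assumes a nominal 50-Ω line and a few hypothetical C_m/C_a ratios; it does not reproduce the specific w/s geometries plotted in Fig. 23.39a.

import math

Z_a = 50.0                                  # isolated-line impedance, ohms
# With L unchanged, Z0 scales as 1/sqrt(C); Eq. (23.49) then gives:
for ratio in (0.02, 0.05, 0.10):            # hypothetical C_m/C_a values
    z1 = Z_a / math.sqrt(1.0 + ratio)       # one idle neighbor
    z2 = Z_a / math.sqrt(1.0 + 2 * ratio)   # two idle neighbors
    print(f"C_m/C_a = {ratio:.2f}:  Z01i = {z1:.1f} ohm, Z02i = {z2:.1f} ohm")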
Noise in Connectors, IC Leads, and Wirebonds. Unlike boards, backplanes, and cables, the lengths of the signal paths in connectors, IC pins, and wirebonds are very short, so the delay is small compared to the risetime. They therefore behave like lumped elements, and near-end crosstalk noise is the important quantity. We also noted that in a pure TEM wave the inductive and capacitive noise contributions are almost equal, which is appropriate for boards and cables. In connectors, pins, and wirebonds, however, the predominant noise contribution is inductive in nature. Fortunately, inductive noise can be evaluated from some simple analytical expressions that have been verified experimentally. To evaluate capacitive noise, on the other hand, we have to resort to detailed field solutions of the electromagnetic equations, which can be very complicated. From measurements in connectors, we have determined that the maximum capacitive noise contribution is about 30% of X_NE; in a practical situation this may be about 15–20%, which may be added to the inductive noise. Inductive noise itself is made up of two components: noise due to the self inductance of the pins or wires and noise due to the mutual inductance between pins or wires. The mutual inductance L_m (nH) between two wires of length l (in.) and center-line separation d (in.) is given by Rainal [1984] as

L_m \approx 5l\left[\ln\left(\frac{l}{d} + \sqrt{1 + (l/d)^2}\right) - \sqrt{1 + (d/l)^2} + \frac{d}{l}\right] \; nH     (23.50)
Similarly, the self inductance L_s (nH) of a pin of radius r (in.) and length l (in.) is given by

L_s = L_m\big|_{d=r} + \frac{5l}{4} \; nH     (23.51)
Thus, the total self inductance is the sum of the contributions from both the external and the internal magnetic fields. For a rectangular conductor of perimeter p (in.), the self inductance is given by

L_s \approx 5l\left[\ln\left(\frac{4l}{p}\right) + \frac{1}{2}\right] \; nH     (23.52)
FIGURE 23.40 Asymmetric stripline crosstalk reduction. w = 6 mil, t = 1.2 mil, h_1 = 7.5 mil.
Thus, inductive noise due to self inductance can be significant even in the presence of mutual inductance. It
is this phenomenon that gives rise to ground noise in connector pins and wire-bonds in IC packages and may
be as high as 100 mV or more [Rainal, 1984]. This becomes an important issue in the allocation of ground
return leads or wires during the signal layout of connectors and ICs. The layout of a connector or IC leads
(wirebonds) consists of arrays or clusters of conductors. Of these, a certain number are allocated to signals and
the remaining to ground. In connectors, we are mainly interested in X_NE on an idle signal pin when one or more of the adjacent signal lines are pulsed. If an array of n conductors is active, then the total noise on the idle pin due to mutual coupling is given by

V_m = \sum_{i=1}^{n} L_{mi}\,\frac{dI_i}{dt}     (23.53)

Noise on the ground pins involves both the self and the mutual inductances and is given by

V_n = -\sum_{i=1}^{n} (L_s - L_{mi})\,\frac{dI_i}{dt}     (23.54)

The return currents in the ground pins or leads will vary according to the inductive field around them. Since the ground has shifted, the near-end inductive crosstalk V_t on the idle pin is given by

V_t = \frac{V_m - V_n}{2}     (23.55)

The model for noise evaluation in connectors and wirebonds is shown in Fig. 23.41.
Note that ground noise becomes an important part of the inductive noise. This phenomenon, also called "ground bounce," can be a significant problem when 16 or 32 signal bits are switched simultaneously in a chip. For high-speed logic designs, ground noise must be kept to a minimum. This requires as many ground return leads as possible. In addition, ground and signal leads should be closely alternated in a checkerboard pattern. Keeping all the signals clustered in one region and all the grounds clustered in another region will cause significant ground noise problems [Rainal, 1984].

FIGURE 23.41 Inductive noise model for an array of signal (S_i), ground (G_i), and idle conductors.
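A minimal sketch of Eqs. (23.53) through (23.55) for a small cluster of simultaneously switching drivers sharing one ground pin; the inductances and the current slew rate are assumed, illustrative numbers.

# Assumed, illustrative values for a connector or lead-frame cluster
L_s  = 8.0e-9                      # self inductance of each pin, H
L_mi = [2.0e-9, 1.2e-9, 0.6e-9]    # mutual inductance from each active pin to the idle pin, H
dIdt = 10e6                        # current slew rate per driver, A/s (10 mA/ns)

# Mutually coupled noise on the idle pin, Eq. (23.53)
V_m = sum(Lm * dIdt for Lm in L_mi)

# Ground-pin (ground bounce) noise, Eq. (23.54)
V_n = -sum((L_s - Lm) * dIdt for Lm in L_mi)

# Near-end inductive crosstalk seen on the idle pin, Eq. (23.55)
V_t = (V_m - V_n) / 2.0

print(f"V_m = {V_m*1e3:.0f} mV, V_n = {V_n*1e3:.0f} mV, V_t = {V_t*1e3:.0f} mV")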
Risetime Degradation
When a pulse is passed through an interconnection element, there is an increase in the risetime of the pulse, and this slowing down of the wave is referred to as risetime degradation. If the input risetime is T_ri and the output risetime is T_ro, then the risetime degradation T_rd is defined as

T_{rd} = \sqrt{T_{ro}^2 - T_{ri}^2}     (23.56)

Thus, if we have a step input, the output risetime is T_rd. If there are n interconnection elements, each with risetime degradation T_rd1, T_rd2, etc., then the risetime at the output of the n interconnection elements is

T_{ro}^2 = T_{ri}^2 + T_{rd1}^2 + T_{rd2}^2 + \cdots + T_{rdn}^2     (23.57)

This relationship is important because the risetime degradation is not just the sum, but the square root of the sum of squares. Thus, if T_rd1 is 0.2 and T_rd2 is 0.5, then T_rd2 is about 6 times (0.25/0.04) more important than T_rd1. Therefore, attention should be paid to those elements that are most important. In general, risetime degradation occurs due to the resistive, inductive, and capacitive effects of circuits. In PCBs with long traces acting as transmission lines, T_rd is due to a combination of dc (IR) loss and skin effect. At very low frequencies, the current fills the entire cross section of the conductor. As we go to higher frequencies, the current is concentrated in a very thin layer (skin) around the perimeter of the conductor. The thickness of this layer is the skin depth, δ = 1/\sqrt{\pi f \mu \sigma}, where σ is the conductivity, μ the permeability, and f the frequency. For copper, δ = 2.09 μm at 1 GHz. Skin effect adds a series resistance to the line, and for a rectangular conductor the first-order approximation is

R_s = \frac{1}{\sigma\delta\,[\mathrm{perimeter}]}     (23.58)

Skin effect slows down the signal and also lowers the magnitude of the pulse. For a copper conductor with w = 4 mils and t = 2 mils, and for a risetime of 0.5 ns, skin effect becomes critical when the length of the trace approaches 40 in. [Chang, 1988]. Therefore, for most practical board designs today, the lossless line is a good approximation.
Risetime degradation is important because it is related to the bandwidth (BW) of the interconnection medium, given as

BW = \frac{0.35}{T_{rd}}     (23.59)

The concept of bandwidth itself comes from low-pass RC filter theory. We know that the classical low-pass RC filter passes low frequencies readily but attenuates high frequencies. For this RC filter, the bandwidth is given by f_2 = 1/(2πRC) and is the frequency at which the gain of the filter falls 3 dB below (to about 70% of) the low-frequency value. It is like a cutoff frequency and is a qualitative measure of the transfer of energy through the filter. If we put a step input into the filter, the output from the filter is as shown in Fig. 23.42.
From theory, we can show that f_2 = 0.35/t_r. Typically, good design requires that the interconnection loss be limited to about 1 dB. For connector characterization, we pass a very fast pulse (almost a step input) through the connector, observe the output pulse, and express its bandwidth according to Eq. (23.59).
The low-pass filter concept is very useful because most interconnection elements, such as connectors, short PCB traces, and IC package pins, behave as low-pass filters. Since their lengths are relatively small compared to the pulse risetime, they act as lumped parameters and discontinuities along the transmission path. The degradation of the signal is mainly due to the capacitance and inductance of the pins and leads. The appropriate model for such an interconnection discontinuity at a PCB-IC package interface is shown in Fig. 23.43 [Rainal, 1988].
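The bookkeeping of Eqs. (23.56) through (23.59), together with the skin-depth expression, fits in a few lines of Python; the per-element risetime degradations used here are invented for illustration.

import math

def output_risetime(t_ri, t_rd_list):
    # Risetime at the output of a chain of elements, Eq. (23.57) (RSS combination)
    return math.sqrt(t_ri ** 2 + sum(t ** 2 for t in t_rd_list))

t_ri  = 0.5                   # input risetime, ns (assumed)
t_rds = [0.2, 0.5, 0.1]       # degradation of each element, ns (assumed)
t_ro  = output_risetime(t_ri, t_rds)
t_rd  = math.sqrt(t_ro ** 2 - t_ri ** 2)   # net degradation, Eq. (23.56)
bw    = 0.35 / t_rd                        # bandwidth in GHz, Eq. (23.59), with t_rd in ns

print(f"T_ro = {t_ro:.2f} ns, net T_rd = {t_rd:.2f} ns, BW = {bw:.2f} GHz")

# Skin depth, delta = 1/sqrt(pi*f*mu*sigma); copper at 1 GHz gives about 2.09 um
sigma, mu, f = 5.8e7, 4e-7 * math.pi, 1e9
delta = 1.0 / math.sqrt(math.pi * f * mu * sigma)
print(f"skin depth at 1 GHz: {delta*1e6:.2f} um")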
This generic model is valid for any interface, including connectors, wirebonds, or short PCB traces that act as lumped elements. The interface is dominated by both the parasitic capacitance and the parasitic inductance of the signal and ground leads. As the signal enters the interface, part of the energy is absorbed, part of it is reflected due to the discontinuity, and the remainder is transmitted. This combination of events causes both an attenuation of the magnitude of the pulse and a degradation of the risetime. Interestingly, all of the parameters of interest, such as reflection, magnitude, and T_rd, may be predicted from two nondimensional parameters. These are

\alpha = \frac{L/R}{RC} = \frac{L}{R^2 C}, \qquad \beta = \frac{T_r}{RC}     (23.60)

FIGURE 23.42 Low-pass RC circuit response to step input.

FIGURE 23.43 Electrical model of the transmission path from a printed wiring board to a matched high-speed chip. (Source: A.J. Rainal, "Performance limits of electrical interconnections to a high-speed chip," IEEE Trans. CHMT, vol. 11, no. 3, pp. 260–266, Sept. 1988. © 1988 IEEE.)
α is the ratio of the inductive to the capacitive time constant, β is a nondimensional risetime, and R is the terminating resistance, equal to Z_0. The results for various combinations of α and β are given in Figs. 23.44, 23.45, and 23.46 for the practical application of a high-speed chip package.

FIGURE 23.44 Maximum reflection coefficient at interface.

FIGURE 23.45 For a 1-V ramp input, V_0 denotes the output response at the interface for t = βRC. (Source: A.J. Rainal, "Performance limits of electrical interconnections to a high-speed chip," IEEE Trans. CHMT, vol. 11, no. 3, pp. 260–266, Sept. 1988. © 1988 IEEE.)

Other cases may be obtained from the equations given in Rainal [1988]. Typical values of L for PCB connectors vary from 15 to 25 nH, and the capacitance varies from 1.5 to 2 pF. If the connector is placed on a 75-Ω line through which we pass a pulse with a risetime T_r of 1 ns, then for L = 20 nH and C = 2 pF, α ≈ 1.8 and β ≈ 6.7. Therefore, from Fig. 23.44, the maximum reflection coefficient from the connector is 0.1 (10%). Also, from Fig. 23.45, for a 1-V input pulse, the output from the connector will only be 0.75 V (75%). Thus, this connector
configuration may not be suitable for high-speed designs, as we would like the reflection coefficient to be < 5% and the output voltage to be at least 80%. In addition, there is also the degradation of the risetime. Note that this degradation of the signal occurs at the connector alone; in any interconnection medium there may be many interfaces at which signal distortion also occurs. The goal is to minimize distortion at every interconnecting interface. Observe that the minimum reflection occurs for α = 1, when L/R = RC, or R = \sqrt{L/C} = Z_0, i.e., when the inductive and capacitive time constants are equal. This may be considered a "matched" discontinuity. For α > 1 the interface may be considered inductive in nature, while it is capacitive for α < 1. Thus, the reflection will be positive in one case and negative in the other. The α–β theory given above also carries some important implications. From Fig. 23.45, we observe that β must be large to obtain a high value of output voltage. β may be made large by reducing C, but when we reduce C we increase α, and we observe that the output voltage falls as α increases while, at the same time, the reflection coefficient also increases. Therefore, just reducing C results in signal degradation. It is not possible to maintain signal integrity by tuning either L or C alone. We need to specify joint bounds for L and C to minimize signal degradation. These are discussed in Rainal [1988]. This simple electrical model is a very practical tool for evaluating interconnection elements for signal integrity.
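The two nondimensional parameters of Eq. (23.60) are easily checked for the 75-Ω connector example just discussed:

# Connector example from the text: L = 20 nH, C = 2 pF, R = Z0 = 75 ohm, Tr = 1 ns
L, C, R, Tr = 20e-9, 2e-12, 75.0, 1e-9

alpha = (L / R) / (R * C)   # ratio of inductive to capacitive time constants
beta  = Tr / (R * C)        # nondimensional risetime

print(f"alpha = {alpha:.2f} (about 1.8), beta = {beta:.2f} (about 6.7)")
# alpha = 1, i.e., R = sqrt(L/C), gives the minimum reflection; here alpha > 1,
# so the interface is inductive and the reflection is positive.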
Conclusions
This article is an overview of electrical characterization of interconnections and packaging (I&P) of electronic
systems. It is shown that I&P technology is mainly driven by advances in silicon IC technology, such as VLSI
where there has been a hundred-fold increase in component density per decade. Such a rapid increase in silicon
integration has also led to a larger number of connections outside the IC package, where the pin count is dictated by the
empirical Rent’s rule. Integration of such dense ICs in turn has led to multilayer PCBs with increased track
density and layer count. The signal traces from these dense multilayer boards terminate on dense PCB connec-
tors with 300 to 400 I/Os on an 8-in. height. The trend in electronics packaging is towards both high speed
and high density. I&P technology has therefore evolved at a rapid pace along with the driving forces behind it.
The elements that make up I&P are the IC package with the wirebonds inside and the pins outside, PCB
with the conductor traces, and vias between the layers, PCB connector, motherboard with pins and conductor
traces, and cabling between boards and/or equipment. The elements of I&P are analogous to a mechanical
chain, with each element forming a link in the chain. The mechanical analogy may be extended further by
noting that the I&P chain is only as strong as the weakest link. As the signal travels from a driver to a receiver,
it passes through several interfaces between elements. There is signal degradation at every interface. I&P should
be designed so as to minimize the degradation at each interface. A high-performance IC may be degraded due
FIGURE 23.46 Output waveform at interface for a 1-V ramp input. (Source: A.J. Rainal, "Performance limits of electrical interconnections to a high-speed chip," IEEE Trans. CHMT, vol. 11, no. 3, pp. 260–266, Sept. 1988. © 1988 IEEE.)
to improper interconnections. Today, I&P is the bottleneck to designing high-performance electronic systems.
Reliability of the system depends critically upon the integrity of the interconnection scheme.
The important electrical parameters characterizing I&P are delay, characteristic impedance, crosstalk, and
risetime degradation. The article outlines the significance of each of these parameters and techniques to evaluate
them. The electrical transmission path from driver to receiver is a very complicated one. It is extremely difficult
to model the problem electrically from end to end. The best approach is to electrically characterize each interface
(element) individually for signal integrity. This breaks the problem down to simple parts. The goal is to address
those elements that are the cause of severe signal degradation and in the process optimize the entire intercon-
nection chain. Breaking the problem into individual elements identifies the important links. If the entire
problem is solved from end to end, it is often difficult to isolate those elements that are the major cause of
degradation. Fortunately, most of the electrical parameters can be evaluated using simple analytical techniques
that are readily available in literature. These have been used and verified experimentally at Bell Labs in the
design of many systems. In the design of high-speed PCBs, designers should also consider the effect of manu-
facturing tolerances on electrical parameters, such as characteristic impedance. Due to increased speed and
density requirements of many systems, careful electrical characterization and optimization of I&P is the key to
system performance.
Defining Terms
Bandwidth (BW): Related to risetime degradation; the frequency at which the energy transfer through I&P falls 3 dB below the low-frequency value. It is only a qualitative measure used to compare different options, and the concept comes from classical RC low-pass filter theory.
Characteristic impedance (Z_0): The impedance (resistance) to energy transfer associated with wave propagation in a line that is much longer than the wavelength. It gives the relationship between the maximum voltage and maximum current in a line.
Crosstalk (X): Noise that appears on a line due to interactions with stray electromagnetic fields that originate
from adjacent pulsed lines.
Interconnections and packaging (I&P): Elements of the electrical signal transmission path from the driver
chip to the receiver chip. Various elements that make up I&P are chip wirebonds and package pins, circuit
boards, connectors, motherboards, and cables.
Propagation delay (T_d): Time required by a signal to travel from source to receiver.
Risetime degradation (T_rd): A measure of the slowing down of the pulse as it passes through an I&P element. It includes both the increase in the risetime of the pulse and the loss in amplitude.
References
C.S. Chang, “Electrical design of signal lines for multilayer printed circuit boards,” IBM J. Res. Develop., vol.
32, no. 5, pp. 647–657, Sept. 1988.
B.N. Das and K.V.S.V.R. Prasad, “A generalized formulation of electromagnetically coupled striplines,” IEEE
Trans. Microwave Theory and Techniques, vol. MTT-32, no. 11, pp. 1427–1433, Nov. 1984.
W.A. Davis, Microwave Semiconductor Circuit Design, New York: Van Nostrand Reinhold, 1990.
W.L. Everitt (Ed.), Physical Design Of Electronic Systems, Volume 1, Design Technology, Englewood Cliffs, N.J.:
Prentice-Hall, 1970, pp. 362–363.
C.W. Hoover, W.L. Harrod, and M.I. Cohen, “The technology of interconnection,” AT&T Technical Journal,
vol. 66, issue 4, pp. 2–12, July/August 1987.
H. Howe, Stripline Circuit Design, Burlington, Mass.: Microwave Associates, 1974.
H.R. Kaupp, “Characteristics of microstrip transmission lines,” IEEE Trans. Electronic Computers, vol. EC-16,
no. 2, pp. 185–193, April 1967.
L.L. Moresco, “Electronic system packaging: The search for manufacturing the optimum in a sea of constraints,”
IEEE Trans. CHMT, vol. 13, no. 3, pp. 494–508, Sept. 1990.
A.J. Rainal, “Transmission properties of various styles of printed wiring boards,” Bell System Tech. Journal, vol.
58, no. 5, pp. 995–1025, May-June 1979.
A.J. Rainal, “Computing inductive noise of chip packages,” AT&T Bell Laboratories Tech. Journal, vol. 63, no.
1, pp. 177–195, 1984.
A.J. Rainal, “Performance limits of electrical interconnections to a high-speed chip,” IEEE Trans. CHMT, vol.
11, no. 3, pp. 260–266, Sept. 1988.
K. Ritz, “Manufacturing tolerances for high-speed PCBs,” Circuit World, vol. 15, no. 1, pp. 54–56, 1988.
Further Information
The issue of the AT&T Technical Journal containing Hoover et al. [1987] focuses on the technology of electronic
interconnections. It is an excellent source of information on important areas of interconnections and packaging.
It begins with an overview and includes areas such as systems integration and architecture, materials and media,
computer-aided design (CAD) tools, reliability evaluation, and standardized systems packaging techniques. The
book Microelectronics Packaging Handbook, edited by Rao R. Tummala and Eugene J. Rymaszewski and pub-
lished by Van Nostrand Reinhold, is a standard reference for packaging engineers, covering all major areas in
detail. Most of the work on electrical characterization was done by researchers in the microwave area. These
studies have been extensively published in the IEEE Transactions on Microwave Theory and Techniques, a
publication of the IEEE Microwave Theory and Techniques Society. Parameters for electrical package design
and evaluation have also been published in the IBM Journal of Research and Development. A source for the
general areas of components, connector technologies, and manufacturing aspects related to packaging is the
IEEE Transactions on Components, Hybrids, and Manufacturing Technology.
23.4 Process Modeling and Simulation
Conor Rafferty
Fabricating an integrated circuit (IC) requires a lengthy sequence of manufacturing steps, including growth or
deposition of thin films, patterning them by lithography and etching, implanting dopant atoms, and high-
temperature annealing. The structure and the behavior of the devices (transistors, resistors, capacitors, diodes)
in a circuit fabricated by such a sequence is a complex function of all the manufacturing variables such as
furnace temperatures, film thicknesses, implant energies, and doses. Computer tools are used to predict the
outcome of the manufacturing steps in terms of the structure of the devices, that is, their geometry and the distribution of dopant atoms in the semiconducting regions. These tools are called process simulators. When coupled with a device simulator, which takes the structure of the device and predicts the relation between current and voltage, a powerful synergy occurs.
knowing only the proposed processing sequence and the layout of the lithographic masks. Processing sequences
can be modified and resimulated many times until the desired device behavior is obtained. Each computer trial
can be carried out in a matter of hours, while each experimental sequence would take weeks to months to
complete. A great reduction in process design time is possible using accurate process and device simulation tools.
In addition, the computer tools provide the designer with insight into device behavior by examining the internal
dopant distributions and electrical fields directly — information not available by any experimental technique.
Process simulation tools are available for each of the key processes used in fabricating devices. These processes
can be grouped into thermal processing and doping (ion implantation, diffusion, oxidation), pattern definition
(lithography), and pattern transfer (etching, deposition). Historically, doping simulators were first developed
due to the primary importance of the doping distribution in defining the electrical behavior of the devices.
Lithography simulation soon followed to predict the optical image formed by illumination through a mask.
Etching and deposition simulations present the largest challenge to modeling, as simulating these processes
requires knowledge of reactor chemistry, plasma electrodynamics, and surface evolution under particle fluxes.
The following sections focus on each of the unit processes for which simulation tools exist.
Ion Implantation
The goal of simulating ion implantation is to predict the distribution of implanted ions in the target and also
to predict the amount of damage generated in the target. During ion implantation, accelerated ions penetrate
the surface of the wafer and come to rest in its interior after losing energy through interactions with the nuclei
and electrons of the target. Two main types of models are used: Monte Carlo models and moment models.
Monte Carlo methods are based on following the trajectories of many individual ions as they undergo inter-
actions with the target. By tracking a representative sample of ions, typically 10,000 to 100,000 ions, an accurate
distribution of both implanted ions and target recoils can be generated. The slowing of the ions is described
by the stopping power, which has a nuclear and an electronic component (Eq. (23.61)). The nuclear component
can be described by a series of collisions with the atoms of the target, modeled by a classical screened-Coulomb
interaction potential, while the electronic stopping acts as a viscous drag force, proportional to the square root
of the ion’s energy. Excellent accuracy is available in a number of codes, such as TRIM [Biersack, 1986],
MARLOWE [Robinson, 1974], or IMSIL [Hobler, 1986]. Such calculations can be time-consuming. When speed
is important, approximate models based on representing the ion distribution by precomputed tables are used.
These tables of distribution moments can be calibrated either by comparison to experimental data, where
available, or to the physically based Monte Carlo methods. In these moment models, the ion distribution
resulting from impacts at a single point (Fig. 23.47(a)) is typically represented by a product of functions in the perpendicular and lateral directions (Eq. (23.62)), where N_2 is a Gaussian and N_1 is a Gaussian, Pearson, or Dual-Pearson [Lim, 1993] distribution. An areal distribution can then be generated by superposition (Fig. 23.47(b)). An example of the resulting distribution of ions under a mask edge is shown in Fig. 23.48.
S = \left(\frac{dE}{dx}\right)_{nuclear} + \left(\frac{dE}{dx}\right)_{electronic}     (23.61)

N(x, y, z) = N_1(x)\,N_2\!\left(\sqrt{y^2 + z^2}\right)     (23.62)
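A minimal sketch of the moment-model construction of Eq. (23.62): a Gaussian vertical profile multiplied by the superposition of lateral Gaussian point responses over an open mask window, giving the dose falloff under a mask edge. The range and straggle values are generic illustration numbers, not entries from a calibrated moment table.

import math

# Assumed first- and second-moment parameters (microns) and dose (cm^-2)
Rp, dRp, dRl = 0.30, 0.07, 0.05
dose = 5e15

def vertical(x_um):
    # Gaussian depth profile N1(x); returns concentration in cm^-3
    peak = dose / (math.sqrt(2 * math.pi) * dRp * 1e-4)
    return peak * math.exp(-0.5 * ((x_um - Rp) / dRp) ** 2)

def lateral_fraction(y_um):
    # Superposition of lateral Gaussians over a window open for y > 0:
    # integral of N2 over the window = 0.5 * erfc(-y / (sqrt(2) * dRl))
    return 0.5 * math.erfc(-y_um / (math.sqrt(2) * dRl))

# Concentration at the projected range, at several positions around the mask edge
for y in (-0.10, -0.05, 0.0, 0.05, 0.10):
    n = vertical(Rp) * lateral_fraction(y)
    print(f"y = {y:+.2f} um : N = {n:.2e} cm^-3")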
Diffusion
Heat treatment is an essential part of wafer fabrication, used for the growth of oxide films, the activation of implanted dopants, and the diffusion of dopants from the surface. Any heat treatment above 600°C has the potential of
allowing dopants to redistribute from their current locations according to the laws of diffusion. The goal of
diffusion simulation is to predict the distribution of dopant atoms as a function of the thermal cycle seen by
the wafer, and it plays a key role in the design cycle. The equilibrium diffusivity of dopants in silicon below
900°C is small (Fig. 23.49), but many effects allow diffusion to proceed at an accelerated rate: high concentrations
of dopants, oxidation of the surface (OED), and point defects generated by ion implantation (TED). Diffusion
must be carefully controlled to prevent undesired approach of electrically separate parts of the device. TED is
particularly troublesome as it can give rise to diffusivity enhancements of 10,000 or more. A unified explanation
of the many aspects of diffusion in silicon has been developed by considering the role of point defects in
diffusion [Fahey, 1989]. Dopants diffuse by interacting with vacancies or interstitials in the crystal lattice
(Fig. 23.50). Phosphorus, indium, and boron are known to diffuse predominantly by an interstitial mechanism,
while antimony diffuses mainly by vacancies, and arsenic may use both mechanisms. A comprehensive model
of diffusion can be built starting with Eq. (23.63), which relates the diffusion of interstitials I and dopants D,
including interactions with interstitial clusters C, traps T, and dopant precipitates P. The flux of dopant-interstitial pairs, F_D, has a number of interesting features; it shows that the diffusion can be enhanced by the supersaturation of interstitials relative to equilibrium, I/I*, by the electric field -\nabla\psi, and by gradients in the interstitial population, \nabla(I/I*).

\frac{\partial I}{\partial t} = \nabla\cdot(D_I \nabla I) - \nabla\cdot F_D - f_{IV}(I,V) - f_{IDP}(I,D,P) - f_{IC}(I,C) - f_{IT}(I,T)

\frac{\partial D}{\partial t} = -\nabla\cdot F_D - f_{IDP}(I,D,P)

F_D = -D_D \frac{I}{I^*}\left(\nabla D + \zeta D \nabla\psi\right) - D_D D\,\nabla\!\left(\frac{I}{I^*}\right)

\frac{\partial P}{\partial t} = f_{IDP}(I,D,P), \quad \frac{\partial C}{\partial t} = f_{IC}(I,C), \quad \frac{\partial T}{\partial t} = f_{IT}(I,T)     (23.63)

FIGURE 23.47 (a) The point response function is the distribution of ions resulting from impacts at one specific point on the surface. (b) Implantation through a window can be calculated by superimposing the point response functions from each point of the window.

FIGURE 23.48 Distribution of ions underneath a mask edge. Boron is implanted at a dose of 5 × 10^15 cm^-2 and an energy of 100 keV. The logarithmic contours are in units of ions/cm^3 and the interval is half a decade.

FIGURE 23.49 Diffusivity of common dopants in silicon. Interstitial diffusers such as boron, phosphorus, and indium are most rapid, while arsenic and antimony are almost a decade slower.
The phenomenon of OED [Fair, 1981], enhanced diffusion due to oxidation, occurs because interstitials are
generated at the surface where oxidation occurs, and rapidly diffuse into the bulk where their supersaturation
enhances dopant diffusion. The rate of injection of interstitials is set by a balance between interstitial injection
and recombination, and is a sublinear function of the oxidation rate. Interstitials can recombine with bulk
vacancies, or at the surface. They can also be lost to interactions with bulk traps such as oxygen precipitates in
high oxygen material.
Transient-enhanced diffusion, TED, results because the large number of interstitial atoms generated by ion
implantation take some time to be recombined; and while excess interstitials are present, they create a large
diffusion enhancement. An unusual feature in TED is that there is a strong gradient of interstitials to the surface
that gives rise to a “defect wind” which “blows” dopants to the surface, causing a localized surface pileup
[Rafferty, 1993]. Figure 23.51 shows the distribution of interstitials following TED and Figure 23.52 shows the
effect on dopant profiles. Diffusion models must also account for dopant precipitation or clustering, which
limit the maximum soluble and electrically active concentration that can be attained. The active concentration
C_A is generally related to the chemical concentration C_C
by an equilibrium relation of the form in Eq. (23.64),
which arises from clusters of m dopant atoms leaving solution together [Fair, 1981]. Dopant diffusion in
polysilicon is quite different. The point defects are mainly in equilibrium due to the close proximity of grain
boundaries; instead, a two-stream model must be used, considering both diffusion in the grains and diffusion
along grain boundaries, which is much faster. See, for example [Lau, 1990].
C_A + \beta C_A^m = C_C     (23.64)
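Eq. (23.64) is easily inverted numerically to obtain the active concentration from the chemical concentration; the clustering parameters m and β used below are arbitrary placeholders, not calibrated values.

def active_concentration(C_C, beta, m, tol=1e-6):
    # Solve C_A + beta * C_A**m = C_C for C_A by bisection (Eq. (23.64))
    lo, hi = 0.0, C_C
    while hi - lo > tol * C_C:
        mid = 0.5 * (lo + hi)
        if mid + beta * mid ** m > C_C:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

beta, m = 1e-60, 4   # placeholders chosen so clustering matters above ~1e20 cm^-3
for C_C in (1e19, 1e20, 1e21):
    print(f"C_C = {C_C:.1e} cm^-3  ->  C_A = {active_concentration(C_C, beta, m):.1e} cm^-3")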
FIGURE 23.50 Diffusion mechanisms in a crystal lattice.

FIGURE 23.51 Distribution of interstitials following implantation and annealing of phosphorus masked by a gate. At the end of the anneal, all excess interstitials have been removed; the logarithmic contours give the time-integrated supersaturation ∫I/I* dt as a function of space. The contour unit is seconds and the interval a quarter decade. The supersaturation is largest where the phosphorus is implanted, but is significant under the gate also.

FIGURE 23.52 Dopant profiles at the center of a short (dark) and a long (light) gate. Far from the gate edge, no excess diffusion occurs; but under a short gate, interstitials reach laterally and dramatically change the dopant profile.

Oxidation

Insulating thin films of silicon dioxide can be grown on silicon by exposing it to a high-temperature stream of oxygen or steam. The oxide film can be grown selectively by masking the surface with silicon nitride, which is resistant to oxidation. The goal of simulating oxidation is to predict the local thickness of the oxide film after
oxidation, and to calculate any stresses that may arise. The oxide growth process can be modeled as the result of three sequential processes (Fig. 23.53): oxidant diffusion through the existing oxide, oxidant reaction with silicon at the oxide/silicon interface, and flow of the overlying oxide to accommodate the new volume at the interface. For planar oxidation, the flow process is simply a vertical displacement, and only diffusion and reaction are rate-limiting. Solving this system gives rise to the well-known Deal-Grove growth law [Deal, 1965] for thickness L as a function of time t (Eq. (23.65)). The coefficients k_l, k_p are the linear and parabolic coefficients of oxidation, respectively, and have been tabulated as a function of temperature, crystal orientation, and oxidizing ambient. It is found by experiment that a correction term must be added for thin oxides grown in an oxygen ambient. An empirical model of the correction term is ΔL = C exp(-L/λ). The critical length λ is about 200 Å at any temperature [Massoud, 1985].

\frac{dL}{dt} = \left(\frac{1}{k_l} + \frac{L}{k_p}\right)^{-1}     (23.65)
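A sketch that integrates the growth law of Eq. (23.65) with the thin-oxide correction added to the growth rate. The rate coefficients are round illustrative numbers, not tabulated Deal-Grove values.

import math

kl  = 0.02     # linear coefficient, um/min (illustrative)
kp  = 1.5e-4   # parabolic coefficient, um^2/min (illustrative)
Cc  = 0.05     # thin-oxide correction prefactor, um/min (illustrative)
lam = 0.02     # critical length lambda, um (about 200 angstroms)

L, dt = 0.0, 0.1                 # start from bare silicon; 0.1-min time steps
for _ in range(int(60 / dt)):    # one hour of oxidation
    rate = 1.0 / (1.0 / kl + L / kp) + Cc * math.exp(-L / lam)
    L += rate * dt

print(f"oxide thickness after 60 min: {L*1e4:.0f} angstroms")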
When oxidizing a nonplanar surface, the flow of the overlying oxide must be taken into account in calculating
the rate of oxidation. In addition, the oxide stretching or compression leads to stress generation. The stress can
be large enough to cause slip in the silicon substrate, as well as feeding back into the oxidant diffusion and
reaction to reduce the oxidation rate [Kao, 1985]. A Stokes flow model (Eq. (23.66)) gives a first-order account
of oxide deformation [Chin, 1982]. The effect of stress can be taken into account by allowing the viscosity and
oxidation coefficients to be functions of stress [Rafferty, 1990]. Figure 23.54 illustrates reduced oxide growth
at a corner due to stress.
\mu \nabla^2 \nu = \nabla P     (23.66)
Etching
Reactive ion etching is used to transfer patterns from the soft photosensitive resist into the hard final materials
of the integrated circuit. The ultimate goal of an etching simulator is to predict the new surface topography (e.g.,
sidewall angle or undercut) as a function of the reactor control variables (e.g., RF power and gas mixture). The
etching process is a result of several sequential mechanisms, principally gas-phase kinetics, plasma electrody-
namics, particle transport, and surface kinetics. As a result, modeling the full etching process requires charac-
terization of many physical mechanisms and is extremely complex. A common simplification (e.g., [Hamaguchi,
1993]), most effective for physical etching, is to take the etching rate as a function of reactor controls as a given,
and calculate the new topography using local etch rates that are a function only of substrate orientation or
curvature (evolution simulation). When re-emission of sputtered material is taken into account (Fig. 23.55) good
predictions of surface evolution are possible and provide insight into etching mechanisms [Singh, 1992].
For reactive ion etching, gas-phase chemistry plays an important role, and can be described by a system of
rate equations between gas species. Eq. (23.67) gives a subset of the reactions considered in chlorine-based
silicon etching. State-of-the-art feature scale etch simulators [Oldham, 1980; Cale, 1992] include multiple
incoming species (ions and neutrals), multiple surface species, and general reactions between surface species.
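Where the full chemistry is out of scope, even a toy cell-removal model conveys how an isotropic etch component undercuts the mask. The sketch below is exactly that kind of evolution-only simplification on a coarse grid; it is not a model of any particular reactor or chemistry.

# Toy 2-D cell-removal etch: exposed material cells are removed each time step,
# so a purely isotropic rate undercuts the mask edges.
NX, NY = 40, 12
MASK = {x for x in range(NX) if not 15 <= x < 25}       # mask open only over x = 15..24
material = [[True] * NX for _ in range(NY)]             # material[y][x]; y = 0 is the top

def exposed(x, y):
    # Exposed if the cell sees the ambient through the mask opening or touches vacuum
    if y == 0 and x not in MASK:
        return True
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < NX and 0 <= ny < NY and not material[ny][nx]:
            return True
    return False

for _ in range(5):                                      # five equal etch steps
    removable = [(x, y) for y in range(NY) for x in range(NX)
                 if material[y][x] and exposed(x, y)]
    for x, y in removable:
        material[y][x] = False

print("".join("M" if x in MASK else " " for x in range(NX)))   # mask layout
for row in material:                                           # ASCII cross section
    print("".join("#" if cell else "." for cell in row))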
FIGURE 23.53 Oxidation is the result of three sequential processes. Oxidant must diffuse from the exposed surface to the silicon interface where the oxidation reaction occurs. The old oxide overlying the new-grown oxide must then flow to accommodate the extra volume.
Si(s) + Cl(g) → SiCl(s)
SiCl(s) + Cl(g) → SiCl_2(s)
⋮
SiCl_3(s) + Cl(g) → SiCl_4(g)     (23.67)

FIGURE 23.54 Oxide growing on an outside corner with and without stress effects. Without accounting for stress, a thick oxide is grown on the corner due to its larger collection area for oxidant. Experimentally, it is found that the corner oxide is reduced due to stress effects, as shown in the second simulation [Rafferty, 1989].

FIGURE 23.55 Sputtering and redeposition. Incoming ions can stick, be reflected, or sputter target ions, which in turn are subject to the same mechanisms.
Deposition
Like etching, the goal of deposition simulation is to predict wafer topography features, such as step coverage,
as a function of reactor controls, such as power or gas mixture. However, in this case, the processes under
consideration are deposition of metals, insulators, or semiconductors by, for example, sputtering or chemical
vapor deposition (CVD). The basic framework for simulation is quite similar to that for etching, and many
etch simulators also have deposition capability. Additional features for sputter deposition include a consider-
ation of the source geometry, which determines the angular distribution of incoming particles. Figure 23.56
shows the result of simulating metal deposition inside a trench.
Monte Carlo methods are also used to good effect in modeling deposition, and can predict physical properties
of the deposited film in addition to its topography. Gilmer [1998] shows how simulation can be used to predict
growth competition between different grain orientations, while the tool SIMBAD [Smy, 1998] can be used to
examine the variation of Ti grain microstructure deposited on trench sidewalls.
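In the same spirit, a small Monte Carlo ballistic-deposition sketch: particles arrive with a cosine-weighted angular distribution and stick on first contact, so the trench mouth shadows the lower sidewalls and overhangs build up at the top corners. It is a toy illustration of shadowing, not SIMBAD or any production simulator, and the geometry and particle count are arbitrary.

import math, random

random.seed(1)
W, H = 60, 50
solid = [[False] * W for _ in range(H)]         # solid[y][x]; y increases downward

# Substrate surface at y = 20 with a trench 10 cells wide and 20 cells deep
for y in range(20, H):
    for x in range(W):
        solid[y][x] = not (25 <= x < 35 and y < 40)

def deposit_one():
    x, y = random.uniform(0, W), 0.0
    theta = math.asin(2 * random.random() - 1)  # cosine-weighted arrival angle
    dx, dy = math.sin(theta), math.cos(theta)   # dy > 0 means moving downward
    while 0 <= x < W and y < H - 1:
        nx, ny = x + 0.25 * dx, y + 0.25 * dy   # march the ray in small steps
        if not 0 <= nx < W:
            return                              # particle left the domain sideways
        if solid[int(ny)][int(nx)]:             # about to enter solid material: stick
            solid[int(y)][int(x)] = True
            return
        x, y = nx, ny

for _ in range(900):
    deposit_one()

for y in range(10, 46):                         # ASCII cross section of the trench region
    print("".join("#" if solid[y][x] else "." for x in range(15, 45)))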
Lithography
Lithography simulation is playing an increasingly critical role in the development of imaging processes, as the
size of features to be printed begins to drop below the wavelength of the illumination source. The ultimate goal
of simulation is to predict the features that will result in photoresist after it has been exposed and developed.
There are three components to the process: optics simulation, where the light intensity in the resist (aerial
image) is computed; exposure simulation, where the interaction of the light with the resist is computed; and
development simulation, where the evolution of resist topography is computed.
Optical simulation is the most advanced, as it can bring all the power of geometric optics to bear on the
calculation. The role of the optics system (Fig. 23.57) is to project a reduced image of the mask on a resist-
coated wafer. In the SAMPLE program [Oldham, 1979], the aerial image computation is based on the Hopkins
theory of imaging with partially coherent light [O’Toole, 1979]. The wavefront from the illumination source
is decomposed into its plane-wave components; then the components evolve by being convolved with appro-
priate transfer functions to represent the pupils, condenser lens, mask, and objective lens. The image at the
resist is calculated by adding the intensities of all the plane waves at the mask, assuming an incoherent source.
Figure 23.58 shows an example aerial image calculated with SPLAT [Toh, 1988].
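A toy one-dimensional, fully coherent imaging sketch: the mask spectrum is truncated at the pupil cutoff NA/λ and transformed back to give an idealized aerial image of two clear features. The geometry loosely mirrors Fig. 23.58, but the wavelength, the NA, and the fully coherent (rather than Hopkins partially coherent) treatment are simplifying assumptions, not the SAMPLE/SPLAT formulation.

import numpy as np

wavelength = 0.248   # um (248 nm)
NA = 0.7             # numerical aperture (assumed)
dx, N = 0.01, 1024   # sample spacing (um) and number of samples

x = (np.arange(N) - N // 2) * dx
mask = np.zeros(N)
for center in (-0.20, 0.20):            # two 0.28-um clear features, 0.12 um apart
    mask[np.abs(x - center) < 0.14] = 1.0

# Coherent imaging: low-pass the mask spectrum at the pupil cutoff NA/lambda
freqs = np.fft.fftfreq(N, d=dx)
pupil = (np.abs(freqs) <= NA / wavelength).astype(float)
field = np.fft.ifft(np.fft.fft(mask) * pupil)
image = np.abs(field) ** 2              # aerial image intensity

for xi in np.arange(-0.4, 0.401, 0.1):  # coarse printout across the two features
    i = int(round(xi / dx)) + N // 2
    print(f"x = {xi:+.2f} um  I = {image[i]:.2f}")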
During exposure, the light intensity inside the resist is determined by the intensity of the aerial image at the
resist surface, the absorption of light in the resist, and standing waves caused by reflections off the resist surfaces
and steps on the wafer. The absorption changes during exposure as the resist bleaches under the incoming
light. The calculation proceeds by an initial intensity calculation in the resist, followed by an estimate of the
amount of bleaching in a short time interval. Local absorption coefficients are updated and a new intensity
calculation is performed for a series of intervals through the total exposure time. The accumulated light intensity
absorbed determines the concentration of resist dissolution inhibitors remaining at each point in the resist.
FIGURE 23.56 Simulation of metal deposition: 2000 Å of aluminum is deposited into a 2:1 aspect ratio via by sputtering. Shadowing of the inside causes cornices to develop at the upper corners, which will eventually close and lead to the generation of a void [O'Sullivan, 1999].
Resist development is simulated based on a model introduced by Dill [1975]. The development process is
treated as a surface-etching phenomenon, with the etch rate depending only on the local concentration of
inhibitor. The evolution of the surface with development time can therefore be treated by the same evolution
algorithms used in etching and deposition simulation.
As optical lithography is pushed to its limits in imaging ever smaller features using illumination wavelengths
that are not easily reduced, simulation is playing an increasingly important role in the development of resolu-
tion-enhancement techniques such as optical-proximity-correction (OPC) features, phase shift masks, and off-
axis illumination.
FIGURE 23.57 Optical image formation: each of the components of an imaging system affects the wavefront; between
components, light follows free space propagation rules [Leon, 1998].
FIGURE 23.58 Aerial image: the light intensity at the resist surface due to imaging through two rectangles of size 0.28 ×
0.6 μm separated by 0.12 μm [Watson, 1999] at an illumination wavelength of 248 nm. The half-wavelength feature spacing
is well resolved.
Summary and Future Trends
Computer simulation of semiconductor processing has become a widely accepted technique to reduce the high
cost and long turnaround time of fabrication trials. Physically based models of the classic process steps have
been established and widely applied, while new processes such as chemical-mechanical polishing are begetting
a new generation of models to simulate them. The increasing speed of computers and the improving under-
standing of fabrication processes, compared to the increasing cost of experiments in a multibillion dollar
fabrication line, will continue to drive the development and refinement of accurate process simulation tools.
Defining Terms
Aerial image: The output of an optical simulator.
Device simulator: A computer simulation program that predicts the relation between current and voltage of
an electron device based on its geometrical structure and its dopant atom distribution.
Empirical models: Models based primarily on fitting measured data without an analysis of the underlying
phenomena.
Evolution simulator: A computer simulation tool for predicting the change in surface shape under the
influence of surface motion rates.
Lithography simulator: A computer simulation tool for predicting the shape of resist features after exposure
and development.
Monte Carlo models: Many physical systems can be modeled by following the trajectories of a representative
number of individual particles under the applied forces. A random choice is made whenever statistically
equally likely outcomes of events are possible.
Optical proximity correction: The modification of mask features in order to counteract undesired diffraction
effects around small geometry features.
Off-axis illumination: The use of a non-point illumination source to improve lithographic resolution.
Optical simulator: A computer simulation tool for predicting light intensity at the surface of resist after
passing through a projection lithography system.
Oxidation-enhanced diffusion (OED): The diffusion of dopants in the bulk of a wafer is enhanced when
oxidation occurs at its surface.
Phase shift masks: The use of partially transmitting features on a mask to improve lithographic resolution.
Physically based models: Models based on fundamental physical and chemical principles.
Process simulator: A computer simulation program that predicts the outcome of the integrated circuit
fabrication steps in terms of the geometrical structure and dopant distribution of the wafer.
Stokes flow: The flow of a liquid when body forces and inertial terms are negligible in comparison to viscous
resistance.
Transient-enhanced diffusion (TED): The diffusion of dopants in the bulk of a wafer is very much enhanced
following ion implantation.
Topography simulator: A computer simulation tool for predicting the net effect of a number of etching and
deposition steps on the wafer topography.
References
[Biersack, 1986] J.P. Biersack and L.G. Haggmark, A Monte Carlo computer program for the transport of
energetic ions in amorphous targets, Nucl. Inst. and Meth., B13, 100 (1986).
[Cale, 1992] T.S. Cale, G.B. Raupp, and T.H. Gandy, Ballistic transport-reaction prediction of film conformality
in tetraethoxysilane O2 plasma enhanced deposition of silicon dioxide, J. Vacuum Sci. Technol., A10(4), 1128 (1992).
1128, (1992).
[Chin, 1982] D. Chin, S.Y. Oh, S.M. Hu, R.W. Dutton, and J.L. Moll, Stress in local oxidation, IEDM Technical
Digest, 228 (1982).
[Deal, 1965] B.E. Deal and A.S. Grove, General relationship for the thermal oxidation of silicon, J. Appl. Phys.,
36(12), 3370 (1965).
[Dill, 1975] F.H. Dill, A.R. Neureuther, J. A. Tuttle, and E.J. Walley, Modeling projection printing of positive
photoresists, IEEE Trans. Electron Dev., 22, 445 (1975).
[Fahey, 1989] P.M. Fahey, P.B. Griffin, and J.D. Plummer, Point defects and dopant diffusion in silicon, Rev.
Modern Phys., 61(2), 289 (1989).
[Fair, 1981] R.B. Fair, Concentration profiles of diffused dopants in silicon, Impurity Doping, F.F.Y. Wang (Ed.),
North-Holland (1981).
[Gilmer, 1998] G.H. Gilmer, H. Huang, and T. Diaz de la Rubia, Thin film deposition, in Computational Material
Science, T. Diaz de la Rubia (Ed.), Elsevier, in press.
[Hamaguchi, 1993] S. Hamaguchi, M. Dalvie, R.T. Farouki, and S. Sethuraman, A shock-tracking algorithm
for surface evolution under reactive-ion etching, J. Appl. Phys., 74(8), 5172 (1993).
[Hobler, 1986] G. Hobler, E. Langer, and S. Selberherr, Two-dimensional modeling of ion implantation, in
Second Int. Conf. Simulation of Semiconductor Devices and Process, K. Board and R. Owen, Eds., Pineridge
Press, Swansea (1986).
[Kao, 1985] D.-B. Kao, J.P. McVittie, W.D. Nix, and K.C. Saraswat, Two-dimensional silicon oxidation experi-
ments and theory, IEDM Technical Digest, 388 (1985).
[Lau, 1990] F. Lau, Modeling of polysilicon diffusion sources, IEDM Technical Digest, 737 (1990).
[Leon, 1998] F. Leon, Short course on next generation TCAD: models and methods, International Electron
Device Meeting, Dec. 13, San Francisco (1998).
[Lim, 1993] D. Lim, S. Yang, S. Morris, and A.F. Tasch, An accurate and computationally efficient model of
boron implantation through screen oxide layers into (100) single-crystal silicon, IEDM, 291 (1993).
[Massoud, 1985] H.Z. Massoud, J.D. Plummer, and E.A. Irene, Thermal oxidation of silicon in dry oxygen: growth-
rate enhancement in the thin regime I. Experimental results, J. Electrochem. Soc., 132, 2685 (1985).
[Oldham, 1979] W.G. Oldham, A.R. Neureuther, C.K. Snug, J.L. Reynolds, and S.N. Nandgaonkar, A general
simulator for VLSI lithography and etching processes. Part I. Application to projection lithography, IEEE
Trans. Elect. Dev., 26, 712 (1979).
[Oldham, 1980] W.G. Oldham, A.R. Neureuther, C.K. Snug, J.L. Reynolds, and S.N. Nandgaonkar, A general
simulator for VLSI lithography and etching processes. Part II. Application to deposition and etching,
IEEE Trans. Elect. Dev., 27, 1455 (1980).
[O’Sullivan, 1999] Peter O’Sullivan, private communication (1999).
[O’Toole, 1979] M.M. O’Toole and A.R. Neureuther, Developments in semiconductor microlithography IV,
SPIE, 174, 22 (1979).
[Rafferty, 1989] C.S. Rafferty, Unpublished.
[Rafferty, 1990] C.S. Rafferty, Two-dimensional modeling of viscous flow in thermal SiO2, Extended Abstracts of the Electrochemical Society Spring Meeting, May 6–11, 423 (1990).
of the Electrochemical Society Spring Meeting, May 6–11, 423 (1990).
[Rafferty, 1993] C.S. Rafferty, H.-H. Vuong, S.A. Eshraghi, M.D. Giles, M.R. Pinto, and S.J. Hillenius, Expla-
nation of reverse short channel effect by defect gradients, IEDM Technical Digest, 311 (1993).
[Robinson, 1974] M.T. Robinson, Computer simulation of atomic displacement cascades in solids in the binary collision approximation, Phys. Rev., B9(12), 5008 (1974).
[Singh, 1992] V. Singh, E.S.G. Shaqfeh, and J.P. McVittie, J. Vac. Sci. Technol., B10(3), 1091 (1992).
[Smy, 1998] T. Smy, R.V. Joshi, N. Tait, S.K. Dew, and M.J. Brett, Deposition and simulation of refractory barriers
into high aspect ratio re-entrant features using directional sputtering, IEDM Technical Digest, 311 (1998).
[Toh, 1988] K.K.H. Toh, Two-dimensional images with effects of lens aberrations in optical lithography,
Memorandum UCB/ERL M88/30, University of California, Berkeley, May 20 (1988).
[Watson, 1999] Patrick Watson, private communication (1999).
For Further Information
Several classic textbooks now exist with good information on numerical methods and process simulation.
Among them are:
Physics and Technology of Semiconductor Devices, A.S. Grove, Wiley (1967)
VLSI Technology, edited by S.M. Sze, McGraw Hill (1988 2nd ed.)
Silicon Processing for the VLSI Era, S. Wolf, R.N. Tauber, Vols. 1 & 2, Lattice Press (1986, 1990)
The Finite Element Method, O.C. Zienkiewicz, McGraw-Hill (1977)
Matrix Iterative Analysis, R.S. Varga, Prentice-Hall (1962)
The proceedings of the annual conference SISPAD (Simulation of Semiconductor Processes and Devices),
the annual International Electron Device Meeting (IEDM), and the bi-annual meetings of the Electrochemical
Society and Materials Research Society, are among the main outlets of process simulation work.