Dorf, R.C., Wan, Z., Milstein, L.B., Simon, M.K. "Digital Communication"
The Electrical Engineering Handbook
Ed. Richard C. Dorf
Boca Raton: CRC Press LLC, 2000
70
Digital Communication
70.1 Error Control Coding
Block Codes • Convolutional Codes • Code Performance • Trellis-Coded Modulation
70.2 Equalization
Linear Transversal Equalizers • Nonlinear Equalizers • Linear Receivers • Nonlinear Receivers
70.3 Spread Spectrum Communications
A Brief History • Why Spread Spectrum? • Basic Concepts and Terminology • Spread Spectrum Techniques • Applications of Spread Spectrum
70.1 Error Control Coding
Richard C. Dorf and Zhen Wan
Error correcting codes may be classified into two broad categories: block codes and tree codes. A block code
is a mapping of k input binary symbols into n output binary symbols. Consequently, the block coder is a
memoryless device. Since n > k, the code can be selected to provide redundancy, such as parity bits, which are
used by the decoder to provide some error detection and error correction. The codes are denoted by (n, k),
where the code rate R is defined by R = k/n. Practical values of R range from 1/4 to 7/8, and k ranges from 3
to several hundred [Clark and Cain, 1981]. Some properties of block codes are given in Table 70.1.
A tree code is produced by a coder that has memory. Convolutional codes are a subset of tree codes. The
convolutional coder accepts k binary symbols at its input and produces n binary symbols at its output, where
the n output symbols are affected by v + k input symbols. Memory is incorporated since v > 0. The code rate
is defined by R = k/n. Typical values for k and n range from 1 to 8, and the values for v range from 2 to 60.
The range of R is between 1/4 and 7/8 [Clark and Cain, 1981].
Block Codes
In a block code, the n code digits generated in a particular time unit depend only on the k message digits within that time unit. Some of the errors can be detected and corrected if d ≥ s + t + 1, where s is the number of errors that can be detected, t is the number of errors that can be corrected, and d is the Hamming distance. Usually s ≥ t; thus, d ≥ 2t + 1. A general code word can be expressed as a_1, a_2, ..., a_k, c_1, c_2, ..., c_r, where k is the number of information bits and r is the number of check bits. The total word length is n = k + r.
In Fig. 70.1, the gains h_ij (i = 1, 2, ..., r; j = 1, 2, ..., k) are elements of the parity-check matrix H. The k data bits are shifted in each time, while k + r bits are simultaneously shifted out by the commutator.
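To make the role of the parity-check gains concrete, here is a minimal sketch of systematic block encoding in Python, assuming an illustrative (7, 4) Hamming code; the particular gain matrix below is an example choice, not the matrix of Fig. 70.1.

```python
import numpy as np

# Systematic (7,4) block encoder: code word = [a_1..a_k, c_1..c_r], where each
# check bit c_i is a modulo-2 sum of the data bits weighted by the gains h_ij
# (the H entries below are an illustrative choice).
k, r = 4, 3
H_data = np.array([[1, 1, 0, 1],      # h_1j
                   [1, 0, 1, 1],      # h_2j
                   [0, 1, 1, 1]])     # h_3j

def encode(data_bits):
    data = np.asarray(data_bits)
    checks = (H_data @ data) % 2      # c_i = sum_j h_ij * a_j (mod 2)
    return np.concatenate([data, checks])

print(encode([1, 0, 1, 1]))           # -> [1 0 1 1 0 1 0]
```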
Cyclic Codes
Cyclic codes are block codes such that another code word can be obtained by taking any one code word, shifting
the bits to the right, and placing the dropped-off bits on the left. An encoding circuit with (n – k) shift registers
is shown in Fig. 70.2.
Richard C. Dorf
University of California, Davis
Zhen Wan
University of California, Davis
L. B. Milstein
University of California
M. K. Simon
Jet Propulsion Laboratory
In Fig. 70.2, the gains g_k are the coefficients of the generator polynomial g(x) = x^(n–k) + g_1 x^(n–k–1) + ... + g_(n–k–1) x + 1. The gains g_k are either 0 or 1. The k data digits are shifted in one at a time at the input with the switch s held at position p_1. The symbol D represents a one-digit delay. As the data digits move through the encoder, they are also shifted out onto the output lines, because the first k digits of the code word are the data digits themselves. As soon as the last (or kth) data digit clears the last (n – k) register, all the registers contain the parity-check digits. The switch s is now thrown to position p_2, and the n – k parity-check digits are shifted out one at a time onto the line.
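The shift-register encoder of Fig. 70.2 computes the parity-check digits as the remainder of x^(n–k)·d(x) divided by g(x). A minimal sketch of that division is shown below, assuming the illustrative (7, 4) code with g(x) = x^3 + x + 1.

```python
# Systematic cyclic encoding: the parity digits are the remainder of
# x^(n-k) * d(x) divided by g(x), with all arithmetic modulo 2.
def cyclic_encode(data, gen):
    """data, gen: lists of bits, highest-order coefficient first."""
    r = len(gen) - 1                      # number of parity digits (n - k)
    rem = data + [0] * r                  # coefficients of x^(n-k) * d(x)
    for i in range(len(data)):            # long division over GF(2)
        if rem[i] == 1:
            for j, g in enumerate(gen):
                rem[i + j] ^= g
    return data + rem[-r:]                # code word = data digits + parity digits

# Example: (7, 4) code with g(x) = x^3 + x + 1; data [1,0,0,1] gives parity [1,1,0]
print(cyclic_encode([1, 0, 0, 1], [1, 0, 1, 1]))
```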
TABLE 70.1  Properties of Block Codes^a

Property                      BCH                            Reed–Solomon          Hamming        Maximal Length
Block length                  n = 2^m – 1, m = 3, 4, 5, ...  n = m(2^m – 1) bits   n = 2^m – 1    n = 2^m – 1
Number of parity bits                                        r = 2mt bits          r = m
Minimum distance              d ≥ 2t + 1                     d = m(2t + 1) bits    d = 3          d = 2^(m–1)
Number of information bits    k ≥ n – mt                                                          k = m

^a m is any positive integer unless otherwise indicated; n is the block length; k is the number of information bits; t is the number of errors that can be corrected; r is the number of parity bits; d is the distance.
FIGURE 70.1  An encoding circuit of an (n, k) block code.
FIGURE 70.2  An encoder for systematic cyclic code. (Source: B.P. Lathi, Modern Digital and Analog Communications, New York: CBS College Publishing, 1983. With permission.)
Examples of cyclic and related codes are
1. Bose–Chaudhuri–Hocquenghem (BCH)
2. Reed–Solomon
3. Hamming
4. Maximal length
5. Reed–Muller
6. Golay codes
Convolutional Codes
In a convolutional code, the block of n code digits generated by the encoder in a particular time unit depends not only on the block of k message digits within that time unit but also on the block of data digits within a previous span of N – 1 time units (N > 1). A convolutional encoder is illustrated in Fig. 70.3.
Here k bits (one input frame) are shifted in each time, and concurrently n bits (the output frame) are shifted
out, where n > k. Thus, every k-bit input frame produces an n-bit output frame. Redundancy is provided in
the output, since n > k. Also, there is memory in the coder, since the output frame depends on the previous
K input frames where K > 1. The code rate is R = k/n, which is 3/4 in this illustration. The constraint length,
K, is the number of input frames that are held in the kK bit shift register. Depending on the particular
convolutional code that is to be generated, data from the kK stages of the shift register are added (modulo 2)
and used to set the bits in the n-stage output register.
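A small sketch of the shift-register operation just described, assuming a simple rate-1/2 code (k = 1, n = 2, constraint length K = 3) with common textbook generator taps; it is not the specific k = 3, n = 4, K = 5 code of Fig. 70.3, whose tap connections are not given in the text.

```python
# Rate-1/2 convolutional encoder (k = 1, n = 2, K = 3), generators 7 and 5 (octal).
def conv_encode(bits, taps=((1, 1, 1), (1, 0, 1))):
    state = [0, 0]                       # the K - 1 previous input bits
    out = []
    for b in bits:
        window = [b] + state             # current frame plus shift-register contents
        for g in taps:                   # each output bit is a modulo-2 sum of taps
            out.append(sum(t & w for t, w in zip(g, window)) % 2)
        state = [b] + state[:-1]         # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))         # 8 coded bits for 4 input bits (R = 1/2)
```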
Code Performance
The improvement in the performance of a digital communication system that can be achieved by the use of
coding is illustrated in Fig. 70.4. It is assumed that a digital signal plus channel noise is present at the receiver
input. The performance of a system that uses binary-phase-shift-keyed (BPSK) signaling is shown both for the
case when coding is used and for the case when there is no coding. For the BPSK no-code case, P_e = Q(√(2E_b/N_0)). For the coded case a (23,12) Golay code is used; P_e is the probability of bit error—also called the bit error rate (BER)—that is measured at the receiver output.
FIGURE 70.3  Convolutional encoding (k = 3, n = 4, K = 5, and R = 3/4).
Trellis-Coded Modulation
Trellis-coded modulation (TCM) combines multilevel modulation and coding to achieve coding gain without
bandwidth expansion [Ungerboeck, 1982, 1987]. TCM has been adopted for use in the new CCITT V.32 modem
that allows an information data rate of 9600 b/s (bits per second) to be transmitted over VF (voice frequency)
lines. This TCM scheme has a coding gain of 4 dB [Wei, 1984]. The combined modulation and coding operation of
TCM is shown in Fig. 70.5(b). Here, the serial data from the source, m(t), are converted into parallel (m-bit)
FIGURE 70.4  Performance of digital systems—with and without coding. E_b/N_0 is the energy-per-bit to noise-density ratio at the receiver input. The function Q(x) is Q(x) ≅ (1/(x√(2π))) e^(–x²/2).
TABLE 70.2  Coding Gains with BPSK or QPSK

Coding Technique Used                                            Coding Gain (dB) at 10^–5 BER   Coding Gain (dB) at 10^–8 BER   Data Rate Capability
Ideal coding                                                     11.2                            13.6
Concatenated Reed–Solomon and convolutional (Viterbi decoding)   6.5–7.5                         8.5–9.5                         Moderate
Convolutional with sequential decoding (soft decisions)          6.0–7.0                         8.0–9.0                         Moderate
Block codes (soft decisions)                                     5.0–6.0                         6.5–7.5                         Moderate
Concatenated Reed–Solomon and short block                        4.5–5.5                         6.5–7.5                         Very high
Convolutional with Viterbi decoding                              4.0–5.5                         5.0–6.5                         High
Convolutional with sequential decoding (hard decisions)          4.0–5.0                         6.0–7.0                         High
Block codes (hard decisions)                                     3.0–4.0                         4.5–5.5                         High
Block codes with threshold decoding                              2.0–4.0                         3.5–5.5                         High
Convolutional with threshold decoding                            1.5–3.0                         2.5–4.0                         Very high

BPSK: modulation technique—binary phase-shift keying; QPSK: modulation technique—quadrature phase-shift keying; BER: bit error rate.
Source: V.K. Bhargava, "Forward error correction schemes for digital communications," IEEE Communications Magazine, 21, 11–19, © 1983 IEEE. With permission.
data, which are partitioned into k-bit and (m – k)-bit words, where k ≤ m. The k-bit words (frames) are convolutionally encoded into (n = k + 1)-bit words so that the code rate is R = k/(k + 1). The amplitude and phase are then set jointly on the basis of the coded n-bit word and the uncoded (m – k)-bit word. Almost 6 dB of coding gain can be realized if coders of constraint length 9 are used.
Defining Terms
Block code: A mapping of k input binary symbols into n output binary symbols.
Convolutional code: A subset of tree codes, accepting k binary symbols at its input and producing n binary
symbols at its output.
Cyclic code: Block code such that another code word can be obtained by taking any one code word, shifting
the bits to the right, and placing the dropped-off bits on the left.
Tree code: Produced by a coder that has memory.
Related Topics
69.1 Modulation • 70.2 Equalization
GEORGE ANSON HAMILTON
(1843–1935)

Telegraphy captivated George Hamilton's interest while he was still a boy — to the extent that he built a small telegraph line himself, from sinking the poles to making the necessary apparatus. By the time he was 17, he was the manager of the telegraph office of the Atlantic & Great Western Railroad at Ravenna, Ohio. Hamilton continued to hold managerial positions with telegraph companies until 1873, when he became assistant to Moses G. Farmer in his work on general electrical apparatus and machinery.
In 1875, Hamilton joined Western Union as assistant electrician and, for the next two years, worked with Gerritt Smith in establishing and maintaining the first quadruplex telegraph circuits in both America and England. He then focused on the development of the Wheatstone high-speed automatic system and was also the chief electrician on the Key West–Havana cable repair expedition. Hamilton left Western Union in 1889, however, to join Western Electric, where he was placed in charge of the production of fine electrical instruments until the time of his retirement. (Courtesy of the IEEE Center for the History of Electrical Engineering.)
References
V.K. Bhargava, "Forward error correction schemes for digital communications," IEEE Communications Magazine, vol. 21, pp. 11–19, 1983.
G.C. Clark and J.B. Cain, Error-Correction Coding for Digital Communications, New York: Plenum, 1981.
L.W. Couch, Digital and Analog Communication Systems, New York: Macmillan, 1990.
B.P. Lathi, Modern Digital and Analog Communication, New York: CBS College Publishing, 1983.
G. Ungerboeck, “Channel coding with multilevel/phase signals,” IEEE Transactions on Information Theory, vol.
IT-28 (January), pp. 55–67, 1982.
G. Ungerboeck, “Trellis-coded modulation with redundant signal sets,” Parts 1 and 2, IEEE Communications
Magazine, vol. 25, no. 2 (February), pp. 5–21, 1987.
L. Wei, “Rotationally invariant convolutional channel coding with expanded signal space—Part II: Nonlinear
codes,” IEEE Journal on Selected Areas in Communications, vol. SAC-2, no. 2, pp. 672–686, 1984.
Further Information
For further information refer to IEEE Communications and IEEE Journal on Selected Areas in Communications.
70.2 Equalization
Richard C. Dorf and Zhen Wan
In bandwidth-efficient digital communication systems the effect of each symbol transmitted over a time
dispersive channel extends beyond the time interval used to represent that symbol. The distortion caused by
the resulting overlap of received symbols is called intersymbol interference (ISI) [Lucky et al., 1968]. ISI arises
in all pulse-modulation systems, including frequency-shift keying (FSK), phase-shift keying (PSK), and quadra-
ture amplitude modulation (QAM) [Lucky et al., 1968]. However, its effect can be most easily described for a
baseband PAM system.
The purpose of an equalizer, placed in the path of the received signal, is to reduce the ISI as much as possible
to maximize the probability of correct decisions.
FIGURE 70.5  Transmitters for conventional coding and for TCM.
Linear Transversal Equalizers
Among the many structures used for equalization, the simplest is the transversal (tapped delay line or nonre-
cursive) equalizer shown in Fig. 70.6. In such an equalizer the current and past values r(t – nT) of the received
signal are linearly weighted by equalizer coefficients (tap gains) c
n
and summed to produce the output. In the
commonly used digital implementation, samples of the received signal at the symbol rate are stored in a digital
shift register (or memory), and the equalizer output samples (sums of products) z(t
0
+ kT) or z
k
are computed
digitally, once per symbol, according to
where N is the number of equalizer coefficients and t
0
denotes sample timing.
The equalizer coefficients, c_n, n = 0, 1, ..., N – 1, may be chosen to force the samples of the combined channel and equalizer impulse response to zero at all but one of the N T-spaced instants in the span of the equalizer.
Such an equalizer is called a zero-forcing (ZF) equalizer [Lucky, 1965].
If we let the number of coefficients of a ZF equalizer increase without bound, we would obtain an infinite-
length equalizer with zero ISI at its output. An infinite-length zero-ISI equalizer is simply an inverse filter, which
inverts the folded frequency response of the channel. Clearly, the ZF criterion neglects the effect of noise
altogether. A finite-length ZF equalizer is approximately inverse to the folded frequency response of the channel.
Also, a finite-length ZF equalizer is guaranteed to minimize the peak distortion or worst-case ISI only if the
peak distortion before equalization is less than 100% [Lucky, 1965].
The least-mean-squared (LMS) equalizer [Lucky et al.,1968] is more robust. Here the equalizer coefficients
are chosen to minimize the mean squared error (MSE)—the sum of squares of all the ISI terms plus the noise
power at the output of the equalizer. Therefore, the LMS equalizer maximizes the signal-to-distortion ratio
(S/D) at its output within the constraints of the equalizer time span and the delay through the equalizer.
Automatic Synthesis
Before regular data transmission begins, automatic synthesis of the ZF or LMS equalizers for unknown channels
may be carried out during a training period. During the training period, a known signal is transmitted and a
synchronized version of this signal is generated in the receiver to acquire information about the channel
characteristics. The automatic adaptive equalizer is shown in Fig. 70.7. A noisy but unbiased estimate:
FIGURE 70.6  Linear transversal equalizer. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 648. With permission.)
$$\frac{\partial e_k^2}{\partial c_n(k)} = 2 e_k\, r(t_0 + kT - nT)$$
is used. Thus, the tap gains are updated according to

$$c_n(k+1) = c_n(k) - \Delta\, e_k\, r(t_0 + kT - nT), \qquad n = 0, 1, \ldots, N-1$$

where c_n(k) is the nth tap gain at time k, e_k is the error signal, and Δ is a positive adaptation constant or step size. Error signals e_k = z_k – q_k can be computed at the equalizer output and used to adjust the equalizer coefficients to reduce the sum of the squared errors. Note that q_k = x̂_k.
The most popular equalizer adjustment method involves updates to each tap gain during each symbol
interval. The adjustment to each tap gain is in a direction opposite to an estimate of the gradient of the MSE
with respect to that tap gain. The idea is to move the set of equalizer coefficients closer to the unique optimum
set corresponding to the minimum MSE. This symbol-by-symbol procedure developed by Widrow and Hoff
[Feher, 1987] is commonly referred to as the stochastic gradient method.
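A minimal sketch of the stochastic gradient (LMS) procedure described above, operating on symbol-spaced samples; the channel, step size Δ, and training length below are illustrative assumptions, not values from the text.

```python
import numpy as np

# LMS adaptation of a symbol-spaced transversal equalizer during training.
rng = np.random.default_rng(0)
channel = np.array([0.1, 1.0, 0.3])          # assumed dispersive channel (causes ISI)
N, delta, n_train = 11, 0.02, 5000           # taps, step size, training symbols

x = rng.choice([-1.0, 1.0], size=n_train)    # known training symbols (q_k = x_k)
r = np.convolve(x, channel)[:n_train] + 0.05 * rng.standard_normal(n_train)

c = np.zeros(N); c[N // 2] = 1.0             # tap gains c_n, "center spike" start
d = N // 2 + 1                               # assumed overall (channel + equalizer) delay
errs = []
for k in range(N, n_train):
    window = r[k - N + 1 : k + 1][::-1]      # r(t0 + kT - nT), n = 0, ..., N-1
    z_k = c @ window                         # equalizer output z_k
    e_k = z_k - x[k - d]                     # error against the delayed training symbol
    c -= delta * e_k * window                # c_n(k+1) = c_n(k) - Δ e_k r(t0 + kT - nT)
    errs.append(e_k ** 2)
print("MSE over last 500 symbols:", np.mean(errs[-500:]))
```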
Adaptive Equalization
After the initial training period (if there is one), the coefficients of an adaptive equalizer may be continually
adjusted in a decision-directed manner. In this mode the error signal e_k = z_k – q_k is derived from the final (not necessarily correct) receiver estimate {q_k} of the transmitted sequence {x_k}, where q_k is the estimate of x_k. In
normal operation the receiver decisions are correct with high probability, so that the error estimates are correct
often enough to allow the adaptive equalizer to maintain precise equalization. Moreover, a decision-directed
adaptive equalizer can track slow variations in the channel characteristics or linear perturbations in the receiver
front end, such as slow jitter in the sampler phase.
Nonlinear Equalizers
Decision-Feedback Equalizers
A decision-feedback equalizer (DFE) is a simple nonlinear equalizer [Monsen, 1971], which is particularly
useful for channels with severe amplitude distortion and uses decision feedback to cancel the interference from
symbols which have already been detected. Fig. 70.8 shows the diagram of the equalizer.
The equalized signal is the sum of the outputs of the forward and feedback parts of the equalizer. The forward
part is like the linear transversal equalizer discussed earlier. Decisions made on the equalized signal are fed back
via a second transversal filter. The basic idea is that if the values of the symbols already detected are known
(past decisions are assumed to be correct), then the ISI contributed by these symbols can be canceled exactly,
by subtracting past symbol values with appropriate weighting from the equalizer output.
The forward and feedback coefficients may be adjusted simultaneously to minimize the MSE. The update
equation for the forward coefficients is the same as for the linear equalizer. The feedback coefficients are adjusted
according to
$$b_m(k+1) = b_m(k) + \Delta\, e_k\, \hat{x}_{k-m}, \qquad m = 1, \ldots, M$$

where x̂_k is the kth symbol decision, b_m(k) is the mth feedback coefficient at time k, and there are M feedback coefficients in all. The optimum LMS settings of b_m, m = 1, ..., M, are those that reduce the ISI to zero, within the span of the feedback part, in a manner similar to a ZF equalizer.
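The following sketch applies the forward and feedback updates given above to a simple simulated channel; the channel taps, filter lengths, and step size are assumptions chosen only for illustration.

```python
import numpy as np

# Decision-feedback equalizer trained with the LMS updates given above.
rng = np.random.default_rng(1)
channel = np.array([1.0, 0.5, 0.2])          # main tap first, trailing (postcursor) ISI
Nf, M, delta, n_sym = 7, 2, 0.01, 5000

x = rng.choice([-1.0, 1.0], size=n_sym)
r = np.convolve(x, channel)[:n_sym] + 0.05 * rng.standard_normal(n_sym)

c = np.zeros(Nf); c[0] = 1.0                 # forward (transversal) taps
b = np.zeros(M)                              # feedback taps b_m
x_hat = np.zeros(n_sym)                      # past symbol decisions
for k in range(Nf, n_sym):
    fwd_in = r[k - Nf + 1 : k + 1][::-1]     # forward-filter input samples
    past = x_hat[k - M : k][::-1]            # decisions x̂_{k-1}, ..., x̂_{k-M}
    z = c @ fwd_in - b @ past                # forward output minus fed-back ISI estimate
    x_hat[k] = 1.0 if z >= 0 else -1.0       # memoryless decision
    e = z - x[k]                             # training-mode error (known symbols assumed)
    c -= delta * e * fwd_in                  # forward update, as for the linear equalizer
    b += delta * e * past                    # b_m(k+1) = b_m(k) + Δ e_k x̂_{k-m}
print("decision errors in last 1000 symbols:", int(np.sum(x_hat[-1000:] != x[-1000:])))
```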
FIGURE 70.7  Automatic adaptive equalizer. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 651. With permission.)
Fractionally Spaced Equalizers
The optimum receive filter in a linear modulation system is the cascade of a filter matched to the actual channel,
with a transversal T-spaced equalizer [Forney, 1972]. The fractionally spaced equalizer (FSE), by virtue of its
sampling rate, can synthesize the best combination of the characteristics of an adaptive matched filter and a
T-spaced equalizer, within the constraints of its length and delay. A T-spaced equalizer, with symbol-rate
sampling at its input, cannot perform matched filtering. A fractionally spaced equalizer can effectively compen-
sate for more severe delay distortion and deal with amplitude distortion with less noise enhancement than a
T-equalizer.
A fractionally spaced transversal equalizer [Monsen, 1971] is shown in Fig. 70.9. The delay-line taps of such an equalizer are spaced at an interval τ which is less than, or a fraction of, the symbol interval T. The tap spacing τ is typically selected such that the bandwidth occupied by the signal at the equalizer input is |f| <
FIGURE 70.8  Decision-feedback equalizer. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 655. With permission.)
FIGURE 70.9  Fractionally spaced equalizer. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 656. With permission.)
1/(2τ); that is, τ-spaced sampling satisfies the sampling theorem. In an analog implementation, there is no other restriction on τ, and the output of the equalizer can be sampled at the symbol rate. In a digital implementation τ must be KT/M, where K and M are integers and M > K. (In practice, it is convenient to choose τ = T/M, where M is a small integer, e.g., 2.) The received signal is sampled and shifted into the equalizer delay line at a rate M/T, and one output is produced each symbol interval (for every M input samples). In general, the equalizer output is given by

$$z_k = \sum_{n=0}^{N-1} c_n\, r\!\left(t_0 + kT - n\frac{KT}{M}\right)$$

The coefficients of a KT/M equalizer may be updated once per symbol based on the error computed for that symbol, according to

$$c_n(k+1) = c_n(k) - \Delta\, e_k\, r\!\left(t_0 + kT - n\frac{KT}{M}\right), \qquad n = 0, 1, \ldots, N-1$$
Linear Receivers
When the channel does not introduce any amplitude distortion, the linear receiver is optimum with respect to
the ultimate criterion of minimum probability of symbol error. The conventional linear receiver consists of a
matched filter, a symbol-rate sampler, an infinite-length T-spaced equalizer, and a memoryless detector. The
linear receiver structure is shown in Fig. 70.10.
In the conventional linear receiver, a memoryless threshold detector is sufficient to minimize the probability
of error; the equalizer response is designed to satisfy the zero-ISI constraint, and the matched filter is designed
to minimize the effect of the noise while maximizing the signal.
Matched Filter
The matched filter is the linear filter that maximizes (S/N)_out = s_0²(t)/n_0²(t) of Fig. 70.11 and has a transfer function given by

$$H(f) = K\,\frac{S^{*}(f)}{P_n(f)}\, e^{-j\omega t_0}$$

where S(f) = F[s(t)] is the Fourier transform of the known input signal s(t) of duration T sec, P_n(f) is the PSD of the input noise, t_0 is the sampling time when (S/N)_out is evaluated, and K is an arbitrary real nonzero constant.
A general representation for a matched filter is illustrated in Fig. 70.11. The input signal is denoted by s(t) and the output signal by s_0(t). Similar notation is used for the noise.
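For the common white-noise case, P_n(f) is constant and the matched filter reduces to a filter whose impulse response is a time-reversed (and delayed) copy of s(t). The sketch below illustrates this correlation view with an assumed rectangular pulse and noise level.

```python
import numpy as np

# Matched filtering for white noise: h(t) = K s(T - t), i.e., correlate the
# received waveform with a time-reversed copy of the known pulse s(t).
rng = np.random.default_rng(0)
s = np.ones(20)                        # known signal s(t): rectangular pulse, 20 samples
h = s[::-1]                            # matched-filter impulse response (K = 1)

r = np.zeros(200)
r[60:80] = s                           # pulse arrives at sample 60
r += 0.3 * rng.standard_normal(200)    # additive white noise

y = np.convolve(r, h)                  # matched-filter output
print("output peaks near sample", int(np.argmax(y)))   # near sample 79, the end of the pulse
```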
Nonlinear Receivers
When amplitude distortion is present in the channel, a memoryless detector operating on the output of this receiver
filter no longer minimizes symbol error probability. Recognizing this fact, several authors have investigated optimum
or approximately optimum nonlinear receiver structures subject to a variety of criteria [Lucky, 1973].
FIGURE 70.10  Conventional linear receiver.
Decision-Feedback Equalizers
A DFE takes advantage of the symbols that have already been detected (correctly with high probability) to
cancel the ISI due to these symbols without noise enhancement. A DFE makes memoryless decisions and cancels
all trailing ISI terms. Even when the whitened matched filter (WMF) is used as the receive filter for the DFE,
the DFE suffers from a reduced effective signal-to-noise ratio, and error propagation, due to its inability to
defer decisions.
An infinite-length DFE receiver takes the general form (shown in Fig. 70.12) of a forward linear receive filter,
symbol-rate sampler, canceler, and memoryless detector. The symbol-rate output of the detector is then used
by the feedback filter to generate future outputs for cancellation.
Adaptive Filters for MLSE
For unknown and/or slowly time-varying channels, the receive filter must be adaptive in order to obtain the
ultimate performance gain from MLSE (maximum-likelihood sequence estimation). Secondly, the complexity
of the MLSE becomes prohibitive for practical channels with a large number of ISI terms. Therefore, in a
practical receiver, an adaptive receive filter may be used prior to Viterbi detection to limit the time spread of
the channel as well as to track slow time variation in the channel characteristics [Falconer and Magee, 1973].
Several adaptive receive filters are available that minimize the MSE at the input to the Viterbi algorithm.
These methods differ in the form of constraint [Falconer and Magee, 1973] on the desired impulse response
(DIR) which is necessary in this optimization process to exclude the selection of the null DIR corresponding
to no transmission through the channel. The general form of such a receiver is shown in Fig. 70.13.
FIGURE 70.11  Matched filter.
FIGURE 70.12  Conventional decision-feedback receiver. (Source: K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 675. With permission.)
One such constraint is to restrict the DIR to be causal and to restrict the first coefficient of the DIR to be
unity. In this case the delay (LT) in Fig. 70.13 is equal to the delay through the Viterbi algorithm and the first
coefficient of {b_k} is constrained to be unity.
The least restrictive constraint on the DIR is the unit energy constraint proposed by Falconer and Magee
[1973]. This leads to yet another form of the receiver structure as shown in Fig. 70.13. However, the adaptation
algorithm for updating the DIR coefficients {b_k} is considerably more complicated [Falconer and Magee, 1973].
Note that the fixed predetermined WMF and T-spaced prefilter combination of Falconer and Magee [1973]
has been replaced in Fig. 70.13 by a general fractionally spaced adaptive filter.
Defining Terms
Equalizer: A filter used to reduce the effect of intersymbol interference.
Intersymbol interference: The distortion caused by the overlap (in time) of adjacent symbols.
Related Topic
70.1 Coding
References
L.W. Couch, Digital and Analog Communication Systems, New York: Macmillan, 1990.
D.D. Falconer and F.R. Magee, Jr., "Adaptive channel memory truncation for maximum likelihood sequence estimation," Bell Syst. Tech. Journal, vol. 52, pp. 1541–1562, November 1973.
K. Feher, Advanced Digital Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987.
G.D. Forney, Jr., "Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference," IEEE Trans. Information Theory, vol. IT-18, pp. 363–378, May 1972.
R.W. Lucky, “Automatic equalization for digital communication,” Bell Syst. Tech. Journal, vol. 44, pp. 547–588,
April 1965.
R.W. Lucky, “A survey of the communication theory literature: 1968–1973,” IEEE Trans. Information Theory,
vol. 52, pp. 1483–1519, November 1973.
R.W. Lucky, J. Salz, and E.J. Weldon, Jr., Principles of Data Communication, New York: McGraw-Hill, 1968.
P. Monsen, “Feedback equalization for fading dispersive channels,” IEEE Trans. Information Theory, vol. IT-17,
pp. 56–64, January 1971.
FIGURE 70.13 General form of adaptive MLSE receiver with finite-length DIR. (Source: K. Feher, Advanced Digital
Communications, Englewood Cliffs, N.J.: Prentice-Hall, 1987, p. 684. With permission.)
70.3 Spread Spectrum Communications¹
L.B. Milstein and M.K. Simon
A Brief History
Spread spectrum (SS) has its origin in the military arena where the friendly communicator is (1) susceptible
to detection/interception by the enemy and (2) vulnerable to intentionally introduced unfriendly interference
(jamming). Communication systems that employ spread spectrum to reduce the communicator’s detectability
and combat the enemy-introduced interference are respectively referred to as low probability of intercept (LPI)
and antijam (AJ) communication systems. With the change in the current world political situation wherein
the U.S. Department of Defense (DOD) has reduced its emphasis on the development and acquisition of new
communication systems for the original purposes, a host of new commercial applications for SS has evolved,
particularly in the area of cellular mobile communications. This shift from military to commercial applications
of SS has demonstrated that the basic concepts that make SS techniques so useful in the military can also be
put to practical peacetime use. In the next section, we give a simple description of these basic concepts using
the original military application as the basis of explanation. The extension of these concepts to the mentioned
commercial applications will be treated later on in the chapter.
Why Spread Spectrum?
Spread spectrum is a communication technique wherein the transmitted modulation is spread (increased) in
bandwidth prior to transmission over the channel and then despread (decreased) in bandwidth by the same
amount at the receiver. If it were not for the fact that the communication channel introduces some form of
narrowband (relative to the spread bandwidth) interference, the receiver performance would be transparent to
the spreading and despreading operations (assuming that they are identical inverses of each other). That is,
after despreading the received signal would be identical to the transmitted signal prior to spreading. In the
presence of narrowband interference, however, there is a significant advantage to employing the spread-
ing/despreading procedure described. The reason for this is as follows. Since the interference is introduced after
the transmitted signal is spread, then, whereas the despreading operation at the receiver shrinks the desired
signal back to its original bandwidth, at the same time it spreads the undesired signal (interference) in bandwidth
by the same amount, thus reducing its power spectral density. This, in turn, serves to diminish the effect of
the interference on the receiver performance, which depends on the amount of interference power in the spread
bandwidth. It is indeed this very simple explanation that is at the heart of all spread spectrum techniques.
Basic Concepts and Terminology
To describe this process analytically and at the same time introduce some terminology that is common in spread spectrum parlance, we proceed as follows. Consider a communicator that desires to send a message using a transmitted power S Watts (W) at an information rate R_b bits/s (bps). By introducing a SS modulation, the bandwidth of the transmitted signal is increased from R_b Hz to W_ss Hz, where W_ss ≫ R_b denotes the spread spectrum bandwidth. Assume that the channel introduces, in addition to the usual thermal noise (assumed to have a single-sided power spectral density (PSD) equal to N_0 W/Hz), an additive interference (jamming) having power J distributed over some bandwidth W_J. After despreading, the desired signal bandwidth is once again equal to R_b Hz and the interference PSD is now N_J = J/W_ss. Note that since the thermal noise is assumed to be white, i.e., it is uniformly distributed over all frequencies, its PSD is unchanged by the despreading operation and, thus, remains equal to N_0. Regardless of the signal and interferer waveforms, the equivalent bit energy-to-total noise ratio is, in terms of the given parameters,
¹The material in this article was previously published by CRC Press in The Mobile Communications Handbook, Jerry D. Gibson, Editor-in-Chief, 1996.
$$\frac{E_b}{N_t} = \frac{E_b}{N_0 + N_J} = \frac{S/R_b}{N_0 + J/W_{ss}} \qquad (70.1)$$
For most practical scenarios, the jammer limits performance and, thus, the effects of receiver noise in the channel can be ignored. Thus, assuming N_J ≫ N_0, we can rewrite Eq. (70.1) as
$$\frac{E_b}{N_t} \cong \frac{E_b}{N_J} = \frac{S/R_b}{J/W_{ss}} = \frac{W_{ss}/R_b}{J/S} \qquad (70.2)$$
where the ratio J/S is the jammer-to-signal power ratio and the ratio W_ss/R_b is the spreading ratio and is defined as the processing gain of the system. Since the ultimate error probability performance of the communication receiver depends on the ratio E_b/N_J, we see that from the communicator's viewpoint his goal should be to minimize J/S (by choice of S) and maximize the processing gain (by choice of W_ss for a given desired information
rate). The possible strategies for the jammer will be discussed in the section on military applications dealing
with AJ communications.
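A short numerical illustration of Eq. (70.2); the jammer-to-signal ratio, spread bandwidth, and data rate below are assumed values chosen only to show the arithmetic.

```python
import math

# Illustrative numbers (assumptions, not from the text): a jammer 20 dB stronger
# than the signal and a spreading ratio W_ss/R_b of 1000 (30 dB of processing gain).
J_over_S_dB = 20.0
Wss, Rb = 10e6, 10e3                  # 10 MHz spread bandwidth, 10 kbps data rate
processing_gain = Wss / Rb            # = 1000

Eb_over_NJ = processing_gain / (10 ** (J_over_S_dB / 10))     # Eq. (70.2)
print("Eb/NJ =", Eb_over_NJ, "=", 10 * math.log10(Eb_over_NJ), "dB")   # 10 -> 10 dB
```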
Spread Spectrum Techniques
By far the two most popular spreading techniques are direct sequence (DS) modulation and frequency hopping
(FH) modulation. In the following subsections, we present a brief description of each.
Direct Sequence Modulation
A direct sequence modulation c(t) is formed by linearly modulating the output sequence {c
n
} of a pseudorandom
number generator onto a train of pulses, each having a duration T
c
called the chip time. In mathematical form,
$$c(t) = \sum_{n=-\infty}^{\infty} c_n\, p(t - nT_c) \qquad (70.3)$$
where p(t) is the basic pulse shape and is assumed to be of rectangular form. This type of modulation is usually used with binary phase-shift-keyed (BPSK) information signals, which have the complex form d(t) exp{j(2πf_c t + θ_c)}, where d(t) is a binary-valued data waveform of rate 1/T_b bits/s and f_c and θ_c are the frequency and phase of the data-modulated carrier, respectively. As such, a DS/BPSK signal is formed by multiplying the BPSK signal by c(t) (see Fig. 70.14), resulting in the real transmitted signal
$$x(t) = \mathrm{Re}\left\{ c(t)\, d(t)\, \exp\!\left[ j\left(2\pi f_c t + \theta_c\right) \right] \right\} \qquad (70.4)$$
Since T_c is chosen so that T_b ≫ T_c, then relative to the bandwidth of the BPSK information signal, the bandwidth of the DS/BPSK signal² is effectively increased by the ratio T_b/T_c = W_ss/2R_b, which is one-half the spreading factor or processing gain of the system. At the receiver, the sum of the transmitted DS/BPSK signal and the channel interference I(t) (as discussed before, we ignore the presence of the additive thermal noise) is ideally multiplied by the identical DS modulation (this operation is known as despreading), which returns the DS/BPSK signal to its original BPSK form, whereas the real interference signal is now the real wideband signal Re{I(t)c(t)}. In the previous sentence, we used the word ideally, which implies that the PN waveform used for despreading at the receiver is identical to that used for spreading at the transmitter. This simple implication covers up a
²For the usual case of a rectangular spreading pulse p(t), the PSD of the DS/BPSK modulation will have a (sin x/x)² form with first zero crossing at 1/T_c, which is nominally taken as one-half the spread spectrum bandwidth W_ss.
multitude of tasks that a practical DS receiver must perform. In particular, the receiver must first acquire the
PN waveform. That is, the local PN random generator that generates the PN waveform at the receiver used for
despreading must be aligned (synchronized) to within one chip of the PN waveform of the received DS/BPSK
signal. This is accomplished by employing some sort of search algorithm which typically steps the local PN
waveform sequentially in time by a fraction of a chip (e.g., half a chip) and at each position searches for a high
degree of correlation between the received and local PN reference waveforms. The search terminates when the
correlation exceeds a given threshold, which is an indication that the alignment has been achieved. After bringing
the two PN waveforms into coarse alignment, a tracking algorithm is employed to maintain fine alignment.
The most popular forms of tracking loops are the continuous time delay-locked loop and its time-multiplexed
version, the tau-dither loop. It is the difficulty in synchronizing the receiver PN generator to subnanosecond
accuracy that limits PN chip rates to values on the order of hundreds of Mchips/s, which implies the same
limitation on the DS spread spectrum bandwidth W
ss
.
Frequency Hopping Modulation
A frequency hopping (FH) modulation c(t) is formed by nonlinearly modulating a train of pulses with a sequence of pseudorandomly generated frequency shifts {f_n}. In mathematical terms, c(t) has the complex form
$$c(t) = \sum_{n=-\infty}^{\infty} \exp\!\left\{ j\left(2\pi f_n t + \phi_n\right) \right\} p(t - nT_h) \qquad (70.5)$$
where p(t) is again the basic pulse shape having a duration T_h, called the hop time, and {φ_n} is a sequence of random phases associated with the generation of the hops. FH modulation is traditionally used with multiple-frequency-shift-keyed (MFSK) information signals, which have the complex form exp{j[2π(f_c + d(t))t]}, where d(t) is an M-level digital waveform (M denotes the symbol alphabet size) representing the information frequency modulation at a rate 1/T_s symbols/s (sps). As such, an FH/MFSK signal is formed by complex multiplying the MFSK signal by c(t), resulting in the real transmitted signal
$$x(t) = \mathrm{Re}\left\{ c(t)\, \exp\!\left[ j 2\pi \left(f_c + d(t)\right) t \right] \right\} \qquad (70.6)$$
In reality, c(t) is never generated in the transmitter. Rather, x(t) is obtained by applying the sequence of pseudorandom frequency shifts {f_n} directly to the frequency synthesizer that generates the carrier frequency f_c (see Fig. 70.15). In terms of the actual implementation, successive (not necessarily disjoint) k-chip segments of a PN sequence drive a frequency synthesizer, which hops the carrier over 2^k frequencies. In view of the large bandwidths over which the frequency synthesizer must operate, it is difficult to maintain phase coherence from hop to hop, which explains the inclusion of the sequence {φ_n} in the Eq. (70.5) model for c(t). On a short-term basis, e.g., within a given hop, the signal bandwidth is identical to that of the MFSK information modulation, which is typically much smaller than W_ss. On the other hand, when averaged over many hops, the signal bandwidth is equal to W_ss, which can be on the order of several GHz, i.e., an order of magnitude larger than that of implementable DS bandwidths. The exact relation between W_ss, T_h, T_s, and the number of frequency shifts in the set {f_n} will be discussed shortly.
FIGURE 70.14  A DS-BPSK system (complex form).
At the receiver, the sum of the transmitted FH/MFSK signal and the channel interference I(t) is ideally
complex multiplied by the identical FH modulation (this operation is known as dehopping), which returns
the FH/MFSK signal to its original MFSK form, whereas the real interference signal is now the wideband (in
the average sense) signal Re{I(t)c(t)}. Analogous to the DS case, the receiver must acquire and track the FH
signal so that the dehopping waveform is as close to the hopping waveform c(t) as possible.
FH systems are traditionally classified in accordance with the relationship between T_h and T_s. Fast frequency-hopped (FFH) systems are ones in which there exists one or more hops per data symbol, that is, T_s = NT_h (N an integer), whereas slow frequency-hopped (SFH) systems are ones in which there exists more than one symbol per hop, that is, T_h = NT_s. It is customary in SS parlance to refer to the FH/MFSK tone of shortest duration as a "chip", despite the same usage for the PN chips associated with the code generator that drives the frequency synthesizer. Keeping this distinction in mind, in an FFH system where, as already stated, there are multiple hops per data symbol, a chip is equal to a hop. For SFH, where there are multiple data symbols per hop, a chip is equal to an MFSK symbol. Combining these two statements, the chip rate R_c in an FH system is given by the larger of R_h = 1/T_h and R_s = 1/T_s and, as such, is the highest system clock rate.
The frequency spacing between the FH/MFSK tones is governed by the chip rate R_c and is, thus, dependent on whether the FH modulation is FFH or SFH. In particular, for SFH where R_c = R_s, the spacing between FH/MFSK tones is equal to the spacing between the MFSK tones themselves. For noncoherent detection (the most commonly encountered in FH/MFSK systems), the separation of the MFSK symbols necessary to provide orthogonality³ is an integer multiple of R_s. Assuming the minimum spacing, i.e., R_s, the entire spread spectrum band is then partitioned into a total of N_t = W_ss/R_s = W_ss/R_c equally spaced FH tones. One arrangement, which is by far the most common, is to group these N_t tones into N_b = N_t/M contiguous, nonoverlapping bands, each with bandwidth MR_s = MR_c; see Fig. 70.16(a). Assuming symmetric MFSK modulation around the carrier frequency, then the center frequencies of the N_b = 2^k bands represent the set of hop carriers, each of which is assigned to a given k-tuple of the PN code generator. In this fixed arrangement, each of the N_t FH/MFSK tones corresponds to the combination of a unique hop carrier (PN code k-tuple) and a unique MFSK symbol. Another arrangement, which provides more protection against the sophisticated interferer (jammer), is to overlap adjacent M-ary bands by an amount equal to R_c; see Fig. 70.16(b). Assuming again that the center frequency of each band corresponds to a possible hop carrier, then since all but M – 1 of the N_t tones are available as center frequencies, the number of hop carriers has been increased from N_t/M to N_t – (M – 1), which for N_t ≫ M is approximately an increase in randomness by a factor of M.
³An optimum noncoherent MFSK detector consists of a bank of energy detectors, each matched to one of the M frequencies in the MFSK set. In terms of this structure, the notion of orthogonality implies that for a given transmitted frequency there will be no crosstalk (energy spillover) in any of the other M – 1 energy detectors.
FIGURE 70.15  An FH-MFSK system.
For FFH, where R_c = R_h, the spacing between FH/MFSK tones is equal to the hop rate. Thus, the entire spread spectrum band is partitioned into a total of N_t = W_ss/R_h = W_ss/R_c equally spaced FH tones, each of which is assigned to a unique k-tuple of the PN code generator that drives the frequency synthesizer. Since for FFH there are R_h/R_s hops per symbol, the metric used to make a noncoherent decision on a particular symbol is obtained by summing up R_h/R_s detected chip (hop) energies, resulting in a so-called noncoherent combining loss.
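A small sketch of how successive k-chip PN segments can select hop carriers in the nonoverlapping SFH arrangement of Fig. 70.16(a); the values of k, M, R_s, and the band edge are assumptions chosen only for illustration.

```python
import numpy as np

# SFH hop-carrier selection: successive k-chip PN segments pick one of 2^k
# contiguous, nonoverlapping M-ary bands.
rng = np.random.default_rng(0)
k, M, Rs = 4, 4, 1000.0                 # PN bits per hop, MFSK alphabet size, symbol rate (Hz)
f_lo = 100e3                            # lower edge of the spread band (Hz)

pn = rng.integers(0, 2, size=10 * k)    # stand-in for the PN generator output
hops = pn.reshape(-1, k) @ (2 ** np.arange(k)[::-1])    # k-tuples -> integers 0 .. 2^k - 1

# Each band is M*Rs wide; the hop carrier sits at the center of the selected band.
f_hop = f_lo + hops * M * Rs + (M * Rs) / 2
print(f_hop[:5])                        # first five hop carrier frequencies (Hz)
```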
Time Hopping Modulation
Time hopping (TH) is to spread spectrum modulation what pulse position modulation (PPM) is to information modulation. In particular, consider segmenting time into intervals of T_f seconds and further segment each T_f interval into M_T increments of width T_f/M_T. Assuming a pulse of maximum duration equal to T_f/M_T, then a time hopping spread spectrum modulation would take the form
FIGURE 70.16(a)  Frequency distribution for FH-4FSK—nonoverlapping bands. Dashed lines indicate location of hop frequencies.
FIGURE 70.16(b)  Frequency distribution for FH-4FSK—overlapping bands.
$$c(t) = \sum_{n=-\infty}^{\infty} p\!\left(t - \left(n + \frac{a_n}{M_T}\right) T_f\right) \qquad (70.7)$$
where a_n denotes the pseudorandom position (one of M_T uniformly spaced locations) of the pulse within the T_f-second interval.
For DS and FH, we saw that multiplicative modulation, that is the transmitted signal is the product of the
SS and information signals, was the natural choice. For TH, delay modulation is the natural choice. In particular,
a TH-SS modulation takes the form
$$x(t) = \mathrm{Re}\left\{ c\left(t - d(t)\right) \exp\!\left[ j\left(2\pi f_c t + \phi\right) \right] \right\} \qquad (70.8)$$
where d(t) is a digital information modulation at a rate 1/T_s. Finally, the dehopping procedure at the receiver consists of removing the sequence of delays introduced by c(t), which restores the information signal back to its original form and spreads the interferer.
Hybrid Modulations
By blending together several of the previous types of SS modulation, one can form hybrid modulations that,
depending on the system design objectives, can achieve a better performance against the interferer than can
any of the SS modulations acting alone. One possibility is to multiply several of the c(t) wideband waveforms
[now denoted by c^(i)(t) to distinguish them from one another] resulting in a SS modulation of the form
$$c(t) = \prod_{i} c^{(i)}(t) \qquad (70.9)$$
Such a modulation may embrace the advantages of the various c^(i)(t), while at the same time mitigating their
individual disadvantages.
Applications of Spread Spectrum
Military
Antijam (AJ) Communications.  As already noted, one of the key applications of spread spectrum is for
antijam communications in a hostile environment. The basic mechanism by which a direct sequence spread
spectrum receiver attenuates a noise jammer was illustrated in Sec. 70.3. Therefore, in this section, we will
concentrate on tone jamming.
Assume the received signal, denoted r(t), is given by

$$r(t) = A\, x(t) + I(t) + n_w(t) \qquad (70.10)$$

where x(t) is given in Eq. (70.4), A is a constant amplitude,

$$I(t) = \alpha \cos\left(2\pi f_c t + \theta\right) \qquad (70.11)$$

and n_w(t) is additive white Gaussian noise (AWGN) having two-sided spectral density N_0/2. In Eq. (70.11), α is the amplitude of the tone jammer and θ is a random phase uniformly distributed in [0, 2π].
If we employ the standard correlation receiver of Fig. 70.17, it is straightforward to show that the final test statistic out of the receiver is given by

$$g(T_b) = A\, T_b + \alpha \cos\theta \int_0^{T_b} c(t)\, dt + N(T_b) \qquad (70.12)$$
where N(T_b) is the contribution to the test statistic due to the AWGN. Noting that, for rectangular chips, we can express

$$\int_0^{T_b} c(t)\, dt = T_c \sum_{i=1}^{M} c_i \qquad (70.13)$$

where

$$M \triangleq \frac{T_b}{T_c} \qquad (70.14)$$

is one-half of the processing gain, it is straightforward to show that, for a given value of θ, the signal-to-noise-plus-interference ratio, denoted by S/N_total, is given by

$$\frac{S}{N_{\mathrm{total}}} = \left[ \frac{N_0}{2E_b} + \frac{J}{MS}\cos^2\theta \right]^{-1} \qquad (70.15)$$
In Eq. (70.15), the jammer power is
$$J \triangleq \frac{\alpha^2}{2} \qquad (70.16)$$
and the signal power is
$$S \triangleq \frac{A^2}{2} \qquad (70.17)$$
If we look at the second term in the denominator of Eq. (70.15), we see that the ratio J/S is divided by M.
Realizing that J/S is the ratio of the jammer power to the signal power before despreading, and J/MS is the
ratio of the same quantity after despreading, we see that, as was the case for noise jamming, the benefit of
employing direct sequence spread spectrum signalling in the presence of tone jamming is to reduce the effect
of the jammer by an amount on the order of the processing gain.
Finally, one can show that an estimate of the average probability of error of a system of this type is given by
$$P_e = \frac{1}{2\pi} \int_0^{2\pi} \phi\!\left( -\sqrt{\frac{S}{N_{\mathrm{total}}}} \right) d\theta \qquad (70.18)$$
where
$$\phi(x) \triangleq \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-y^2/2}\, dy \qquad (70.19)$$
FIGURE 70.17
If Eq. (70.18) is evaluated numerically and plotted, the results are as shown in Fig. 70.18. It is clear from this
figure that a large initial power advantage of the jammer can be overcome by a sufficiently large value of the
processing gain.
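A short numerical evaluation of Eq. (70.18), averaging the conditional error probability implied by Eq. (70.15) over the uniformly distributed jammer phase; the E_b/N_0, J/S, and M values below are assumptions chosen only for illustration.

```python
import numpy as np
from math import erfc, sqrt, pi

def Pe(Eb_N0_dB, J_S_dB, M):
    """Average bit error probability from Eqs. (70.15) and (70.18)."""
    Eb_N0 = 10 ** (Eb_N0_dB / 10)
    J_S = 10 ** (J_S_dB / 10)
    thetas = np.linspace(0.0, 2 * pi, 2001)
    pe = 0.0
    for th in thetas:
        snr = 1.0 / (1.0 / (2 * Eb_N0) + (J_S / M) * np.cos(th) ** 2)   # Eq. (70.15)
        pe += 0.5 * erfc(sqrt(snr / 2))          # phi(-sqrt(SNR)) = Q(sqrt(SNR))
    return pe / len(thetas)

# Example: Eb/N0 = 15 dB, jammer 10 dB stronger than the signal, M = Tb/Tc = 511
print(Pe(15.0, 10.0, 511))
```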
Low-Probability of Intercept (LPI).  The opposite side of the AJ problem is that of LPI, that is, the desire to hide your signal from detection by an intelligent adversary so that your transmissions will remain unnoticed and, thus, neither jammed nor exploited in any manner. The goal of an LPI system is achieved in a variety of ways, including transmitting at the smallest possible power level and limiting the transmission time to as short an interval as possible. The choice of signal design is also important, however, and it is
here that spread spectrum techniques become relevant.
The basic mechanism is reasonably straightforward; if we start with a conventional narrowband signal, say
a BPSK waveform having a spectrum as shown in Fig. 70.19(a), and then spread it so that its new spectrum is
as shown in Fig. 70.19(b), the peak amplitude of the spectrum after spreading has been reduced by an amount
on the order of the processing gain relative to what it was before spreading. Indeed, a sufficiently large processing
gain will result in the spectrum of the signal after spreading falling below the ambient thermal noise level.
Thus, there is no easy way for an unintended listener to determine that a transmission is taking place.
That is not to say the spread signal cannot be detected, however, merely that it is more difficult for an
adversary to learn of the transmission. Indeed, there are many forms of so-called intercept receivers that are
specifically designed to accomplish this very task. By way of example, probably the best known and simplest
to implement is a radiometer, which is just a device that measures the total power present in the received signal.
FIGURE 70.18
In the case of our intercept problem, even though we have lowered the power spectral density of the transmitted
signal so that it falls below the noise floor, we have not lowered its power (i.e., we have merely spread its power
over a wider frequency range). Thus, if the radiometer integrates over a sufficiently long period of time, it will
eventually determine the presence of the transmitted signal buried in the noise. The key point, of course, is
that the use of the spreading makes the interceptor’s task much more difficult, since he has no knowledge of
the spreading code and, thus, cannot despread the signal.
Commercial
Multiple Access Communications. From the perspective of commercial applications, probably the most
important use of spread spectrum communications is as a multiple accessing technique. When used in this
manner, it becomes an alternative to either frequency division multiple access (FDMA) or time division multiple
access (TDMA) and is typically referred to as either code division multiple access (CDMA) or spread spectrum
multiple access (SSMA). When using CDMA, each signal in the set is given its own spreading sequence. As
opposed to either FDMA, wherein all users occupy disjoint frequency bands but are transmitted simultaneously
in time, or TDMA, whereby all users occupy the same bandwidth but transmit in disjoint intervals of time, in
CDMA, all signals occupy the same bandwidth and are transmitted simultaneously in time; the different
waveforms in CDMA are distinguished from one another at the receiver by the specific spreading codes they
employ.
Since most CDMA detectors are correlation receivers, it is important when deploying such a system to have
a set of spreading sequences that have relatively low-pairwise cross-correlation between any two sequences in
the set. Further, there are two fundamental types of operation in CDMA, synchronous and asynchronous. In
the former case, the symbol transition times of all of the users are aligned; this allows for orthogonal sequences
to be used as the spreading sequences and, thus, eliminates interference from one user to another. Alternatively,
if no effort is made to align the sequences, the system operates asynchronously; in this latter mode, multiple
access interference limits the ultimate channel capacity, but the system design exhibits much more flexibility.
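A small illustration of the synchronous case: Walsh–Hadamard spreading sequences are mutually orthogonal when symbol-aligned, whereas randomly chosen PN-type sequences generally are not; the length-64 codes and row indices below are arbitrary choices.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
W = hadamard(64)
print("Walsh codes, aligned:", int(W[3] @ W[17]))        # 0 -> no mutual interference

p, q = rng.choice([-1, 1], size=64), rng.choice([-1, 1], size=64)
print("random PN codes, aligned:", int(p @ q))           # typically nonzero, on the order of sqrt(64)
```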
CDMA has been of particular interest recently for applications in wireless communications. These applica-
tions include cellular communications, personal communications services (PCS), and wireless local area net-
works. The reason for this popularity is primarily due to the performance that spread spectrum waveforms
display when transmitted over a multipath fading channel.
To illustrate this idea, consider DS signalling. As long as the duration of a single chip of the spreading
sequence is less than the multipath delay spread, the use of DS waveforms provides the system designer with
FIGURE 70.19
one of two options. First, the multipath can be treated as a form of interference, which means the receiver
should attempt to attenuate it as much as possible. Indeed, under this condition, all of the multipath returns
that arrive at the receiver with a time delay greater than a chip duration from the multipath return to which
the receiver is synchronized (usually the first return) will be attenuated because of the processing gain of the
system.
Alternately, the multipath returns that are separated by more than a chip duration from the main path
represent independent “looks” at the received signal and can be used constructively to enhance the overall
performance of the receiver. That is, because all of the multipath returns contain information regarding the
data that is being sent, that information can be extracted by an appropriately designed receiver. Such a receiver,
typically referred to as a RAKE receiver, attempts to resolve as many individual multipath returns as possible
and then to sum them coherently. This results in an implicit diversity gain, comparable to the use of explicit
diversity, such as receiving the signal with multiple antennas.
The condition under which the two options are available can be stated in an alternate manner. If one envisions
what is taking place in the frequency domain, it is straightforward to show that the condition of the chip
duration being smaller than the multipath delay spread is equivalent to requiring that the spread bandwidth
of the transmitted waveform exceed what is called the coherence bandwidth of the channel. This latter quantity
is simply the inverse of the multipath delay spread and is a measure of the range of frequencies that fade in a
highly correlated manner. Indeed, anytime the coherence bandwidth of the channel is less than the spread
bandwidth of the signal, the channel is said to be frequency selective with respect to the signal. Thus, we see
that to take advantage of DS signalling when used over a multipath fading channel, that signal should be
designed such that it makes the channel appear frequency selective.
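A back-of-the-envelope check of the condition just described, using an assumed chip rate and delay spread; the numbers are illustrative, not taken from the text.

```python
# Illustrative numbers (assumptions): an IS-95-like chip rate and an urban delay spread.
chip_rate = 1.2288e6            # chips/s
delay_spread = 5e-6             # s

chip_duration = 1 / chip_rate               # about 0.81 microseconds
coherence_bw = 1 / delay_spread             # about 200 kHz
spread_bw = chip_rate                       # spread bandwidth is on the order of the chip rate

print("chip shorter than delay spread:", chip_duration < delay_spread)    # True
print("channel frequency selective:", spread_bw > coherence_bw)           # True
```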
In addition to the desirable properties that spread spectrum signals display over multipath channels, there
are two other reasons why such signals are of interest in cellular-type applications. The first has to do with a
concept known as the reuse factor. In conventional cellular systems, either analog or digital, in order to avoid
excessive interference from one cell to its neighbor cells, the frequencies used by a given cell are not used by
its immediate neighbors (i.e., the system is designed so that there is a certain spatial separation between cells
that use the same carrier frequencies). For CDMA, however, such spatial isolation is typically not needed, so
that so-called universal reuse is possible.
Further, because CDMA systems tend to be interference limited, for those applications involving voice
transmission, an additional gain in the capacity of the system can be achieved by the use of voice activity
detection. That is, in any given two-way telephone conversation, each user is typically talking only about 50%
of the time. During the time when a user is quiet, he is not contributing to the instantaneous interference.
Thus, if a sufficiently large number of users can be supported by the system, statistically only about one-half
of them will be active simultaneously, and the effective capacity can be doubled.
Interference Rejection. In addition to providing multiple accessing capability, spread spectrum techniques
are of interest in the commercial sector for basically the same reasons they are in the military community,
namely their AJ and LPI characteristics. However, the motivations for such interest differ. For example, whereas
the military is interested in ensuring that systems they deploy are robust to interference generated by an
intelligent adversary (i.e., exhibit jamming resistance), the interference of concern in commercial applications
is unintentional. It is sometimes referred to as co-channel interference (CCI) and arises naturally as the result
of many services using the same frequency band at the same time. And while such scenarios almost always
allow for some type of spatial isolation between the interfering waveforms, such as the use of narrow-beam
antenna patterns, at times the use of the inherent interference suppression property of a spread spectrum signal
is also desired. Similarly, whereas the military is very much interested in the LPI property of a spread spectrum
waveform, as indicated in Sec. 70.3, there are applications in the commercial segment where the same charac-
teristic can be used to advantage.
To illustrate these two ideas, consider a scenario whereby a given band of frequencies is somewhat sparsely
occupied by a set of conventional (i.e., nonspread) signals. To increase the overall spectral efficiency of the
band, a set of spread spectrum waveforms can be overlaid on the same frequency band, thus forcing the two
sets of users to share common spectrum. Clearly, this scheme is feasible only if the mutual interference that
one set of users imposes on the other is within tolerable limits. Because of the interference suppression properties
of spread spectrum waveforms, the despreading process at each spread spectrum receiver will attenuate the
components of the final test statistic due to the overlaid narrowband signals. Similarly, because of the LPI
characteristics of spread spectrum waveforms, the increase in the overall noise level as seen by any of the
conventional signals, due to the overlay, can be kept relatively small.
Defining Terms
Antijam communication system: A communication system designed to resist intentional jamming by the
enemy.
Chip time (interval): The duration of a single pulse in a direct sequence modulation; typically much smaller than the information symbol interval.
Coarse alignment: The process whereby the received signal and the despreading signal are aligned to within
a single chip interval.
Dehopping: Despreading using a frequency-hopping modulation.
Delay-locked loop: A particular implementation of a closed-loop technique for maintaining fine alignment.
Despreading: The notion of decreasing the bandwidth of the received (spread) signal back to its information
bandwidth.
Direct sequence modulation: A signal formed by linearly modulating the output sequence of a pseudorandom
number generator onto a train of pulses.
Direct sequence spread spectrum: A spreading technique achieved by multiplying the information signal by
a direct sequence modulation.
Fast frequency-hopping: A spread spectrum technique wherein the hop time is less than or equal to the
information symbol interval, i.e., there exist one or more hops per data symbol.
Fine alignment: The state of the system wherein the received signal and the despreading signal are aligned
to within a small fraction of a single chip interval.
Frequency-hopping modulation: A signal formed by nonlinearly modulating a train of pulses with a sequence
of pseudorandomly generated frequency shifts.
Hop time (interval): The duration of a single pulse in a frequency-hopping modulation.
Hybrid spread spectrum: A spreading technique formed by blending together several spread spectrum tech-
niques, e.g., direct sequence, frequency-hopping, etc.
Low-probability-of-intercept communication system: A communication system designed to operate in a
hostile environment wherein the enemy tries to detect the presence and perhaps characteristics of the
friendly communicator’s transmission.
Processing gain (spreading ratio): The ratio of the spread spectrum bandwidth to the information data rate.
Radiometer: A device used to measure the total energy in the received signal.
Slow frequency-hopping: A spread spectrum technique wherein the hop time is greater than the information
symbol interval, i.e., there exists more than one data symbol per hop.
Spread spectrum bandwidth: The bandwidth of the transmitted signal after spreading.
Spreading: The notion of increasing the bandwidth of the transmitted signal by a factor far in excess of its
information bandwidth.
Search algorithm: A means for coarse aligning (synchronizing) the despreading signal with the received
spread spectrum signal.
Tau-dither loop: A particular implementation of a closed-loop technique for maintaining fine alignment.
Time-hopping spread spectrum: A spreading technique that is analogous to pulse position modulation.
Tracking algorithm: An algorithm (typically closed loop) for maintaining fine alignment.
Related Topics
69.1 Modulation and Demodulation • 73.2 Noise
Reference
J.D. Gibson, The Mobile Communications Handbook, Boca Raton, FL: CRC Press, 1996.
Further Information
M.K. Simon, J. K. Omura, R. A. Scholtz, and B. K. Levitt, Spread Spectrum Communications Handbook, New
York: McGraw Hill, 1994 (previously published as Spread Spectrum Communications, Computer Science
Press, 1985).
R.E. Ziemer and R. L. Peterson, Digital Communications and Spread Spectrum Techniques, New York: Macmillan,
1985.
J.K. Holmes, Coherent Spread Spectrum Systems, New York: John Wiley & Sons, 1982.
R.C. Dixon, Spread Spectrum Systems, 3rd ed., New York: John Wiley & Sons, 1994.
C.F. Cook, F. W. Ellersick, L. B. Milstein, and D. L. Schilling, Spread Spectrum Communications, IEEE Press, 1983.