Dorf, R.C., Wan, Z., Lindsey III, J.F., Doelitzsch, D.F., Whitaker J., Roden, M.S., Salek,
S., Clegg, A.H. “Broadcasting”
The Electrical Engineering Handbook
Ed. Richard C. Dorf
Boca Raton: CRC Press LLC, 2000
69
Broadcasting
69.1 Modulation and Demodulation
Modulation • Superheterodyne Technique • Pulse-Code Modulation • Frequency-Shift Keying • M-ary Phase-Shift Keying • Quadrature Amplitude Modulation
69.2 Radio
Standard Broadcasting (Amplitude Modulation) • Frequency Modulation
69.3 Television Systems
Scanning Lines and Fields • Interlaced Scanning Fields • Synchronizing Video Signals • Television Industry Standards • Transmission Equipment • Television Reception
69.4 High-Definition Television
Proposed Systems
69.5 Digital Audio Broadcasting
The Need for DAB • DAB System Design Goals • Historical Background • Technical Overview of DAB • Audio Compression and Source Encoding • System Example: Eureka-147/DAB
69.1 Modulation and Demodulation
Richard C. Dorf and Zhen Wan
Modulation is the process of impressing the source information onto a bandpass signal with a carrier frequency
f_c. This bandpass signal is called the modulated signal s(t), and the baseband source signal is called the
modulating signal m(t). The modulated signal can be represented by

    s(t) = Re{g(t)e^{jω_c t}}    (69.1)

or, equivalently,

    s(t) = R(t) cos[ω_c t + θ(t)]    (69.2)

and

    s(t) = x(t) cos ω_c t − y(t) sin ω_c t    (69.3)

where ω_c = 2πf_c. The complex envelope is

    g(t) = R(t)e^{jθ(t)} = x(t) + jy(t)    (69.4)

and g(t) is a function of the modulating signal m(t). That is,

    g(t) = g[m(t)]
Richard C. Dorf
University of California, Davis
Zhen Wan
University of California, Davis
Jefferson F. Lindsey III
Southern Illinois University at
Carbondale
Dennis F. Doelitzsch
3-D Communications
Jerry Whitaker
Technical Press
Martin S. Roden
California State University
Stanley Salek
Hammett & Edison
Almon H. Clegg
CCi
© 2000 by CRC Press LLC
Thus g[·] performs a mapping operation on m(t). The particular relationship that is chosen for g(t) in terms
of m(t) defines the type of modulation used.
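As a quick numerical check of Eqs. (69.1) through (69.4), the sketch below builds s(t) three ways from the same complex envelope and confirms the representations agree. The carrier and tone frequencies and the AM mapping g[m] = 1 + m(t) are illustrative choices, not values from the text:

```python
import numpy as np

# Numerical check of Eqs. (69.1)-(69.4): the bandpass signal built from the
# complex envelope g(t) equals both the amplitude/phase form and the quadrature form.
fc = 1000.0                             # carrier frequency, Hz (assumed)
t = np.linspace(0, 1e-2, 2000)          # time axis, s
m = 0.5 * np.cos(2 * np.pi * 100 * t)   # modulating signal m(t) (assumed tone)

g = 1.0 + m                             # AM mapping g[m] = 1 + m(t)
x, y = g.real, g.imag                   # quadrature components x(t), y(t)
R, theta = np.abs(g), np.angle(g)       # envelope R(t) and phase theta(t)

wc = 2 * np.pi * fc
s1 = np.real(g * np.exp(1j * wc * t))          # Eq. (69.1)
s2 = R * np.cos(wc * t + theta)                # Eq. (69.2)
s3 = x * np.cos(wc * t) - y * np.sin(wc * t)   # Eq. (69.3)

assert np.allclose(s1, s2) and np.allclose(s1, s3)
```

All three arrays agree to machine precision, which is exactly the equivalence the three equations assert.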
In Table 69.1, examples of the mapping function g(m) are given for the following types of modulation:
• AM: amplitude modulation
• DSB-SC: double-sideband suppressed-carrier modulation
• PM: phase modulation
• FM: frequency modulation
• SSB-AM-SC: single-sideband AM suppressed-carrier modulation
• SSB-PM: single-sideband PM
• SSB-FM: single-sideband FM
• SSB-EV: single-sideband envelope-detectable modulation
• SSB-SQ: single-sideband square-law-detectable modulation
• QM: quadrature modulation
Modulation
In Table 69.1, a generalized approach may be taken to obtain universal transmitter models that may be reduced
to those used for a particular modulation type. We also see that there are equivalent models which correspond
to different circuit configurations, yet they may be used to produce the same type of modulated signal at their
outputs. It is up to communication engineers to select an implementation method that will optimize performance, yet retain low cost based on the state of the art in circuit development.
There are two canonical forms for the generalized transmitter. Figure 69.1 is an AM-PM type circuit as described
in Eq. (69.2). In this figure, the baseband signal processing circuit generates R(t) and q(t) from m(t). The R and q
are functions of the modulating signal m(t) as given in Table 69.1 for the particular modulation type desired.
Figure 69.2 illustrates the second canonical form for the generalized transmitter. This uses in-phase and
quadrature-phase (IQ) processing. Similarly, the formulas relating x(t) and y(t) are shown in Table 69.1, and
the baseband signal processing may be implemented by using either analog hardware or digital hardware with
software. The remainder of the canonical form utilizes radio frequency (RF) circuits as indicated.
Any type of signal modulation (AM, FM, SSB, QPSK, etc.) may be generated by using either of these two
canonical forms. Both of these forms conveniently separate baseband processing from RF processing.
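The claim that both canonical forms produce the same output can be sketched numerically. The example below drives the AM-PM form of Fig. 69.1 and the IQ form of Fig. 69.2 with the same PM baseband processing; the carrier frequency and phase deviation constant D_p are assumed illustrative values:

```python
import numpy as np

# Both canonical transmitter forms generate the same PM signal from m(t).
fc, Dp = 1000.0, np.pi / 2          # carrier, phase deviation constant (assumed)
t = np.linspace(0, 1e-2, 2000)
m = np.sin(2 * np.pi * 50 * t)      # modulating signal (assumed tone)

# AM-PM canonical form (Fig. 69.1): baseband processing yields R(t) and theta(t)
R = np.ones_like(t)
theta = Dp * m
s_ampm = R * np.cos(2 * np.pi * fc * t + theta)

# IQ canonical form (Fig. 69.2): baseband processing yields x(t) and y(t)
x = np.cos(Dp * m)
y = np.sin(Dp * m)
s_iq = x * np.cos(2 * np.pi * fc * t) - y * np.sin(2 * np.pi * fc * t)

assert np.allclose(s_ampm, s_iq)    # identical RF outputs
```

The agreement is just the identity cos(a + b) = cos a cos b − sin a sin b applied sample by sample; the engineering choice between the two forms is one of circuit implementation, not of signal content.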
Superheterodyne Technique
Most receivers employ the superheterodyne receiving technique (see Fig. 69.3). This technique consists of
either down-converting or up-converting the input signal to some convenient frequency band, called the
intermediate frequency (IF) band, and then extracting the information (or modulation) by using the appropriate
detector. This basic receiver structure is used for the reception of all types of bandpass signals, such as television,
FM, AM, satellite, and radar signals.
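A minimal sketch of the frequency-translation step follows; the RF, IF, and local-oscillator frequencies are illustrative, and a crude FFT-mask filter stands in for a real IF bandpass:

```python
import numpy as np

# Superheterodyne mixing: an LO at f_lo = f_rf + f_if (high-side injection)
# produces products at f_lo - f_rf (the IF) and f_lo + f_rf (rejected by the IF filter).
fs = 1_000_000.0                 # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)
f_rf, f_if = 100_000.0, 10_000.0
f_lo = f_rf + f_if

rf = np.cos(2 * np.pi * f_rf * t)
mixed = rf * np.cos(2 * np.pi * f_lo * t)   # products at 10 kHz and 210 kHz

spec = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(mixed.size, 1 / fs)
spec[freqs > 50_000] = 0                    # crude IF bandpass: discard the sum product
if_sig = np.fft.irfft(spec, n=mixed.size)

peak = freqs[np.argmax(np.abs(np.fft.rfft(if_sig)))]
assert abs(peak - f_if) < 200               # energy now sits at the IF
```

After this translation, a single fixed-frequency detector can demodulate any station; only the local oscillator is tuned.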
FIGURE 69.1 Generalized transmitter using the AM-PM generation technique.
TABLE 69.1 Complex Envelope Functions for Various Types of Modulation

Each entry lists the mapping function g[m], the corresponding quadrature components x(t) and y(t), the corresponding amplitude R(t) and phase θ(t), the linearity (L = linear, NL = nonlinear), and remarks.

AM: g[m] = 1 + m(t); x(t) = 1 + m(t); y(t) = 0; R(t) = |1 + m(t)|;
  θ(t) = 0 for m(t) > −1, 180° for m(t) < −1. L.(b) m(t) > −1 required for envelope detection.

DSB-SC: g[m] = m(t); x(t) = m(t); y(t) = 0; R(t) = |m(t)|;
  θ(t) = 0 for m(t) > 0, 180° for m(t) < 0. L. Coherent detection required.

PM: g[m] = e^{jD_p m(t)}; x(t) = cos[D_p m(t)]; y(t) = sin[D_p m(t)]; R(t) = 1;
  θ(t) = D_p m(t). NL. D_p is the phase deviation constant (radians/volt).

FM: g[m] = e^{jD_f ∫_{−∞}^{t} m(σ)dσ}; x(t) = cos[D_f ∫_{−∞}^{t} m(σ)dσ]; y(t) = sin[D_f ∫_{−∞}^{t} m(σ)dσ];
  R(t) = 1; θ(t) = D_f ∫_{−∞}^{t} m(σ)dσ. NL. D_f is the frequency deviation constant (radians/volt-second).

SSB-AM-SC:(a) g[m] = m(t) ± j m̂(t); x(t) = m(t); y(t) = ±m̂(t); R(t) = √(m²(t) + m̂²(t));
  θ(t) = tan⁻¹[±m̂(t)/m(t)]. L. Coherent detection required.

SSB-PM:(a) g[m] = e^{jD_p[m(t) ± j m̂(t)]}; x(t) = e^{∓D_p m̂(t)} cos[D_p m(t)]; y(t) = e^{∓D_p m̂(t)} sin[D_p m(t)];
  R(t) = e^{∓D_p m̂(t)}; θ(t) = D_p m(t). NL.

SSB-FM:(a) g[m] = e^{jD_f ∫_{−∞}^{t} [m(σ) ± j m̂(σ)]dσ}; x(t) = e^{∓D_f ∫_{−∞}^{t} m̂(σ)dσ} cos[D_f ∫_{−∞}^{t} m(σ)dσ];
  y(t) = e^{∓D_f ∫_{−∞}^{t} m̂(σ)dσ} sin[D_f ∫_{−∞}^{t} m(σ)dσ]; R(t) = e^{∓D_f ∫_{−∞}^{t} m̂(σ)dσ};
  θ(t) = D_f ∫_{−∞}^{t} m(σ)dσ. NL.

SSB-EV:(a) g[m] = e^{ln[1 + m(t)] ± j l̂n[1 + m(t)]}; x(t) = [1 + m(t)] cos{l̂n[1 + m(t)]};
  y(t) = ±[1 + m(t)] sin{l̂n[1 + m(t)]}; R(t) = 1 + m(t); θ(t) = ±l̂n[1 + m(t)]. NL.
  m(t) > −1 is required so that the ln will have a real value.

SSB-SQ:(a) g[m] = e^{(1/2){ln[1 + m(t)] ± j l̂n[1 + m(t)]}}; x(t) = √(1 + m(t)) cos{(1/2) l̂n[1 + m(t)]};
  y(t) = ±√(1 + m(t)) sin{(1/2) l̂n[1 + m(t)]}; R(t) = √(1 + m(t)); θ(t) = ±(1/2) l̂n[1 + m(t)]. NL.
  m(t) > −1 is required so that the ln will have a real value.

QM: g[m] = m₁(t) + jm₂(t); x(t) = m₁(t); y(t) = m₂(t); R(t) = √(m₁²(t) + m₂²(t));
  θ(t) = tan⁻¹[m₂(t)/m₁(t)]. L. Used in NTSC color television; requires coherent detection.

L = linear, NL = nonlinear. [ ˆ ] denotes the Hilbert transform (i.e., the −90° phase-shifted version) of [·]:

    m̂(t) = m(t) * 1/(πt) = (1/π) ∫_{−∞}^{∞} m(λ)/(t − λ) dλ

(a) Use upper signs for upper-sideband signals and lower signs for lower-sideband signals.
(b) In the strict sense, AM signals are not linear because the carrier term does not satisfy the linearity (superposition) condition.
If the complex envelope g(t) is desired for generalized signal detection or for optimum reception in digital
systems, the x(t) and y(t) quadrature components, where x(t) + jy(t) = g(t), may be obtained by using quadrature
product detectors, as illustrated in Fig. 69.4. The x(t) and y(t) outputs could be fed into a signal processor to extract the
modulation information. Disregarding the effects of noise, the signal processor could recover m(t) from x(t)
and y(t) (and, consequently, demodulate the IF signal) by using the inverse of the complex envelope generation
functions given in Table 69.1.
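This detection chain can be sketched numerically. The example below uses illustrative frequencies, an FFT-mask lowpass in place of the real filters of Fig. 69.4, and inverts the AM mapping g[m] = 1 + m(t) from Table 69.1 to recover m(t):

```python
import numpy as np

# IQ product detection: multiply the IF signal by cos and -sin of the carrier,
# lowpass filter to get x(t) and y(t), then invert g[m] to recover m(t).
fs, fc = 1_000_000.0, 50_000.0
t = np.arange(0, 0.01, 1 / fs)
m = 0.5 * np.cos(2 * np.pi * 500 * t)         # modulating signal (assumed tone)
g = 1.0 + m                                   # AM complex envelope (Table 69.1)
s = np.real(g * np.exp(2j * np.pi * fc * t))  # IF signal, Eq. (69.1)

def lowpass(v, cutoff_hz):
    # crude FFT-mask lowpass, adequate for this demonstration
    V = np.fft.rfft(v)
    f = np.fft.rfftfreq(v.size, 1 / fs)
    V[f > cutoff_hz] = 0
    return np.fft.irfft(V, n=v.size)

x = 2 * lowpass(s * np.cos(2 * np.pi * fc * t), 5_000)    # recovers x(t) = 1 + m(t)
y = -2 * lowpass(s * np.sin(2 * np.pi * fc * t), 5_000)   # recovers y(t) = 0 for AM
m_hat = x - 1.0                                           # invert g[m] = 1 + m

assert np.max(np.abs(m_hat - m)) < 0.01
assert np.max(np.abs(y)) < 0.01
```

The same front end recovers g(t) for any modulation in Table 69.1; only the final inversion step changes with the mapping g[m].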
The generalized modulation techniques are shown in Table 69.1. In digital communication systems, discrete
modulation techniques are usually used to modulate the source information signal. Discrete modulation includes:
• PCM = pulse-code modulation
• DM = delta modulation
• DPCM = differential pulse-code modulation
• FSK = frequency-shift keying
• PSK = phase-shift keying
• DPSK = differential phase-shift keying
• MPSK = M-ary phase-shift keying
• QAM = quadrature amplitude modulation
FIGURE 69.2 Generalized transmitter using the quadrature generation technique.
FIGURE 69.3 Superheterodyne receiver.
Pulse-Code Modulation
PCM is essentially analog-to-digital conversion of a special type, where the information contained in the
instantaneous samples of an analog signal is represented by digital words in a serial bit stream. The PCM signal
is generated by carrying out three basic operations: sampling, quantizing, and encoding (see Fig. 69.5). The
sampling operation generates a flat-top pulse amplitude modulation (PAM) signal. The quantizing converts
the actual sampled value into the nearest of the M amplitude levels. The PCM signal is obtained from the
quantized PAM signal by encoding each quantized sample value into a digital word.
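The three operations can be sketched end to end. The example below is a minimal illustration with assumed parameters (8-bit words, an 8-kHz sampling rate, a 440-Hz test tone), not a broadcast-grade codec:

```python
import numpy as np

# Sketch of the three PCM operations: sampling, uniform quantizing to M = 2**n
# levels, and encoding each quantized sample as an n-bit word in a serial bit stream.
n = 8                                   # bits per PCM word
M = 2 ** n                              # number of quantizer levels
fs = 8000                               # sampling rate, Hz
t = np.arange(0, 0.01, 1 / fs)          # sampling instants
m = np.sin(2 * np.pi * 440 * t)         # analog source, assumed to lie in [-1, 1]

# Quantize: map [-1, 1] onto integer levels 0..M-1
levels = np.clip(np.floor((m + 1) / 2 * M), 0, M - 1).astype(int)

# Encode: each quantized sample becomes an n-bit word
bits = "".join(format(lv, f"0{n}b") for lv in levels)

# Decode back to level midpoints; quantizing error is bounded by a half step
decoded = (np.array([int(bits[i:i + n], 2)
                     for i in range(0, len(bits), n)]) + 0.5) / M * 2 - 1
assert len(bits) == n * t.size
assert np.max(np.abs(decoded - m)) <= 1 / M   # half-step bound (step = 2/M)
```

The ratios in Table 69.2 follow from this structure: each added bit doubles M, halves the quantizing step, and improves (S/N)out by about 6 dB.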
Frequency-Shift Keying
The FSK signal can be characterized as one of two different types. One type is called discontinuous-phase FSK
since q(t) is discontinuous at the switching times. The discontinuous-phase FSK signal is represented by
    s(t) = A_c cos(ω_c t + θ_1)   for t in a time interval when a binary 1 is sent
    s(t) = A_c cos(ω_c t + θ_2)   for t in a time interval when a binary 0 is sent    (69.5)

FIGURE 69.4 IQ (in-phase and quadrature-phase) detector.

TABLE 69.2 Performance of a PCM System with Uniform Quantizing and No Channel Noise

Number of        Length of        Bandwidth of PCM Signal       Recovered Analog Signal Power-to-
Quantizer        PCM Word,        (First Null Bandwidth)(a)     Quantizing Noise Power Ratios (dB)
Levels Used, M   n (bits)                                       (S/N)pk out      (S/N)out
2                1                2B                            10.8             6.0
4                2                4B                            16.8             12.0
8                3                6B                            22.8             18.1
16               4                8B                            28.9             24.1
32               5                10B                           34.9             30.1
64               6                12B                           40.9             36.1
128              7                14B                           46.9             42.1
256              8                16B                           52.9             48.2
512              9                18B                           59.0             54.2
1024             10               20B                           65.0             60.2

(a) B is the absolute bandwidth of the input analog signal.
RADIO DISTANCE AND
DIRECTION INDICATOR
Luis W. Alvarez
Patented August 30, 1949
#2,480,208
An excerpt from Luis Alvarez’s patent application:
This invention relates to a
communications system and
more particularly to a system
for presenting in panoramic
form the location and disposi-
tion of objects as they might
be seen from the air. In par-
ticular, the system hereinafter
described is a radar or radio
echo detection system present-
ing objects and targets princi-
pally on the ground lying in the
path of flight of an airplane.
Ground radar systems were
already known and used by
the military. These involved
a highly directional antenna
alternately coupled to a
transmitter and receiver with
the antenna swept in a radial
fashion. The display con-
sisted of a cathode ray tube
with targets represented by
radial sweeps from the center
of the screen. Dr. Alvarez took
on the special problem of
panoramic presentation of
ground targets from aircraft.
He solved the computation
and display problems associ-
ated with the hyperbolic
shape of the radar beams as
transmitted and received from a moving aircraft. He also described handling pitch, roll, yaw, and other
disturbances. (Copyright © 1995, DewRay Products, Inc. Used with permission.)
where f_1 is called the mark (binary 1) frequency and f_2 is called the space (binary 0) frequency. The other type
is continuous-phase FSK. The continuous-phase FSK signal is generated by feeding the data signal into a
frequency modulator, as shown in Fig. 69.6(b). This FSK signal is represented by
or

    s(t) = Re{g(t)e^{jω_c t}}    (69.6)

where

    g(t) = A_c e^{jθ(t)}    (69.7)

and

    θ(t) = D_f ∫_{−∞}^{t} m(λ) dλ   for FSK    (69.8)
Detection of FSK is illustrated in Fig. 69.7.
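Continuous-phase FSK per Eqs. (69.6) through (69.8) can be sketched directly; the sample rate, carrier, deviation constant, and bit pattern below are assumed illustrative values:

```python
import numpy as np

# Continuous-phase FSK: the data signal drives a frequency modulator, so theta(t)
# is the running integral of Df * m(t) and the phase never jumps at bit boundaries.
fs, fc = 100_000.0, 5_000.0
Df = 2 * np.pi * 500                               # deviation constant, rad/volt-sec (assumed)
bit_rate = 1000
t = np.arange(0, 0.008, 1 / fs)
data = np.array([1, -1, 1, 1, -1, 1, -1, -1])      # +1/-1 for binary 1/0
m = np.repeat(data, int(fs / bit_rate))            # NRZ data signal m(t)

theta = np.cumsum(Df * m) / fs                     # theta(t) = Df * integral of m, Eq. (69.8)
s = np.cos(2 * np.pi * fc * t + theta)             # s(t) = Ac cos(wc t + theta), Ac = 1

# Phase continuity: no step in theta exceeds one sample's worth of deviation
assert np.max(np.abs(np.diff(theta))) <= Df / fs + 1e-9
assert s.size == t.size
```

A discontinuous-phase FSK generator, by contrast, switches between two free-running oscillators, so θ(t) jumps at the switching times.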
M-ary Phase-Shift Keying
If the transmitter is a PM transmitter with an M-level digital modulation signal, MPSK is generated at the
transmitter output. A plot of the permitted values of the complex envelope, g(t) = A
c
e
jq(t)
, would contain M
points, one value of g (a complex number in general) for each of the M multilevel values, corresponding to the
M phases that q is permitted to have.
MPSK can also be generated using two quadrature carriers modulated by the x and y components of the
complex envelope (instead of using a phase modulator)
g(t) = A
c
e
jq(t)
= x(t) + jy(t) (69.9)
FIGURE 69.5 A PCM transmission system.
    s(t) = A_c cos[ω_c t + D_f ∫_{−∞}^{t} m(λ) dλ]   for FSK
where the permitted values of x and y are
    x_i = A_c cos θ_i    (69.10)

    y_i = A_c sin θ_i    (69.11)

for the permitted phase angles θ_i, i = 1, 2, ..., M, of the MPSK signal. This is illustrated by Fig. 69.8, where the
signal processing circuit implements Eqs. (69.10) and (69.11).
MPSK, where M = 4, is called quadrature-phase-shift-keyed (QPSK) signaling.
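Equations (69.10) and (69.11) can be exercised directly for QPSK. The 45-degree phase offsets below are an assumed, conventional choice; the text fixes only that there are M permitted phases on a circle of radius A_c:

```python
import numpy as np

# QPSK constellation from Eqs. (69.10)-(69.11): M points on a circle of radius Ac.
Ac, M = 1.0, 4
theta_i = np.pi / 4 + 2 * np.pi * np.arange(M) / M   # 45, 135, 225, 315 degrees (assumed)
x_i = Ac * np.cos(theta_i)                           # Eq. (69.10)
y_i = Ac * np.sin(theta_i)                           # Eq. (69.11)
g_i = x_i + 1j * y_i                                 # permitted complex-envelope values

assert np.allclose(np.abs(g_i), Ac)                  # all points lie at radius Ac
assert len(set(np.round(np.angle(g_i), 6))) == M     # M distinct phases
```

Each 2-bit symbol selects one of the four g values; the signal processing circuit of Fig. 69.8 simply maps symbols to these (x_i, y_i) pairs.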
Quadrature Amplitude Modulation
Quadrature carrier signaling is called quadrature amplitude modulation (QAM). In general, QAM signal
constellations are not restricted to having permitted signaling points only on a circle (of radius A_c, as was the
case for MPSK). The general QAM signal is
    s(t) = x(t) cos ω_c t − y(t) sin ω_c t    (69.12)
FIGURE 69.6 Generation of FSK.
FIGURE 69.7 Detection of FSK.
where

    g(t) = x(t) + jy(t) = R(t)e^{jθ(t)}    (69.13)
The generation of QAM signals is shown in Fig. 69.8. The spectral efficiency for QAM signaling is shown in
Table 69.3.
FIGURE 69.8 Generation of QAM signals.
TABLE 69.3 Spectral Efficiency for QAM Signaling with Raised Cosine Roll-Off Pulse Shaping

Number of Levels,   Size of DAC,               η (bits/s per hertz)
M (symbols)         l (bits)      r = 0.0   r = 0.1   r = 0.25   r = 0.5   r = 0.75   r = 1.0
2                   1             1.00      0.909     0.800      0.667     0.571      0.500
4                   2             2.00      1.82      1.60       1.33      1.14       1.00
8                   3             3.00      2.73      2.40       2.00      1.71       1.50
16                  4             4.00      3.64      3.20       2.67      2.29       2.00
32                  5             5.00      4.55      4.00       3.33      2.86       2.50

DAC = digital-to-analog converter.
η = R/B_T bits/s per hertz, where R is the data rate and B_T is the transmission bandwidth.
r is the roll-off factor of the filter characteristic.
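The tabulated efficiencies are consistent with η = l/(1 + r): an l-bit symbol stream at rate R/l symbols/s occupies a raised-cosine transmission bandwidth B_T = (1 + r)R/l under the usual bandpass convention (an assumption stated here, not spelled out in the table). A quick consistency check:

```python
# Reproduce several Table 69.3 entries from eta = l / (1 + r).
checks = [
    (1, 0.25, 0.800),   # M = 2 row
    (2, 0.5, 1.33),     # M = 4 row
    (4, 0.1, 3.64),     # M = 16 row
    (5, 1.0, 2.50),     # M = 32 row
]
for l, r, expected in checks:
    eta = l / (1 + r)            # bits/s per hertz
    assert abs(eta - expected) < 0.005
```

The pattern shows the two levers available to a designer: more bits per symbol (larger M) raises η, while gentler filter roll-off (larger r) lowers it.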
Defining Terms
Modulation: The process of impressing the source information onto a bandpass signal with a carrier frequency
f_c. It can be expressed as

    s(t) = Re{g(t)e^{jω_c t}}

where g(t) is a function of the modulating signal m(t). That is,

    g(t) = g[m(t)]
g[·] performs a mapping operation on m(t). The particular relationship that is chosen for g(t) in terms
of m(t) defines the type of modulation used.
Superheterodyne receiver: Most receivers employ the superheterodyne receiving technique, which consists
of either down-converting or up-converting the input signal to some convenient frequency band, called
the intermediate frequency band, and then extracting the information (or modulation) by using an
appropriate detector. This basic receiver structure is used for the reception of all types of bandpass signals,
such as television, FM, AM, satellite, and radar signals.
Related Topics
69.2 Radio Broadcasting • 70.1 Coding
References
L. W. Couch, Digital and Analog Communication Systems, New York: Prentice-Hall, 1995.
F. de Jager, “Delta modulation, a method of PCM transmission using a 1-unit code,” Philips Res. Rep., vol. 7, pp. 442–466,
Dec. 1952.
J.H. Downing, Modulation Systems and Noise, Englewood Cliffs, N.J.: Prentice-Hall, 1964.
J. Dunlop and D.G. Smith, Telecommunications Engineering, London: Van Nostrand, 1989.
B.P. Lathi, Modern Digital and Analog Communication Systems, New York: CBS College, 1983.
J.H. Park, Jr., “On binary DPSK detection,” IEEE Trans. Commun., COM-26, pp. 484–486, 1978.
M. Schwartz, Information Transmission, Modulation and Noise, New York: McGraw-Hill, 1980.
Further Information
The monthly journal IEEE Transactions on Communications describes telecommunication techniques. The
performance of M-ary QAM schemes is evaluated in its March 1991 issue, pp. 405–408. The IEEE magazine
IEEE Communications is a valuable source.
Another source is IEEE Transactions on Broadcasting, which is published quarterly by The Institute of
Electrical and Electronics Engineers, Inc.
The biweekly magazine Electronics Letters investigates the error probability of coherent PSK and FSK systems
with multiple co-channel interferences in its April 11, 1991, issue, pp. 640–642. Another relevant source regard-
ing the coherent detection of MSK is described on pp. 623–625 of the same issue. All subscriptions inquiries
and orders should be sent to IEE Publication Sales, P.O. Box 96, Stevenage, Herts, SG1 2SD, United Kingdom.
69.2 Radio Broadcasting
Jefferson F. Lindsey III and Dennis F. Doelitzsch
Standard Broadcasting (Amplitude Modulation)
Standard broadcasting refers to the transmission of voice and music received by the general public in the 535-
to 1705-kHz frequency band. Amplitude modulation is used to provide service ranging from that needed for
small communities to higher-power broadcast stations needed for larger regional areas. The primary service
area is defined as the area in which the groundwave signal is not subject to objectionable interference or
objectionable fading. The secondary service area refers to an area serviced by skywaves and not subject to
objectionable interference. Intermittent service area refers to an area receiving service from either a groundwave
or a skywave but beyond the primary service area and subject to some interference and fading.

THE REVOLUTIONARY TECHNOLOGY OF RADIO

The beginning of the present century saw the birth of several technologies that were to be revolutionary in their impact. The most exciting of these was radio or, as it was generally called at the time, “wireless”. No other technology would seem to obliterate the barriers of distance in human communication or to bring individuals together with such immediacy and spontaneity. And seldom had there emerged an activity that seemed so mysterious and almost magical to most of the population.

Radio was mysterious not only to the layman, but also to many engineers and technically informed individuals. The mystery lay largely in radio’s application of principles and phenomena only recently identified by physicists and engineers working at the frontiers of their specialties. The existence of electromagnetic waves that traveled like light had been predicted by the brilliant physicist James Clerk Maxwell in the 1860s and proven by the young German Heinrich Hertz in the 1880s. The possible use of these waves for communicating through space without wires occurred to many; however, the first practical steps to making radio useful are generally attributed to Oliver Lodge in England, Guglielmo Marconi in Italy, and Aleksandr Popov in Russia. Marconi’s broadcast of Morse code across the Atlantic in 1901 first showed the world just what enormous potential radio had for changing the whole concept of long-distance communication. The next few years saw feverish activity everywhere as men tried to translate the achievements of the pioneers into the foundations of a practical technology.

By 1912, radio technology had attracted a small number of dedicated individuals who identified their own future with the progress of their chosen field. Some of these had organized themselves into small, localized societies, but it was clear to many that a broader vision was needed if radio practitioners were to achieve the recognition and respect of technical professionals. It was with such a vision in mind that representatives of two of these local societies met in New York City in May 1912 to form the Institute of Radio Engineers. The IRE was to be an international society dedicated to the highest professional standards and to the advancement of the theory and practice of radio technology.

The importance of radio lay not simply in its expansion of the means of human communication over distances, but also in its exploitation and expansion of very novel scientific and technical capabilities. As the century progressed, radio would give rise to the 20th century’s most revolutionary technology of all — electronics. (Courtesy of the IEEE Center for the History of Electrical Engineering.)

Frequency Allocations

The carrier frequencies for standard broadcasting in the United States (referred to internationally as medium-wave
broadcasting) are designated in the Federal Communications Commission (FCC) Rules and Regulations,
Vol. III, Part 73. A total of 117 carrier frequencies are allocated from 540 to 1700 kHz in 10-kHz intervals. Each
carrier frequency is required by the FCC rules to deviate no more than ±20 Hz from the allocated frequency,
to minimize heterodyning from two or more interfering stations. Double-sideband full-carrier modulation,
commonly called amplitude modulation (AM), is used in standard broadcasting for sound transmission. Typical
modulation frequencies for voice and music range from 50 Hz to 10 kHz. Each channel is generally thought
of as 10 kHz in width, and thus the frequency band is designated from 535 to 1705 kHz; however, when the
modulation frequency exceeds 5 kHz, the radio frequency bandwidth of the channel exceeds 10 kHz and
adjacent channel interference may occur. To improve the high-frequency performance of transmission and to
compensate for the high-frequency roll-off of many consumer receivers, FCC rules require that stations boost
the high-frequency amplitude of transmitted audio using preemphasis techniques. In addition, stations may
also use multiplexing to transmit stereophonic programming. The FCC adopted Motorola’s C-QUAM compatible
quadrature amplitude modulation in 1994. Approximately 700 AM stations transmit in stereo.
Channel and Station Classifications
In standard broadcast (AM), stations are classified according to their operating power, protection from inter-
ference, and hours of operation. A Class A station operates with 10 to 50 kW of power servicing a large area
with primary, secondary, and intermittent coverage and is protected from interference both day and night.
These stations are called “clear channel” stations because the channel is cleared of nighttime interference over
a major portion of the country. Class B stations operate full time with transmitter powers of 0.25 to 50 kW
and are designed to render primary service only over a principal center of population and the rural area
contiguous thereto. While nearly all Class A stations operate with 50 kW, most Class B stations must restrict
their power to 5 kW or less to avoid interfering with other stations. Class B stations operating in the 1605 to
1705 kHz band are restricted to a power level of 10 kW daytime and 1 kW nighttime. Class C stations operate
on six designated channels (1230, 1240, 1340, 1400, 1450, and 1490) with a maximum power of 1 kW or less
full time and render primarily local service to smaller communities. Class D stations operate on Class A or B
frequencies with Class B transmitter powers during daytime, but nighttime operation, if permitted at all, must
be at low power (less than 0.25 kW) with no protection from interference.
Although Class A stations cover large areas at night, approximately in a 1220-km (750-mi) radius, the
nighttime coverage of Class B, C, and D stations is limited by interference from other stations, electrical devices,
and atmospheric conditions to a relatively small area. Class C stations, for example, have an interference-free
nighttime coverage radius of approximately 8 to 16 km. As a result, there may be large differences in the area
that the station covers daytime versus nighttime. With over 5200 AM stations licensed for operation by the
FCC, interference, both day and night, is a factor that significantly limits the service which stations may provide.
In the absence of interference, a daytime signal strength of 2 mV/m is required for reception in populated areas
of more than 2500, while a signal of 0.5 mV/m is generally acceptable in less populated areas. Secondary
nighttime service is provided in areas receiving a 0.5-mV/m signal 50% or more of the time without objec-
tionable interference. Table 69.4 indicates the daytime contour overlap limits. However, it should be noted that
these limits apply to new stations and modifications to existing stations. Nearly every station on the air was
allocated prior to the implementation of these rules when the interference criteria were less restrictive.
Field Strength
The field strength produced by a standard broadcast station is a key factor in determining the primary and
secondary service areas and interference limitations of possible future radio stations. The field strength limitations
are specified as field intensities by the FCC with the units volts per meter; however, measuring devices may
read volts or decibels referenced to 1 mW (dBm), and a conversion may be needed to obtain the field intensity.
The power received may be measured in dBm and converted to watts. Voltage readings may be converted to
watts by squaring the root mean square (rms) voltage and dividing by the field strength meter input resistance,
which is typically on the order of 50 or 75 Ω. Additional factors needed to determine electric field intensity
are the power gain and losses of the field strength receiving antenna system. Once the power gain and losses
are known, the effective area with loss compensation of the field strength receiver antenna may be obtained as
    A_eff = GLλ²/(4π)    (69.14)

where A_eff = effective area including loss compensation, m²; G = power gain of field strength antenna, W/W;
λ = wavelength, m; and L = mismatch loss and cable loss factor, W/W.
From this calculation, the power density in watts per square meter may be obtained by dividing the received
power by the effective area, and the electric field intensity may be calculated as

    E = √(℘ Z_fs)    (69.15)

where E = electric field intensity, V/m; ℘ = power density, W/m²; and Z_fs = 120π Ω, the impedance of free space.
The protected service contours and permissible interference contours for standard broadcast stations shown
in Table 69.4, along with a knowledge of the field strength of existing broadcast stations, may be used in
determining the potential for establishing new standard broadcast stations.
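A worked example of the conversion chain in Eqs. (69.14) and (69.15) follows; the measurement frequency, antenna gain, loss factor, and meter reading are assumed values chosen for illustration:

```python
import math

# Convert a received power reading to field intensity via the effective area.
f = 1.0e6                    # measurement frequency, Hz (assumed)
lam = 2.998e8 / f            # wavelength, m
G = 1.5                      # power gain of field-strength antenna, W/W (assumed)
L = 0.9                      # mismatch and cable loss factor, W/W (assumed)
A_eff = G * L * lam ** 2 / (4 * math.pi)    # Eq. (69.14): effective area, m^2

P_dbm = -40.0                               # receiver reading, dBm (assumed)
P_w = 10 ** (P_dbm / 10) / 1000             # dBm -> watts
S = P_w / A_eff                             # power density, W/m^2
E = math.sqrt(S * 120 * math.pi)            # Eq. (69.15): field intensity, V/m

assert A_eff > 0
assert abs(E * E - S * 120 * math.pi) < 1e-15   # E^2 / Z_fs round-trips to S
```

The same arithmetic run in reverse converts a target field-intensity contour back into the receiver reading a surveyor should expect.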
Propagation
One of the major factors in the determination of field strength is the propagation characteristic that is described
by the change in electric field intensity with an increase in distance from the broadcast station antenna. This
variation depends on a number of factors including frequency, distance, surface dielectric constant, surface loss
tangent, polarization, local topography, and time of day. Generally speaking, groundwave propagation occurs
at shorter ranges both during day and night periods. Skywave propagation permits longer ranges and occurs
during night periods, and thus some stations must either reduce power or cease to operate at night to avoid
causing interference. Propagation curves in the broadcast industry are frequently referred to a reference level
of 100 mV/m at 1 km; however, a more general expression of groundwave propagation may be obtained by
using the Bremmer series [Bremmer, 1949]. A typical groundwave propagation curve for electric field strength
as a function of distance is shown in Fig. 69.9 for an operating frequency of 770–810 kHz. The ground
conductivity varies from 0.1 to 5000 mS/m, and the ground relative dielectric constant is 15.
The effective radiated power (ERP) refers to the effective power output from the antenna in a specified
direction and includes the transmitter power output, transmission line losses, and antenna power gain. The
ERP in most cases exceeds the transmitter output power, since the antenna power gain is normally 2 or more.
For a hypothetical perfect isotropic radiator with a power gain of 1, the ERP is found to be
    ERP = E²r²/30    (69.16)
TABLE 69.4 Protected Service Signal Intensities for Standard Broadcasting (AM)

                                     Signal Strength Contour of Area          Permissible Interfering
Class of   Power      Class of       Protected from Objectionable             Signal (mV/m)
Station    (kW)       Channel Used   Interference* (mV/m)
                                     Day(†)          Night                    Day(†)       Night(‡)
A          10–50      Clear          SC 100          SC 500 50% SW            SC 5         SC 25
                                     AC 500          AC 500 GW                AC 250       AC 250
B          0.25–50    Clear          500             2000(‡)                  25           25
                      Regional       AC 250                                   250
C          0.25–1     Local          500             Not precise(§)           SC 25        Not precise
D          0.25–50    Clear          500             Not precise              SC 25        Not precise
                      Regional       AC 250

*When a station is already limited by interference from other stations to a contour of higher value than that
normally protected for its class, this higher-value contour shall be the established protection standard for such
station. Changes proposed by Class A and B stations shall be required to comply with the following restrictions.
Those interferers that contribute to another station’s RSS using the 50% exclusion method are required to reduce
their contribution to that RSS by 10%. Those lesser interferers that contribute to a station’s RSS using the 25%
exclusion method but do not contribute to that station’s RSS using the 50% exclusion method may make changes
not to exceed their present contribution. Interferers not included in a station’s RSS using the 25% exclusion
method are permitted to increase radiation as long as the 25% exclusion threshold is not equaled or exceeded.
In no case will a reduction be required that would result in a contributing value that is below the pertinent
value specified in the table.
(†) Groundwave.
(‡) Skywave field strength for 10% or more of the time. For Alaska, Class SC is limited to 5 mV/m.
(§) During nighttime hours, Class C stations in the contiguous 48 states may treat all Class B stations assigned to
1230, 1240, 1340, 1400, 1450, and 1490 kHz in Alaska, Hawaii, Puerto Rico and the U.S. Virgin Islands as if they
were Class C stations.
Note: SC = same channel; AC = adjacent channel; SW = skywave; GW = groundwave; RSS = root of sum squares.
Source: FCC Rules and Regulations, Revised 1991; vol. III, pt. 73.182(a).
FIGURE 69.9 Typical groundwave propagation for standard AM broadcasting. (Source: 1986 National Association of
Broadcasters.)
where E is the electric field intensity, V/m, and r is the distance, m. For a distance of 1 km (1000 m), the ERP
required to produce a field intensity of 100 mV/m is found to be 333.3 W. Since the field intensity is proportional
to the square root of the power, field intensities may be determined at other powers.
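The numbers in this paragraph check out directly against Eq. (69.16):

```python
import math

# Verify the worked example: E = 100 mV/m at r = 1 km from an isotropic
# radiator (power gain 1) requires ERP = E^2 * r^2 / 30 = 333.3 W.
E = 0.100                      # field intensity, V/m
r = 1000.0                     # distance, m
erp = E ** 2 * r ** 2 / 30     # Eq. (69.16)
assert abs(erp - 333.3) < 0.05

# Field intensity scales as the square root of power: 4x the ERP doubles E.
E2 = math.sqrt(30 * (4 * erp)) / r
assert abs(E2 - 2 * E) < 1e-9
```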
Skywave propagation necessarily involves some fading and less predictable field intensities and is most
appropriately described in terms of statistics or the percentage of time a particular field strength level is found.
Figure 69.10 shows skywave propagation for a 100-mV/m field strength at a distance of 1 km for midpoint path
latitudes of 35 to 50 degrees.
Transmitters
Standards that cover AM broadcast transmitters are given in the Electronic Industry Association (EIA) Standard
TR-101A, “Electrical Performance Standard for Standard Broadcast Transmitters.” Parameters and methods for
measurement include the following: carrier output rating, carrier power output capability, carrier frequency
range, carrier frequency stability, carrier shift, carrier noise level, magnitude of radio frequency (RF) harmonics,
normal load, transmitter output circuit adjustment facilities, RF and audio interface definitions, modulation
capability, audio input level for 100% modulation, audio frequency response, audio frequency harmonic
distortion, rated power supply, power supply variation, operating temperature characteristics, and power input.
Standard AM broadcast transmitters range in power output from 5 W up to 50 kW units. While solid-state
devices are used for many models (especially the lower-powered units), several manufacturers still retain tubes
in the final amplifiers of their high-powered models. This is changing, however, with the introduction in recent
years of 50-kW fully transistorized models. A block diagram of a typical 1-kW solid-state transmitter is shown
in Fig. 69.11.
Antenna Systems
The antenna system for a standard AM broadcast station typically consists of a quarter-wave vertical tower, a
ground system of 120 or more quarter-wave radials buried a few inches underground, and an antenna tuning
FIGURE 69.10 Skywave propagation for standard AM broadcasting. (Source: FCC Rules and Regulations, 1982, vol. III,
pt. 73.190, fig. 2.)
unit to “match” the complex impedance of the antenna system to the characteristic impedance of the transmitter
and transmission line so that maximum transfer of power may occur. Typical heights for AM broadcast towers
range from 150 to 500 ft. When the radiated signal must be modified to prevent interference to other stations
or to provide better service in a particular direction, additional towers may be combined in a phased array to
produce the desired field intensity contours. For example, if a station power increase would cause interference
with existing stations, a directional array could be designed that would tailor the coverage to protect the existing
stations while allowing increases in other directions. The protection requirements can generally be met with
arrays consisting of four towers or fewer, but complex arrays of 12 or more towers have been constructed to
meet stringent requirements at a particular location. An example of a directional antenna pattern is shown
in Fig. 69.12. This pattern provides major coverage to the southwest and restricts radiation (and thus interfer-
ence) towards the northeast.
Frequency Modulation
Frequency-modulation (FM) broadcasting refers to the transmission of voice and music received by the general
public in the 88- to 108-MHz frequency band. FM is used to provide higher-fidelity reception than is available
with standard broadcast AM. In 1961 stereophonic broadcasting was introduced with the addition of a double-
sideband suppressed carrier for transmission of a left-minus-right difference signal. The left-plus-right sum
channel is sent with use of normal FM. Some FM broadcast systems also include a subsidiary communications
authorization (SCA) subcarrier for private commercial uses. FM broadcast is typically limited to line-of-sight
ranges. As a result, FM coverage is localized to a range of 75 mi (120 km) depending on the antenna height and ERP.
Frequency Allocations
The 100 carrier frequencies for FM broadcast range from 88.1 to 107.9 MHz and are equally spaced every 200
kHz. The channels from 88.1 to 91.9 MHz are reserved for educational and noncommercial broadcasting and
those from 92.1 to 107.9 MHz for commercial broadcasting. Each channel has a 200-kHz bandwidth. The
maximum frequency swing under normal conditions is ±75 kHz. Stations operating with an SCA may under
certain conditions exceed this level, but in no event may exceed a frequency swing of ±82.5 kHz. The carrier
frequency is required to be maintained within ±2000 Hz. The frequencies used for FM broadcasting generally
limit the coverage to the line-of-sight or a slightly greater distance. The actual coverage area is determined by
the ERP of the station and the height of the transmitting antenna above the average terrain in the area. Either
increasing the power or raising the antenna will increase the coverage area.
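The 100-channel grid described above can be generated directly. The following is a minimal sketch; the function names are illustrative, not part of any FCC nomenclature:

```python
# Sketch of the FM channel grid described above: 100 carrier
# frequencies from 88.1 to 107.9 MHz, spaced every 200 kHz, with the
# channels below 92 MHz reserved for noncommercial broadcasting.

def fm_carriers():
    """Return the 100 FM broadcast carrier frequencies in MHz."""
    return [round(88.1 + 0.2 * n, 1) for n in range(100)]

def is_noncommercial(freq_mhz):
    """Carriers from 88.1 to 91.9 MHz are reserved for educational
    and noncommercial broadcasting."""
    return freq_mhz <= 91.9

carriers = fm_carriers()
assert len(carriers) == 100
assert carriers[0] == 88.1 and carriers[-1] == 107.9
assert sum(is_noncommercial(f) for f in carriers) == 20  # 88.1 .. 91.9
```

The `round(..., 1)` guards against floating-point drift so that each carrier lands exactly on the 0.1-MHz grid.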
Station Classifications
In FM broadcast, stations are classified according to their maximum allowable ERP and the transmitting antenna
height above average terrain in their service area. Class A stations provide primary service to a radius of about
FIGURE 69.11 Block diagram of typical 1-kW solid-state AM transmitter. (Source: Broadcast Electronics Inc., Quincy, Ill.
Reprinted with permission.)
FIGURE 69.12 Directional AM antenna pattern for a six-element array. (Source: WDDD-AM, Marion, Ill., and Ralph
Evans Associates.)
28 km with 6000 W of ERP at a maximum height of 100 m. The most powerful class, Class C, operates with
maximums of 100,000 W of ERP and heights up to 600 m with a primary coverage radius of over 92 km. The
powers and heights above average terrain (HAAT) for all of the classes are shown in Table 69.5. All classes may
operate at antenna heights above those specified but must reduce the ERP accordingly. Stations may not exceed
the maximum power specified, even if antenna height is reduced. The classification of the station determines
the allowable distance to other co-channel and adjacent channel stations.
Field Strength and Propagation
The field strength produced by an FM broadcast station depends on the ERP, antenna heights, local terrain,
tropospheric scattering conditions, and other factors. From a statistical point of view, however, an estimate of
the field intensity may be obtained from Fig. 69.13. A factor in the determination of new licenses for FM
broadcast is the separation between allocated co-channel and adjacent channel stations, the class of station,
and the antenna heights. The spacings are given in Table 69.6. The primary coverage of all classes of stations
(except B and B1, which are 0.5 mV/m and 0.7 mV/m, respectively) is the 1.0 mV/m contour. The distance to
the primary contour, as well as to the “city grade” or 3.16 mV/m contour may be estimated using Fig. 69.13.
Although FM broadcast propagation is generally thought of as line-of-sight, larger ERPs along with the effects
of diffraction, refraction, and tropospheric scatter allow coverage slightly greater than line-of-sight.
Transmitters
FM broadcast transmitters typically range in power output from 10 W to 50 kW. A block diagram of a dual
FM transmitter is shown in Fig. 69.14. This system consists of two 25-kW transmitters that are operated in
parallel and that provide increased reliability in the event of a failure in either the exciter or transmitter power
amplifier. The highest-powered solid-state transmitters are currently 10 kW, but manufacturers are developing
new devices that will make higher-power solid-state transmitters both cost-efficient and reliable.
Antenna Systems
FM broadcast antenna systems are required to have a horizontally polarized component. Most antenna systems,
however, are circularly polarized, having both horizontal and vertical components. The antenna system, which
usually consists of several individual radiating bays fed as a phased array, has a radiation characteristic that
concentrates the transmitted energy in the horizontal plane toward the population to be served, minimizing
the radiation out into space and down toward the ground. Thus, the ERP towards the horizon is increased with
gains up to 10 dB. This means that a 5-kW transmitter coupled to an antenna system with a 10-dB gain would have
an ERP of 50 kW. Directional antennas may be employed to avoid interference with other stations or to meet spacing
requirements. Figure 69.15 is a plot of the horizontal and vertical components of a typical nondirectional circularly
polarized FM broadcast antenna showing the effect upon the pattern caused by the supporting tower.
Preemphasis
Preemphasis is employed in an FM broadcast transmitter to improve the received signal-to-noise ratio. The
preemphasis upper-frequency limit is based on a time constant of 75 µs, as required by the FCC for FM
TABLE 69.5 FM Station Classifications, Powers, and Tower Heights
Station Class Maximum ERP HAAT, m (ft) Distance, km
A 6 kW (7.8 dBk) 100 (328) 28
B1 25 kW (14.0 dBk) 100 (328) 39
B 50 kW (17.0 dBk) 150 (492) 52
C3 25 kW (14.0 dBk) 100 (328) 39
C2 50 kW (17.0 dBk) 150 (492) 52
C1 100 kW (20.0 dBk) 299 (981) 72
C 100 kW (20.0 dBk) 600 (1968) 92
Source: FCC Rules and Regulations, Revised 1991; vol. III, Part
73.211(b)(1).
FIGURE 69.13 Propagation for FM broadcasting. (Source: FCC Rules and Regulations, Revised 1990; vol. III, pt. 73.333.)
broadcast transmitters. Audio frequencies from 50 to 2120 Hz are transmitted with normal FM, whereas audio
frequencies from 2120 Hz to 15 kHz are emphasized with a larger modulation index. There is significant signal-
to-noise improvement when the receiver is equipped with a matching deemphasis circuit.
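The 2120-Hz breakpoint quoted above follows directly from the 75-µs time constant: the corner frequency of a single-zero preemphasis network is f = 1/(2πτ). A short numerical check, with a helper name that is illustrative only:

```python
import math

# The preemphasis transition frequency follows from the 75-microsecond
# time constant mandated by the FCC: f = 1 / (2 * pi * tau).

tau = 75e-6  # seconds
f_corner = 1.0 / (2.0 * math.pi * tau)
print(round(f_corner))  # 2122, matching the ~2120-Hz breakpoint above

def preemphasis_gain_db(f_hz, tau=75e-6):
    """Magnitude response of a single-zero preemphasis network,
    relative to its low-frequency gain."""
    return 10.0 * math.log10(1.0 + (2.0 * math.pi * f_hz * tau) ** 2)

# Below the corner the boost is negligible; at the 15-kHz top of the
# audio band it is roughly 17 dB.
assert preemphasis_gain_db(50) < 0.1
assert 16.0 < preemphasis_gain_db(15000) < 18.0
```

The matching receiver deemphasis network applies the inverse response, rolling off high-frequency noise by the same amount and producing the signal-to-noise improvement noted above.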
TABLE 69.6 Minimum Distance Separation Requirements for FM Stations, km (mi)
Station Class Relation Co-Channel 200 kHz 400/600 kHz 10.6/10.8 MHz
A to A 115 (71) 72 (45) 31 (19) 10 (6)
A to B1 143 (89) 96 (60) 48 (30) 12 (7)
A to B 178 (111) 113 (70) 69 (43) 15 (9)
A to C3 142 (88) 89 (55) 42 (26) 12 (7)
A to C2 166 (103) 106 (66) 55 (34) 15 (9)
A to C1 200 (124) 133 (83) 75 (47) 22 (14)
A to C 226 (140) 165 (103) 95 (59) 29 (18)
B1 to B1 175 (109) 114 (71) 50 (31) 14 (9)
B1 to B 211 (131) 145 (90) 71 (44) 17 (11)
B1 to C3 175 (109) 114 (71) 50 (31) 14 (9)
B1 to C2 200 (124) 134 (83) 56 (35) 17 (11)
B1 to C1 233 (145) 161 (100) 77 (48) 24 (15)
B1 to C 259 (161) 193 (120) 105 (65) 31 (19)
B to B 241 (150) 169 (105) 74 (46) 20 (12)
B to C3 211 (131) 145 (90) 71 (44) 17 (11)
B to C2 211 (131) 145 (90) 71 (44) 17 (11)
B to C1 270 (168) 195 (121) 79 (49) 27 (17)
B to C 274 (170) 217 (135) 105 (65) 35 (22)
C3 to C3 153 (95) 99 (62) 43 (27) 14 (9)
C3 to C2 177 (110) 117 (73) 56 (35) 17 (11)
C3 to C1 211 (131) 144 (90) 76 (47) 24 (15)
C3 to C 237 (147) 176 (109) 96 (60) 31 (19)
C2 to C2 190 (118) 130 (81) 58 (36) 20 (12)
C2 to C1 224 (139) 158 (98) 79 (49) 27 (17)
C2 to C 237 (147) 176 (109) 96 (60) 31 (19)
C1 to C1 245 (152) 177 (110) 82 (51) 34 (21)
C1 to C 270 (168) 209 (130) 105 (65) 35 (22)
C to C 290 (180) 241 (150) 105 (65) 48 (30)
Source: FCC Rules and Regulations, Revised 1991; vol. III, pt. 73.207.
FIGURE 69.14 Block diagram of typical FM transmitter. (Source: Harris Corporation, Quincy, Ill.)
FM Spectrum
The monophonic system was initially developed to allow sound transmissions for audio frequencies from 50
to 15,000 Hz to be contained within a ±75-kHz RF bandwidth. With the development of FM stereo, the original
FM signal (consisting of a left-plus-right channel) is transmitted in a smaller bandwidth to be compatible with
a monophonic FM receiver, and a left-minus-right channel is frequency-multiplexed on a 38-kHz subcarrier
using double-sideband suppressed-carrier modulation. An unmodulated 19-kHz subcarrier is derived from the 38-kHz
subcarrier to provide a synchronous demodulation reference for the stereophonic receiver. The synchronous
detector at 38 kHz recovers the left-minus-right channel information, which is then combined with the left-
plus-right channel information in sum and difference combiners to produce the original left-channel and right-
channel signals. In addition, stations may utilize an SCA in a variety of ways, such as paging, data transmission,
specialized foreign language programs, radio reading services, utility load management, and background music.
An FM stereo station may utilize multiplex subcarriers within the range of 53 to 99 kHz with up to 20%
modulation of the main carrier using any form of modulation. The only requirement is that the station does
not exceed its occupied bandwidth limitations.
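The baseband assembly described above can be sketched numerically. This is a minimal model assuming constant instantaneous left/right audio values; the pilot amplitude is illustrative, not the actual FCC injection level:

```python
import math

# Sketch of the FM stereo composite baseband: the left-plus-right sum
# is sent as normal audio, the left-minus-right difference is
# double-sideband suppressed-carrier modulated onto a 38-kHz
# subcarrier, and an unmodulated 19-kHz pilot (half the subcarrier
# frequency) gives the receiver a synchronous demodulation reference.

F_PILOT = 19_000.0        # Hz
F_SUB = 2 * F_PILOT       # 38 kHz, phase-locked to the pilot

def stereo_composite(left, right, t):
    """Composite baseband value at time t for instantaneous L/R audio."""
    lpr = left + right                              # mono-compatible sum
    lmr = left - right                              # difference channel
    carrier = math.cos(2 * math.pi * F_SUB * t)     # suppressed carrier
    pilot = 0.1 * math.sin(2 * math.pi * F_PILOT * t)
    return lpr + lmr * carrier + pilot

# A monophonic receiver responds only to the audio-band content: the
# DSB-SC product and the pilot both average to zero over one pilot
# period, leaving just L + R.
N, T = 1000, 1.0 / F_PILOT
avg = sum(stereo_composite(0.7, 0.3, n * T / N) for n in range(N)) / N
assert abs(avg - 1.0) < 1e-6  # L + R = 0.7 + 0.3
```

A stereo receiver doubles the 19-kHz pilot to regenerate the 38-kHz carrier, synchronously detects L−R, and forms (L+R)±(L−R) to recover the individual channels, as the text describes.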
Defining Terms
Effective radiated power: Refers to the effective power output from an antenna in a specified direction and
includes transmitter output power, transmission line loss and antenna power gain.
Electric field intensity: Measure of signal strength in volts per meter used to determine channel allocation
criteria and interference considerations.
FIGURE 69.15 Typical nondirectional 92.5-MHz FM antenna characteristics showing the effect of the tower structure.
(Source: Electronics Research, Inc., Newburgh, Ind.)
Primary service: Refers to areas in which the groundwave signal is not subject to objectionable interference
or objectionable fading.
SCA: Subsidiary communications authorization for paging, data transmission, specialized foreign language
programs, radio reading services, utility load management, and background music using multiplexed
subcarriers from 53 to 99 kHz in connection with broadcast FM.
Secondary service: Refers to areas serviced by skywaves and not subject to objectionable interference.
Related Topics
69.1 Modulation and Demodulation • 38.1 Wire
References
A. F. Barghausen, “Medium frequency sky wave propagation in middle and low latitudes,” IEEE Trans. Broadcast,
vol. 12, pp. 1–14, June 1966.
G.W. Bartlett, Ed., National Association of Broadcasters Engineering Handbook, 6th ed., Washington: The National
Association of Broadcasters, 1975.
H. Bremmer, Terrestrial Radio Waves: Theory of Propagation, Amsterdam: Elsevier, 1949.
Electronic Industries Association, Standard TR-101A, Electrical Performance Standards for AM Broadcast Trans-
mitters, 1948.
Federal Communications Commission, Rules and Regulations, vol. III, parts 73 and 74, October 1982.
Further Information
Pike & Fischer, Inc., in Bethesda, Md., offers an updated FCC rule service for a fee.
Several trade journals are good sources for up-to-date information such as Broadcast Engineering, Overland
Park, Kan., and Radio World, Falls Church, Va.
Application-oriented computer software is available from R.F. Systems, Shawnee Mission, Kan.
The Society of Broadcast Engineers (SBE), Indianapolis, Ind., and the National Association of Broadcasters
(NAB), Washington, D.C., are sources of further information.
69.3 Television Systems
Jerry Whitaker
The technology of television is based on the conversion of light rays from still or moving scenes and pictures
into electronic signals for transmission or storage, and subsequent reconversion into visual images on a screen.
A similar function is provided in the production of motion picture film; however, where film records the
brightness variations of a complete scene on a single frame in a short exposure no longer than a fraction of a
second, the elements of a television picture must be scanned one piece at a time. In the television system, a
scene is dissected into a frame composed of a mosaic of picture elements (pixels). A pixel is defined as the
smallest area of a television image that can be transmitted within the parameters of the system. This process
is accomplished by:
• Analyzing the image with a photoelectric device in a sequence of horizontal scans from the top to the
bottom of the image to produce an electric signal in which the brightness and color values of the
individual picture elements are represented as voltage levels of a video waveform
• Transmitting the values of the picture elements in sequence as voltage levels of a video signal
• Reproducing the image of the original scene in a video signal display of parallel scanning lines on a
viewing screen
TELEVISION SYSTEM
Philo T. Farnsworth
Patented August 26, 1930
#1,773,980
An excerpt from Philo Farnsworth’s patent application:
In the process and apparatus of the present invention, light from all portions of the object whose image is
to be transmitted, is focused at one time upon a light sensitive plate of a photo-electrical cell to thereby develop
an electronic discharge from said plate, in which each portion of the cross-section of such electronic discharge
will correspond in electrical intensity with the intensity of light imposed on that portion of the sensitive plate
from which the electrical discharge originated. Such a discharge is herein termed an electrical image.
Up to this time, the television process attempted to transmit an image converted to an electrical signal
by scanning with mechanically moving apparatus during the brief time period the human eye would
retain a picture. Such equipment could not move at sufficient speed to provide full-shaded images to the
viewer. At the age of 20, Farnsworth succeeded in producing the first all-electronic television image. It
took more than two decades to be adopted for consumer use, but it is easy to see how important this
invention has become in today’s society. (Copyright © 1995, DewRay Products, Inc. Used with permission.)
Scanning Lines and Fields
The image pattern of electrical charges on a camera tube target or CCD, corresponding to the brightness levels
of a scene, is converted to a video signal in a sequential order of picture elements in the scanning process. At
the end of each horizontal line sweep, the video signal is blanked while the beam returns rapidly to the left side
of the scene to start scanning the next line. This process continues until the image has been scanned from top
to bottom to complete one field scan.
After completion of this first field scan, at the midpoint of the last line, the beam again is blanked as it
returns to the top center of the target where the process is repeated to provide a second field scan. The spot
size of the beam as it impinges upon the target must be fine enough to leave unscanned areas between lines
for the second scan. The pattern of scanning lines covering the area of the target, or the screen of a picture
display, is called a raster.
Interlaced Scanning Fields
Because of the half-line offset for the start of the beam return to the top of the raster and for the start of the
second field, the lines of the second field lie in-between the lines of the first field. Thus, the lines of the two
are interlaced. The two interlaced fields constitute a single television frame. Figure 69.16 shows a frame scan
with interlacing of the lines of two fields.
Reproduction of the camera image on a cathode ray tube (CRT) or solid-state display is accomplished by
an identical operation, with the scanning beam modulated in density by the video signal applied to an element
of the electron gun or control element, in the case of a solid-state display device. This control voltage to the
display varies the brightness of each picture element on the screen.
Blanking of the scanning beam during the return trace is provided for in the video signal by a “blacker-
than-black” pulse waveform. In addition, in most receivers and monitors another blanking pulse is generated
from the horizontal and vertical scanning circuits and applied to the display system to ensure a black screen
during scanning retrace. The retrace lines are shown as diagonal dashed lines in Fig. 69.16.
The interlaced scanning format, standardized for monochrome and compatible color, was chosen primarily
for two partially related and equally important reasons:
• To eliminate viewer perception of the intermittent presentation of images, known as flicker
• To reduce video bandwidth requirements for an acceptable flicker threshold level
Perception of flicker is dependent primarily upon two conditions:
• The brightness level of an image
• The relative area of an image in a picture
The 30-Hz transmission rate for a full 525-line television frame is comparable to the highly successful 24-
frame-per-second rate of motion-picture film. However, at the higher brightness levels produced on television
screens, if all 483 lines (525 less blanking) of a television image were to be presented sequentially as single
FIGURE 69.16 The interlaced scanning pattern (raster) of the television image. (Source: Electronic Industries Association.)
frames, viewers would observe a disturbing flicker in picture areas of high brightness. For a comparison, motion-
picture theaters on average produce a screen brightness of 10 to 25 ft·L (footlambert), whereas a direct-view
CRT may have a highlight brightness of 50 to 80 ft·L. It should be noted also that motion-picture projectors
flash twice per frame to reduce the flicker effect.
Through the use of interlaced scanning, single field images with one-half the vertical resolution capability
of the 525-line system are provided at the high flicker-perception threshold rate of 60 Hz. Higher resolution
of the full 483 lines of vertical detail is provided at the lower flicker-perception threshold rate of 30 Hz. The result
is a relatively flickerless picture display at a screen brightness of well over 50 to 75 ft·L, more than double that of
motion-picture film projection. Both 60-Hz fields and 30-Hz frames have the same horizontal resolution capability.
The second advantage of interlaced scanning, compared to progressive scanning, where the frame is con-
structed in one pass over the display face (rather than in two through interlace), is a reduction in video
bandwidth for an equivalent flicker threshold level. Progressive scanning of 525 lines would have to be completed
in 1/60 s to achieve an equivalent level of flicker perception. This would require a line scan to be completed in
half the time of an interlaced scan. The bandwidth then would double for an equivalent number of pixels per line.
The standards adopted by the Federal Communications Commission (FCC) for monochrome television in
the United States specified a system of 525 lines per frame, transmitted at a frame rate of 30 Hz, with each
frame composed of two interlaced fields of horizontal lines. Initially in the development of television trans-
mission standards, the 60-Hz power line waveform was chosen as a convenient reference for vertical scan.
Furthermore, in the event of coupling of power line hum into the video signal or scanning/deflection circuits,
the visible effects would be stationary and less objectionable than moving hum bars or distortion of horizontal-
scanning geometry. In the United Kingdom and much of Europe, a 50-Hz interlaced system was chosen for
many of the same reasons. With improvements in television receivers, the power line reference was replaced
with a stable crystal oscillator, rendering the initial reason for the frame rate a moot point.
The existing 525-line monochrome standards were retained for color in the recommendations of the National
Television System Committee (NTSC) for compatible color television in the early 1950s. The NTSC system,
adopted in 1953 by the FCC, specifies a scanning system of 525 horizontal lines per frame, with each frame
consisting of two interlaced fields of 262.5 lines at a field rate of 59.94 Hz. Forty-two of the 525 lines in each
frame are blanked as black picture signals and reserved for transmission of the vertical scanning synchronizing
signal. This results in 483 visible lines of picture information. Because the vertical blanking interval represents
a significant amount of the total transmitted waveform, the television industry has sought ways to carry
additional data during the blanking interval. Such applications include closed captioning and system test signals.
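The scanning figures quoted in the preceding paragraphs are mutually consistent, as a few lines of arithmetic confirm:

```python
# The NTSC scanning arithmetic described above, worked out directly.

lines_per_frame = 525
fields_per_frame = 2
field_rate = 59.94                                     # Hz
frame_rate = field_rate / fields_per_frame             # 29.97 Hz
lines_per_field = lines_per_frame / fields_per_frame   # 262.5

line_rate = lines_per_frame * frame_rate               # ~15,734 Hz
visible_lines = lines_per_frame - 42                   # 42 lines blanked

assert lines_per_field == 262.5
assert round(line_rate) == 15734
assert visible_lines == 483
```

The half-line per field (262.5) is what produces the interlace offset, and the 15,734-Hz product is the line-scanning frequency referenced later in the discussion of color encoding.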
Synchronizing Video Signals
In monochrome television transmission, two basic synchronizing signals are provided to control the timing of
picture-scanning deflection:
• Horizontal sync pulses at the line rate.
• Vertical sync pulses at the field rate in the form of an interval of wide horizontal sync pulses at the field
rate. Included in the interval are equalizing pulses at twice the line rate to preserve interlace in each
frame between the even and odd fields (offset by a half line).
In color transmissions, a third synchronizing signal is added during horizontal scan blanking to provide a
frequency and phase reference for color signal encoding circuits in cameras and decoding circuits in receivers.
These synchronizing and reference signals are combined with the picture video signal to form a composite
video waveform.
The scanning and color-decoding circuits in receivers must follow the frequency and phase of the synchro-
nizing signals to produce a stable and geometrically accurate image of the proper color hue and saturation.
Any change in timing of successive vertical scans can impair the interlace of the even and odd fields in a frame.
Small errors in horizontal scan timing of lines in a field can result in a loss of resolution in vertical line structures.
Periodic errors over several lines that may be out of the range of the horizontal scan automatic frequency
control circuit in the receiver will be evident as jagged vertical lines.
Television Industry Standards
There are three primary color transmission standards in use today:
• NTSC (National Television Systems Committee): Used in the United States, Canada, Central America,
most of South America, and Japan. In addition, NTSC is used in various countries or possessions heavily
influenced by the United States.
• PAL (Phase Alternation each Line): Used in England, most countries and possessions influenced by the
British Commonwealth, many western European countries, and China. Variations exist in PAL systems.
• SECAM (Sequential Color with [Avec] Memory): Used in France, countries and possessions influenced
by France, the USSR (generally the former Soviet Bloc nations), and other areas influenced by Russia.
The three standards are incompatible for a variety of reasons (see Benson and Whitaker, 1991).
Television transmitters in the United States operate in three frequency bands:
• Low-band VHF (very high frequency), channels 2 through 6
• High-band VHF, channels 7 through 13
• UHF (ultra-high frequency), channels 14 through 83 (UHF channels 70 through 83 currently are assigned
to mobile radio services)
Table 69.7 shows the frequency allocations for channels 2 through 83. Because of the wide variety of operating
parameters for television stations outside the United States, this section will focus primarily on TV transmission
as it relates to the United States.
TABLE 69.7 Frequency Allocations for TV Channels 2 through 83 in the U.S.
Channel Frequency Channel Frequency Channel Frequency
Designation Band, MHz Designation Band, MHz Designation Band, MHz
2 54–60 30 566–572 58 734–740
3 60–66 31 572–578 59 740–746
4 66–72 32 578–584 60 746–752
5 76–82 33 584–590 61 752–758
6 82–88 34 590–596 62 758–764
7 174–180 35 596–602 63 764–770
8 180–186 36 602–608 64 770–776
9 186–192 37 608–614 65 776–782
10 192–198 38 614–620 66 782–788
11 198–204 39 620–626 67 788–794
12 204–210 40 626–632 68 794–800
13 210–216 41 632–638 69 800–806
14 470–476 42 638–644 70 806–812
15 476–482 43 644–650 71 812–818
16 482–488 44 650–656 72 818–824
17 488–494 45 656–662 73 824–830
18 494–500 46 662–668 74 830–836
19 500–506 47 668–674 75 836–842
20 506–512 48 674–680 76 842–848
21 512–518 49 680–686 77 848–854
22 518–524 50 686–692 78 854–860
23 524–530 51 692–698 79 860–866
24 530–536 52 698–704 80 866–872
25 536–542 53 704–710 81 872–878
26 542–548 54 710–716 82 878–884
27 548–554 55 716–722 83 884–890
28 554–560 56 722–728
29 560–566 57 728–734
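Table 69.7 follows a simple pattern: within each contiguous band segment, channels occupy consecutive 6-MHz slots. A small lookup (the helper name and structure are illustrative) reproduces the whole table:

```python
# Channel-to-frequency mapping implied by Table 69.7. Each band segment
# starts at a known lower edge, and channels are 6 MHz wide.

_BANDS = [
    (2, 4, 54),     # low-band VHF, first segment (54-72 MHz)
    (5, 6, 76),     # low-band VHF, second segment (gap at 72-76 MHz)
    (7, 13, 174),   # high-band VHF
    (14, 83, 470),  # UHF
]

def tv_channel_band(channel):
    """Return (lower, upper) band edges in MHz for a U.S. TV channel."""
    for first, last, base in _BANDS:
        if first <= channel <= last:
            lower = base + 6 * (channel - first)
            return lower, lower + 6
    raise ValueError(f"no such channel: {channel}")

# Spot checks against Table 69.7:
assert tv_channel_band(2) == (54, 60)
assert tv_channel_band(6) == (82, 88)
assert tv_channel_band(13) == (210, 216)
assert tv_channel_band(83) == (884, 890)
```

Note the discontinuity between channels 4 and 5 (72 to 76 MHz) and the large jump between channels 13 and 14, where the allocation moves from VHF to UHF.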
Maximum power output limits are specified by the FCC for each type of service. The maximum effective
radiated power (ERP) for low-band VHF is 100 kW; for high-band VHF it is 316 kW; and for UHF it is 5 MW.
The ERP of a station is a function of transmitter power output (TPO) and antenna gain. ERP is determined
by multiplying these two quantities together and subtracting transmission line loss.
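In decibel form the multiplication becomes addition, which is how the calculation is usually done in practice. The sketch below assumes illustrative values; the helper name is not standard terminology:

```python
import math

# ERP arithmetic as described above: transmitter power output times
# antenna gain, less transmission line loss. In decibels relative to
# 1 kW (dBk), the products become sums.

def erp_kw(tpo_kw, antenna_gain_db, line_loss_db):
    """Effective radiated power in kW from TPO, antenna gain, and
    transmission line loss."""
    erp_dbk = 10 * math.log10(tpo_kw) + antenna_gain_db - line_loss_db
    return 10 ** (erp_dbk / 10)

# A lossless 10-dB gain multiplies power by exactly 10:
assert round(erp_kw(5, 10, 0)) == 50

# Illustrative TV case: 30-kW TPO, 13-dB antenna gain, 2 dB line loss
# yields an ERP of roughly 378 kW.
print(round(erp_kw(30, 13, 2)))  # 378
```

This kind of calculation determines whether a station is within the FCC maximum ERP for its band (100 kW low-band VHF, 316 kW high-band VHF, 5 MW UHF).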
The second major factor that affects the coverage area of a TV station is antenna height, known in the
broadcast industry as height above average terrain (HAAT). HAAT takes into consideration the effects of the
geography in the vicinity of the transmitting tower. The maximum HAAT permitted by the FCC for a low- or
high-band VHF station is 1000 ft (305 m) east of the Mississippi River and 2000 ft (610 m) west of the Missis-
sippi. UHF stations are permitted to operate with a maximum HAAT of 2000 ft (610 m) anywhere in the United
States (including Alaska and Hawaii).
The ratio of visual output power to aural output power can vary from one installation to another; however,
the aural is typically operated at between 10 and 20% of the visual power. This difference is the result of the
reception characteristics of the two signals. Much greater signal strength is required at the consumer’s receiver
to recover the visual portion of the transmission than the aural portion. The aural power output is intended
to be sufficient for good reception at the fringe of the station’s coverage area but not beyond. It is of no use
for a consumer to be able to receive a TV station’s audio signal but not the video.
In addition to high-power stations, two classifications of low-power TV stations have been established by
the FCC to meet certain community needs:
• Translator: A low-power system that rebroadcasts the signal of another station on a different channel.
Translators are designed to provide “fill-in” coverage for a station that cannot reach a particular
community because of the local terrain. Translators operating in the VHF band are limited to 100 W
power output (ERP), and UHF translators are limited to 1 kW.
• Low-Power Television (LPTV): A service established by the FCC designed to meet the special needs
of particular communities. LPTV stations operating on VHF frequencies are limited to 100 W ERP, and
UHF stations are limited to 1 kW. LPTV stations originate their own programming and can be assigned
by the FCC to any channel, as long as sufficient protection against interference to a full-power station
is afforded.
Composite Video
The composite video waveform is shown in Fig. 69.17. The actual radiated signal is inverted, with modulation
extending from the synchronizing pulses at maximum carrier level (100%) to reference picture white at 12.5%.
Because an increase in the amplitude of the radiated signal corresponds to a decrease in picture brightness, the
polarity of modulation is termed negative. The term composite is used to denote a video signal that contains:
• Picture luminance and chrominance information
• Timing information for synchronization of scanning and color signal processing circuits
The negative-going portion of the waveform shown in Fig. 69.17 is used to transmit information for synchro-
nization of scanning circuits. The positive-going portion of the amplitude range is used to transmit luminance
information representing brightness and, for color pictures, chrominance.
At the completion of each line scan in a receiver or monitor, a horizontal synchronizing (H-sync) pulse in
the composite video signal triggers the scanning circuits to return the beam rapidly to the left of the screen for
the start of the next line scan. During the return time, a horizontal blanking signal at a level lower than that
corresponding to the blackest portion of the scene is added to avoid the visibility of the retrace lines. In a similar
manner, after completion of each field, a vertical blanking signal blanks out the retrace portion of the scanning
beam as it returns to the top of the picture to start the scan of the next field. The small-level difference between
video reference black and blanking level is called setup. Setup is used as a guard band to ensure separation of the
synchronizing and video-information functions and adequate blanking of the scanning retrace lines on receivers.
The waveforms of Fig. 69.18 show the various reference levels of video and sync in the composite signal. The
unit of measurement for video level was specified initially by the Institute of Radio Engineers (IRE). These IRE
units are still used to quantify video signal levels. The primary IRE values are given in Table 69.8.
FIGURE 69.17 The principal components of the NTSC color television waveform. (Source: Electronic Industries Association.)
FIGURE 69.18 Sync pulse widths for the NTSC color system. (Source: Electronic Industries Association.)
Color Signal Encoding
To facilitate an orderly introduction of color television broadcasting in the United States and other countries
with existing monochrome services, it was essential that the new transmissions be compatible. In other words,
color pictures would provide acceptable quality on unmodified monochrome receivers. In addition, because of
the limited availability of the RF spectrum, another related requirement was the need to fit approximately 2-
MHz bandwidth of color information into the 4.2-MHz video bandwidth of the existing 6-MHz broadcasting
channels with little or no modification of existing transmitters. This is accomplished by using the band-sharing
color signal system developed by the NTSC and by taking advantage of the fundamental characteristics of the
eye regarding color sensitivity and resolution.
The video-signal spectrum generated by scanning an image
consists of energy concentrated near harmonics of the 15,734-
Hz line scanning frequency. Additional lower-amplitude side-
band components exist at multiples of 60 Hz (the field scan
frequency) from each line scan harmonic. Substantially no
energy exists halfway between the line scan harmonics, that
is, at odd harmonics of one half line frequency. Thus, these
blank spaces in the spectrum are available for the transmis-
sion of a signal for carrying color information and its side-
band. In addition, a signal modulated with color information
injected at this frequency is of relatively low visibility in the reproduced image because the odd harmonics are
of opposite phase on successive scanning lines and in successive frames, requiring four fields to repeat. Fur-
thermore, the visibility of the color video signal is reduced further by the use of a subcarrier frequency near
the cutoff of the video bandpass.
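These frequency relationships can be checked numerically. The NTSC line rate derives from the 4.5-MHz intercarrier sound offset divided by 286, and the color subcarrier sits at the 455th harmonic of half the line frequency, i.e., exactly halfway between line-scan harmonics:

```python
# NTSC line-scan frequency: 4.5-MHz aural carrier offset divided by 286
f_h = 4.5e6 / 286                  # ~15,734.27 Hz

# Color subcarrier: an odd multiple (455) of half the line frequency,
# placing it midway between the 227th and 228th line-scan harmonics
f_sc = 455 * f_h / 2               # ~3.579545 MHz

print(f"line frequency  : {f_h:,.2f} Hz")
print(f"color subcarrier: {f_sc:,.4f} Hz")

# Offset from the nearest line-scan harmonic is half the line frequency
print(abs(f_sc - 227 * f_h - f_h / 2) < 1e-3)   # True
```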
In the NTSC system, color is conveyed using two elements:
• A luminance signal
• A chrominance signal
The luminance signal is derived from components of the three primary colors — red, green, and blue — in
the proportions for reference white, E_Y, as follows:

E_Y = 0.3E_R + 0.59E_G + 0.11E_B
These transmitted values equal unity for white and thus result in the reproduction of colors on monochrome
receivers at the proper luminance level. This is known as the constant-luminance principle.
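The weighting above can be sketched as a one-line function (a minimal check, not broadcast code):

```python
def luminance(e_r, e_g, e_b):
    """NTSC luminance: weighted sum of the gamma-corrected primaries."""
    return 0.30 * e_r + 0.59 * e_g + 0.11 * e_b

print(round(luminance(1.0, 1.0, 1.0), 6))   # reference white: 1.0
print(round(luminance(0.0, 1.0, 0.0), 6))   # green alone supplies 0.59
```

The weights summing to unity for equal primaries is precisely the constant-luminance property the text describes.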
The color signal consists of two chrominance components, I and Q, transmitted as amplitude-modulated
sidebands of two 3.579545-MHz subcarriers in quadrature. The subcarriers are suppressed, leaving only the
sidebands in the color signal. Suppression of the carriers permits demodulation of the color signal as two
separate color signals in a receiver by reinsertion of a carrier of the phase corresponding to the desired color
signal (synchronous demodulation).
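Quadrature suppressed-carrier modulation and its synchronous demodulation can be illustrated with a short numerical sketch (the constant I and Q values and the 8-times-subcarrier sampling rate are conveniences for illustration, not taken from any standard):

```python
import math

F_SC = 3.579545e6                  # color subcarrier frequency, Hz
FS = 8 * F_SC                      # sample rate: 8 samples per subcarrier cycle
N = 8 * 1000                       # an integer number (1000) of cycles

I, Q = 0.6, -0.2                   # chrominance components, held constant here

# Quadrature modulation: two signals on the same frequency, 90 degrees apart;
# the carriers themselves are suppressed, leaving only the products.
t = [n / FS for n in range(N)]
chroma = [I * math.cos(2 * math.pi * F_SC * x) +
          Q * math.sin(2 * math.pi * F_SC * x) for x in t]

# Synchronous detection: multiply by a reinserted carrier of the matching
# phase, then average (the low-pass filter). Each component comes out alone.
i_out = 2 * sum(c * math.cos(2 * math.pi * F_SC * x) for c, x in zip(chroma, t)) / N
q_out = 2 * sum(c * math.sin(2 * math.pi * F_SC * x) for c, x in zip(chroma, t)) / N

print(round(i_out, 6), round(q_out, 6))   # recovers 0.6 and -0.2
```

The orthogonality of the cosine and sine carriers over whole cycles is what lets two signals share one subcarrier frequency.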
I and Q signals are composed of red, green, and blue primary color components produced by color cameras
and other signal generators. The phase relationship among the I and Q signals, the derived primary and
complementary colors, and the color synchronizing burst can be shown graphically on a vectorscope display.
The horizontal and vertical sweep signals on a vectorscope are produced from R-Y and B-Y subcarrier sine
waves in quadrature, producing a circular display. The chrominance signal controls the intensity of the display.
A vectorscope display of an Electronic Industries Association (EIA) standard color bar signal is shown in
Fig. 69.19.
Color-Signal Decoding
Each of the two chroma signal carriers can be recovered individually by means of synchronous detection. A
reference subcarrier of the same phase as the desired chroma signal is applied as a gate to a balanced demodulator.
Only the modulation of the signal in the same phase as the reference will be present in the output. A low-pass
filter may be added to remove second-harmonic components of the chroma signal generated in the process.

TABLE 69.8  Video and Sync Levels in IRE Units

Signal Level                           IRE Level
Reference white                        100
Blanking level width measurement       20
Color burst sine wave peak             +20 to –20
Reference black                        7.5
Blanking                               0
Sync pulse width measurement           –20
Sync level                             –40
Transmission Equipment
Television transmitters are classified in terms of their operating band, power level, type of final amplifier stage,
and cooling method. The transmitter is divided into two basic subsystems:
• The visual section, which accepts the video input, amplitude modulates an RF carrier, and amplifies the
signal to feed the antenna system
• The aural section, which accepts the audio input, frequency modulates a separate RF carrier, and amplifies
the signal to feed the antenna system
The visual and aural signals are combined to feed a single radiating system.
Transmitter Design Considerations
Each manufacturer has a particular philosophy with regard to the design and construction of a broadcast TV
transmitter. Some generalizations can, however, be made with respect to basic system design.
When the power output of a TV transmitter is discussed, the visual section is the primary consideration.
Output power refers to the peak power of the visual section of the transmitter (peak of sync). The FCC-licensed
ERP is equal to the transmitter power output, less feedline losses, multiplied by the power gain of the antenna.
A low-band VHF station can achieve its maximum 100-kW power output through a wide range of transmitter
and antenna combinations. A 35-kW transmitter coupled with a gain-of-4 antenna would work, as would a
10-kW transmitter feeding an antenna with a gain of 12. Reasonable pairings for a high-band VHF station
would range from a transmitter with a power output of 50 kW feeding an antenna with a gain of 8, to a 30-kW
transmitter connected to a gain-of-12 antenna. These combinations assume reasonable feedline losses. To reach
the exact power level, minor adjustments are made to the power output of the transmitter, usually by a front
panel power trim control.
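The pairings above can be checked with the ERP relationship (the feedline-loss figure below is an assumed value for illustration; actual losses depend on the installation):

```python
def erp_kw(transmitter_kw, feedline_loss_db, antenna_power_gain):
    """ERP = transmitter output, less feedline loss, times antenna power gain."""
    delivered_kw = transmitter_kw * 10 ** (-feedline_loss_db / 10)
    return delivered_kw * antenna_power_gain

# A 35-kW transmitter into a gain-of-4 antenna, assuming ~1.4 dB of line loss,
# lands close to the 100-kW low-band VHF maximum.
print(round(erp_kw(35, 1.4, 4), 1))
```

The front-panel power trim control mentioned above absorbs whatever small adjustment remains after the coarse transmitter/antenna pairing.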
FIGURE 69.19 Vectorscope representation for chroma and vector amplitude relationships in the NTSC system. (Source:
Electronic Industries Association.)
UHF stations that want to achieve their maximum licensed power output are faced with installing a very
high-power transmitter. Typical pairings include a transmitter rated for 220 kW and an antenna with a gain of
25, or a 110-kW transmitter and a gain-of-50 antenna. In the latter case, the antenna could pose a significant
problem. UHF antennas with gains in the region of 50 are possible, but not advisable for most installations
because of the coverage problems that can result. High-gain antennas have a narrow vertical radiation pattern
that can reduce a station’s coverage in areas near the transmitter site.
At first examination, it might seem reasonable and economical to achieve licensed ERP using the lowest
transmitter power output possible and highest antenna gain. Other factors, however, come into play that make
the most obvious solution not always the best solution. Factors that limit the use of high-gain antennas include:
• The effects of high-gain designs on coverage area and signal penetration
• Limitations on antenna size because of tower restrictions, such as available vertical space, weight, and
windloading
• The cost of the antenna
The amount of output power required of a transmitter will have a fundamental effect on system design.
Power levels dictate whether the unit will be of solid-state or vacuum-tube design; whether air, water, or vapor
cooling must be used; the type of power supply required; the sophistication of the high-voltage control and
supervisory circuitry; and many other parameters.
Solid-state devices are generally used for VHF transmitters below 35 kW and for low-power UHF transmitters
(below 10 kW). Tetrodes may also be used in these ranges. As solid-state technology advances, the power levels
possible in a reasonable transmitter design steadily increase. In the realm of high power UHF transmitters, the
klystron is a common power output device. Klystrons use an electron bunching technique to generate high
power (55 kW from a single tube is not uncommon) at microwave frequencies. The klystron, however, is
relatively inefficient in its basic form. A stock klystron with no efficiency-optimizing circuitry might be only
40 to 50% efficient, depending on the type of device used. Various schemes have been devised to improve
klystron efficiency, the best known of which is beam pulsing. Two types of pulsing are in common used:
?Mod-anode pulsing, a technique designed to reduce power consumption of the klystron during the color
burst and video portion of the signal (and thereby improve overall system efficiency)
?Annular control electrode (ACE) pulsing, which accomplishes basically the same thing by incorporating
the pulsing signal into a low-voltage stage of the transmitter, rather than a high-voltage stage (as with
mod-anode pulsing).
Still another approach to improving UHF transmitter efficiency involves entirely new classes of vacuum
tubes: the Klystrode (also known as the inductive output tube, IOT) and the multistage depressed collector
(MSDC) klystron. (The Klystrode is a registered trademark of Varian.) The IOT is a device that essentially
combines the cathode/grid structure of the tetrode with the drift tube/collector structure of the klystron. The
MSDC klystron incorporates a collector assembly that operates at progressively lower voltage levels. The net
effect for the MSDC is to recover energy from the electron stream rather than dissipating the energy as heat.
Elements of the Transmitter
A television transmitter can be divided into four major subsystems:
• The exciter
• Intermediate power amplifier (IPA)
• Power amplifier (PA)
• High-voltage power supply
Figure 69.20 shows the audio, video, and RF paths for a typical television transmitter.
The modulated visual intermediate frequency (IF) signal is band-shaped in a vestigial sideband filter, typically
a surface-acoustic-wave (SAW) filter. Envelope-delay correction is not required for the SAW filter because of
the uniform delay characteristics of the device. Envelope-delay compensation may, however, be needed for
other parts of the transmitter. The SAW filter provides many benefits to transmitter designers and operators.
A SAW filter requires no adjustments and is stable with respect to temperature and time. A color-notch filter is
required at the output of the transmitter because imperfect linearity of the IPA and PA stages introduces
unwanted modulation products.
The power amplifier raises the output energy of the transmitter to the desired RF operating level. Tetrodes
in television service are operated in the class B mode to obtain reasonable efficiency while maintaining a linear
transfer characteristic. Class B amplifiers, when operated in tuned circuits, provide linear performance because
of the flywheel effect of the resonance circuit. This allows a single tube to be used instead of two in push-pull
fashion. The bias point of the linear amplifier is chosen so that the transfer characteristic at low modulation
levels matches that at higher modulation levels. The plate (anode) circuit of a tetrode PA is usually built around
a coaxial resonant cavity, which provides a stable and reliable tank circuit.
Solid-state transmitters typically incorporate a massively parallel design to achieve the necessary power levels.
So-called power blocks of 1 kW or greater are combined as required to meet the target transmitter power
output. Most designs use MOSFETs running in a class D (or higher) switching mode. Any one of several
combiner schemes may be used to couple the power blocks to the load. Depending on the design, high-reliability
features may be incorporated into the transmitter, including automatic disconnection of failed power blocks
and hot-changing of defective modules.
UHF transmitters using a klystron in the final output stage must operate class A, the most linear but also
most inefficient operating mode for a vacuum tube. Two types of klystrons have traditionally been used: integral
cavity and external cavity devices. The basic theory of operation is identical for each tube, but the mechanical
approach is radically different. In the integral cavity klystron, the cavities are built into the device to form a
single unit. In the external cavity klystron, the cavities are outside the vacuum envelope and are bolted around
the tube when the klystron is installed in the transmitter. A number of factors come into play in a discussion
of the relative merits of integral vs. external cavity designs. Primary considerations include operating efficiency,
purchase price, and life expectancy.

FIGURE 69.20  Simplified block diagram of a VHF television transmitter.
Transmitters based on IOT or MSDC klystron final tubes have much in common with traditional klystron-
based systems. There are, however, a number of significant differences, including:
• Low-level video waveform precorrection circuitry
• Drive power requirements
• Power supply demands and complexity
• Fault/arc suppression and protection
• Cooling system design and complexity
• Overall system efficiency
The transmitter block diagram of Fig. 69.20 shows separate visual and aural PA stages. This configuration
is normally used for high-power transmitters. Low-power designs often use a combined mode (common
amplification) in which the aural and visual signals are added prior to the PA. This approach offers a simplified
system but at the cost of additional precorrection of the input video signal.
PA stages often are configured so that the circuitry of the visual and aural amplifiers is identical, providing
backup protection in the event of a visual PA failure. The aural PA can then be reconfigured to amplify both
the aural and the visual signals at reduced power.
The aural output stage of a television transmitter is similar in basic design to a frequency modulated (FM)
broadcast transmitter. Tetrode output devices generally operate class C; solid-state devices operate in one of
many possible switching modes for high efficiency. The aural PA for a UHF transmitter may use a klystron,
IOT, MSDC, tetrode, or a group of solid-state power blocks.
Harmonic filters are employed to attenuate out-of-band radiation of the aural and visual signals to ensure
compliance with FCC requirements. Filter designs vary depending upon the manufacturer; however, most are
of coaxial construction utilizing L and C components housed within a prepackaged assembly. Stub filters are
also used, typically adjusted to provide maximum attenuation at the second harmonic of the operating frequency
of the visual carrier and the aural carrier.
The filtered visual and aural outputs are fed to a hybrid diplexer where the two signals are combined to feed
the antenna. For installations that require dual-antenna feedlines, a hybrid combiner with quadrature-phased
outputs is used. Depending upon the design and operating power, the color-notch filter, aural and visual
harmonic filters, and diplexer may be combined into a single mechanical unit.
Antenna System
Broadcasting is accomplished by the emission of coherent electromagnetic waves in free space from one or
more radiating-antenna elements that are excited by modulated RF currents. Although, by definition, the
radiated energy is composed of mutually dependent magnetic and electric vector fields, conventional practice
in television engineering is to measure and specify radiation characteristics in terms of the electric field only.
The field vectors may be polarized horizontally, vertically, or circularly. Television broadcasting, however,
has used horizontal polarization for the majority of installations worldwide. More recently interest in the
advantages of circular polarization has resulted in an increase in this form of transmission, particularly for
VHF channels. Both horizontal and circular polarization designs are suitable for tower-top or side-mounted
installations. The latter option is dictated primarily by the existence of a previously installed tower-top antenna.
On the other hand, in metropolitan areas where several antennas must be located on the same structure, either
a stacking or candelabra-type arrangement is feasible. Another approach to TV transmission involves combining
the RF outputs of two or more stations and feeding a single wideband antenna. This approach is expensive and
requires considerable engineering analysis to produce a combiner system that will not degrade the performance
of either transmission system.
Television Reception
The broadcast channels in the United States are 6 MHz wide for transmission on conventional 525-line stan-
dards. The minimum signal level at which a television receiver will provide usable pictures and sound is called
the sensitivity level. The FCC has set up two standard signal level classifications, Grades A and B, for the purpose
of licensing television stations and allocating coverage areas. Grade A refers to urban areas relatively near the
transmitting tower; Grade B use ranges from suburban to rural and other fringe areas a number of miles from
the transmitting antenna.
Many sizes and form factors of receivers are manufactured. Portable personal types include pocket-sized or
hand-held models with picture sizes of 2 to 4 in. diagonal for monochrome and 5 to 6 in. for color. Large
screen sizes are available in monochrome where low cost and light weight are prime requirements. However,
except where portability is important, the majority of television program viewing is in color. The 19- and 27-
in. sizes dominate the market.
Television receiver functions may be broken down into several interconnected blocks. With the increasing
use of large-scale integrated circuits, the isolation of functions has become less obvious in the design of receivers.
The typical functional configuration of a receiver using a trigun picture tube is shown in Fig. 69.21.
Display Systems
Color video displays may be classified under the following categories:
• Direct-view CRT
• Large-screen display, optically projected from a CRT
• Large-screen display, projected from a modulated light beam
• Large-area display of individually driven light-emitting CRTs or incandescent picture elements
• Flat-panel matrix of transmissive or reflective picture elements
• Flat-panel matrix of light-emitting picture elements
The CRT remains the dominant type of display for both consumer and professional 525-/625-line television
applications. The Eidophor and light-valve systems using a modulated light source have found wide application for
presentations to large audiences in theater environments, particularly where high screen brightness is required.
Matrix-driven flat-panel displays are used in increasing numbers for small-screen personal television receivers and
for portable projector units. Video and data projectors using LCD technology have gained wide acceptance.
FIGURE 69.21  Simplified schematic block diagram of a color television receiver.
Cathode Ray Tube Display
The direct-view CRT is the dominant display device in television. The attributes offered by CRTs include the
following:
• High brightness
• High resolution
• Excellent gray-scale reproduction
• Low cost compared to other types of displays
From the standpoint of television receiver manufacturing simplicity and low cost, packaging of the display
device as a single component is attractive. The tube itself is composed of only three basic parts: an electron
gun, an envelope, and a shadow-mask phosphor screen. The luminance efficiency of the electron optical system
and the phosphor screen is high. A peak beam current of under 1 mA in a 25-in. tube will produce a highlight
brightness of up to 100 ft·L. The major drawback is the power required to drive the horizontal sweep circuit
and the high accelerating voltage necessary for the electron beam. This requirement is partially offset through
generation of the screen potential and other lower voltages by rectification of the scanning flyback voltage.
As consumer demands drive manufacturers to produce larger picture sizes, the weight and depth of the CRT
and the higher power and voltage requirements become serious limitations. These are reflected in sharply
increasing receiver costs. To withstand the atmospheric pressures on the evacuated glass envelope, CRT weight
increases exponentially with the viewable diagonal. Nevertheless, manufacturers have continued to meet the
demand for increased screen sizes with larger direct-view tubes. Improved versions of both tridot delta and in-
line guns have been produced. The tridot gun provides small spot size at the expense of critical convergence
adjustments for uniform resolution over the full-tube faceplate. In-line guns permit the use of a self-converging
deflection yoke that will maintain dynamic horizontal convergence over the full face of the tube without the
need for correction waveforms. The downside is slightly reduced resolution.
Defining Terms
Aural: The sound portion of a television signal.
Beam pulsing: A method used to control the power output of a klystron in order to improve the operating
efficiency of the device.
Blanking: The portion of a television signal that is used to blank the screen during the horizontal and vertical
retrace periods.
Composite video: A single video signal that contains luminance, color, and synchronization information.
NTSC, PAL, and SECAM are all examples of composite video formats.
Effective radiated power: The power supplied to an antenna multiplied by the relative gain of the antenna
in a given direction.
Equalizing pulses: In an encoded video signal, a series of 2X line frequency pulses occurring during vertical
blanking, before and after the vertical synchronizing pulse. Different numbers of equalizing pulses are
inserted into different fields to ensure that each field begins and ends at the right time to produce proper
interlace. The 2X line rate also serves to maintain horizontal synchronization during vertical blanking.
External cavity klystron: A klystron device in which the resonant cavities are located outside the vacuum
envelope of the tube.
Field: One of the two (or more) equal parts of information into which a frame is divided in interlace video
scanning. In the NTSC system, the information for one picture is divided into two fields. Each field
contains one-half the lines required to produce the entire picture. Adjacent lines in the picture are
contained in alternate fields.
Frame: The information required for one complete picture in an interlaced video system. For the NTSC
system, there are two fields per frame.
H (horizontal): In television signals, H may refer to any of the following: the horizontal period or rate,
horizontal line of video information, or horizontal sync pulse.
Hue: One of the characteristics that distinguishes one color from another. Hue defines color on the basis of
its position in the spectrum (red, blue, green, yellow, etc.). Hue is one of the three characteristics of
television color. Hue is often referred to as tint. In NTSC and PAL video signals, the hue information at
any particular point in the picture is conveyed by the corresponding instantaneous phase of the active
video subcarrier.
Hum bars: Horizontal black and white bars that extend over the entire TV picture and usually drift slowly
through it. Hum bars are caused by an interfering power line frequency or one of its harmonics.
Integral cavity klystron: A klystron device in which the resonant cavities are located inside the vacuum
envelope of the tube.
Interlaced: A shortened version of interlaced scanning (also called line interlace). Interlaced scanning is a
system of video scanning whereby the odd- and even-numbered lines of a picture are transmitted
consecutively as two separate interleaved fields.
IRE: A unit equal to 1/140 of the peak-to-peak amplitude of a video signal, which is typically 1 V. The 0 IRE
point is at blanking level, with the sync tip at –40 IRE and white extending to +100 IRE. IRE stands for
Institute of Radio Engineers, an organization preceding the IEEE, which defined the unit.
Klystrode: An amplifier device for UHF-TV signals that combines aspects of a tetrode (grid modulation)
with a klystron (velocity modulation of an electron beam). The result is a more efficient, less expensive
device for many applications. (Klystrode is a trademark of EIMAC, a division of Varian Associates.) The
term Inductive Output Tube (IOT) is a generic name for this class of device.
Klystron: An amplifier device for UHF and microwave signals based on velocity modulation of an electron
beam. The beam is directed through an input cavity, where the input RF signal polarity initializes a
bunching effect on electrons in the beam. The bunching effect excites subsequent cavities, which increase
the bunching through an energy flywheel concept. Finally, the beam passes an output cavity that couples
the amplified signal to the load (antenna system). The beam falls onto a collector element that forms
the return path for the current and dissipates the heat resulting from electron beam bombardment.
Low-power TV (LPTV): A television service authorized by the FCC to serve specific confined areas. An LPTV
station may typically radiate between 100 and 1000 W of power, covering a geographic radius of 10 to 15 mi.
Multistage depressed collector (MSDC) klystron: A specially designed klystron in which decreasing voltage
zones cause the electron beam to be reduced in velocity before striking the collector element. The effect
is to reduce the amount of heat that must be dissipated by the device, improving operating efficiency.
Pixel: The smallest distinguishable and resolvable area in a video image. A pixel is a single point on the screen.
The word pixel is derived from picture element.
Raster: A predetermined pattern of scanning the screen of a CRT. Raster may also refer to the illuminated
area produced by scanning lines on a CRT when no video is present.
Saturation: The intensity of the colors in the active picture, the voltage levels of the colors. Saturation relates
to the degree by which the eye perceives a color as departing from a gray or white scale of the same
brightness. A 100% saturated color does not contain any white; adding white reduces saturation. In NTSC
and PAL video signals, the color saturation at any particular instant in the picture is conveyed by the
corresponding instantaneous amplitude of the active video subcarrier.
Scan: One sweep of the target area in a camera tube or of the screen in a picture tube.
Setup: A video term relating to the specified base of an active picture signal. In NTSC, the active picture
signal is placed 7.5 IRE units above blanking (0 IRE). Setup is the separation in level between the video
blanking and reference black levels.
Synchronous detection: A demodulation process in which the original signal is recovered by multiplying the
modulated signal by the output of a synchronous oscillator locked to the carrier.
Translator: An unattended television or FM broadcast repeater that receives a distant signal and retransmits
the picture and/or audio locally on another channel.
Vectorscope: An oscilloscope-type device used to display the color parameters of a video signal. A vectorscope
decodes color information into R-Y and B-Y components, which are then used to drive the X and Y axis
of the scope. The total lack of color in a video signal is displayed as a dot in the center of the vectorscope.
The angle (position around the circle) indicates the phase of the color signal, and the magnitude (distance
from the center) indicates its amplitude.
Related Topics
69.2 Radio • 69.4 High-Definition Television
References
K. B. Benson and J. Whitaker, Eds., Television Engineering Handbook, rev. ed., New York: McGraw-Hill, 1991.
K. B. Benson and J. Whitaker, Television and Audio Handbook for Technicians and Engineers, New York: McGraw-
Hill, 1990.
J. Whitaker, Radio Frequency Transmission Systems: Design and Operation, New York: McGraw-Hill, 1991.
J. Whitaker, Maintaining Electronic Systems, Boca Raton: CRC Press, 1991.
Further Information
Additional information on the topic of television system technology is available from the following sources:
Broadcast Engineering magazine, a monthly periodical dealing with television technology. The magazine,
published by Intertec Publishing, located in Overland Park, Kan., is free to qualified subscribers.
The Society of Motion Picture and Television Engineers, which publishes a monthly journal and holds
conferences in the fall and winter. The SMPTE is headquartered in White Plains, N.Y.
The Society of Broadcast Engineers, which holds an annual technical conference in the spring. The SBE is
located in Indianapolis, Ind.
The National Association of Broadcasters, which holds an annual engineering conference and trade show in
the spring. The NAB is headquartered in Washington, D.C.
In addition, the following books are recommended:
K.B. Benson and J. Whitaker, Eds., Television Engineering Handbook, rev. ed., New York: McGraw-Hill, 1991.
K.B. Benson and J. Whitaker, Eds., Television and Audio Handbook for Technicians and Engineers, New York:
McGraw-Hill, 1990.
National Association of Broadcasters Engineering Handbook, 8th ed., Washington, D.C.: NAB, 1992.
69.4 High-Definition Television
Martin S. Roden
When standards were developed for television, few people dreamed of its evolution into a type of universal
communication terminal. While these traditional standards are acceptable for entertainment video, they are
not adequate for many emerging applications, such as videotext. We must evolve into a high-resolution standard.
High-definition TV (HDTV) is a term applied to a broad class of new systems whose developments have received
worldwide attention.
We begin with a brief review of the current television standards. The reader is referred to Section 69.3 for a
more detailed treatment of conventional television.
Japan and North America use the National Television System Committee (NTSC) standard that specifies
525 scanning lines per picture, a field rate of 59.94 per second (nominally 60 Hz), and 2:1 interlaced scanning
(although there are about 60 fields per second, there are only 30 new frames per second). The aspect ratio
(ratio of width to height) is 4:3. The bandwidth of the television signal is 6 MHz, including the sound signal.
In Europe and some other countries, the phase-alternation line (PAL) or the sequential color and memory
(SECAM) standard is used. This specifies 625 scanning lines per picture and a field rate of 50 per second. The
bandwidth of this type of television signal is 8 MHz.
HDTV systems nominally double the number of scan lines in a frame and change the aspect ratio to 16:9.
Of course, if we were willing to start from scratch and abandon all existing television systems, we could set the
bandwidth of each channel to a number greater than 6 (or 8) MHz, thereby achieving higher resolution. The
Japan Broadcasting Corporation (NHK) has done just this in their HDTV system. This system permits 1125
lines per frame with 30 frames per second and 60 fields per second (2:1 interlaced scanning). The aspect ratio
is 16:9. The system is designed for a bandwidth of 10 MHz per channel. With the 1990 launching of the BS-3
satellite, two channels were devoted to this form of HDTV. To fit the channel within a 10-MHz bandwidth
(instead of the approximately 50 MHz that would be needed to transmit using traditional techniques), band-
width compression was required. It should be noted that the Japanese system is primarily analog frequency
modulation (FM) (the sound is digital). The approach to decreasing bandwidth is multiple sub-Nyquist
encoding (MUSE). The sampling below Nyquist lowers the bandwidth requirement, but moving images suffer
from less resolution.
Europe began its HDTV project in mid-1986 with a joint initiative involving West Germany (Robert Bosch
GmbH), the Netherlands (NV Philips), France (Thomson SA), and the United Kingdom (Thorn/EMI Plc.).
The system, termed Eureka 95 or D2-MAC, has 1152 lines per frame, 50 fields per second, 2:1 interlaced
scanning, and a 16:9 aspect ratio. A more recent European proposed standard is for 1250 scanning lines at 50
fields per second. This is known as the Eureka EU95. It is significant to note that the number of lines specified
by Eureka EU95 is exactly twice that of the PAL and SECAM standard currently in use. The field rate is the same,
so it is possible to devise compatible systems that would permit reception of HDTV by current receivers (of course,
with adapters and without enhanced definition). The HDTV signal requires nominally 30 MHz of bandwidth.
In the United States, the FCC has ruled (in March 1990) that any new HDTV system must permit continuation
of service to contemporary NTSC receivers. This significant constraint applies to terrestrial broadcasting (as
opposed to videodisk, videotape, and cable television). The HDTV signals will be sent on “taboo channels,”
those that are not used in metropolitan areas to provide adequate separation. Thus, these currently unused
channels would be used for simulcast signals. Since the proposed HDTV system for the United States uses
digital transmission, transmitter power can be less than that used for conventional television — this reduces
interference with adjacent channels. Indeed, in heavily populated urban areas (where many stations are licensed
for broadcast), the HDTV signals will have to be severely limited in power.
When a color television signal is converted from analog to digital (A/D), the luminance, hue, and saturation
signals must each be digitized using 8 bits of A/D per sample. Digital transmission of conventional television
therefore requires a nominal bit rate of about 216 megabits/s, while uncompressed HDTV nominally requires
about 1200 megabits/s. If we were to use a digital modulation system that transmits 1 bit per hertz of bandwidth,
we see that the HDTV signal requires over 1 GHz of bandwidth, yet only 6 MHz is allocated. Clearly significant
data compression is required!
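The bandwidth arithmetic in this paragraph can be checked directly. In the sketch below, the 13.5-MHz luminance and 6.75-MHz color-difference sampling rates are assumed studio values (the text quotes only the resulting totals):

```python
# Nominal bit-rate arithmetic for digitized television. The sampling rates
# (13.5 MHz luminance, 6.75 MHz per color-difference signal, 8 bits each)
# are common studio values assumed here to reproduce the quoted totals.
luma_rate = 13.5e6 * 8                  # 108 Mbit/s
chroma_rate = 2 * 6.75e6 * 8            # 108 Mbit/s
conventional = luma_rate + chroma_rate  # 216 Mbit/s nominal, as quoted

hdtv = 1200e6                           # nominal uncompressed HDTV rate, bit/s
allocated_bw = 6e6                      # allocated channel bandwidth, Hz

# At 1 bit per hertz, required bandwidth in Hz equals bit rate in bit/s,
# so fitting HDTV into 6 MHz demands roughly 200:1 data compression:
compression_needed = hdtv / allocated_bw
print(f"conventional: {conventional/1e6:.0f} Mbit/s, "
      f"HDTV compression needed: {compression_needed:.0f}:1")
```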
Proposed Systems
In the early 1990s, four digital HDTV approaches were submitted for FCC testing. The four were proposed by
General Instrument Corporation, the Advanced Television Research Consortium (composed of NBC, David
Sarnoff Research Center, Philips Consumer Electronics, and Thomson Consumer Electronics, Inc.), Zenith
Electronics in cooperation with AT&T Bell Labs and AT&T Microelectronics, and the American Television
Alliance (General Instrument Corporation and MIT). There were many common aspects to the four proposals,
but major differences existed in the data compression approaches. The data compression techniques can be
viewed as two-dimensional extensions of techniques used in voice encoding.
Something unprecedented happened in Spring 1993. The various competing parties decided, with some
encouragement from an FCC advisory committee, to merge to form a Grand Alliance. The Alliance consists
of seven members: AT&T, General Instrument Corp., MIT, Philips, Sarnoff, Thomson, and Zenith. This per-
mitted the selection of the “best” features of each of the proposals. The advisory committee was then able to
spend Fall 1995 on completion of the proposed HDTV standard. In the following, we describe a generic system.
The reader is referred to the references for details.
Figure 69.22 shows a general block diagram of a digital HDTV transmitter. Each frame from the camera is
digitized, and the system has the capability of storing one entire frame. Thus the processor works with two
inputs—the current frame (A) and the previous frame (B). The current frame and the previous frame are
compared in a motion detector that generates coded motion information (C). Algorithms used for motion
estimation attempt to produce three-dimensional parameters from sequential two-dimensional information.
Parameters may include velocity estimates for blocks of the picture.
The parameters from the motion detector are processed along with the previous frame to produce a prediction
of the current frame (D). Since the motion detector parameters are transmitted, the receiver can perform a
similar prediction of the current frame.
The predicted current frame is compared to the actual current frame, and a difference signal (E) is generated.
This difference signal will generally have a smaller dynamic range than the original signal. For example, if the
television image is static (is not changing with time), the difference signal will be zero.
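A toy numerical sketch of this differential scheme follows; for brevity the predictor is reduced to "previous frame, unchanged" (zero estimated motion), whereas the real system motion-compensates the previous frame using the transmitted parameters:

```python
# Toy numerical sketch of differential frame coding. The predictor here is
# simply the previous frame (zero estimated motion); a real system would
# motion-compensate it using the transmitted motion parameters.
import numpy as np

def encode_difference(current, previous):
    prediction = previous              # stand-in for the predict block (D)
    return current - prediction        # difference signal (E)

def decode_frame(difference, previous):
    prediction = previous              # receiver runs the identical predictor
    return prediction + difference     # reconstructed current frame

prev = np.array([[10, 20], [30, 40]], dtype=np.int16)
curr = prev.copy()                     # a static image...
diff = encode_difference(curr, prev)
assert not diff.any()                  # ...yields an all-zero difference
assert (decode_frame(diff, prev) == curr).all()
```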
The difference signal is compressed to form the transmitted video signal (F). This compression is performed
both in the time and transform domains. Entropy coding of the type used in facsimile can be incorporated to
take spatial continuity into account (i.e., a picture usually does not change over the span of a single picture
element, so variations of “run length” coding can often compress the data). The compression technique
incorporates the MPEG-2 syntax. The actual compression algorithms (based on the discrete cosine transform)
are adaptive so a variety of formats can be accommodated (e.g., 1080-line interlaced scanning, 720-line
progressive scanning, bi-directional prediction). The main feature is that the data rate is decreased by
extracting essential parameters
that describe the waveform.
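As a concrete instance of the run-length idea mentioned above, a minimal encoder/decoder pair might look like the following (real entropy coders pair run lengths with variable-length codes, which are omitted here):

```python
# A minimal run-length codec: runs of identical picture elements collapse
# to (value, count) pairs. This shows only the spatial-continuity idea;
# broadcast entropy coders add variable-length coding on top.
def run_length_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((p, 1))               # start a new run
    return runs

def run_length_decode(runs):
    return [value for value, count in runs for _ in range(count)]

scan_line = [0, 0, 0, 0, 255, 255, 0, 0, 0]   # 9 samples...
encoded = run_length_encode(scan_line)        # ...become 3 (value, count) pairs
assert encoded == [(0, 4), (255, 2), (0, 3)]
assert run_length_decode(encoded) == scan_line
```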
Four data streams are asynchronously multiplexed to form the information to be transmitted (G). These
four signals consist of the coded differential video, the motion detector parameters, the digital audio signal
(using Dolby Labs’ AC-3 digital audio), and the synchronizing signals. Other information can be multiplexed,
including various control signals that may be needed by cable operators.
Forward error correction is applied to the multiplexed digital signal to produce an encoded signal (H) that
makes the transmission less susceptible to uncorrected bit errors. This is needed because of the anticipated low
transmission power levels. Error control is also important because compression can amplify error effects—a
single bit error can affect many picture elements.
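The principle of forward error correction can be shown with a deliberately simple (3,1) repetition code; broadcast systems use far stronger codes, and only the add-redundancy-then-correct idea carries over:

```python
# A (3,1) repetition code: each bit is transmitted three times and decoded
# by majority vote, correcting any single error per triplet. Illustrative
# only; real broadcast FEC is far more powerful.
def fec_encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    triplets = (coded[i:i + 3] for i in range(0, len(coded), 3))
    return [1 if sum(t) >= 2 else 0 for t in triplets]

data = [1, 0, 1, 1]
received = fec_encode(data)
received[4] ^= 1                     # channel flips one transmitted bit
assert fec_decode(received) == data  # the error is corrected
```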
The encoded data signal forms the input to the modulator. To further conserve bandwidth, a type of
quadrature modulation is employed. The actual form is 8-VSB, a variation of digital vestigial sideband that
includes trellis coding. This possesses many of the advantages of quadrature amplitude modulation (QAM).
The corresponding receiver is shown in Fig. 69.23. The receiver simply forms the inverse of each transmitter
operation. The received signal is first demodulated. The resulting data signal is decoded to remove the redun-
dancy and correct errors. A demultiplexer separates the signal into the original four (or more) data signals.
The audio and synchronization signals need no further processing.
The demultiplexed video signal is, hopefully, the same as the transmitted signal (“F”). We use letters with
quotation marks to indicate that the signals are estimates of their transmitted counterpart. This reproduced
video signal is decompressed, using the inverse algorithm of that used in the transmitter, to yield an estimate
of the original differential picture signal (“E”). The predict block in the receiver implements the same algorithm
as that of the transmitter. Its inputs are the reconstructed motion signal (“C”) and the previous reconstructed
frame (“B”). When the predictor output (“D”) is added to the reconstructed differential picture signal (“E”),
the result is a reconstructed version of the current frame.
FIGURE 69.22 Block diagram of HDTV transmitter.
Defining Terms
Aspect ratio: Ratio of frame width to height.
Digital vestigial sideband: A form of digital modulation in which one of the sidebands is partially
suppressed.
Discrete cosine transform: A popular format for video compression. The spatial signal is expanded in a
cosine series, where the higher frequencies represent increased video resolution.
Entropy coding: A form of data compression that reduces a transmission to a shorter length by reducing
signal redundancy.
Eureka 95 and EU95: European proposed HDTV systems.
Grand Alliance: A consortium formed of seven of the organizations proposing HDTV systems.
Interlaced scanning: A bandwidth reduction technique wherein every other scan line is first transmitted
followed by the “in between” lines.
Motion detector: A system that compares two adjacent frames to detect differences.
MPEG-2: Video compression standard devised by the Moving Picture Experts Group.
MUSE: Multiple sub-Nyquist encoding, a technique used in the Japanese HDTV system.
Taboo channels: Channels that the FCC does not currently assign in order to avoid interference from adjacent
channels.
Trellis coding: A form of digital encoding which provides a constraint (i.e., a structure) to a stream of digital
data.
Related Topic
69.3 Television Systems
References
G.W. Beakley, “Channel coding for digital HDTV terrestrial broadcasting,” IEEE Transactions on Broadcasting,
vol. 37, no. 4, 1991.
Grand Alliance, “Proposed HDTV standard,” available via ftp from ga-doc.sarnoff.com or by e-mail request to
grand_alliance@sarnoff.com.
R. Hopkins, “Digital HDTV broadcasting,” IEEE Transactions on Broadcasting, vol. 37, no. 4, 1991.
R.K. Jurgen, Ed., “High-definition television update,” IEEE Spectrum, April 1988.
R.K. Jurgen, Ed., “Consumer electronics,” IEEE Spectrum, January 1989.
R.K. Jurgen, Ed., “The challenges of digital HDTV,” IEEE Spectrum, April 1991.
J.C. McKinney, “HDTV approaches the end game,” IEEE Transactions on Broadcasting, vol. 37, no. 4, 1991.
S. Prentiss, HDTV, Blue Ridge Summit, Pa.: TAB Books, 1990.
M.S. Roden, Analog and Digital Communication Systems, 4th ed., Englewood Cliffs, N.J.: Prentice-Hall, 1996.
W.Y. Zou, “Digital HDTV compression techniques,” IEEE Transactions on Broadcasting, vol. 37, no. 4, 1991.
FIGURE 69.23 Block diagram of HDTV receiver.
Further Information
As HDTV transitions from a proposed system to a commercially available product, you can expect information
to appear in a variety of places from the most esoteric research publications to popular business and entertain-
ment publications. During the development process, the best places to look are the IEEE publications (IEEE,
NY) and the broadcasting industry journals. The IEEE Transactions on Broadcasting and the IEEE Transactions
on Consumer Electronics continue to have periodic articles relating to the HDTV standards and implementation
of these standards. Another source of information, though not overly technical, is the periodical Broadcasting
and Cable (Cahners Publishing, NY).
69.5 Digital Audio Broadcasting
Stanley Salek and Almon H. Clegg
Digital audio broadcasting (DAB) is a developing technology that promises to give consumers a new and better
aural broadcast system. DAB will offer dramatically better reception quality than existing AM and FM broadcasts,
through improved audio fidelity and superior resistance to interference in both stationary and mobile/portable
reception environments. Additionally, the availability of a digital data stream delivered directly to consumers
will open the prospect of providing extra services to augment basic sound delivery.
As of this writing, seven proponents have announced DAB transmission and reception systems. From the
data available describing these potential systems, it is clear that there is only partial agreement on which
transmission method will provide the best operational balance. This chapter provides a general overview of the
common aspects of DAB systems, as well as a description of one of the proposed transmission methods.
The Need for DAB
In the years since the early 1980s, the consumer marketplace has undergone a great shift toward digital electronic
technology. The explosion of personal computer use has led to greater demands for information, including
multimedia integration. Over the same time period, compact disc (CD) digital audio technology has overtaken
long-playing records (and has nearly overtaken analog tape cassettes) as the consumer audio playback medium
of choice. Similar digital transcription methods and effects also have been incorporated into commonly available
audio and video equipment. Additionally, it is virtually certain that the upcoming transition to a high-definition
television broadcast system will incorporate full digital methods for video and audio transmission. Because of
these market pressures, the radio broadcast industry has determined that the existing analog methods of
broadcasting must be updated to keep pace with the advancing audio marketplace.
In addition to providing significantly enhanced audio quality, DAB systems are being developed to overcome
the technical deficiencies of existing AM and FM analog broadcast systems. The foremost problem of current
broadcast technology, as perceived by the industry, is its susceptibility to interference. AM medium-wave
broadcasts, operating in the 530- to 1700-kHz frequency range, are prone to disruption by fluorescent lighting
and by power system distribution networks, as well as by numerous other manufactured unintentional radiators,
including computer and telephone systems. Additionally, natural effects, such as nighttime skywave propagation
interference between stations and lightning, cause irritating service disruption to AM reception. FM broadcast
transmissions in the 88- to 108-MHz band are much more resistant to these types of interference. However,
multipath propagation and abrupt signal fading, especially found in urban and mountainous areas containing
a large number of signal reflectors and shadowers (e.g., buildings and terrain), can seriously degrade FM
reception, particularly in automobiles.
DAB System Design Goals
DAB systems are being designed with several technical goals in mind. The first goal is to create a service that
delivers compact disc quality stereo sound for broadcast to consumers. The second is to overcome the inter-
ference problems of current AM and FM broadcasts, especially under portable and mobile reception conditions.
Third, DAB must be spectrally efficient in that total bandwidth should be no greater than that currently used
for FM broadcasts. Fourth, the DAB system should provide space in its data stream to allow for the addition
of ancillary services, such as program textual information display or software downloading. Finally, DAB
receivers must not be overly cumbersome, complex, or expensive, to foster rapid consumer acceptance.
In addition to these goals, desired features include reduced RF transmission power requirements (when
compared to AM and FM broadcast stations with the same signal coverage), a mechanism to seamlessly fill
in coverage areas that are shadowed from the transmitted signal, and the ability to easily integrate DAB receivers
into personal, home, and automotive sound systems.
Historical Background
DAB development work began in Europe in 1986, with the initial goal to provide high-quality audio services
to consumers directly by satellite. Companion terrestrial systems were developed to evaluate the technology
being considered, as well as to provide fill-in service in small areas where the satellite signals were shadowed.
A consortium of European technical organizations known as Eureka-147/DAB demonstrated the first working
terrestrial DAB system in Geneva in September 1988. Subsequent terrestrial demonstrations of the system
followed in Canada in the summer of 1990, and in the United States in April and September of 1991.
For the demonstrations, VHF and UHF transmission frequencies between 200 and 900 MHz were used with
satisfactory results. Because most VHF and UHF frequency bands suitable for DAB are already in use (or
reserved for high-definition television and other new services), an additional Canadian study in 1991 evaluated
frequencies near 1500 MHz (L-band) for use as a potential worldwide DAB allocation. This study concluded
that L-band frequencies would support a DAB system such as Eureka-147, while continuing to meet the overall
system design goals.
In early 1992, the World Administrative Radio Conference (WARC-92) was held, during which frequency
allocations for many different radio systems were debated. As a result of WARC-92, a worldwide L-band standard
of 1452 to 1492 MHz was designated for both satellite and terrestrial digital radio broadcasting. However,
because of existing government and military uses of L-band, the United States was excluded from the standard.
Instead, an S-band allocation of 2310 to 2360 MHz was substituted. Additionally, Asian nations including Japan,
China, and the CIS opted for an extra S-band allocation in the 2535- to 2655-MHz frequency range.
In mid-1991, because of uncertainty as to the suitability of using S-band frequencies for terrestrial broad-
casting, most DAB system development work in the United States shifted from out-band (i.e., UHF, L-band,
and S-band) to in-band. In-band terrestrial systems would merge DAB services with existing AM and FM
broadcasts, using novel adjacent- and co-channel modulating schemes. Since 1992, two system proponents have
demonstrated proprietary methods of extracting a compatible digital RF signal from co-channel analog FM
broadcast transmissions. Thus, in-band DAB could permit a logical transition from analog to digital broad-
casting for current broadcasters, within the current channel allocation scheme.
In 1991, a digital radio broadcasting standards committee was formed by the Electronic Industries Association
(EIA). Present estimates are that the committee may complete its testing and evaluation of the various proposed
systems by 1997. As of mid-1996, laboratory testing of several proponent systems had been completed, and
field testing of some of those systems, near San Francisco, Calif., was getting underway.
Technical Overview of DAB
Regardless of the actual signal delivery system used, all DAB systems share a common overall topology.
Figure 69.24 presents a block diagram of a typical DAB transmission system.
To maintain the highest possible audio quality, program material would be broadcast from digital sources,
such as CD players and digital audio recorders, or digital audio feeds from network sources. Analog sources,
such as microphones, are converted to a digital audio data stream using an analog-to-digital (A/D) converter,
prior to switching or summation with the other digital sources.
The linear digital audio data stream from the studio is then applied to the input of a source encoder. The
purpose of this device is to reduce the required bandwidth of the audio information, helping to produce a
spectrally efficient RF broadcast signal. For example, 16-bit linear digital audio sampled at 48 kHz (the standard
professional rate) requires a data stream of 1.536 megabits/s to transmit a stereo program in a serial format.
This output represents a bandwidth of approximately 1.5 MHz, much greater than that used by an equivalent
analog audio modulating signal [Smyth, 1992]. Source encoders can reduce the data rate by factors of 8:1 or
more, yielding a much more efficient modulating signal.
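The data rates quoted above follow directly from the sampling parameters:

```python
# Checking the quoted studio data rate: 16-bit linear PCM at 48 kHz, stereo.
bits_per_sample = 16
sample_rate = 48_000       # Hz, the standard professional rate
channels = 2

serial_rate = bits_per_sample * sample_rate * channels
assert serial_rate == 1_536_000          # 1.536 Mbit/s, as quoted

encoded_rate = serial_rate // 8          # an 8:1 source encoder
print(f"{serial_rate/1e6:.3f} Mbit/s -> {encoded_rate/1e3:.0f} kbit/s")
```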
Following the source encoder, the resulting serial digital signal is applied to the input of the channel encoder,
a device that modulates the transmitted RF wave with the reduced-rate audio information. Auxiliary serial data,
such as program information and/or receiver control functions, also can be input to the channel encoder for
simultaneous transmission.
The channel encoder uses sophisticated modulating techniques to accomplish the goals of interference
cancellation and high spectral efficiency. Methods of interference cancellation include expansion of time and
frequency diversity of the transmitted information, as well as the inclusion of error correction codes in the data
stream. Time diversity involves transmitting the same information multiple times by using a predetermined
time interval. Frequency diversity, such as that produced by spread-spectrum, multiple-carrier, or frequency-
hopping systems, provides the means to transmit identical data on several different frequencies within the
bandwidth of the system. At the receiver, real-time mathematical processes are used to locate the required data
on a known frequency at a known time. If the initial information is found to be unusable because of signal
interference, the receiver simply uses the same data found on another frequency and/or at another time,
producing seamless demodulation.
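A toy model makes the diversity idea concrete. Here the same block is broadcast on several (frequency, time) slots and the receiver keeps the first copy that escaped interference; the slot layout and the notion of a "corrupted" slot are invented purely for illustration:

```python
# Toy model of time/frequency diversity: identical copies of a data block
# are placed in several (frequency, time) slots, and the receiver scans
# the known slots for the first usable copy.
def transmit(block, slots):
    return {slot: block for slot in slots}     # identical copy in every slot

def receive(channel, slots, corrupted):
    for slot in slots:                         # known frequency, known time
        if slot not in corrupted:
            return channel[slot]               # first usable copy wins
    return None                                # every copy lost: an outage

slots = [("f1", "t0"), ("f2", "t0"), ("f1", "t1")]
channel = transmit(b"audio-frame", slots)
# A fade wipes out everything carried on frequency f1:
faded = {("f1", "t0"), ("f1", "t1")}
assert receive(channel, slots, faded) == b"audio-frame"
```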
Spectral efficiency is a function of the modulation system used. Among the modulation formats that have
been proposed for DAB transmission are QPSK, M-ary QAM, and MSK [Springer, 1992]. Using these and other
formats, digital transmission systems that use no more spectrum than their analog counterparts have been
designed.
The RF output signal of the channel encoder is amplified to the appropriate power level for transmission.
Because the carrier-to-noise (C/N) ratio of the modulated waveform is generally not as critical as that required
for analog communications systems, relatively low transmission power often can be used. Depending on the
sophistication of the data recovery circuits in the DAB receiver, C/N ratios as low as 6 dB can be used without
degrading the received signal.
DAB reception is largely the inverse of the transmission process, with the inclusion of sophisticated error
correction circuits. Fig. 69.25 shows a typical DAB receiver.
DAB reception begins in a similar manner as is used in virtually all receivers. A receiving antenna feeds an
appropriate stage of RF selectivity and amplification from which a sample of the coded DAB signal is derived.
This signal then drives a channel decoder, which reconstructs the audio and auxiliary data streams. To accom-
plish this task, the channel decoder must demodulate and de-interleave the data contained on the RF carrier
and then apply appropriate computational and statistical error correction functions.
The source decoder converts the reduced bit-rate audio stream back to a pseudolinear form at the original sampling
rate. The decoder computationally expands the mathematically reduced data and fills the gaps left from the
extraction of irrelevant audio information with averaged code or other masking data. The output of the source
FIGURE 69.24 An example DAB transmission system. (Source: Hammett & Edison, Inc., Consulting Engineers.)
decoder feeds audio digital-to-analog (D/A) converters, and the resulting analog stereo audio signal is amplified
for the listener.
In addition to audio extraction, DAB receivers likely will be capable of decoding auxiliary data. This data
can be used in conjunction with the user interface to control receiver functions, or for a completely separate
purpose. A typical user interface could contain a data display screen in addition to the usual receiver tuning
and audio controls. This data screen could be used to obtain information about the programming, news reports,
sports scores, advertising, or any other useful data sent by the station or an originating network. Also, external
interfaces could be used to provide a software link to personal computer systems.
Audio Compression and Source Encoding
The development of digital audio encoding started with research into pulse-code modulation (PCM) in the
late 1930s and evolved, shortly thereafter, to include work on the principles of digital PCM coding. Linear
predictive coding (LPC) and adaptive differential pulse-code modulation (ADPCM) algorithms had evolved in the
early 1970s and later were adopted into standards such as G.721 (published by the CCITT) and CD-I (Compact
Disc-Interactive). At the same time, algorithms were being invented for use with phoneme-based speech coding.
Phonetic coding, a first-generation “model-based” speech-coding algorithm, was mainly implemented for low
bit-rate speech and text-to-speech applications. These classes of algorithms for speech further evolved to include
both CELP (Code Excited Linear Predictive) and VSELP (Vector Selectable Excited Linear Predictive) algorithms
by the mid-1980s. In the late 1980s, these classes of algorithms were also shown to be useful for high-quality
audio music coding. These audio algorithms were put to commercial use from the late 1970s to the latter part
of the 1980s.
Subband coders evolved from the early work on quadrature mirror filters in the mid-1970s and continued
with polyphase filter-based schemes in the mid-1980s. Hybrid algorithms employing both subband and ADPCM
coding were developed in the latter part of the 1970s and standardized (e.g., CCITT G.722) in the mid- to late
1980s. Adaptive transform coders for audio evolved in the mid-1980s from speech coding work done in the
late 1970s.
By employing the psychoacoustic noise-masking properties of the human ear, perceptual encoding evolved from
early work of the 1970s in which high-quality speech coders were employed. Music-quality bit-rate reduction
schemes such as MPEG (Moving Picture Experts Group), PASC (Precision Adaptive Subband Coding), and
ATRAC (Adaptive TRansform Acoustic Coding) have been developed. Further refinements to the technology
will focus attention on novel approaches such as wavelet-based coding and the use of entropy coding schemes.
However, recent progress has been significant, and the various audio coding schemes that have been demon-
strated publicly over the time period from 1990 to 1995 have shown steady increases in compression ratios at
given audio quality levels.
Audio coding for digital broadcasting will likely use one of the many perceptual encoding schemes previously
mentioned or some variation thereof. Fundamentally, they all depend on two basic psychoacoustic phenomena:
FIGURE 69.25 An example DAB receiver. (Source: Hammett & Edison, Inc., Consulting Engineers.)
(1) the threshold of human hearing, and (2) masking of nearby frequency components. In the early days of
hearing research, Harvey Fletcher, a researcher at Bell Laboratories, measured the hearing of many human
beings and published the well-known Fletcher-Munson threshold-of-hearing chart. Basically, it states that,
depending on frequency, sounds below certain levels cannot be heard by the human ear. The masking effect,
simply stated, is that when two frequencies are very close to each other and one is at a higher level than the
other, the weaker of the two is masked and will not be heard. Together, these two principles allow as much as
80% of the data representing a musical signal to be discarded.
Figure 69.26 shows how introduction of frequency components affects the ear’s threshold of hearing versus
frequency. Figure 69.27 shows how the revised envelope of audibility results in the elimination of components
that would not be heard.
The electronic implementation of these algorithms employs a digital filter that breaks the audio spectrum
into many subbands, and coefficient criteria built into the algorithm decide when it is permissible
to remove one or more of the signal components. The details of how the bands are divided and how the
coefficients are determined are usually proprietary to the individual system developers. Standardization groups
have spent many worker-hours of evaluation attempting to determine the most accurate coding system.
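The discarding step can be sketched as follows; the band labels, levels, and thresholds are made-up numbers, since real encoders derive the masking threshold frame-by-frame from a proprietary psychoacoustic model of the two phenomena described above:

```python
# Sketch of perceptual discarding: subband components below their masking
# threshold are dropped. All numbers here are invented for illustration.
def discard_masked(levels_db, threshold_db):
    """Keep only components audible above the masking threshold."""
    return {band: level for band, level in levels_db.items()
            if level > threshold_db[band]}

levels = {"500 Hz": 40, "520 Hz": 18, "4 kHz": 25}       # component level, dB
threshold = {"500 Hz": 20, "520 Hz": 35, "4 kHz": 15}    # masking threshold, dB
kept = discard_masked(levels, threshold)
assert kept == {"500 Hz": 40, "4 kHz": 25}   # the quiet 520-Hz tone is masked
```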
System Example: Eureka-147/DAB
As of this writing, Eureka-147/DAB is the only fully developed DAB system that has demonstrated a capability
to meet virtually all the described system goals. Developed by a European consortium, it is an out-band system
FIGURE 69.26 An example of the masking effect. Based on the hearing threshold of the human ear (dashed line), a 500-
Hz sinusoidal acoustic waveform, shown at A on the left graph, is easily audible at relatively low levels. However, it can be
masked by adding nearby higher-amplitude components, as shown on the right. (Source: CCi.)
FIGURE 69.27 Source encoders use an empirically derived masking threshold to determine which audio components can
be discarded (left). As shown on the right, only the audio components with amplitudes above the masking threshold are
retained. (Source: CCi.)
in that its design is based on the use of a frequency spectrum outside the AM and FM radio broadcast bands.
Out-band operation is required because the system packs up to 16 stereophonic broadcast channels (plus
auxiliary data) into one contiguous band of frequencies, which can occupy a total bandwidth of up to 4 MHz.
Thus, overall efficiency is maintained, with 16 digital program channels occupying about the same total
bandwidth as 16 equivalent analog FM broadcast channels. System developers have promoted Eureka-147/DAB
for satellite transmission, as well as for terrestrial applications in locations that have a suitable block of unused
spectrum in the L-band frequency range or below.
In recent tests and demonstrations, the ISO/MPEG-2 source encoding/decoding system has been used.
Originally developed by IRT (Institut für Rundfunktechnik) in Germany as MUSICAM (Masking pattern-
adapted Universal Subband Integrated Coding And Multiplexing), the system works by dividing the original
digital audio source into 32 subbands. As with the source encoders described earlier, each of the bands is digitally
processed to remove redundant information and sounds that are not perceptible to the human ear. Using this
technique, the original audio, with a data rate of 768 kilobits/s per channel, is reduced to as little as 96 kilobits/s
per channel, representing a compression ratio of 8:1.
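The per-channel figures quoted here are consistent with the 48-kHz, 16-bit source format:

```python
# Verifying the MUSICAM figures quoted above, per audio channel.
source_rate = 48_000 * 16        # 48-kHz sampling x 16 bits = 768,000 bit/s
encoded_rate = 96_000            # reduced rate, bit/s
assert source_rate == 768_000
assert source_rate / encoded_rate == 8.0   # the stated 8:1 compression ratio
```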
The Eureka-147/DAB channel encoder operates by combining the transmitted program channels into a large
number of adjacent narrowband RF carriers, which are each modulated using QPSK and grouped in a way
that maximizes spectrum efficiency known as orthogonal frequency-division multiplex (OFDM). The infor-
mation to be transmitted is distributed among the RF carriers and is also time-interleaved to reduce the effects
of selective fading. A guard interval is inserted between blocks of transmitted data to improve system resistance
to intersymbol interference caused by multipath propagation. Convolutional coding is used in conjunction
with a Viterbi maximum-likelihood decoding algorithm at the receiver to make constructive use of echoed
signals and to correct random errors [Alard and Lassalle, 1988].
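A conceptual OFDM modulator can be sketched in a few lines: QPSK symbols are placed on orthogonal carriers via an inverse FFT, and a cyclic-prefix guard interval is copied from the tail of each block. The carrier count and guard length below are arbitrary illustrations, not actual Eureka-147 mode parameters:

```python
# Conceptual OFDM modulator sketch (not Eureka-147 parameters).
import numpy as np

# Gray-mapped QPSK constellation, assumed for illustration
QPSK = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}

def ofdm_symbol(bits, n_carriers, guard):
    pairs = zip(bits[0::2], bits[1::2])            # two bits per carrier
    symbols = np.array([QPSK[p] for p in pairs])
    assert len(symbols) == n_carriers
    block = np.fft.ifft(symbols)                   # superpose the carriers
    return np.concatenate([block[-guard:], block]) # prepend cyclic prefix

rng = np.random.default_rng(7)
bits = [int(b) for b in rng.integers(0, 2, 2 * 64)]
tx = ofdm_symbol(bits, n_carriers=64, guard=16)
assert len(tx) == 80                  # 64 carrier samples + 16-sample guard
# The guard interval repeats the block tail, so echoes delayed by less than
# the guard interval appear as circular shifts, which the receiver tolerates:
assert np.allclose(tx[:16], tx[-16:])
```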
RF power levels of just a few tens of watts per program channel have been used in system demonstrations,
providing a relatively wide coverage area, depending on the height of the transmitting antenna above surround-
ing terrain. This low power level is possible because the system can operate at a C/N ratio of less than 10 dB,
as opposed to the more than 30 dB that is required for high-fidelity demodulation of analog FM broadcasts.
Another demonstrated capability of the system is its ability to use “gap filler” transmitters to augment signal
coverage in shadowed areas. A gap filler is simply a system that directly receives the DAB signal at an unob-
structed location, provides RF amplification, and retransmits the signal, on the same channel, into the shadowed
area. Because the system can make constructive use of signal reflections (within a time window defined by the
guard interval and other factors), the demodulated signal remains uninterrupted on a mobile receiver as it travels
from an area served by the main signal into the service area of the gap filler.
Defining Terms
Channel encoder: A device that converts source-encoded digital information into an analog RF signal for
transmission. The type of modulation used depends on the particular digital audio broadcasting (DAB)
system, although most modulation techniques employ methods by which the transmitted signal can be
made more resistant to frequency-selective signal fading and multipath distortion effects.
Gap filler: A low-power transmitter that boosts the strength of transmitted DAB RF signals in areas which
normally would be shadowed due to terrain obstruction. Gap fillers can operate on the same frequency
as DAB transmissions or on alternate channels that can be located by DAB receivers using automatic
switching.
Source encoder: A device that substantially reduces the data rate of linearly digitized audio signals by taking
advantage of the psychoacoustic properties of human hearing, eliminating redundant and subjectively
irrelevant information from the output signal. Transform source encoders work entirely within the
frequency domain, while time-domain source encoders work primarily in the time domain. Source
decoders reverse the process, using various masking techniques to simulate the properties of the original
linear data.
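The core idea of a transform source encoder can be sketched in a few lines. This is a deliberately simplified illustration with a single fixed threshold; real encoders (MUSICAM, for example) derive a per-subband masking threshold from the signal itself using a psychoacoustic model:

```python
def toy_transform_encode(coeffs, threshold):
    """Toy transform-domain encoder: keep only the frequency coefficients
    whose magnitude exceeds a masking threshold, discarding components
    judged inaudible.  Returns (index, value) pairs so the decoder can
    place the surviving coefficients back in the spectrum."""
    return [(i, c) for i, c in enumerate(coeffs) if abs(c) >= threshold]

# Eight spectral coefficients; the quiet ones are treated as masked.
spectrum = [0.9, 0.02, 0.4, 0.01, 0.0, 0.65, 0.03, 0.5]
kept = toy_transform_encode(spectrum, threshold=0.1)
print(kept)  # only 4 of the 8 coefficients survive
```

Even this crude scheme halves the data in the example; practical encoders add quantization and entropy coding on top of the masking decision to reach the large compression ratios DAB requires.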
Related Topics
69.2 Radio • 73.6 Data Compression
© 2000 by CRC Press LLC
References
M. Alard and R. Lassalle, “Principles of modulation and channel coding for digital broadcasting for mobile
receivers,” in Advanced Digital Techniques for UHF Satellite Sound Broadcasting (collected papers), Euro-
pean Broadcasting Union, pp. 47–69, 1988.
R. Bruno, “Digital audio and video compression, present and future,” presented to the Delphi Club, Tokyo,
Japan, July 1992.
G. Chouinard and F. Conway, “Broadcasting systems concepts for digital sound,” in Proceedings of the 45th
Annual Broadcast Engineering Conference, National Association of Broadcasters, 1991, pp. 257–266.
F. Conway, R. Voyer, S. Edwards, and D. Tyrie, “Initial experimentation with DAB in Canada,” in Proceedings
of the 45th Annual Broadcast Engineering Conference, National Association of Broadcasters, 1991, pp.
281–290.
S. Kuh and J. Wang, “Communications systems engineering for digital audio broadcast,” in Proceedings of the
45th Annual Broadcast Engineering Conference, National Association of Broadcasters, 1991, pp. 267–272.
P. H. Moose and J. M. Wozencraft, “Modulation and coding for DAB using multi-frequency modulation,” in
Proceedings of the 45th Annual Broadcast Engineering Conference, National Association of Broadcasters,
1991, pp. 405–410.
M. Rau, L. Claudy, and S. Salek, Terrestrial Coverage Considerations for Digital Audio Broadcasting Systems,
National Association of Broadcasters, 1990.
S. Smyth, “Digital audio data compression,” Broadcast Engineering Magazine, pp. 52–60, Feb. 1992.
K. D. Springer, Interference Between FM and Digital M-PSK Signals in the FM Band, National Association of
Broadcasters, 1992.
Further Information
The National Association of Broadcasters publishes periodic reports on the technical, regulatory, and political
status of DAB in the United States. Additionally, their Broadcast Engineering Conference proceedings published
since 1990 contain a substantial amount of information on emerging DAB technologies.
IEEE Transactions on Broadcasting, published quarterly by the Institute of Electrical and Electronics Engineers,
Inc., periodically includes papers on digital broadcasting.
Additionally, the biweekly newspaper publication Radio World provides continuous coverage of DAB tech-
nology, including proponent announcements, system descriptions, field test reports, and broadcast industry
reactions.