Tranter, W.H., and Kosbar, K.L., "Computer-Aided Design and Analysis of Communication Systems,"
The Electrical Engineering Handbook, ed. Richard C. Dorf,
Boca Raton: CRC Press LLC, 2000
78
Computer-Aided Design and Analysis of Communication Systems

William H. Tranter
University of Missouri–Rolla

Kurt L. Kosbar
University of Missouri–Rolla
78.1 Introduction
78.2 The Role of Simulation
78.3 Motivation for the Use of Simulation
78.4 Limitations of Simulation
78.5 Simulation Structure
78.6 The Interdisciplinary Nature of Simulation
78.7 Model Design
78.8 Low-Pass Models
78.9 Pseudorandom Signal and Noise Generators
78.10 Transmitter, Channel, and Receiver Modeling
78.11 Symbol Error Rate Estimation
78.12 Validation of Simulation Results
78.13 A Simple Example Illustrating Simulation Products
78.14 Conclusions
78.1 Introduction
It should be clear from the preceding chapters that communication systems exist to perform a wide variety of
tasks. The demands placed on today’s communication systems necessitate higher data rates, greater flexibility,
and increased reliability. Communication systems are therefore becoming increasingly complex, and the result-
ing systems cannot usually be analyzed using traditional (pencil and paper) analysis techniques. In addition,
communication systems often operate in complicated environments that are not analytically tractable. Examples
include channels that exhibit severe bandlimiting, multipath, fading, interference, non-Gaussian noise, and
perhaps even burst noise. The combination of a complex system and a complex environment makes the design
and analysis of these communication systems a formidable task. Some level of computer assistance must usually
be invoked in both the design and analysis process. The appropriate level of computer assistance can range
from simply using numerical techniques to solve a differential equation defining an element or subsystem to
developing a computer simulation of the end-to-end communication system.
There is another important reason for the current popularity of computer-aided analysis and simulation
techniques. It is now practical to make extensive use of these techniques. The computing power of many personal
computers and workstations available today exceeds the capabilities of many large mainframe computers of
only a decade ago. The low cost of these computing resources makes them widely available. As a result, significant
computing resources are available to the communications engineer within the office or even the home environment.
Personal computers and workstations tend to be resources dedicated to a specific individual or project. Since
the communications engineer working at his or her desk has control over the computing resource, lengthy
simulations can be performed without interfering with the work of others. Over the past few years a number
of software packages have been developed that allow complex communication systems to be simulated with
relative ease [Shanmugan, 1988]. The best of these packages contains a wide variety of subsystem models as
well as integrated graphics packages that allow waveforms, spectra, histograms, and performance characteristics
to be displayed without leaving the simulation environment. For those motivated to generate their own
simulation code, the widespread availability of high-quality C, Pascal, and FORTRAN compilers makes it
possible for large application-specific simulation programs to be developed for personal computers and work-
stations. When computing tools are both available and convenient to use, they will be employed in the day-to-
day efforts of system analysts and designers.
The purpose of this chapter is to provide a brief introduction to the subject of computer-aided design and
analysis of communication systems. Since computer-aided design and analysis almost always involves some
level of simulation, we focus our discussion on the important subject of the simulation of communication
systems.
Computer simulations can, of course, never replace a skilled engineer, although they can be a tremendous
help in both the design and analysis process. The most powerful simulation program cannot solve all the
problems that arise, and the process of making trade-off decisions will always be based on experience. In
addition, evaluating and interpreting the results of a complex simulation require considerable skill and insight.
While these remarks seem obvious, as computer-aided techniques become more powerful, one is tempted to
replace experience and insight with computing power.
78.2 The Role of Simulation
The main purposes of simulation are to help us understand the operation of a complex communication system,
to determine acceptable or optimum parameters for implementation of a system, and to determine the per-
formance of a communication system. There are basically two types of systems in which communication
engineers have interest: communication links and communication networks.
A communication link usually consists of a single source, a single user, and the components and channel between source and user. A typical link architecture is shown in Fig. 78.1. The important performance parameter in a
digital communication link is typically the reliability of the communication link as measured by the symbol
or bit error rate (BER). In an analog communication link the performance parameter of interest is typically
the signal-to-noise ratio (SNR) at the receiver input or the mean-square error of the receiver output. The
simulation is usually performed to determine the effect of system parameters, such as filter bandwidths or code
rate, or to determine the effect of environmental parameters, such as noise levels, noise statistics, or power
spectral densities.
A communication network is a collection of communication links with many signal sources and many users.
Computer simulation programs for networks often deal with problems of routing, flow and congestion control,
and network delay. While this chapter deals with the communication link, the reader is reminded that
network simulation is also an important area of study. The simulation methodologies used for communication
networks are different from those used on links because, in a communication link simulation, each waveform
present in the system is sampled using a constant sampling frequency. In contrast, network simulations are
event-driven, with the important events being such quantities as the time of arrival of a message.
FIGURE 78.1 Basic communication link.
Simulations can be developed to investigate either transient phenomena or steady-state properties of a system.
The study of the acquisition time of a phase-lock loop receiver is an example of a transient phenomenon.
Simulations that are performed to study transient behavior often focus on a single subsystem such as a receiver
synchronization system. Simulations that are developed to study steady-state behavior often model the entire
system. An example is a simulation to determine the BER of a system.
78.3 Motivation for the Use of Simulation
As mentioned previously, simulation is a reasonable approach to many design and analysis problems because
complex problems demand that computer-based techniques be used to support traditional analytical
approaches. There are many other motivations for making use of simulation.
A carefully developed simulation is much like having a breadboard implementation of the communication system
available for study. Experiments can be performed using the simulation much like experiments can be performed
using hardware. System parameters can be easily changed, and the impact of these changes can be evaluated. By
continuing this process, parametric studies can easily be conducted and acceptable, or perhaps even optimum,
parameter values can be determined. By changing parameters, or even the system topology, one can play “what if”
games much more quickly and economically using a simulation than with a system realized in hardware.
It is often overlooked that simulation can be used to support analysis. Many people incorrectly view simu-
lation as a tool to be used only when a system becomes too complex to be analyzed using traditional analysis
techniques. Used properly, simulation goes hand in hand with traditional techniques in that simulation can
often be used to guide analysis. A properly developed simulation provides insight into system operation. As an
example, if a system has many parameters, these can be varied in a way that allows the most important
parameters, in terms of system performance, to be identified. The least important parameters can then often
be discarded, with the result being a simpler system that is more tractable analytically. Analysis also aids
simulation. The development of an accurate and efficient simulation is often dependent upon a careful analysis
of various portions of the system.
78.4 Limitations of Simulation
Simulation, useful as it is, does have limitations. It must be remembered that a system simulation is an
approximation to the actual system under study. The nature of the approximations must be understood if one
is to have confidence in the simulation results. The accuracy of the simulation is limited by the accuracy to
which the various components and subsystems within the system are modeled. It is often necessary to collect
extensive experimental data on system components to ensure that simulation models accurately reflect the
behavior of the components. Even if this step is done with care, one can only trust the simulation model over
the range of values consistent with the previously collected experimental data. A principal source of error in a simulation arises when models are used at operating points beyond the range over which they are valid.
In addition to modeling difficulties, it should be realized that the digital simulation of a system can seldom
be made perfectly consistent with the actual system under study. The simulation is affected by phenomena not
present in the actual system. Examples are the aliasing errors resulting from the sampling operation and the
finite word length (quantization) effects present in the simulation. Practical communication systems use a
number of filters, and modeling the analog filters present in the actual system by the digital filters required by
the simulation involves a number of approximations. The assumptions and approximations used in modeling
an analog filter using impulse-invariant digital filter synthesis techniques are quite different from the assump-
tions and approximations used in bilinear z-transform techniques. Determining the appropriate modeling
technique requires careful thought.
Another limitation of simulation lies in the excessive computer run time that is often necessary for estimating
performance parameters. An example is the estimation of the system BER for systems having very low nominal
bit error rates. We will expand on this topic later in this chapter.
78.5 Simulation Structure
As illustrated in Fig. 78.1, a communication system is a collection of subsystems such that the overall system
provides a reliable path for information flow from source to user. In a computer simulation of the system, the
individual subsystems must first be accurately modeled by signal processing operations. The overall simulation
program is a collection of these signal processing operations and must accurately model the overall commu-
nication system. The important subject of subsystem modeling will be treated in a following section.
The first step in the development of a simulation program is to define the topology of the system, which
specifies the manner in which the individual subsystems are connected. The subsystem models must then be
defined by specifying the signal processing operation to be performed by each of the various subsystems. A
simulation structure may be either fixed topology or free topology. In a fixed topology simulation, the basic
structure shown in Fig. 78.1 is modeled. Various subsystems can be bypassed if desired by setting switches, but
the basic topology cannot be modified. In a free topology structure, subsystems can be interconnected in any
way desired and new additional subsystems can be added at will.
A simulation program for a communication system is a collection of at least three operations, shown in
Fig. 78.2, although in a well-integrated simulation these operations tend to merge together. The first operation,
sometimes referred to as the preprocessor, defines the parameters of each subsystem and the intrinsic parameters
that control the operation of the simulation. The second operation is the simulation exercisor, which is the
simulation program actually executed on the computer. The third operation performed in a simulation program
is that of postprocessing. This is a collection of routines that format the simulation output in a way which
provides insight into system operations and allows the performance of the communication system under study
to be evaluated. A postprocessor usually consists of a number of graphics-based routines, allowing the user to
view waveforms and other displays generated by the simulation. The postprocessor also includes a number of
routines that allow estimation of the bit error rate, signal-to-noise ratios, histograms, and power spectral densities.
When faced with the problem of developing a simulation of a communication system, the first fundamental
choice is whether to develop a custom simulation using a general-purpose high-level language or to use one
of the many special-purpose communication system simulation languages available. If the decision is made to
develop a dedicated simulation using a general-purpose language, a number of resources are needed beyond a
quality compiler and a mathematics library. Also needed are libraries for filtering routines, software models for
each of the subsystems contained in the overall system, channel models, and the waveform display and data
analysis routines needed for the analysis of the simulation results (postprocessing). While at least some of the
required software will have to be developed at the time the simulation is being written, many of the required
routines can probably be obtained from digital signal processing (DSP) programs and other available sources.
As more simulation projects are completed, the database of available routines becomes larger.
The other alternative is to use a dedicated simulation language, which makes it possible to develop a communication system simulation without the skills needed to write a custom simulation in a general-purpose high-level language. Many simulation languages are available for both personal computers and workstations [Shanmugan, 1988]. While the use of these resources can speed simulation development, the user must
ensure that the assumptions used in developing the models are well understood and applicable to the problem
of interest. In choosing a dedicated language from among those that are available, one should select a language
that has an extensive model library, an integrated postprocessor with a wide variety of data analysis routines,
on-line help and documentation capabilities, and extensive error-checking routines.
FIGURE 78.2 Typical structure of a simulation program.
78.6 The Interdisciplinary Nature of Simulation
The subject of computer-aided design and analysis of communication systems is very much interdisciplinary
in nature. The major disciplines that bear on the subject are communication theory, DSP, numerical analysis,
and stochastic process theory. The roles played by these subjects are clear. The simulation user must have a knowledge of communication theory if the simulation results are to be understood. The analysis
techniques of communication theory allow simulation results to be verified. Since each subsystem in the overall
communication system is a signal processing operation, the tools of DSP provide the algorithms to realize filters
and other subsystems. Numerical analysis techniques are used extensively in the development of signal pro-
cessing algorithms. Since communication systems involve random data signals, as well as noise and other
disturbances, the concepts of stochastic process theory are important in developing models of these quantities
and also for determining performance estimates.
78.7 Model Design
Practicing engineers frequently use models to investigate the behavior of complex systems. Traditionally, models
have been physical devices or a set of mathematical expressions. The widespread use of powerful digital
computers now allows one to generate computer programs that model physical systems. Although the detailed
development and use of computer models differs significantly from their physical and mathematical counter-
parts, the computer models share many of the same design constraints and trade-offs. For any model to be
useful one must guarantee that the response of the model to stimuli will closely match the response of the
target system, the model must be designed and fabricated in much less time and at significantly less expense
than the target system, and the model must be reasonably easy to validate and modify. In addition to these
constraints, designers of computer models must ensure that the amount of processor time required to execute
the model is not excessive. The optimal model is the one that appropriately balances these conflicting require-
ments. Figure 78.3 describes the typical design trade-off faced when developing computer models. A somewhat
surprising observation is that the optimal model is often not the one that most closely approximates the target
system. A highly detailed model will typically require a tremendous amount of time to develop, will be difficult
to validate and modify, and may require prohibitive processor time to execute. Selecting a model that achieves
a good balance between these constraints is as much an art as a science. Being aware of the trade-offs which
exist, and must be addressed, is the first step toward mastering the art of modeling.
FIGURE 78.3 Design constraints and trade-offs.
78.8 Low-Pass Models
In most cases of practical interest the physical layer of the communication system will use continuous time
(CT) signals, while the simulation will operate in discrete time (DT). For the simulation to be useful, one must
develop DT signals and systems that closely match their CT counterparts. This topic is discussed at length in
introductory DSP texts. A prominent result in this field is the Nyquist sampling theorem, which states that if
a CT signal has no energy above frequency f_h Hz, one can create a DT signal that contains exactly the same information by sampling the CT signal at any rate in excess of 2f_h samples per second. Since the execution time of the simulation is proportional to the number of samples it must process, one naturally uses the lowest sampling rate possible. While the Nyquist theorem should not be violated for arbitrary signals, when the CT signal is bandpass one can use low-pass equivalent (LPE) waveforms that contain all the information of the CT signal but can be sampled slower than 2f_h.
Assume the energy in a bandpass signal is centered about a carrier frequency of f_c Hz and ranges from f_l to f_h Hz, resulting in a bandwidth of f_h – f_l = W Hz, as in Fig. 78.4. It is not unusual for W to be many orders of magnitude less than f_c. The bandpass waveform x(t) can be expressed as a function of two low-pass signals. Two essentially equivalent LPE expansions are known as the envelope/phase representation [Davenport and Root, 1958],

x(t) = A(t) cos[2πf_c t + θ(t)]    (78.1)
and the quadrature representation,
x(t) = x_c(t) cos(2πf_c t) – x_s(t) sin(2πf_c t)    (78.2)
All four real signals A(t), θ(t), x_c(t), and x_s(t) are low pass and have zero energy above W/2 Hz. A computer simulation that replaces x(t) with a pair of LPE signals will require far less processor time since the LPE waveforms can be sampled at W as opposed to 2f_h samples per second. It is cumbersome to work with two signals rather than one signal. A more mathematically elegant LPE expansion is

x(t) = Re{v(t) e^(j2πf_c t)}    (78.3)
where v(t) is a low-pass, complex time-domain signal that has no energy above W/2 Hz. Signal v(t) is known as the complex envelope of x(t) [Haykin, 1983]. It contains all the information of x(t) and can be sampled at W samples per second without aliasing. This notation is disturbing to engineers accustomed to viewing all time-domain signals as real. However, a complete theory exists for complex time-domain signals, and with surprisingly little effort one can define convolution, Fourier transforms, analog-to-digital and digital-to-analog conversions, and many other signal processing algorithms for complex signals. If f_c and W are known, the LPE mapping is one-to-one so that x(t) can be completely recovered from v(t). While it is conceptually simpler to sample the CT signals at a rate in excess of 2f_h and avoid the mathematical difficulties of the LPE representation, the tremendous difference between f_c and W makes the LPE far more efficient for computer simulation. This type of trade-off frequently occurs in computer simulation. A careful mathematical analysis of the modeling problem conducted before any computer code is generated can yield substantial performance improvements over a conceptually simpler, but numerically inefficient, approach.

FIGURE 78.4 Amplitude spectrum of a bandpass signal.
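To make the computational advantage concrete, the following is a minimal Python sketch of complex-envelope processing; the bandwidth, oversampling factor, and filter taps are illustrative values, not parameters taken from this chapter.

```python
# A minimal sketch of complex-envelope (LPE) processing, assuming NumPy is
# available. All numerical values are illustrative. The key point: the
# simulation never represents the carrier; it works with v(t) sampled near
# W samples per second rather than x(t) sampled above 2 f_h.
import numpy as np

rng = np.random.default_rng(1)
W = 1.0e6                # signal bandwidth, Hz (hypothetical)
fs = 4 * W               # LPE sampling rate; W suffices in principle, but
                         # modest oversampling is common practice
nsym = 256
sps = 4                  # samples per symbol at this sampling rate

# Complex envelope v(t) of a QPSK-like test signal
phases = np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, nsym)
v = np.repeat(np.exp(1j * phases), sps)

# A bandpass filter acts on the complex envelope by convolution with the
# LPE of its impulse response (a crude moving average, purely illustrative)
h = np.ones(8) / 8.0
y = np.convolve(v, h, mode="same")

# The bandpass signal is implied by Eq. (78.3), x(t) = Re{v(t) e^(j2*pi*fc*t)},
# but it is never formed in the simulation; that omission is the source of
# the computational savings when f_c >> W.
```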
The fundamental reason the LPE representation outlined above is popular in simulation is that one can
easily generate LPE models of linear time-invariant bandpass filters. The LPE of the output of a bandpass filter
is merely the convolution of the LPE of the input signal and the LPE of the impulse response of the filter. It is
far more difficult to determine an LPE model for nonlinear and time-varying systems. There are numerous
approaches that trade off flexibility and simplicity. If the system is nonlinear and time invariant, a Volterra
series can be used. While this series will exactly represent the nonlinear device, it is often analytically intractable
and numerically inefficient. For nonlinear devices with a limited amount of memory the AM/AM, AM/PM
[Shimbo, 1971] LPE model is useful. This model accurately describes the response of many microwave amplifiers
including traveling-wave tubes, solid-state limiting amplifiers, and, under certain conditions, devices which
exhibit hysteresis. The Chebyshev transform [Blachman, 1964] is useful for memoryless nonlinearities such as
hard and soft limiters. If the nonlinear device is so complex that none of the conventional LPE models can be
used, one may need to convert the LPE signal back to its bandpass representation, route the bandpass signal
through a model of the nonlinear device, and then reconvert the output to an LPE signal for further processing.
If this must be done, one has the choice of increasing the sampling rate for the entire simulation or using
different sampling rates for various sections of the simulation. The second of these approaches is known as a
multirate simulation [Crochiere and Rabiner, 1983]. The interpolation and decimation operations required to
convert between sampling rates can consume significant amounts of processor time. One must carefully examine
this trade-off to determine if a multirate simulation will substantially reduce the execution time over a single,
high sampling rate simulation. Efficient and flexible modeling of nonlinear devices is in general a difficult task
and continues to be an area of active research.
78.9 Pseudorandom Signal and Noise Generators
The preceding discussion was motivated by the desire to efficiently model filters and nonlinear amplifiers. Since
these devices often consume the majority of the processor time, they are given high priority. However, there
are a number of other subsystems that do not resemble filters. One example is the data source that generates
the message or waveform which must be transmitted. While signal sources may be analog or digital in nature,
we will focus exclusively on binary digital sources. The two basic categories of signals produced by these devices
are known as deterministic and random. When performing worst-case analysis, one will typically produce known,
repetitive signal patterns designed to stress a particular subsystem within the overall communication system.
For example, a signal with few transitions may stress the symbol synchronization loops, while a signal with
many regularly spaced transitions may generate unusually wide bandwidth signals. The generation of this type
of signal is straightforward and highly application dependent. To test the nominal system performance one
typically uses a random data sequence. While generation of a truly random signal is arguably impossible [Knuth,
1981], one can easily generate pseudorandom (PN) sequences. PN sequence generators have been extensively
studied since they are used in Monte Carlo integration and simulation [Rubinstein, 1981] programs and in a
variety of wideband and secure communication systems. The two basic structures for generating PN sequences
are binary shift registers (BSRs) and linear congruential algorithms (LCAs).
Digital data sources typically use BSRs, while noise generators often use LCAs. A logic diagram for a simple
BSR is shown in Fig. 78.5. This BSR consists of a clock, six D-type flip-flops (F/F), and an exclusive OR gate
denoted by a modulo-two adder. If all the F/F are initialized to 1, the output of the device is the waveform
shown in Fig. 78.6. Notice that the waveform is periodic with period 63 = 2^6 – 1, but within one cycle the output has many of the properties of a random sequence. This example demonstrates the properties shared by BSRs, LCAs, and more advanced PN sequence generators. All PN generators have memory and must therefore be initialized
by the user before the first sample is generated. The initialization data is typically called the seed. One must
choose this seed carefully to ensure the output will have the desired properties (in this example, one must avoid
setting all F/F to zero). All PN sequence generators will produce periodic sequences. This may or may not be
a problem. If it is a concern, one should ensure that one period of the PN sequence generator is longer than
the total execution time of the simulation. This is usually not a significant problem, since one can easily construct
BSRs that have periods greater than 10^27 clock cycles. The final concern is how closely the behavior of the PN
sequence generator matches a truly random sequence. Standard statistical analysis algorithms have been applied
to many of these generators to validate their performance.
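The six-stage register of Fig. 78.5 can be expressed in a few lines of code. The following is a minimal Python sketch; the feedback taps (the last two stages) are one maximal-length choice for six stages and are an assumption here, since the figure may use a different tap set.

```python
# A minimal sketch of a six-stage binary shift register PN generator, in the
# spirit of Fig. 78.5. The feedback taps (XOR of the last two stages) give a
# maximal-length sequence of period 63; the figure's exact tap positions are
# an assumption here.
def bsr_sequence(seed=0b111111, nbits=63):
    """Generate nbits outputs from a six-stage maximal-length BSR."""
    state = seed & 0x3F                      # six flip-flops, all ones default
    out = []
    for _ in range(nbits):
        bit = state & 1                      # output of the last flip-flop
        fb = bit ^ ((state >> 1) & 1)        # XOR of the last two stages
        out.append(bit)
        state = (state >> 1) | (fb << 5)     # shift, feed back into stage 1
    return out

seq = bsr_sequence()                         # the all-zeros seed must be avoided

# Grouping successive output bits m at a time gives the M-ary source of
# Fig. 78.7 (here m = 2, so M = 4):
m = 2
symbols = [2 * seq[i] + seq[i + 1] for i in range(0, len(seq) - 1, m)]
```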
Many digital communication systems use m-bit (M-ary) sources, where m > 1. Figure 78.7 depicts a simple algorithm for generating an M-ary random sequence from a binary sequence. The clock must now cycle through
m cycles for every generated symbol, and the period of the generator has been reduced by a factor of m. This
may force the use of a longer-period BSR. Another common application of PN sequence generators is to produce
FIGURE 78.5 Six-stage binary shift register PN generator.
FIGURE 78.6 Output of a six-stage maximal length BSR.
FIGURE 78.7 M-ary PN sequence generator.
FIGURE 78.8 Generation of Gaussian noise.
samples of a continuous stochastic process, such as Gaussian noise. A structure for producing these samples is
shown in Fig. 78.8. In this case the BSR has been replaced by an LCA [Knuth, 1981]. The LCA is very similar
to a BSR in that it requires a seed value, is clocked once for each symbol generated, and will generate a periodic
sequence. One can generate a white noise process with an arbitrary first-order probability density function
(pdf) by passing the output of the LCA through an appropriately designed nonlinear, memoryless mapping.
Simple and well-documented algorithms exist for the uniform to Gaussian mapping. If one wishes to generate
a nonwhite process, the output can be passed through the appropriate filter. Generation of a wide-sense
stationary Gaussian stochastic process with a specified power spectral density is a well-understood and
-documented problem. It is also straightforward to generate a white sequence with an arbitrary first-order pdf
or to generate a specified power spectral density if one does not attempt to control the pdf. However, the
problem of generating a noise source with an arbitrary pdf and an arbitrary power spectral density is a significant
challenge [Sondhi, 1983].
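The structure of Fig. 78.8 is sketched below in Python. The LCA multiplier and modulus are the classic "minimal standard" values rather than constants taken from this chapter (Knuth [1981] tabulates well-tested choices), and the Box–Muller transform is used as one of the simple, well-documented uniform-to-Gaussian mappings the text mentions.

```python
# A minimal sketch of Fig. 78.8: an LCA driving a memoryless nonlinear
# mapping to produce Gaussian noise samples. The LCA constants are the
# classic "minimal standard" pair; the Box-Muller transform is one standard
# uniform-to-Gaussian mapping, assumed here for illustration.
import math

class LCA:
    """Linear congruential algorithm: seed -> periodic pseudorandom stream."""
    def __init__(self, seed=987654321, a=16807, m=2**31 - 1):
        self.state, self.a, self.m = seed, a, m

    def uniform(self):
        """Return a sample approximately uniform on (0, 1)."""
        self.state = (self.a * self.state) % self.m
        return self.state / self.m

def gaussian_pair(u1, u2):
    """Box-Muller mapping: two uniforms -> two independent N(0,1) samples."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

gen = LCA()
noise = []
for _ in range(500):
    g1, g2 = gaussian_pair(gen.uniform(), gen.uniform())
    noise.extend([g1, g2])
# Passing `noise` through a digital filter would shape its (currently flat)
# power spectral density, as the text describes for nonwhite processes.
```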
78.10 Transmitter, Channel, and Receiver Modeling
Most elements of transmitters, channels, and receivers are implemented using standard DSP techniques. Effects
that are difficult to characterize using mathematical analysis can often be included in the simulation with little
additional effort. Common examples include gain and phase imbalance in quadrature circuits, nonlinear
amplifiers, oscillator instabilities, and antenna platform motion. One can typically use LPE waveforms and
devices to avoid translating the modulator output to the carrier frequency. Signal levels in physical systems
often vary by many orders of magnitude, with the output of the transmitters being extremely high energy signals and the input to receivers being at very low energies. To reduce execution time and to avoid working with extremely large and small signal levels in the simulation, one often omits the effects of linear amplifiers and attenuators
and uses normalized signals. Since the performance of most systems is a function of the signal-to-noise ratio,
and not of absolute signal level, normalization will have no effect on the measured performance. One must be
careful to document the normalizing constants so that the original signal levels can be reconstructed if needed.
Even some rather complex functions, such as error detecting and correcting codes, can be handled in this
manner. If one knows the uncoded error rate for a system, the coded error rate can often be closely approximated
by applying a mathematical mapping. As will be pointed out below, the amount of processor time required to
produce a meaningful error rate estimate is often inversely proportional to the error rate. While an uncoded
error rate may be easy to measure, the coded error rate is usually so small that it would be impractical to execute
a simulation to measure this quantity directly. The performance of a coded communication system is most
often determined by first executing a simulation to establish the channel symbol error rate. An analytical
mapping can then be used to determine the decoded BER from the channel symbol error rate.
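As an illustration of such a mapping, the following Python sketch converts a simulated channel symbol error rate into an approximate decoded symbol error rate, assuming bounded-distance decoding of a t-error-correcting (n, k) block code. The (63, 55) Reed–Solomon parameters match the example of Section 78.13, but the formula is a standard approximation, not one quoted from this chapter.

```python
# A minimal sketch of the analytical mapping from channel symbol error rate
# to decoded symbol error rate, assuming bounded-distance decoding of a
# t-error-correcting (n, k) block code. The (63, 55) Reed-Solomon parameters
# (t = 4) match the example of Section 78.13.
from math import comb

def decoded_ser(p, n=63, k=55):
    """Approximate post-decoding symbol error rate from channel SER p."""
    t = (n - k) // 2              # symbol-error-correcting capability
    total = 0.0
    for i in range(t + 1, n + 1):
        # probability of exactly i channel symbol errors in a block of n
        p_i = comb(n, i) * p**i * (1 - p)**(n - i)
        total += (i / n) * p_i    # a failed block leaves roughly i bad symbols
    return total

# A channel SER of 1e-2 (measurable with a short simulation) maps to a
# decoded SER far too small to estimate directly by Monte Carlo methods.
print(decoded_ser(1e-2))
```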
Once the signal has passed through the channel, the original message is recovered by a receiver. This can
typically be realized by a sequence of digital filters, feedback loops, and appropriately selected nonlinear devices.
A receiver encounters a number of clearly identifiable problems that one may wish to address independently.
For example, receivers must initially synchronize themselves to the incoming signal. This may involve detecting that an input signal is present; acquiring estimates of the carrier amplitude, frequency, and phase; and establishing symbol synchronization, frame synchronization, and, in the case of spread spectrum systems, code synchronization.
Once acquisition is complete, the receiver enters a steady-state mode of operation, where concerns such as
symbol error rate, mean time to loss of lock, and reaction to fading and interference are of primary importance.
To characterize the system, the user may wish to decouple the analysis of these parameters to investigate
relationships that may exist.
For example, one may run a number of acquisition scenarios and gather statistics concerning the probability
of acquisition within a specified time interval or the mean time to acquisition. To isolate the problems faced
in synchronization from the inherent limitation of the channel, one may wish to use perfect synchronization
information to determine the minimum possible BER. Then the symbol or carrier synchronization can be held
at fixed errors to determine sensitivity to these parameters and to investigate worst-case performance. Noise
processes can be used to vary these parameters to investigate more typical performance. The designer may also
wish to investigate the performance of the synchronization system to various data patterns or the robustness
of the synchronization system in the face of interference. The ability to measure the system response to one
parameter while a wide range of other parameters are held fixed and the ability to quickly generate a wide
variety of environments are some of the more significant advantages that simulation enjoys over more conven-
tional hardware and analytical models.
78.11 Symbol Error Rate Estimation
One of the most fundamental parameters to measure in a digital communication system is the steady-state
BER. The simplest method for estimating the BER is to perform a Monte Carlo (MC) simulation. The
simulation conducts the same test one would perform on the physical system. All data sources and noise sources
produce typical waveforms. The output of the demodulator is compared to the output of the message source,
and the BER is estimated by dividing the number of observed errors by the number of bits transmitted. This
is a simple technique that will work with any system that has ergodic [Papoulis, 1965] noise processes. The
downside of this approach is that one must often pass a very large number of samples through the system to
produce a reliable estimate of the BER. The question of how many samples must be collected can be answered
using confidence intervals. The confidence interval gives a measure of how close the true BER will be to the
estimate produced by the MC simulation. A typical confidence interval curve is shown in Fig. 78.9. The ratio
of the size of the confidence interval to the size of the estimate is a function of the number of errors observed.
Convenient rules of thumb for this work are that after one error is observed the point estimate is accurate to
within 3 orders of magnitude, after 10 errors the estimate is accurate to within a factor of 2, and after 100 errors
the point estimate will be accurate to within a factor of 1.3. This requirement for tens or hundreds of errors to occur
frequently limits the usefulness of MC simulations for systems that have low error rates and has motivated
research into more efficient methods of estimating BER.
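A Monte Carlo BER estimator requires only a few lines of code. The sketch below is a minimal Python example, assuming ideal BPSK over an additive white Gaussian noise channel (a stand-in system chosen for brevity, not one drawn from this chapter); following the rules of thumb above, it runs until 100 errors have been observed.

```python
# A minimal sketch of Monte Carlo BER estimation, assuming ideal BPSK over
# an AWGN channel. Per the rules of thumb above, the loop runs until 100
# errors are observed, so the estimate should be good to about a factor of 1.3.
import numpy as np

def mc_ber(ebn0_db, target_errors=100, block=100_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(1.0 / (2 * 10 ** (ebn0_db / 10)))   # noise std for Eb = 1
    errors = bits = 0
    while errors < target_errors:
        data = rng.integers(0, 2, block)                # source bits
        x = 2.0 * data - 1.0                            # BPSK mapping
        y = x + sigma * rng.standard_normal(block)      # channel
        errors += np.count_nonzero((y > 0) != (data == 1))
        bits += block
    return errors / bits

print(mc_ber(6.0))   # compare with Q(sqrt(2 Eb/N0)) ~ 2.4e-3 at 6 dB
```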
FIGURE 78.9 Typical confidence interval (BER point estimate = 10^–6).

Perhaps the fastest method of BER estimation is the semi-analytic (SA) or quasi-analytic technique [Jeruchim,
1984]. This technique is useful for systems that resemble Fig. 78.10. In this case the mean of the decision metric
is a function of the transmitted data pattern and is independent of the noise. All other parameters of the pdf
of the decision metric are a function of the noise and are independent of the data. This means that one can
analytically determine the conditional pdf of the decision metric given the transmitted data pattern. By using
total probability one can then determine the unconditional error rate. The problem with conventional math-
ematical analysis is that when the channel has a significant amount of memory or the nonlinearity is rather
complex, one must compute a large number of conditional density functions. Simulation can easily solve this
problem for most practical systems. A noise-free simulation is executed, and the value of the decision metric
is recorded in a data file. Once the simulation is complete, this information can be used to reconstruct the
conditional and ultimately the unconditional error rate. This method generates highly accurate estimates of
the BER and makes very efficient use of computer resources, but can only be used in the special cases where
one can analytically determine the conditional pdf.
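In code, the SA technique amounts to averaging an analytic conditional error probability over the recorded noise-free decision metrics. The sketch below is a minimal Python example for the simplest case, a binary antipodal decision with Gaussian noise of known variance at the decision point; the metric values are hypothetical stand-ins for the data file a noise-free simulation would produce.

```python
# A minimal sketch of the semi-analytic (SA) technique for a binary antipodal
# decision with Gaussian noise of known standard deviation at the decision
# point. The metrics would normally come from a noise-free simulation run;
# the values here are hypothetical, with scatter representing intersymbol
# interference.
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def semi_analytic_ber(metrics, sigma):
    """Average the conditional error probability over recorded metrics."""
    return sum(q_function(abs(m) / sigma) for m in metrics) / len(metrics)

# Hypothetical noise-free decision metrics (ideal values are +/-1)
metrics = [1.00, 0.85, 1.10, -0.95, -1.05, 0.70, -0.80, 1.20]
print(semi_analytic_ber(metrics, sigma=0.35))
```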
The MC and SA techniques fall at the two extremes of BER estimation. MC simulations require no a priori
information concerning the system performance or architecture but may require tremendous amounts of
computer time to execute. SA techniques require an almost trivial amount of computer time for many cases
but require the analyst to have a considerable amount of information concerning the system. There is a
continuing search for algorithms that fall in between these extremes. These variance reduction algorithms all
share the property of making a limited number of assumptions concerning system performance and architecture,
then using this information to reduce the variance of the MC estimate. Popular techniques are summarized in
[Jeruchim, 1984] and include importance sampling, large deviation theory, extremal statistics, and tail extrap-
olation. To successfully use one of these techniques one must first understand the basic concept behind the
technique. Then one should carefully determine what assumptions were made concerning the system architec-
ture to determine if the system under study satisfies the requirements. This can be a difficult task since it is not
always clear what assumptions are required for a specified technique to be applicable. Finally, one should always
determine the accuracy of the measurement through some technique similar to confidence interval estimation.
78.12 Validation of Simulation Results
One often constructs a simulation to determine the value of a single parameter, such as the system BER. However, the estimate of this parameter has little or no value unless one can ensure that the simulation model closely
resembles the physical system. A number of methods can be used to validate a simulation. Individually, none
of them will guarantee that the simulation results are accurate, but taken as a group, they form a convincing
argument that the results are realistic. Seven methods of validation are mathematical analysis, comparison with
hardware, bounding techniques, degenerate case studies, reasonable relationship tests, subsystem tests, and
redundant simulation efforts.
If one has access to mathematical analysis or hardware measurements that predict or approximate the performance of the system, one should obviously compare the simulation results against these predictions. Unfortunately, in most cases
these results will not be available. Even though exact mathematical analysis of the system is not possible, it may
be possible to develop bounds on the system performance. If these bounds are tight, they may accurately
FIGURE 78.10 Typical digital communication system.
characterize the system performance, but even loose bounds will be useful since they help verify the simulation
results. Most systems have parameters that can be varied. While it may be mathematically difficult to determine
the performance of the system for arbitrary values, it is often possible to mathematically determine the results
when parameters assume extreme or degenerate values.
Other methods of validation are decidedly less mathematical. One may wish to vary parameters and ascertain
whether the performance parameter changes in a reasonable manner. For example, small changes in SNR rarely
cause dramatic changes in system performance. When constructing a simulation, each subsystem, such as filters,
nonlinear amplifiers, and noise and data sources, should be thoroughly tested before being included in a larger
simulation. Be aware, however, that correct operation of all the various subsystems that make up a communi-
cation system does not imply that the overall system performs correctly. If one is writing his or her own code,
one must verify that there are no software bugs or fundamental design errors. Even if one purchases a commercial
software package, there is no guarantee that the designer of the software models made the same assumptions
the user will make when using the model. In most cases it will be far easier to test a module before it is inserted
into a simulation than it will be to isolate a problem in a complex piece of code. The final check one may wish
to perform is a redundant simulation. There are many methods of simulating a system. One may wish to have
two teams investigate a problem or have a single team implement a simulation using two different techniques
to verify that the results are reasonable.
78.13 A Simple Example Illustrating Simulation Products
To illustrate the output that is typically generated by a communication system simulation, a simple example is
considered. The system is that shown in Fig. 78.10. An OQPSK (offset quadrature phase-shift keyed)
modulation format is assumed so that one of four waveforms is transmitted during each symbol period. The
data source may be viewed as a single binary source, in which the source symbols are taken two at a time when
mapped onto a transmitted waveform, or as two parallel data sources, with one source providing the direct
channel modulation and the second source providing the quadrature channel modulation. The signal constel-
lation at the modulator output appears as shown in Fig. 78.11(a), with the corresponding eye diagram appearing
as shown in Fig. 78.11(b). The eye diagram is formed by overlaying successive time intervals of a time domain
waveform onto a single graph, much as would be done with a common oscilloscope. Since the simulation
sampling frequency used in generating Fig. 78.11(b) was 10 samples per data symbol, it is easily seen that the
eye diagram was generated by retracing every 2 data symbols or 20 simulation samples. Since Fig. 78.11(a) and
(b) correspond to the modulator output, which has not yet been filtered, the transitions between binary states
occur in one simulation step. After filtering, the eye diagram appears as shown in Fig. 78.11(c). A seventh-order
Butterworth bilinear z-transform digital filter was assumed with a 3-dB bandwidth equal to the bit rate. It
should be noted that the bit transitions shown in Fig. 78.11(c) do not occur at the same times as the bit
transitions shown in Fig. 78.11(b). The difference is due to the group delay of the filter. Note in Fig. 78.10 that
the transmitter also involves a nonlinear amplifier. We will see the effects of this component later in this section.
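The overlay operation that produces these eye diagrams is easy to express in code. The following is a minimal Python sketch, assuming the 10 samples per data symbol and two-symbol retrace used for Fig. 78.11(b); the waveform array is a placeholder for any simulated signal.

```python
# A minimal sketch of eye-diagram construction, assuming 10 samples per data
# symbol and a two-symbol (20-sample) retrace as used for Fig. 78.11(b).
import numpy as np

def eye_traces(waveform, samples_per_symbol=10, symbols_per_trace=2):
    """Cut a waveform into retrace-length rows; one row per eye-diagram sweep."""
    span = samples_per_symbol * symbols_per_trace
    ntraces = len(waveform) // span
    return np.reshape(waveform[: ntraces * span], (ntraces, span))

# Plotting every row of eye_traces(x) against a common time axis of `span`
# samples overlays successive intervals on one graph, much as a common
# oscilloscope would.
```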
Another interesting point in the system is within the receiver. Since the communication system is being
modeled as a baseband system due to the use of the complex-envelope representation of the bandpass waveforms
generated in the simulation, the data detector is represented as an integrate-and-dump detector. The detector
is then modeled as a sliding-average integrator, in which the width of the integration window is one bit time.
The integration is therefore over a single bit period when the sliding window is synchronized with a bit period.
The direct-channel and quadrature-channel waveforms at the output of the sliding-average integrator are shown
in Fig. 78.12(a). The corresponding eye diagrams are shown in Fig. 78.12(b). In order to minimize the error
probability of the system, the bit decision must be based on the integrator output at the time for which the
eye opening is greatest. Thus the eye diagram provides important information concerning the sensitivity of the
system to timing errors.
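The sliding-average integrator itself is equally compact. The sketch below is a hedged Python rendering, again assuming 10 samples per bit; the offset selecting the decision instant (ideally at maximum eye opening) is left as a parameter.

```python
# A minimal sketch of the sliding-average integrator and bit decisions,
# assuming 10 samples per bit as in this example.
import numpy as np

def sliding_integrator(x, samples_per_bit=10):
    """Moving average over one bit time, evaluated at every sample instant."""
    window = np.ones(samples_per_bit) / samples_per_bit
    return np.convolve(x, window, mode="valid")

def bit_decisions(y, samples_per_bit=10, offset=0):
    """Sample the integrator output once per bit and threshold at zero."""
    return (y[offset::samples_per_bit] > 0).astype(int)
```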
The signal constellation at the sliding integrator output is shown in Fig. 78.12(c) and should be carefully
compared to the signal constellation shown in Fig. 78.11(a) for the modulator output. Three effects are apparent.
First, the signal points exhibit some scatter, which, in this case, is due to intersymbol interference resulting
from the transmitter filter and additive noise. It is also clear that the signal is both compressed and rotated.
These effects are due to the nonlinear amplifier that was mentioned previously. For this example simulation
the nonlinear amplifier is operating near the saturation point, and the compression of the signal constellation
is due to the AM/AM characteristic of the nonlinearity and the rotation is due to the AM/PM characteristic of
the nonlinearity.
The performance of the overall communication system is illustrated in Fig. 78.12(d). The error probability
curve is perhaps the most important simulation product. Note that both uncoded and coded results are shown.
The coded results were calculated analytically from the uncoded results assuming a (63, 55) Reed–Solomon
code. It should be mentioned that semi-analytic simulation was used in this example since, as can be seen in
Fig. 78.10, the noise is injected into the system on the receiver side of the nonlinearity so that linear analysis
may be used to determine the effects of the noise on the system performance.
This simple example serves to illustrate only a few of the possible simulation products. There are many other
possibilities including histograms, correlation functions, estimates of statistical moments, estimates of the power
spectral density, and estimates of the signal-to-noise ratio at various points in the system.
FIGURE 78.11 Transmitter signal constellation and eye diagrams: (a) OQPSK signal constellation; (b) eye diagram of modulator output; (c) eye diagram of filtered modulator output.
A word is in order regarding spectral estimation techniques. Two basic techniques can be used for spectral
estimation: Fourier techniques and model-based techniques. In most simulation problems one is blessed with
a tremendous amount of data concerning sampled waveforms but does not have a simple model describing
how these waveforms are produced. For this reason model-based spectral estimation is typically not used. The
most common form of spectral estimation used in simulation is the Welch periodogram. While this approach
is straightforward, the effects of windowing the data sequence must be carefully considered, and tens or even
hundreds of data windows must be averaged to achieve an accurate estimate of the power spectral density.
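For reference, a Welch periodogram estimate is a one-line call in common signal processing libraries. The sketch below assumes SciPy is available; the sampling rate, segment length, overlap, and window are illustrative choices, not values taken from this chapter.

```python
# A minimal sketch of Welch periodogram spectral estimation, assuming SciPy.
# Averaging many windowed segments trades frequency resolution for reduced
# variance, as the text notes.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 10e6                          # simulation sampling rate, Hz (hypothetical)
x = rng.standard_normal(200_000)   # stand-in for a simulated waveform

# Roughly 100 averaged segments of 4096 samples, 50% overlap, Hann window
f, pxx = welch(x, fs=fs, window="hann", nperseg=4096, noverlap=2048)
```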
78.14 Conclusions
We have seen that the analysis and design of today’s complex communication systems often requires the use
of computer-aided techniques. These techniques allow the solution of problems that are otherwise not tractable
and provide considerable insight into the operating characteristics of the communication system.
FIGURE 78.12 Integrator output signals and system error probability: (a) sliding integrator output signals; (b) sliding
integrator output eye diagram; (c) sliding integrator output signal constellation; (d) error probability.
Defining Terms
Communication link: A point-to-point communication system that typically involves a single information
source and a single user. This is in contrast to a communications network, which usually involves many
sources and many users.
Computer-aided design and analysis: The process of using computer assistance in the design and analysis
of complex systems. In our context, the design and analysis of communication systems, computer-aided
design and analysis often involves the extensive use of simulation techniques. Computer-aided techniques
often allow one to address design and analysis problems that are not tractable analytically.
Computer simulation: A set of computer programs which allows one to imitate the important aspects of the
behavior of the specific system under study. Simulation can aid the design process by, for example,
allowing one to determine appropriate system design parameters or aid the analysis process by, for
example, allowing one to estimate the end-to-end performance of the system under study.
Dedicated simulation language: A computer language, either text based or graphics based, specifically devel-
oped to facilitate the simulation of the various systems under study, such as communication systems.
Low-pass equivalent (LPE) model: A method of representing bandpass signals and systems by low-pass signals
and systems. This technique is extremely useful when developing discrete time models of bandpass
continuous-time systems. It can substantially reduce the sampling rate required to prevent aliasing and
does not result in any loss of information. This in turn reduces the execution time required for the
simulation. This modeling technique is closely related to the quadrature representation of bandpass
signals.
Monte Carlo simulation: A technique for simulating systems that contain signal sources producing stochastic
or random signals. The signal sources are modeled by pseudorandom generators. Performance measures,
such as the symbol error rate, are then estimated by time averages. This is a general-purpose technique
that can be applied to an extremely wide range of systems. It can, however, require large amounts of
computer time to generate accurate estimates.
Pseudorandom generator: An algorithm or device that generates deterministic waveforms which in many
ways resemble stochastic or random waveforms. The power spectral density, autocorrelation, and other
time averages of pseudorandom signals can closely match the time and ensemble averages of stochastic
processes. These generators are useful in computer simulation where one may be unable to generate a
truly random process, and they have the added benefit of providing reproducible signals.
Semi-analytic simulation: A numerical analysis technique that can be used to efficiently determine the symbol
error rate of digital communication systems. It can be applied whenever one can analytically determine
the probability of demodulation error given a particular transmitted data pattern. Although this technique
can only be applied to a restricted class of systems, in these cases it is far more efficient, in terms of
computer execution time, than Monte Carlo simulations.
Simulation validation: The process of certifying that simulation results are reasonable and can be used with
confidence in the design or analysis process.
Symbol error rate: A fundamental performance measure for digital communication systems. The symbol
error rate is estimated as the number of errors divided by the total number of demodulated symbols.
When the communication system is ergodic, this is equivalent to the probability of making a demodu-
lation error on any symbol.
Related Topics
69.3 Television Systems • 73.1 Signal Detection • 102.1 Avionics Systems • 102.2 Communications Satellite Systems: Applications
References
K. Shanmugan, “An update on software packages for simulation of communication systems (links),” IEEE J.
Selected Areas Commun., no. 1, 1988.
W. Davenport and W. Root, An Introduction to the Theory of Random Signals and Noise, New York: McGraw-
Hill, 1958.
S. Haykin, Communication Systems, New York: Wiley, 1983.
O. Shimbo, “Effects of intermodulation, AM-PM conversion, and additive noise in multicarrier TWT systems,”
Proc. IEEE, no. 2, 1971.
N. Blachman, “Bandpass nonlinearities,” IEEE Trans. Inf. Theory, no. 2, 1964.
R. Crochiere and L. Rabiner, Multirate Digital Signal Processing, Englewood Cliffs, N.J.: Prentice-Hall, 1983.
D. Knuth, The Art of Computer Programming, vol. 2, Seminumerical Algorithms, 2nd ed., Reading, Mass.:
Addison-Wesley, 1981.
R. Rubinstein, Simulation and the Monte Carlo Method, New York: Wiley, 1981.
M. Sondhi, “Random processes with specified spectral density and first-order probability density,” Bell Syst.
Tech. J., vol. 62, 1983.
A. Papoulis, Probability, Random Variables, and Stochastic Processes, New York: McGraw-Hill, 1965.
M. Jeruchim, P. Balaban, and K. Shanmugan, Simulation of Communication Systems, New York: Plenum, 1992.
M. Jeruchim, “Techniques for estimating the bit error rate in the simulation of digital communication systems,”
IEEE J. Selected Areas Commun., no. 1, January 1984.
P. Bratley, B. L. Fox, and L. E. Schrage, A Guide to Simulation, New York: Springer-Verlag, 1987.
P. Balaban, K. S. Shanmugan, and B. W. Stuck (eds.), “Special issue on computer-aided modeling, analysis and
design of communication systems,” IEEE J. Selected Areas Commun., no. 1, 1984.
P. Balaban, E. Biglieri, M. C. Jeruchim, H. T. Mouftah, C. H. Sauer, and K. S. Shanmugan (eds.), “Computer-aided
modeling, analysis and design of communication systems II,” IEEE J. Selected Areas Commun., no. 1, 1988.
H. T. Mouftah, J. F. Kurose, and M. A. Marsan (eds.), “Computer-aided modeling, analysis and design of
communication networks I,” IEEE J. Selected Areas Commun., no. 9, 1990.
H. T. Mouftah, J. F. Kurose, and M. A. Marsan (eds.), “Computer-aided modeling, analysis and design of
communication networks II,” IEEE J. Selected Areas Commun., no. 1, 1991.
J. Gibson, The Mobile Communications Handbook, Boca Raton, Fla.: CRC Press, 1996.
J. Gagliardi, Optical Communication, New York: Wiley, 1995.
S. Haykin, Communication Systems, New York: Wiley, 1994.
R. L. Freeman, Telecommunications Systems Engineering, New York: Wiley, 1996.
N. D. Sherali, "Optimal location of transmitters," IEEE J. Selected Areas Commun., pp. 662–673, May 1996.
Further Information
Until recently, the subject of computer-aided analysis and simulation of communication systems was a difficult area to research. There were no textbooks devoted to the subject, and the fundamental papers were scattered over a large number of technical journals. While a number of excellent books treated the subject of
simulation of systems in which random signals and noise are present [Rubinstein, 1981; Bratley et al., 1987],
none of these books specifically focused on communication systems.
Starting in 1984, the IEEE Journal on Selected Areas in Communications (JSAC) initiated the publication of
a sequence of issues devoted specifically to the subject of computer-aided design and analysis of communication
systems. A brief study of the contents of these issues tells much about the rapid development of the discipline.
The first issue, published in January 1984 [Balaban et al., 1984], emphasizes communication links, although
there are a number of papers devoted to networks. The portion devoted to links contained a large collection
of papers devoted to simulation packages.
The second issue of the series was published in 1988 and is roughly evenly split between links and networks
[Balaban et al., 1988]. In this issue the emphasis is much more on techniques than on simulation packages.
The third part of the series is a two-volume issue devoted exclusively to networks [Mouftah et al., 1990, 1991].
As of this writing, the book by Jeruchim et al. is the only comprehensive treatment of the simulation of communication links [Jeruchim et al., 1992]. It treats the component and channel modeling problem and the
problems associated with using simulation techniques for estimating the performance of communication
systems in considerable detail. This textbook, together with the previously cited JSAC issues, gives a good
overview of the area.