Lectures 8-9:
Signal Detection in Noise
Eytan Modiano
AA Dept.
Eytan Modiano
Slide 1
Noise in communication systems

   S(t) → [ Channel ] → r(t) = S(t) + n(t)
                ↑
               n(t)

•  Noise is an additional "unwanted" signal that interferes with the transmitted signal
   –  Generated by electronic devices
•  The noise is a random process
   –  Each "sample" of n(t) is a random variable
•  Typically, the noise process is modeled as "Additive White Gaussian Noise" (AWGN)
   –  White: flat frequency spectrum
   –  Gaussian: noise distribution
Random Processes

•  The auto-correlation of a random process x(t) is defined as
   –  R_xx(t_1, t_2) = E[x(t_1) x(t_2)]
•  A random process is wide-sense stationary (WSS) if its mean and auto-correlation are not a function of time.  That is
   –  m_x(t) = E[x(t)] = m
   –  R_xx(t_1, t_2) = R_x(τ), where τ = t_1 − t_2
•  If x(t) is WSS then:
   –  R_x(τ) = R_x(−τ)
   –  |R_x(τ)| ≤ |R_x(0)|   (max is achieved at τ = 0)
•  The power content of a WSS process is:

   P_x = E[ lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt ]
       = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} R_x(0) dt = R_x(0)
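The WSS properties can be checked numerically; a minimal sketch (the variance value is assumed, not from the lecture): for white Gaussian noise samples, the time-average power equals R_x(0), and the autocorrelation at nonzero lag is approximately zero.

```python
import numpy as np

# Numerical sketch of the WSS properties (assumed variance): for white
# Gaussian noise, power = R_x(0) and R_x(lag != 0) ≈ 0.
rng = np.random.default_rng(0)
sigma2 = 2.0                                  # assumed noise variance
x = rng.normal(0.0, np.sqrt(sigma2), 100_000)

power = np.mean(x ** 2)                       # time-average power
R0 = np.dot(x, x) / len(x)                    # autocorrelation, lag 0
R1 = np.dot(x[:-1], x[1:]) / (len(x) - 1)     # autocorrelation, lag 1

print(power, R0, R1)   # power ≈ R0 ≈ sigma2, R1 ≈ 0
```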
Power Spectrum of a random process

•  If x(t) is WSS then the power spectral density function is given by:

   S_x(f) = F[R_x(τ)]

•  The total power in the process is also given by:

   P_x = ∫_{−∞}^{∞} S_x(f) df
       = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} R_x(t) e^{−j2πft} dt ] df
       = ∫_{−∞}^{∞} R_x(t) [ ∫_{−∞}^{∞} e^{−j2πft} df ] dt
       = ∫_{−∞}^{∞} R_x(t) δ(t) dt = R_x(0)
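A discrete sketch of this relation (the sample count is assumed): summing the power spectral density over all frequencies recovers the total power R_x(0), by Parseval's theorem.

```python
import numpy as np

# Discrete check that total power integrated over the PSD equals R_x(0).
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 4096)

X = np.fft.fft(x)
Sx = np.abs(X) ** 2 / len(x)       # periodogram estimate of the PSD
total_power = np.sum(Sx) / len(x)  # discrete analogue of ∫ S_x(f) df
R0 = np.mean(x ** 2)               # autocorrelation at lag 0

print(total_power, R0)             # the two agree up to rounding
```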
White noise

•  The noise spectrum is flat over all relevant frequencies
   –  White light contains all frequencies

   [Figure: S_n(f) = N_0/2 for all f]

•  Notice that the total power over the entire frequency range is infinite
   –  But in practice we only care about the noise content within the signal bandwidth, as the rest can be filtered out
•  After filtering, the only remaining noise power is that contained within the filter bandwidth (B)

   [Figure: S_BP(f) = N_0/2 over bands of width B centered at f_c and −f_c]
AWGN

•  The effective noise content of bandpass noise is B·N_0
   –  Experimental measurements show that the pdf of the noise samples can be modeled as a zero-mean Gaussian random variable

      f_x(x) = (1/(√(2π) σ)) e^{−x²/(2σ²)}

   –  AKA Normal r.v., N(0, σ²)
   –  σ² = P_x = B·N_0
•  The CDF of a Gaussian R.V.:

   F_x(α) = P[X ≤ α] = ∫_{−∞}^{α} f_x(x) dx = ∫_{−∞}^{α} (1/(√(2π) σ)) e^{−x²/(2σ²)} dx

•  This integral requires numerical evaluation
   –  Available in tables
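The Gaussian CDF has no closed form; a minimal numerical sketch using the error function (the helper name is ours, not from the lecture): F(α) = ½(1 + erf((α − m)/(σ√2))).

```python
import math

# Evaluating the Gaussian CDF numerically via the error function.
def gaussian_cdf(alpha, mean=0.0, sigma=1.0):
    """P[X <= alpha] for X ~ N(mean, sigma^2)."""
    return 0.5 * (1.0 + math.erf((alpha - mean) / (sigma * math.sqrt(2.0))))

print(gaussian_cdf(0.0))   # 0.5: half the probability mass lies below the mean
```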
AWGN, continued

•  X(t) ~ N(0, σ²)
•  X(t_1), X(t_2) are independent unless t_1 = t_2

   R_x(τ) = E[X(t + τ) X(t)] = { E[X(t + τ)] E[X(t)] = 0,   τ ≠ 0
                               { E[X²(t)] = σ²,             τ = 0

•  R_x(0) = σ² = P_x = B·N_0
Detection of signals in AWGN

Observe:   r(t) = S(t) + n(t),   t ∈ [0, T]
Decide which of S_1, …, S_m was sent

•  Receiver filter
   –  Designed to maximize signal-to-noise power ratio (SNR)

   r(t) → [ filter h(t) ] → y(t) → "sample at t = T" → decide

•  Goal: find h(t) that maximizes SNR
Receiver filter

   y(t) = r(t) * h(t) = ∫_0^T r(τ) h(t − τ) dτ

Sampling at t = T:

   y(T) = ∫_0^T r(τ) h(T − τ) dτ

   r(τ) = s(τ) + n(τ)   ⇒

   y(T) = ∫_0^T s(τ) h(T − τ) dτ + ∫_0^T n(τ) h(T − τ) dτ = Y_s(T) + Y_n(T)

   SNR = Y_s²(T) / E[Y_n²(T)]
       = [ ∫_0^T s(τ) h(T − τ) dτ ]² / ( (N_0/2) ∫_0^T h²(T − t) dt )
       = [ ∫_0^T h(τ) s(T − τ) dτ ]² / ( (N_0/2) ∫_0^T h²(T − t) dt )
Matched filter: maximizes SNR

Cauchy–Schwarz inequality:

   ( ∫_{−∞}^{∞} g_1(t) g_2(t) dt )² ≤ ∫_{−∞}^{∞} g_1²(t) dt · ∫_{−∞}^{∞} g_2²(t) dt

Above holds with equality iff:   g_1(t) = c g_2(t)   for an arbitrary constant c

   SNR = [ ∫_0^T s(τ) h(T − τ) dτ ]² / ( (N_0/2) ∫_0^T h²(T − t) dt )
       ≤ [ ∫_0^T s²(τ) dτ · ∫_0^T h²(T − τ) dτ ] / ( (N_0/2) ∫_0^T h²(T − t) dt )
       = (2/N_0) ∫_0^T s²(τ) dτ = 2E_s/N_0

Above maximum is obtained iff:

   h(T − τ) = cS(τ)   ⇒   h(t) = cS(T − t) = S(T − t)   (taking c = 1)

h(t) is said to be "matched" to the signal S(t)
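A discrete-time sketch of this result (the pulse shape and N_0 are assumed): among candidate receiver filters, the time-reversed pulse attains the SNR bound 2E_s/N_0, while any other filter does worse.

```python
import numpy as np

# Matched filter sketch: h = s reversed attains the SNR bound 2 Es/N0.
rng = np.random.default_rng(2)
s = np.array([1.0, 2.0, 3.0, 2.0, 1.0])    # example pulse s[0..T]
N0 = 2.0                                   # assumed noise level

def output_snr(h):
    """Y_s(T)^2 / E[Y_n(T)^2] for the filter h sampled at t = T."""
    ys = np.dot(s, h[::-1])                # ∫ s(τ) h(T − τ) dτ
    noise_power = (N0 / 2.0) * np.dot(h, h)
    return ys ** 2 / noise_power

matched = s[::-1]                          # h(t) = s(T − t)
other = rng.normal(size=len(s))            # arbitrary competing filter
bound = 2.0 * np.dot(s, s) / N0            # 2 Es/N0

print(output_snr(matched), output_snr(other), bound)
```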
Example: PAM

   S_m(t) = A_m g(t),   t ∈ [0, T]

   A_m is a constant:   binary PAM  A_m ∈ {0, 1}

Matched filter is matched to g(t)

   [Figure: g(t) is a rectangular pulse of amplitude A on [0, T]; the
   "matched filter" g(T − t) has the same shape]
Example, continued

   Y_s(t) = ∫_0^t S(τ) h(t − τ) dτ,   h(t′) = g(T − t′)   ⇒   h(t − τ) = g(T + τ − t)

   Y_s(t) = ∫_0^t g(τ) g(T + τ − t) dτ

   Y_s(T) = ∫_0^T g²(τ) dτ

•  Sample at t = T to obtain maximum value

   [Figure: Y_s(t) ramps up from 0 to its peak value A²T at t = T]
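A discrete sketch of this example (A, T, and the sample count are assumed): convolving the rectangular pulse g with its matched filter g(T − t) yields a triangular output that peaks at t = T with value A²T.

```python
import numpy as np

# Matched-filter output for a rectangular PAM pulse: triangular, peak A²T.
A, T, N = 2.0, 1.0, 100
dt = T / N
g = A * np.ones(N)             # rectangular pulse on [0, T]
h = g[::-1]                    # matched filter g(T − t)

y = np.convolve(g, h) * dt     # Y_s(t) for t in [0, 2T]
peak_index = int(np.argmax(y))

print(y[peak_index])           # ≈ A²T, attained at sample t = T
```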
Matched filter receiver

   U(t) → [ × 2Cos(2πf_c t) ] → r_x(t) → [ g(T − t) ] → sample at t = kT → r_x(kT)

   U(t) → [ × 2Sin(2πf_c t) ] → r_y(t) → [ g(T − t) ] → sample at t = kT → r_y(kT)
Binary PAM example, continued

   0 ⇒ S_1 = g(t),   1 ⇒ S_2 = −g(t)

   [Figure: g(t) is a rectangular pulse of amplitude A on [0, T]; S(t) over
   [0, 3T] is a sequence of S_1(t) and S_2(t) pulses, and the matched-filter
   output Y(t) ramps to its extreme values at the sampling instants T, 2T, …]
Alternative implementation: correlator receiver

   r(t) = S(t) + n(t)

   r(t) → [ × S(t) ] → [ ∫_0^T (·) dt ] → sample at t = kT → Y(kT)

   Y(T) = ∫_0^T r(t) S(t) dt = Y_s(T) + Y_n(T) = ∫_0^T S²(t) dt + ∫_0^T n(t) S(t) dt

Notice resemblance to matched filter
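A numerical sketch of this resemblance (the pulse shape and noise level are assumed): the correlator output ∫ r(t)S(t)dt equals the matched-filter output sampled at t = T.

```python
import numpy as np

# Correlator and matched filter give the same sample value at t = T.
rng = np.random.default_rng(3)
N, dt = 200, 0.01
s = np.sin(np.linspace(0.0, np.pi, N))     # example pulse on [0, T]
r = s + 0.3 * rng.normal(size=N)           # received signal with noise

correlator = np.sum(r * s) * dt                  # ∫_0^T r(t) S(t) dt
matched = np.convolve(r, s[::-1])[N - 1] * dt    # matched filter at t = T

print(correlator, matched)   # equal up to rounding
```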
Signal Detection

•  After matched filtering we receive r = S_m + n
   –  S_m ∈ {S_1, …, S_M}
•  How do we determine from r which of the M possible symbols was sent?
   –  Without the noise we would receive what was sent, but the noise can transform one symbol into another

Hypothesis testing

•  Objective: minimize the probability of a decision error
•  Decision rule:
   –  Choose S_m such that P(S_m sent | r received) is maximized
•  This is known as the Maximum a posteriori probability (MAP) rule
•  MAP rule: maximize the conditional probability that S_m was sent given that r was received
MAP detector

   MAP detector:   max_{S_1 … S_M}  P(S_m | r)

   P(S_m | r) = P(S_m, r)/P(r) = P(r | S_m) P(S_m)/P(r) = f_{r|s}(r | S_m) P(S_m) / f_r(r)

   f_r(r) = Σ_{m=1}^{M} f_{r|s}(r | S_m) P(S_m)

•  Notes:
   –  MAP rule requires prior probabilities
   –  MAP minimizes the probability of a decision error
   –  ML rule assumes equally likely symbols
   –  With equally likely symbols MAP and ML are the same

When P(S_m) = 1/M the MAP rule becomes:

   max_{S_1 … S_M}  f(r | S_m)

(AKA Maximum Likelihood (ML) decision rule)
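A toy sketch of the MAP rule for two symbols in Gaussian noise (the symbol values, priors, and N_0 are assumed for illustration): choose the S_m maximizing f(r | S_m)·P(S_m). With equal priors the prior factor drops out and MAP reduces to ML.

```python
import math

# MAP vs. ML decision for two symbols in AWGN (assumed parameters).
symbols = [-1.0, 1.0]
priors = [0.8, 0.2]          # unequal priors (assumed)
N0 = 2.0

def likelihood(r, s):
    """f(r | s) for AWGN: Gaussian with mean s and variance N0/2."""
    return math.exp(-(r - s) ** 2 / N0) / math.sqrt(math.pi * N0)

def map_decide(r):
    scores = [likelihood(r, s) * p for s, p in zip(symbols, priors)]
    return symbols[scores.index(max(scores))]

def ml_decide(r):
    scores = [likelihood(r, s) for s in symbols]
    return symbols[scores.index(max(scores))]

# Near the decision boundary the rules can disagree: MAP favors the
# symbol with the larger prior.
print(map_decide(0.1), ml_decide(0.1))
```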
Detection in AWGN
(Single dimensional constellations)

   f(r | S_m) = (1/√(πN_0)) e^{−(r − S_m)²/N_0}

   ⇒   ln(f(r | S_m)) = −ln(√(πN_0)) − (r − S_m)²/N_0

Maximum Likelihood decoding amounts to minimizing

   d_{r,S_m} = (r − S_m)²

•  Also known as minimum distance decoding
   –  Similar expression for multidimensional constellations
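Minimum distance decoding is a one-liner; a minimal sketch for a one-dimensional constellation (the 4-PAM amplitudes below are assumed, not from the lecture): pick the S_m closest to the received value r.

```python
# Minimum distance decoding over an assumed 4-PAM constellation.
constellation = [-3.0, -1.0, 1.0, 3.0]

def min_distance_decode(r):
    """Return the constellation point minimizing (r - S_m)^2."""
    return min(constellation, key=lambda s: (r - s) ** 2)

print(min_distance_decode(0.4))    # → 1.0
```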
Detection of binary PAM

•  S_1(t) = g(t),  S_2(t) = −g(t)
   –  S_1 = −S_2   ⇒   "antipodal" signaling
•  Antipodal signals with energy E_b can be represented geometrically as

   S_2 = −√E_b       S_1 = +√E_b

•  If S_1 was sent then the received signal r = S_1 + n
•  If S_2 was sent then the received signal r = S_2 + n

   f_{r|s}(r | s_1) = (1/√(πN_0)) e^{−(r − √E_b)²/N_0}

   f_{r|s}(r | s_2) = (1/√(πN_0)) e^{−(r + √E_b)²/N_0}
Detection of Binary PAM

   S_2 = −√E_b       0       S_1 = +√E_b

•  Decision rule:   MLE ⇒ minimum distance decoding
   –  r > 0  ⇒  decide S_1
   –  r < 0  ⇒  decide S_2
•  Probability of error
   –  When S_2 was sent, the probability of error is the probability that the noise exceeds √E_b; similarly, when S_1 was sent, the probability of error is the probability that the noise falls below −√E_b
   –  P(e|S_1) = P(e|S_2) = P[r < 0 | S_1]
Probability of error for binary PAM

   P_e = ∫_{−∞}^{0} f_{r|s}(r | s_1) dr
       = ∫_{−∞}^{0} (1/√(πN_0)) e^{−(r − √E_b)²/N_0} dr
       = ∫_{−∞}^{−√E_b} (1/√(πN_0)) e^{−r²/N_0} dr
       = ∫_{−∞}^{−√(2E_b/N_0)} (1/√(2π)) e^{−r²/2} dr
       = ∫_{√(2E_b/N_0)}^{∞} (1/√(2π)) e^{−r²/2} dr = Q(√(2E_b/N_0)),   where

   Q(x) ≜ (1/√(2π)) ∫_x^{∞} e^{−r²/2} dr

•  Q(x) = P(X > x) for X Gaussian with zero mean and σ² = 1
•  Q(x) requires numerical evaluation and is tabulated in many math books (Table 4.1 of text)
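In code, Q(x) is usually computed from the complementary error function, Q(x) = ½ erfc(x/√2); a minimal sketch (the E_b/N_0 value below is assumed):

```python
import math

# Q function via erfc, applied to the binary PAM error probability.
def Q(x):
    """P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

Eb_over_N0 = 4.0                          # assumed Eb/N0 (≈ 6 dB)
Pe = Q(math.sqrt(2.0 * Eb_over_N0))       # binary PAM error probability
print(Q(0.0), Pe)                         # Q(0) = 0.5
```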
More on Q function

•  Notes on Q(x):
   –  Q(0) = 1/2
   –  Q(−x) = 1 − Q(x)
   –  Q(∞) = 0,  Q(−∞) = 1
   –  If X is N(m, σ²) then P(X > x) = Q((x − m)/σ)
•  Example:   P_e = P[r < 0 | S_1 was sent]

   f_{r|s}(r | s_1) ~ N(√E_b, N_0/2)   ⇒   m = √E_b,  σ = √(N_0/2)

   P_e = 1 − P[r > 0 | s_1] = 1 − Q(−√(2E_b/N_0)) = Q(√(2E_b/N_0))
Error analysis continued

•  In general, the probability of error between two symbols separated by a distance d is given by:

   P_e(d) = Q(√(d²/(2N_0)))

•  For binary PAM d = 2√E_b, hence

   P_e = Q(√(2E_b/N_0))
Orthogonal signals

•  Orthogonal signaling scheme (2 dimensional)

   [Figure: the two signal points sit at (√E_b, 0) and (0, √E_b), a
   distance d = √(2E_b) apart]

   P_e = Q(√(d²/(2N_0))) = Q(√(E_b/N_0))
Orthogonal vs. Antipodal signals

•  Notice from the Q function that orthogonal signaling requires twice as much bit energy as antipodal for the same error rate
   –  This is due to the distance between signal points

   [Figure: P_e vs. E_b/N_0 (dB) on a log scale from 10⁻¹ to 10⁻⁵; the
   antipodal curve lies 3 dB to the left of the orthogonal curve]
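The 3 dB gap can be seen numerically; a small sketch (the E_b/N_0 value is assumed): orthogonal signaling needs exactly twice the bit energy to match antipodal.

```python
import math

# Antipodal vs. orthogonal error probability: the factor-of-2 energy gap.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_antipodal(ebn0):
    return Q(math.sqrt(2.0 * ebn0))

def pe_orthogonal(ebn0):
    return Q(math.sqrt(ebn0))

# At twice the Eb/N0, orthogonal matches antipodal exactly:
print(pe_antipodal(4.0), pe_orthogonal(8.0))
```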
Probability of error for M-PAM

   S_1   S_2   …   S_i   …   S_M

   S_m = A_m √E_g,   A_m = (2m − 1 − M),   d_{ij} = 2√E_g  for |i − j| = 1

   Decision rule:  choose s_i such that d(r, s_i) is minimized

   P[error | s_i] = P[decode s_{i−1} | s_i] + P[decode s_{i+1} | s_i] = 2 P[decode s_{i+1} | s_i]

   P_e = 2Q(√(d²_{i,i+1}/(2N_0))) = 2Q(√(2E_g/N_0)),   P_eb = P_e / Log_2(M)

•  Notes:  1) The probability of error for s_1 and s_M is lower because errors only occur in one direction
           2) With Gray coding the bit error rate is P_e/log_2(M)
Probability of error for M-PAM

   E_av = ((M² − 1)/3) E_g   ⇒   E_bav = ((M² − 1)/(3 Log_2(M))) E_g

   E_g = (3 Log_2(M)/(M² − 1)) E_bav

   P_e = 2Q(√(6 Log_2(M) E_bav / ((M² − 1) N_0))),   P_eb = P_e / Log_2(M)

Accounting for the effect of S_1 and S_M we get:

   P_e = 2((M − 1)/M) Q(√(6 Log_2(M) E_bav / ((M² − 1) N_0)))
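A sketch evaluating the M-PAM symbol error expression, including the (M − 1)/M end-point correction (the M and E_b/N_0 values are assumed):

```python
import math

# M-PAM symbol error probability at a given average bit SNR.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_mpam(M, ebn0):
    """Symbol error probability of M-PAM at average bit SNR ebn0."""
    arg = math.sqrt(6.0 * math.log2(M) * ebn0 / (M**2 - 1))
    return 2.0 * (M - 1) / M * Q(arg)

# For M = 2 this reduces to the binary PAM result Q(sqrt(2 Eb/N0)):
print(pe_mpam(2, 4.0), pe_mpam(4, 4.0))
```

For M = 2 the correction factor is 2·(1/2) = 1 and the argument collapses to √(2E_b/N_0), so the formula is consistent with the binary case derived earlier.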
Probability of error for PSK

•  Binary PSK is exactly the same as binary PAM
•  4-PSK can be viewed as two sets of binary PAM signals
•  For large M (e.g., M > 8) a good approximation assumes that errors occur between adjacent signal points

   [Figure: M signal points on a circle of radius √E_s, with angular
   separation θ = 2π/M between adjacent points]

   d_{ij} = 2√E_s Sin(π/M),   |i − j| = 1
Error Probability for PSK

   P[error | s_i] = P[decode s_{i−1} | s_i] + P[decode s_{i+1} | s_i] = 2 P[decode s_{i+1} | s_i]

   P_es = 2Q(√(d²_{i,i+1}/(2N_0))) = 2Q(√(2E_s/N_0) sin(π/M))

   E_b = E_s / Log_2(M)   ⇒

   P_es = 2Q(√(2 Log_2(M) E_b/N_0) sin(π/M)),   P_eb = P_es / Log_2(M)
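A sketch evaluating the adjacent-symbol approximation for M-PSK (the M and E_b/N_0 values are assumed):

```python
import math

# Approximate M-PSK symbol error probability (adjacent errors only).
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_psk(M, ebn0):
    """Approximate symbol error probability of M-PSK."""
    es_n0 = math.log2(M) * ebn0              # Es/N0 = Log2(M) · Eb/N0
    return 2.0 * Q(math.sqrt(2.0 * es_n0) * math.sin(math.pi / M))

# At fixed Eb/N0, packing more phases shrinks the distance sin(π/M)
# faster than the energy term grows, so the error rate rises:
print(pe_psk(4, 4.0), pe_psk(8, 4.0))
```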