12.540 Principles of the Global Positioning System
Lecture 10
Prof. Thomas Herring
Estimation: Introduction
• Homework review
• Overview
  – Basic concepts in estimation
  – Models: Mathematical and Statistical
  – Statistical concepts
Basic concepts in estimation
• Basic problem: We measure range and phase data that are related to the positions of the ground receiver, satellites and other quantities. How do we determine the “best” position for the receiver and other quantities?
• What do we mean by “best” estimate?
• Inferring parameters from measurements is estimation

Basic estimation
• Two styles of estimation (appropriate for geodetic type measurements):
  – Parametric estimation, where the quantities to be estimated are the unknown variables in equations that express the observables
  – Condition estimation, where conditions can be formulated among the observations. Rarely used; the most common application is leveling, where the sum of the height differences around closed circuits must be zero
Basics of parametric estimation
• All parametric estimation methods can be broken into a few main steps:
  – Observation equations: equations that relate the parameters to be estimated to the observed quantities (observables). Mathematical model.
    • Example: relationship between pseudorange, receiver position, satellite position (implicit in ρ), clocks, atmospheric and ionospheric delays
  – Stochastic model: statistical description of the random fluctuations in the measurements and maybe the parameters
  – Inversion that determines the parameter values from the mathematical model consistent with the statistical model

Observation model
• The observation model is a set of equations relating observables to the parameters of the model:
  – Observable = function (parameters)
  – Observables should not appear on the right-hand side of the equation
• Often the function is non-linear, and the most common method is linearization of the function using a Taylor series expansion.
• Sometimes log linearization is used for f = a·b·c, i.e. products of parameters.
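To make the “Observable = function(parameters)” form concrete, here is a minimal Python sketch of a simplified pseudorange observation equation (geometric range plus clock terms plus a single propagation delay). The model, the numbers, and all names are illustrative assumptions, not the full observation model developed later in the course.

```python
import numpy as np

C = 299792458.0  # speed of light (m/s)

def pseudorange(rcv_pos, sat_pos, rcv_clk, sat_clk, delay):
    """Simplified pseudorange: geometric range + clock terms + propagation delay.
    Illustrative only; a real model separates atmospheric and ionospheric terms."""
    rho = np.linalg.norm(sat_pos - rcv_pos)   # geometric range (parameters enter here)
    return rho + C * (rcv_clk - sat_clk) + delay

# Observable computed from assumed (apriori) parameter values
rcv_pos = np.array([1.10e6, -4.80e6, 4.00e6])   # receiver position (m), assumed
sat_pos = np.array([15.6e6, -16.2e6, 13.0e6])   # satellite position (m), assumed
print(pseudorange(rcv_pos, sat_pos, rcv_clk=1e-4, sat_clk=2e-5, delay=2.5))
```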
Taylor series expansion
• In the most common Taylor series approach:

  y = f(x_1, x_2, x_3, x_4)
  y_0 + \Delta y = f(x)|_{x_0} + \frac{\partial f(x)}{\partial x} \Delta x, \quad x = (x_1, x_2, x_3, x_4)

• The estimation is made using the difference between the observations and the expected values based on apriori values for the parameters.
• The estimation returns adjustments to apriori parameter values.
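A minimal sketch of this linearization step, assuming a simple non-linear observable (the range from a station to a fixed point) and finite-difference partial derivatives; the function, step size, and numbers are assumptions for illustration only.

```python
import numpy as np

def partials(f, x0, eps=1e-3):
    """Numerical partial derivatives of f with respect to each parameter at x0."""
    x0 = np.asarray(x0, dtype=float)
    f0 = f(x0)
    grad = np.zeros_like(x0)
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = eps
        grad[i] = (f(x0 + step) - f0) / eps
    return f0, grad

sat = np.array([15.6e6, -16.2e6, 13.0e6])        # fixed point (m), assumed
f = lambda x: np.linalg.norm(sat - x)            # non-linear observable y = f(x)

x0 = np.array([1.10e6, -4.80e6, 4.00e6])         # apriori parameter values
dx = np.array([10.0, -5.0, 2.0])                 # adjustment to the apriori values

f0, grad = partials(f, x0)
print("linearized prediction:", f0 + grad @ dx)  # y0 + (df/dx) * Delta-x
print("exact value          :", f(x0 + dx))      # close agreement: approximation is good
```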
Linearization
• Since the linearization is only an approximation, the estimation should be iterated until the adjustments to the parameter values are zero.
• For GPS estimation: the convergence rate is typically 100–1000:1 (i.e., a 1 meter error in apriori coordinates could result in 1–10 mm of non-linearity error).
Estimation
• (Will return to statistical model shortly)
• The most common estimation method is “least squares”, in which the parameter estimates are the values that minimize the sum of the squares of the differences between the observations and the modeled values based on the parameter estimates.
• For linear estimation problems, there is a direct matrix formulation for the solution.
• For non-linear problems: linearization, or a search technique in which the parameter space is searched for the minimum value.
• Take care with search methods that a local minimum is not found (will not treat in this course).
Least squares estimation
• Originally formulated by Gauss.
• Basic equations: Δy is the vector of observations; A is the linear matrix relating parameters to observables; Δx is the vector of parameters; v is the residual; superscript T means transpose.

  \Delta y = A \Delta x + v
  minimize (v^T v); \quad \Delta x = (A^T A)^{-1} A^T \Delta y
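A minimal numpy sketch of this solution, Δx = (AᵀA)⁻¹AᵀΔy, on a made-up linearized system; the design matrix and prefit residuals below are invented for illustration.

```python
import numpy as np

# Made-up linearized system: 5 observations, 2 parameters
A = np.array([[1.0, 0.5],
              [1.0, 1.0],
              [1.0, 1.5],
              [1.0, 2.0],
              [1.0, 2.5]])
dy = np.array([0.9, 2.1, 2.9, 4.2, 5.0])   # prefit residuals (observed - computed)

# Normal-equation form of the least squares estimate
dx = np.linalg.solve(A.T @ A, A.T @ dy)
v = dy - A @ dx                            # postfit residuals
print("parameter adjustments   :", dx)
print("sum of squared residuals:", v @ v)
```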
Weighted Least Squares
• In standard least squares, nothing is assumed about the residuals v except that they are zero mean.
• One often sees weighted least squares, in which a weight matrix is assigned to the residuals. Residuals with larger elements in W are given more weight.

  minimize (v^T W v); \quad \Delta x = (A^T W A)^{-1} A^T W \Delta y

Statistical approach to least squares
• If the weight matrix used in weighted least squares is the inverse of the covariance matrix of the residuals, then weighted least squares is a maximum likelihood estimator for Gaussian distributed random errors.
• This latter form of least squares is the most statistically rigorous version.
• Sometimes weights are chosen empirically.
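A sketch of the weighted solution with W taken as the inverse of an assumed diagonal covariance matrix of the measurements, the choice for which weighted least squares is the maximum likelihood estimate under Gaussian errors. The matrix, residuals, and sigmas are illustrative assumptions.

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [1.0, 1.0],
              [1.0, 1.5],
              [1.0, 2.0]])
dy = np.array([1.1, 1.9, 3.2, 3.9])          # prefit residuals (made up)

sigmas = np.array([0.01, 0.01, 0.05, 0.05])  # assumed measurement sigmas (m)
W = np.diag(1.0 / sigmas**2)                 # weight matrix = inverse covariance

N = A.T @ W @ A                              # normal matrix
dx = np.linalg.solve(N, A.T @ W @ dy)        # (A^T W A)^-1 A^T W dy
print("weighted estimate:", dx)
print("formal sigmas    :", np.sqrt(np.diag(np.linalg.inv(N))))  # for Gaussian errors
```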
Review of statistics
• Random errors in measurements are expressed with probability density functions that give the probability of values falling between x and x+dx.
• Integrating the probability density function gives the probability of the value falling within a finite interval.
• Given a large enough sample of the random variable, the density function can be deduced from a histogram of residuals.
Example of random variables
[Figure: Uniform and Gaussian random variables plotted against sample number (0–800); values range from about -4 to 4.]

Histograms of random variables
[Figure: Histograms of the Gaussian and Uniform samples (number of samples versus random variable x), with the curve 490/sqrt(2π)·exp(-x²/2) overlaid on the Gaussian histogram.]
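The figures can be reproduced in spirit with a short simulation: draw uniform and Gaussian samples and form a normalized histogram, which for a large enough sample approximates the density function, as noted on the previous slide. The sample size, bins, and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 800

gaussian = rng.standard_normal(n)                    # mean 0, variance 1
uniform = rng.uniform(-np.sqrt(3), np.sqrt(3), n)    # also mean 0, variance 1

# A normalized histogram approximates the probability density function
density, edges = np.histogram(gaussian, bins=np.linspace(-4, 4, 17), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)   # Gaussian density for comparison

for c, d, p in zip(centers, density, pdf):
    print(f"x = {c:5.2f}   histogram = {d:.3f}   pdf = {p:.3f}")
```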
Characterization of Random Variables
• When the probability distribution is known, the following statistical descriptions are used for a random variable x with density function f(x):

  Expected value: \langle h(x) \rangle = \int h(x) f(x)\,dx
  Expectation: \langle x \rangle = \int x f(x)\,dx = \mu
  Variance: \langle (x-\mu)^2 \rangle = \int (x-\mu)^2 f(x)\,dx

• The square root of the variance is called the standard deviation.
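A small sketch of these definitions evaluated numerically: the expectation and variance of an assumed Gaussian density (μ = 1, σ = 2) computed by trapezoidal integration over a finite grid.

```python
import numpy as np

mu_true, sigma_true = 1.0, 2.0
x = np.linspace(-20.0, 20.0, 20001)
f = np.exp(-(x - mu_true)**2 / (2 * sigma_true**2)) / (sigma_true * np.sqrt(2 * np.pi))

mean = np.trapz(x * f, x)                # <x> = integral of x f(x) dx  -> ~1.0
var = np.trapz((x - mean)**2 * f, x)     # <(x - mu)^2>                 -> ~4.0
print("expectation       :", mean)
print("variance          :", var)
print("standard deviation:", np.sqrt(var))
```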
Theorems for expectations
• For linear operations, the following theorems are used:
  – For a constant: <c> = c
  – Linear operator: <cH(x)> = c<H(x)>
  – Summation: <g+h> = <g> + <h>
• Covariance: the relationship between random variables, where f(x,y) is the joint probability distribution:

  \sigma_{xy} = \langle (x-\mu_x)(y-\mu_y) \rangle = \iint (x-\mu_x)(y-\mu_y) f_{xy}(x,y)\,dx\,dy

• Correlation:

  \rho_{xy} = \sigma_{xy} / (\sigma_x \sigma_y)
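A sketch of estimating covariance and correlation from samples of two correlated random variables; the target correlation of 0.7, the sample size, and the seed are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho_true = 100_000, 0.7

x = rng.standard_normal(n)
y = rho_true * x + np.sqrt(1.0 - rho_true**2) * rng.standard_normal(n)

cov_xy = np.mean((x - x.mean()) * (y - y.mean()))   # sample sigma_xy
rho_xy = cov_xy / (x.std() * y.std())               # sample correlation
print("covariance :", cov_xy)   # ~0.7
print("correlation:", rho_xy)   # ~0.7
```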
Estimation of moments
• Expectation and variance are the first and second moments of a probability distribution.

  \hat{\mu}_x \approx \sum_{n=1}^{N} x_n / N \approx \frac{1}{T}\int x(t)\,dt
  \hat{\sigma}_x^2 \approx \sum_{n=1}^{N} (x_n - \mu_x)^2 / N \approx \sum_{n=1}^{N} (x_n - \hat{\mu}_x)^2 / (N-1)

• As N goes to infinity these expressions approach their expectations. (Note the N-1 in the form which uses the estimated mean.)
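A quick sketch contrasting the two variance estimators: divide by N when the true mean is known, and by N-1 when the estimated mean is used. The true mean, sigma, sample size, and seed are arbitrary simulation choices.

```python
import numpy as np

rng = np.random.default_rng(2)
mu_true, sigma_true = 5.0, 3.0
x = rng.normal(mu_true, sigma_true, 50)
N = x.size

mu_hat = x.sum() / N
var_known_mean = np.sum((x - mu_true)**2) / N        # divide by N (true mean known)
var_est_mean = np.sum((x - mu_hat)**2) / (N - 1)     # divide by N-1 (estimated mean)
print("estimated mean     :", mu_hat)
print("variance (N form)  :", var_known_mean)        # both approach sigma^2 = 9
print("variance (N-1 form):", var_est_mean)
```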
Probability distributions
• While there are many probability distributions, there are only a couple that are commonly used:

  Gaussian: f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}
  Multivariate: f(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^n |V|}}\, e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T V^{-1}(\mathbf{x}-\boldsymbol{\mu})}
  Chi-squared: \chi_r^2(x) = \frac{x^{r/2-1}\, e^{-x/2}}{\Gamma(r/2)\, 2^{r/2}}
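The one-dimensional densities written out directly in Python, with math.gamma supplying Γ; the evaluation points and degrees of freedom below are arbitrary examples.

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """f(x) = exp(-(x-mu)^2 / (2 sigma^2)) / (sigma sqrt(2 pi))"""
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def chi2_pdf(x, r):
    """chi^2_r(x) = x^(r/2-1) exp(-x/2) / (Gamma(r/2) 2^(r/2)), for x > 0"""
    return x**(r / 2 - 1) * math.exp(-x / 2) / (math.gamma(r / 2) * 2**(r / 2))

print(gaussian_pdf(1.0))    # ~0.2420
print(chi2_pdf(3.0, r=4))   # chi-squared with r = 4 degrees of freedom at x = 3
```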
Probability distributions
• The chi-squared distribution is the sum of the squares of r Gaussian random variables with expectation 0 and variance 1.
• With the probability density function known, the probability of events occurring can be determined. For a Gaussian distribution in 1-D: P(|x|<1σ) = 0.68; P(|x|<2σ) = 0.955; P(|x|<3σ) = 0.9974.
• Conceptually, people think of standard deviations in terms of the probability of events occurring (i.e. 68% of values should be within 1-sigma).
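These 1-D Gaussian probabilities can be checked with the error function, since P(|x - μ| < kσ) = erf(k/√2); a quick sketch:

```python
import math

for k in (1, 2, 3):
    p = math.erf(k / math.sqrt(2))   # P(|x - mu| < k sigma) for a Gaussian
    print(f"P(|x| < {k} sigma) = {p:.4f}")
# Prints approximately 0.6827, 0.9545, 0.9973
```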
Central Limit Theorem
• Why is the Gaussian distribution so common?
• “The distribution of the sum of a large number of independent, identically distributed random variables is approximately Gaussian.”
• When the random errors in measurements are made up of many small contributing random errors, their sum will be Gaussian.
• Any linear operation on a Gaussian distribution will generate another Gaussian. This is not the case for other distributions, which are derived by convolving the two density functions.
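A small simulation of the theorem: sums of 12 independent uniform random variables, each with variance 1/12, are close to Gaussian with mean 0 and variance 1. The number of terms, sample size, and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sums, n_terms = 100_000, 12

# Each uniform on [-0.5, 0.5) has mean 0 and variance 1/12, so the sum of 12 terms
# has mean 0 and variance 1; by the central limit theorem it is nearly Gaussian.
sums = rng.uniform(-0.5, 0.5, size=(n_sums, n_terms)).sum(axis=1)

print("sample mean   :", sums.mean())                   # ~0
print("sample std dev:", sums.std())                    # ~1
print("P(|sum| < 1)  :", np.mean(np.abs(sums) < 1.0))   # ~0.68, as for a Gaussian
```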
Summary
• Examined simple least squares and weighted least squares
• Examined probability distributions
• Next we pose estimation in a statistical framework