Probabilistic Model
- Estimate p(x_t):

  Bel(x_t) = p(x_t | z_t, a_{t-1}, z_{t-1}, a_{t-2}, ..., z_0)
- Bayes’ rule:

  Bel(x_t) = p(z_t | x_t, a_{t-1}, z_{t-1}, a_{t-2}, ..., z_0) p(x_t | a_{t-1}, z_{t-1}, a_{t-2}, ..., z_0)
             / p(z_t | a_{t-1}, z_{t-1}, a_{t-2}, ..., z_0)

           = α p(z_t | x_t) p(x_t | a_{t-1}, z_{t-1}, a_{t-2}, ..., z_0)

  (By the Markov assumption, z_t depends only on x_t; α absorbs the denominator, which does not depend on x_t.)
Probabilistic Model
- Integrate over all x_{t-1} (theorem of total probability):

  Bel(x_t) = α p(z_t | x_t) p(x_t | a_{t-1}, z_{t-1}, a_{t-2}, ..., z_0)

           = α p(z_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}, z_{t-1}, ..., z_0) p(x_{t-1} | a_{t-1}, ..., z_0) dx_{t-1}

           = α p(z_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}) Bel(x_{t-1}) dx_{t-1}

  (Again by the Markov assumption: x_t depends only on x_{t-1} and a_{t-1}, and a_{t-1} carries no information about x_{t-1}, so the second factor is just Bel(x_{t-1}).)
Probabilistic Model
- Bayes’ filter gives a recursive, two-step procedure for estimating p(x_t):

  Bel(x_t) = α p(z_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}) Bel(x_{t-1}) dx_{t-1}

  The factor p(z_t | x_t) is the measurement step; the integral is the prediction step.
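To make the two steps concrete, here is a minimal sketch of one Bayes filter update over a discrete state space. The arrays p_trans and p_meas are hypothetical stand-ins for the motion model p(x_t | x_{t-1}, a_{t-1}) and sensor model p(z_t | x_t); they are not from the slides.

```python
import numpy as np

def bayes_filter_step(bel, a, z, p_trans, p_meas):
    """One recursive Bel(x) update over a discrete state space.

    bel     : Bel(x_{t-1}), one probability per state
    p_trans : p_trans[x_next, x_prev, a] = p(x_t | x_{t-1}, a_{t-1})
    p_meas  : p_meas[z, x] = p(z_t | x_t)
    """
    # Prediction step: integrate (here, sum) over all previous states x_{t-1}
    bel_bar = p_trans[:, :, a] @ bel
    # Measurement step: weight by the sensor likelihood p(z_t | x_t)
    bel_new = p_meas[z, :] * bel_bar
    # alpha: normalize so the belief sums to one
    return bel_new / bel_new.sum()
```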
- How to represent Bel(x_t)?
Kalman Filter
Kalman, 1960
[Figure: Gaussian beliefs over the state space: the initial belief, the posterior belief after an action is taken, and the posterior belief after sensing]
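As a sketch of the idea (not the general matrix form), one predict/update cycle of a 1-D Kalman filter, assuming a linear motion model x_t = x_{t-1} + a with process noise variance q and a direct position measurement with noise variance r; q and r are illustrative parameters:

```python
def kalman_1d(mu, var, a, z, q=0.1, r=0.5):
    """One predict/update cycle for a 1-D Gaussian belief Bel(x) = N(mu, var)."""
    # Predict: push the belief through the motion model (an action is taken)
    mu_pred = mu + a
    var_pred = var + q
    # Update: fuse the predicted belief with the measurement z
    k = var_pred / (var_pred + r)          # Kalman gain
    mu_new = mu_pred + k * (z - mu_pred)
    var_new = (1.0 - k) * var_pred
    return mu_new, var_new
```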
Problems
- Assumes Gaussian process and sensor noise
  - Often addressed by extracting low-dimensional features
  - Data-association problem
- Kalman filters are often hard to implement
- Posterior estimate is a single Gaussian (unimodal)
Global Localization
[Figure: beliefs over the state space: the initial belief, the posterior belief after an action, and the posterior belief after sensing]
Markov Localization
Burgard et al, 1996
[Figure: beliefs over the state space: the initial belief, the posterior belief after an action, and the posterior belief after sensing]
Problems
- Large memory footprint
  - 50 m x 50 m map
  - 1° increments
  - ≈ 343M grid cells
- Fixed cell size limits accuracy
Monte Carlo Localization: The Particle Filter
[Figure: particles distributed over the state space]
- Sample particles randomly from the distribution
- Carry around particle sets, rather than the full distribution
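A minimal sketch of one particle-filter update under this scheme; sample_motion (draws from p(x_t | x_{t-1}, a)) and likelihood (evaluates p(z_t | x_t)) are hypothetical model functions supplied by the caller:

```python
import random

def mcl_step(particles, a, z, sample_motion, likelihood):
    """One Monte Carlo Localization update of the particle set."""
    # Prediction: move each particle through the (stochastic) motion model
    moved = [sample_motion(x, a) for x in particles]
    # Measurement: weight each particle by the sensor likelihood
    weights = [likelihood(z, x) for x in moved]
    # Resampling: draw a new particle set in proportion to the weights
    return random.choices(moved, weights=weights, k=len(moved))
```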
Using Particle Filters
- Can update the distribution by updating the particle set directly
- Can (sometimes) compute properties of the distribution directly from the particles (see the sketch after this list)
  - E.g., any moments: mean, variance, etc.
- If necessary, can recover a distribution from the particles
  - Fit a Gaussian by recovering the mean and covariance (Kalman filter)
  - Fit multiple Gaussians using Expectation-Maximization
  - Bin the data and recover a discrete multinomial (Markov localization)
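For instance, the first two moments can be read directly off a weighted particle set; a small sketch (particles and weights are assumed to be parallel arrays):

```python
import numpy as np

def particle_moments(particles, weights):
    """Weighted mean and variance of a 1-D particle set."""
    w = np.asarray(weights) / np.sum(weights)   # normalize the weights
    x = np.asarray(particles)
    mean = np.sum(w * x)
    var = np.sum(w * (x - mean) ** 2)           # enough to fit a Gaussian
    return mean, var
```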
How to sample
- Sampling from uniform distributions is easy
- Sampling from Gaussians (and other parametric distributions) is a little harder
- What about arbitrary distributions?
- Many algorithms
  - Rejection sampling (sketched below)
  - Importance sampling
  - Gibbs sampling
  - Metropolis
  - ...
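As a taste of the first item on the list, a minimal rejection-sampling sketch: propose from a uniform distribution on [lo, hi] and accept in proportion to p(x), where m upper-bounds p on the interval (all names here are illustrative):

```python
import random

def rejection_sample(p, lo, hi, m):
    """Draw one sample from an arbitrary density p(x) on [lo, hi], p(x) <= m."""
    while True:
        x = random.uniform(lo, hi)        # propose from the easy distribution
        if random.uniform(0, m) < p(x):   # accept with probability p(x) / m
            return x
```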
How to sample
- Want to sample from an arbitrary p(x)
- Don’t know how to sample from p() directly
- Do know how to evaluate p(x) for any specific x
- Do know how to sample from q(x)
- Sample x from q
- Compare q(x) to p(x)
- Adjust the samples accordingly
Sample Importance Resampling
- Sample {x_1, x_2, x_3, ..., x_n} from q(x)
- Attach “importance weights”

  w_i = p(x_i) / q(x_i)

- Resample from {x_1, x_2, x_3, ..., x_n} according to {w_1, w_2, w_3, ..., w_n}
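Putting the three steps together, a minimal sampling-importance-resampling sketch, assuming for illustration that the proposal q is uniform on [lo, hi]:

```python
import random

def sir(p, n, lo, hi):
    """Sample-importance-resample: approximate n draws from p(x)."""
    xs = [random.uniform(lo, hi) for _ in range(n)]  # sample from q
    q = 1.0 / (hi - lo)                              # uniform proposal density
    ws = [p(x) / q for x in xs]                      # importance weights w_i
    # Resample according to the weights; heavily weighted samples may repeat
    return random.choices(xs, weights=ws, k=n)
```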
Sample Importance Resampling
[Figure: particles in the state space]
- Sample from an easy function
  - E.g., uniform distribution
Sample Importance Resampling
[Figure: weighted particles in the state space]
- Compute importance weights
Sample Importance Resampling
[Figure: resampled particles in the state space]
- Resample particles from the particle set, according to the importance weights