Robot Localization using SIR
I. Sample {x_t^i} from (x, y, θ)
II. Iterate:
1) Sample from motion model according to action a_t, to get proposal distribution q(x)   [Prediction]
2) Add importance weights   [Measurement]
3) Resample
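The loop above can be sketched in Python; `motion_model` and `sensor_model` here are hypothetical stand-ins for the models developed on the following slides.

```python
import numpy as np

def sir_step(particles, action, observation, motion_model, sensor_model, rng):
    """One SIR iteration over an (N, 3) array of (x, y, theta) particles.
    motion_model and sensor_model are placeholders for models defined later."""
    # Prediction: sample from the motion model (proposal q)
    proposed = np.array([motion_model(p, action, rng) for p in particles])
    # Measurement: importance weights w = p(z | x)
    weights = np.array([sensor_model(observation, p) for p in proposed])
    weights /= weights.sum()
    # Resample with replacement according to the weights
    idx = rng.choice(len(proposed), size=len(proposed), p=weights)
    return proposed[idx]
```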
Robot Localization using SIR
I. Take sample set {x_{t-1}^i}
II. Iterate:
For each sample x_{t-1}^i:
1) Sample from p(x_t | x_{t-1}^i, a_{t-1})
2) Attach importance weights:
w(x_t^i) = p(z_t | x_t^i) p(x_t^i | x_{t-1}^i, a_{t-1}) / q(x_t^i)
         = p(z_t | x_t^i),   since q(x_t^i) = p(x_t^i | x_{t-1}^i, a_{t-1})
III. Resample from w(x_t^i)
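The resampling step itself is left unspecified by the slides; one common implementation is low-variance (systematic) resampling, sketched here.

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    """Low-variance (systematic) resampling: one uniform draw, then N
    evenly spaced pointers into the cumulative weight distribution."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    idx = np.searchsorted(cumsum, positions)
    return particles[idx]
```

Compared with drawing N independent samples, this keeps the particle set more evenly spread for the same weights.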
Motion model as Proposal
distribution
| Importance sampling efficiency falls as the difference between q(x) and p(x) grows
| Motion model and posterior are (often)
close
| Motion model is Gaussian
Motion model as Proposal
distribution
| Motion model is Gaussian about
translation and rotation
Sampling from Motion Model
| A common motion model:
? Decompose motion into rotation, translation, rotation
? Rotation 1: μ = Δθ_1, σ²_{Δθ_1} = α_1 Δd + α_2 Δθ_1
? Translation: μ = Δd, σ²_{Δd} = α_3 Δd + α_4 (Δθ_1 + Δθ_2)
? Rotation 2: μ = Δθ_2, σ²_{Δθ_2} = α_1 Δd + α_2 Δθ_2
| Compute rotation, translation, rotation from odometry
| For each particle, sample a new motion triplet from the Gaussians described above
| Use geometry to generate the posterior particle position
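A minimal sketch of this sampling step, using the slide's variance formulas; the parameter names are assumptions, and absolute values are added so the variances stay positive for backward or negative-rotation motions.

```python
import numpy as np

def sample_motion(particle, odometry, alpha, rng):
    """Sample a new pose from the rotation-translation-rotation model.
    particle: (x, y, theta); odometry: (d_rot1, d_trans, d_rot2);
    alpha: (a1, a2, a3, a4) noise coefficients as on the slide."""
    x, y, theta = particle
    rot1, trans, rot2 = odometry
    a1, a2, a3, a4 = alpha
    # Variances grow with the commanded motion (abs keeps them positive)
    rot1_hat = rng.normal(rot1, np.sqrt(a1 * abs(trans) + a2 * abs(rot1)))
    trans_hat = rng.normal(trans, np.sqrt(a3 * abs(trans) + a4 * (abs(rot1) + abs(rot2))))
    rot2_hat = rng.normal(rot2, np.sqrt(a1 * abs(trans) + a2 * abs(rot2)))
    # Geometry: apply the sampled triplet to the particle pose
    x += trans_hat * np.cos(theta + rot1_hat)
    y += trans_hat * np.sin(theta + rot1_hat)
    theta += rot1_hat + rot2_hat
    return np.array([x, y, theta])
```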
Sensor Model
[Figure: probability p(y|x) vs. measured distance y [cm]; measured data and the approximated model, peaked at the expected distance]
Laser model built from collected data; laser model fitted to the measured data using an approximate geometric distribution.
Problem
| How to compute expected distance for any
given (x, y, θ)?
? Ray-tracing
? Cached expected distances for all (x, y, θ).
| Approximation:
? Assume a symmetric sensor model depending only on Δd: the absolute difference between expected and measured ranges
? Compute expected distance only for (x, y)
? Much faster to compute this sensor model
? Only useful for highly-accurate range sensors
(e.g., laser range sensors, but not sonar)
Computing Importance Weights
(Approximate Method)
| Off-line, for each empty grid-cell (x, y):
? Compute d(x, y), the distance from (x, y) to the nearest filled cell
? Store this “expected distance” map
| At run-time, for a particle (x, y) and observation z^i = (r, θ):
? Compute the end-point (x', y') = (x + r cos(θ), y + r sin(θ))
? Retrieve d(x', y') and compute Δd = r − d(x', y')
? Compute p(Δd | x) from a Gaussian sensor model with a specific σ²
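A sketch of the run-time weight computation for one beam, assuming a precomputed grid distance map; the grid indexing, the out-of-map handling, and σ are illustrative choices, and Δd follows the slide's definition.

```python
import numpy as np

def beam_weight(particle, beam, dist_map, resolution, sigma):
    """Approximate importance weight for one range beam (r, bearing).
    dist_map[i, j] holds the precomputed distance from cell (i, j)
    to the nearest filled cell; resolution is meters per cell."""
    x, y, theta = particle
    r, bearing = beam
    # End-point of the beam in world coordinates
    xe = x + r * np.cos(theta + bearing)
    ye = y + r * np.sin(theta + bearing)
    i = int(xe / resolution)
    j = int(ye / resolution)
    if not (0 <= i < dist_map.shape[0] and 0 <= j < dist_map.shape[1]):
        return 1e-9  # beam end-point leaves the map: tiny weight
    dd = r - dist_map[i, j]  # mismatch, as defined on the slide
    # Gaussian sensor model over the mismatch
    return float(np.exp(-dd ** 2 / (2 * sigma ** 2)))
```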
Another problem
| An observation is several range measurements:
? Z_t = {z^1, z^2, …, z^n}
| Assume range measurements are independent:
p(Z_t | x_t^i) = ∏_j p(z_t^j | x_t^i)
| What happens when the map is wrong?
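Multiplying many per-beam likelihoods underflows quickly, so the independence product is usually evaluated as a sum of logs; `beam_likelihood` here is any per-beam sensor model.

```python
import numpy as np

def scan_log_likelihood(particle, scan, beam_likelihood):
    """log p(Z_t | x_t^i) = sum_j log p(z_t^j | x_t^i),
    assuming independent beams."""
    total = 0.0
    for z in scan:
        # Floor each likelihood to avoid log(0)
        total += np.log(max(beam_likelihood(particle, z), 1e-300))
    return total
```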
Example problem
[Figure: initial belief with unknown orientation; the robot senses a landmark, then takes an action]
Thrun, Fox, Burgard and Dellaert, 2001
Robot Localization using SIR
? Initial Sample Set
[Figure: initial belief — samples spread over the map, unknown orientation]
Thrun, Fox, Burgard and Dellaert, 2001
Robot Localization using SIR
? Action
For each sample x_{t-1}, sample from p(x_t | x_{t-1}, a_{t-1})
q(x) = p(x_t | x_{t-1}, a_{t-1})
Thrun, Fox, Burgard and Dellaert, 2001
Robot Localization using SIR
? Sense Landmark
For each sample x_{t-1}, sample from p(x_t | x_{t-1}, a_{t-1})
q(x) = p(x_t | x_{t-1}, a_{t-1})
w(x) = p(z_t | x_t)
Thrun, Fox, Burgard and Dellaert, 2001
Robot Localization using SIR
? For each sample x
t-1
, sample from
p(x
t
|x
t-1
,a
t
)
q(x) =
p(x
t-1
) p(x
t
|x
t-1
,a
t-1
) w(x) = p(z
t
|x
t
) p(x
t
)
Properties of SIR
| Converges to true distribution in limit of ∞
particles
| Speed of convergence depends on the distance from p(x) to q(x)
| Any-time: sample particles until time is up
| Arbitrary sensor model
| Arbitrary posterior distributions
Thrun, Fox, Burgard and Dellaert, 2001
Grid-Based Estimation Error
[Figure: average estimation error [cm] vs. cell size [cm], for sonar and laser]
Thrun, Fox, Burgard and Dellaert, 2001
Sample-Based Estimation Error
[Figure: average estimation error [cm] vs. number of samples (log scale), for sonar and laser]
Courtesy of Dieter Fox
Global Localization
Limitations of Monte Carlo
Localization
| Exponential in number of state
dimensions: if sample set is too small,
catastrophic failure
| If sensors are too accurate,
performance degrades
| Gaussian proposal distribution cannot
handle Kidnapped Robot problem
Adaptive KLD Sampling for
Particle Filters
| Basic Idea:
? If particles are drawn from a multinomial over discrete events, then the particles give an estimate of the multinomial parameters
? There exist error bounds on the parameter estimates, as a function of the number of samples
| Algorithm:
? Discretize state space
? Start adding samples
? Stop adding samples when the error on all states is within an acceptable bound
? Propagate only enough samples to preserve the error bounds on all states
? After all samples have been propagated, add more samples if any state is still above its error bound
Dieter Fox, 2001
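The resulting sample-size bound from Fox's KLD-sampling work can be sketched directly: given k occupied histogram bins, a KL error ε, and the upper normal quantile z_{1-δ}, the Wilson-Hilferty approximation to the chi-square quantile gives the required particle count.

```python
import math

def kld_sample_bound(k, epsilon=0.05, z_quantile=1.96):
    """Number of particles needed so the KL divergence between the
    sampled and true discrete distributions stays below epsilon, with
    confidence set by z_quantile (the upper 1-delta normal quantile);
    k = number of histogram bins with support."""
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - a + math.sqrt(a) * z_quantile) ** 3))
```

Run after each particle is binned: sampling stops once the particle count reaches the bound for the current k, so a concentrated belief needs far fewer particles than a spread-out one.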
Adaptive KLD Sampling for
Particle Filters
| Does not avoid exponential dependence between state
features and number of particles
Limitations of Monte Carlo
Localization
| Exponential in number of state
dimensions: if sample set is too small,
catastrophic failure
| If sensors are too accurate,
performance degrades
| Gaussian proposal distribution cannot
handle Kidnapped Robot problem
Is there a better proposal distribution?
| The sensor model may be a better proposal distribution than the motion model
| Importance factors given by motion model
| Two choices:
? Sample from p(x_t | z_t)
? Importance weights given by the prediction function:
w^i = p(x_t^i | a_{t-1}, z_{t-1}, …, z_0)
Or
? Sample from p(x_{t-1} | z_t, x_t)
? Importance weights given by the prior belief:
w^i = p(x_{t-1}^i)
How to sample from the
sensor model?
| Use a low-dimensional set of features from
each range scan
? E.g., centroid of range measurements and
average distance
| Generate large number of sampled poses
and observations
| Use function approximator to approximate
mapping from low-dimensional observation
to pose
? E.g., K-D tree
How to sample from the
sensor model?
A particular observation z
The K-D tree giving p(x)
over poses
The 3-D space of observations is discretized, and a different tree is built for each discrete observation.
Mixture-MCL
| Sometimes sensor model proposal is
good, sometimes motion model
proposal is good
| Sample particles from both
? Sample from dual MCL with probability φ, and from regular MCL with probability (1−φ), where φ = 0.1
Thrun, Fox, Burgard, Dellaert, 2001
Mixture-MCL vs. Standard
MCL
Standard MCL
Mixture MCL
Simulated results, 50 particles
Simultaneous Localization
and People Tracking
Reliable Navigation
| Conventional
trajectories may
not be robust to
localisation error
[Figure: estimated robot position, robot position distribution, true robot position, and goal position]
Perception and Control
[Diagram: world state → perception → control algorithms]
Perception and Control
[Diagram: two extremes — assumed full observability: world state → probabilistic perception P(x) → argmax P(x) → control model; exact POMDP planning: world state → probabilistic perception P(x) → control model]
Coastal Navigation
| Represent beliefs using b̃ = (arg max_s b(s), H(b))
| Discretise into a 2-dimensional belief space MDP
[Figure: back-projection from a low-dimensional belief b̃ to high-dimensional beliefs over states s_1, s_2, s_3]
Compute expected reward from belief:
R(b̃) = E_b[R(s)] = Σ_s p(s) R(s)
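The belief representation and expected reward can be sketched for a discrete belief vector; the 1e-12 offset in the entropy is only there to avoid log 0.

```python
import numpy as np

def compress_belief(b):
    """Coastal-navigation belief feature:
    (most likely state, entropy H(b))."""
    b = np.asarray(b, dtype=float)
    h = -np.sum(b * np.log(b + 1e-12))
    return int(np.argmax(b)), float(h)

def expected_reward(b, reward):
    """R(b~) approximated as E_b[R(s)] = sum_s b(s) R(s)."""
    return float(np.dot(b, reward))
```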
Model Parameters
| Reward function
[Figure: reward computed from the belief p(s)]
Model Parameters
| Transition function
T(b̃_i, a, b̃_j) = ?
[Figure: low-dimensional beliefs b̃_i and b̃_j]
Model Parameters
| Use forward model
[Diagram: low-dimensional beliefs b̃_i → b̃_j; full-dimensional beliefs b_i → b_j, b_k under action a and observations z_1, z_2, with densities p_i(s), p_j(s); the full-dimensional process is stochastic, the projection down to the low dimension is deterministic]
Model Parameters
| Use forward model:
1. For each belief b̃_i and action a
2. Generate the full belief b_i
3. For each observation z:
   1. Compute p(z | b_i, a)
   2. Compute the posterior b_j and its projection b̃_j
   3. Set T(b̃_i, a, b̃_j) to p(z | b_i, a)

T(b̃_i, a, b̃_j) = Σ_{k=1}^{|Z|} Σ_{l=1}^{|S|} p(z_k | s_l) Σ_{m=1}^{|S|} p(s_l | s_m, a) b_i(s_m)

[Diagram: b̃_i → b_i → posteriors b_j, b_k, b_q under the observations → projected back to b̃_j]
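The forward-model procedure can be sketched as follows; `compress` is a stand-in for the projection to the low-dimensional belief, and the matrix layouts are assumptions.

```python
import numpy as np

def build_transition_row(b_i, trans, obs, compress):
    """One row of the low-dimensional transition function
    T(b~_i, a, .) for a fixed action, via the forward model.
    trans[m, l] = p(s_l | s_m, a); obs[l, k] = p(z_k | s_l);
    compress maps a full belief to its low-dimensional index."""
    row = {}
    predicted = b_i @ trans                     # belief after the action
    for k in range(obs.shape[1]):
        pz = float(predicted @ obs[:, k])       # p(z_k | b_i, a)
        if pz <= 0.0:
            continue
        posterior = predicted * obs[:, k] / pz  # Bayes update on z_k
        key = compress(posterior)               # project to low dimension
        row[key] = row.get(key, 0.0) + pz       # accumulate T(b~_i, a, b~_j)
    return row
```

With a deterministic transition and a perfect sensor, the row's values sum to one, as a transition distribution should.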
Coastal Navigation
Dieter Fox, 2003
Bayes’ Filters
References
| CONDENSATION – Conditional Density Propagation for
Visual Tracking. M. Isard & A. Blake. International
Journal of Computer Vision. 29:1, 5-28 (1998).
| Robust Monte Carlo Localization for Mobile Robots. S.
Thrun, D. Fox, W. Burgard and F. Dellaert. Artificial
Intelligence. 128:1-2, 99-141 (2001).
| Sequential Monte Carlo Methods in Practice. A. Doucet,
N. de Freitas, N. Gordon (eds.) Springer-Verlag, 2001.