Chapter 5  Eigenvalues and Eigenvectors
The Power Method
Computing the dominant eigenvalue of a matrix and the corresponding eigenvector.
Wait a second, what does that dominant eigenvalue mean?
That is the eigenvalue with the largest magnitude.
Why on earth do I want to know that?
Don't you have to compute the spectral radius from time to time?
The original power method

Condition: A has eigenvalues $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n| \ge 0$ and n linearly independent eigenvectors $x_1, \dots, x_n$.

Idea: start from an arbitrary vector $v^{(0)} \ne 0$ with $(v^{(0)}, x_1) \ne 0$ and expand it in the eigenvector basis:
$$v^{(0)} = \sum_{i=1}^{n} \alpha_i x_i, \qquad \alpha_1 \ne 0,$$

$$v^{(1)} = A v^{(0)} = \sum_{i=1}^{n} \alpha_i \lambda_i x_i,$$

$$v^{(2)} = A v^{(1)} = \sum_{i=1}^{n} \alpha_i \lambda_i^{2} x_i,$$

$$\cdots$$

$$v^{(k)} = A v^{(k-1)} = \sum_{i=1}^{n} \alpha_i \lambda_i^{k} x_i
  = \lambda_1^{k} \left[\, \alpha_1 x_1 + \sum_{i=2}^{n} \alpha_i \Big(\frac{\lambda_i}{\lambda_1}\Big)^{k} x_i \right],
  \qquad \Big|\frac{\lambda_i}{\lambda_1}\Big| < 1 \ (i \ge 2).$$
When k is sufficiently large,
$$v^{(k)} \approx \lambda_1^{k} \alpha_1 x_1, \qquad v^{(k+1)} \approx \lambda_1^{k+1} \alpha_1 x_1 .$$
This $v^{(k)}$ is an approximate eigenvector of A associated with $\lambda_1$, since
$$A\, v^{(k)} \approx A\,(\lambda_1^{k} \alpha_1 x_1) = \lambda_1^{k+1} \alpha_1 x_1 \approx \lambda_1 v^{(k)} ,$$
and the ratio of corresponding nonzero components gives the eigenvalue:
$$\lambda_1 \approx \frac{\big(v^{(k+1)}\big)_i}{\big(v^{(k)}\big)_i} .$$
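As a quick numerical illustration of the un-normalized iteration above (a minimal sketch assuming NumPy and a hand-picked 3×3 symmetric test matrix, neither of which comes from the original notes), the loop below computes $v^{(k)} = A v^{(k-1)}$ and prints the component ratio, which settles toward the dominant eigenvalue of this test matrix ($\approx 4.73$); it also shows the raw iterates growing like $|\lambda_1|^k$, which is why normalization is introduced later.

```python
import numpy as np

# Illustrative matrix with a clearly dominant eigenvalue (an assumption for
# this sketch; any matrix with |lambda_1| > |lambda_2| would do).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

v = np.array([1.0, 1.0, 1.0])        # v^(0), must not be orthogonal to x_1
for k in range(1, 16):
    w = A @ v                        # v^(k) = A v^(k-1)
    ratio = w[0] / v[0]              # (v^(k))_0 / (v^(k-1))_0  ->  lambda_1
    print(f"k={k:2d}  ratio={ratio:.6f}  ||v^(k)||_inf={np.max(np.abs(w)):.3e}")
    v = w                            # note: the entries grow like |lambda_1|^k

print("reference lambda_1 =", np.max(np.linalg.eigvalsh(A)))
```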
Ch.5 Power Method – The Original Method
Theorem. Let $A \in \mathbb{R}^{n \times n}$ be a non-derogatory matrix /* 非亏损矩阵 */ whose dominant eigenvalue $\lambda_1$ is real, with $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n|$. Then, starting from any nonzero vector $v^{(0)}$ with $(v^{(0)}, x_1) \ne 0$, the iteration $v^{(k)} = A^{k} v^{(0)}$ converges in direction to the dominant eigenvector $x_1$, and the component ratios $\big(v^{(k+1)}\big)_i / \big(v^{(k)}\big)_i$ converge to $\lambda_1$.
Note: "non-derogatory" means that each distinct eigenvalue corresponds to exactly one Jordan block. The conclusion also holds for a repeated dominant eigenvalue $\lambda_1 = \lambda_2 = \cdots = \lambda_r$, since in that case
$$v^{(k)} = \lambda_1^{k} \left[\, \sum_{i=1}^{r} \alpha_i x_i + \sum_{i=r+1}^{n} \alpha_i \Big(\frac{\lambda_i}{\lambda_1}\Big)^{k} x_i \right]
  \approx \lambda_1^{k} \sum_{i=1}^{r} \alpha_i x_i .$$
If $\lambda_1 = -\lambda_2$ (two dominant eigenvalues of equal magnitude and opposite sign), the method does not converge; see the numerical sketch after these notes.
When the initial vector is chosen arbitrarily, $x_1$ is unknown, so $\alpha_1 \ne 0$ cannot be guaranteed. The computed $v^{(k)}$ therefore need not approach $x_1$; instead it approaches the first $x_m$ for which $(v^{(0)}, x_m) \ne 0$, and the eigenvalue obtained is the corresponding $\lambda_m$.
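To see numerically why two dominant eigenvalues of equal magnitude and opposite sign (the $\lambda_1 = -\lambda_2$ case noted above) break convergence, here is a small sketch with a hand-picked 2×2 matrix (an illustrative choice, not from the notes) whose eigenvalues are $+3$ and $-3$; the component ratio oscillates between two values instead of approaching a limit.

```python
import numpy as np

# Eigenvalues of this matrix are +3 and -3: equal magnitude, opposite sign,
# so the power-method component ratio cannot settle on a single value.
A = np.array([[0.0, 3.0],
              [3.0, 0.0]])

v = np.array([1.0, 2.0])
for k in range(1, 9):
    w = A @ v
    print(f"k={k}:  ratio = {w[0] / v[0]: .4f}")   # alternates between 6.0 and 1.5
    v = w
```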
HW: p.98 #1
Ch.5 Power Method – Normalization
Normalization /* 规范化 */

To keep the entries from growing without bound, the iteration vector must be normalized: at each step first scale it so that $\|v\| = 1$, then substitute it into the next iteration. The infinity norm $\|v\|_\infty = \max_{1 \le i \le n} |v_i|$ is usually used.
Write $\|v^{(k)}\|_\infty = \big|v^{(k)}_{i_k}\big|$. Then

$$u^{(k-1)} = \frac{v^{(k-1)}}{\|v^{(k-1)}\|_\infty} = \frac{A^{k-1} v^{(0)}}{\prod_{s=0}^{k-1} \|v^{(s)}\|_\infty}, \qquad
v^{(k)} = A\, u^{(k-1)} = \frac{A^{k} v^{(0)}}{\prod_{s=0}^{k-1} \|v^{(s)}\|_\infty},$$

so the scaling factors cancel in the normalized vector:

$$u^{(k)} = \frac{v^{(k)}}{\|v^{(k)}\|_\infty} = \frac{A^{k} v^{(0)}}{\|A^{k} v^{(0)}\|_\infty} \;\longrightarrow\; \frac{x_1}{\|x_1\|_\infty}, \qquad
\frac{\big(v^{(k)}\big)_i}{\big(u^{(k-1)}\big)_i} \;\longrightarrow\; \lambda_1 .$$
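A minimal check of the normalized recursion (assuming NumPy and the same illustrative test matrix as in the earlier sketch; not part of the original notes): it runs $v^{(k)} = A u^{(k-1)}$, $u^{(k)} = v^{(k)}/\|v^{(k)}\|_\infty$, verifies the identity $u^{(k)} = A^{k} v^{(0)} / \|A^{k} v^{(0)}\|_\infty$ derived above, and prints the ratio $(v^{(k)})_i / (u^{(k-1)})_i$, which approaches $\lambda_1$ while the stored vectors keep $\|u^{(k)}\|_\infty = 1$.

```python
import numpy as np

# Same illustrative matrix as in the earlier sketch (an assumption, not from the notes).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
v0 = np.array([1.0, 1.0, 1.0])

u = v0 / np.max(np.abs(v0))                   # u^(0) = v^(0) / ||v^(0)||_inf
for k in range(1, 11):
    i = int(np.argmax(np.abs(u)))             # component where |u^(k-1)| is largest
    v = A @ u                                 # v^(k) = A u^(k-1)
    lam = v[i] / u[i]                         # (v^(k))_i / (u^(k-1))_i  ->  lambda_1
    u = v / np.max(np.abs(v))                 # u^(k) = v^(k) / ||v^(k)||_inf
    w = np.linalg.matrix_power(A, k) @ v0     # A^k v^(0), for the identity check
    assert np.allclose(u, w / np.max(np.abs(w)))   # u^(k) = A^k v0 / ||A^k v0||_inf
    print(f"k={k:2d}  lambda_1 ~ {lam:.6f}  ||u^(k)||_inf = {np.max(np.abs(u)):.1f}")
```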
Algorithm: Power Method
To approximate the dominant eigenvalue and an associated eigenvector of the n×n matrix A, given a nonzero initial vector.
Input: dimension n; matrix a[ ][ ]; initial vector V0[ ]; tolerance TOL; maximum number of iterations Nmax.
Output: approximate eigenvalue λ and approximate eigenvector (normalized), or a message of failure.
Step 1  Set k = 1;
Step 2  Find index such that | V0[ index ] | = || V0 ||∞ ;
Step 3  Set V0[ ] = V0[ ] / V0[ index ];  /* normalize V0 */
Step 4  While ( k ≤ Nmax ) do Steps 5-11
Step 5    V[ ] = A V0[ ];  /* compute V(k) from U(k-1) */
Step 6    λ = V[ index ];
Step 7    Find index such that | V[ index ] | = || V ||∞ ;
Step 8    If V[ index ] == 0 then
            Output ( "A has the eigenvalue 0"; V0[ ] );  STOP.
            /* the matrix is singular; the user should try a new V0 */
Step 9    err = || V0 − V / V[ index ] ||∞ ;
          V0[ ] = V[ ] / V[ index ];  /* compute U(k) */
Step 10   If ( err < TOL ) then
            Output ( λ ; V0[ ] );  STOP.  /* successful */
Step 11   Set k++;
Step 12 Output ( "Maximum number of iterations exceeded" );
        STOP.  /* unsuccessful */
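For reference, here is a literal Python transcription of the pseudocode above (a sketch assuming NumPy; the test matrix, tolerance, and iteration cap are illustrative choices, not part of the notes), with a quick comparison against numpy.linalg.eigvalsh.

```python
import numpy as np

def power_method(A, v0, tol=1e-10, nmax=500):
    """Normalized power method, following the pseudocode above.

    Returns (lam, u) with lam an approximation of the dominant eigenvalue
    and u the normalized eigenvector; raises RuntimeError on failure.
    """
    V0 = np.array(v0, dtype=float)
    index = int(np.argmax(np.abs(V0)))           # Step 2: |V0[index]| = ||V0||_inf
    V0 = V0 / V0[index]                          # Step 3: normalize V0
    for _ in range(nmax):                        # Step 4
        V = A @ V0                               # Step 5: V(k) = A U(k-1)
        lam = V[index]                           # Step 6: eigenvalue estimate
        index = int(np.argmax(np.abs(V)))        # Step 7
        if V[index] == 0:                        # Step 8: A is singular
            raise RuntimeError("A has the eigenvalue 0; try another V0")
        err = np.max(np.abs(V0 - V / V[index]))  # Step 9
        V0 = V / V[index]                        # U(k)
        if err < tol:                            # Step 10: converged
            return lam, V0
    raise RuntimeError("Maximum number of iterations exceeded")  # Step 12

if __name__ == "__main__":
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    lam, u = power_method(A, [1.0, 1.0, 1.0])
    print("power method:    lambda_1 =", lam)
    print("numpy reference: lambda_1 =", np.max(np.linalg.eigvalsh(A)))
    print("residual ||A u - lambda u||_inf =", np.max(np.abs(A @ u - lam * u)))
```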