Cheat Sheet For Probability Theory
Ingmar Land, October 8, 2005
– Covariance matrix (auto-covariance matrix):
\[
\begin{aligned}
\Sigma_{XX} &:= \mathrm{E}\!\left[(X - \mu_X)(X - \mu_X)^T\right] \\
&= \mathrm{E}\!\left[
\begin{pmatrix} X_1 - \mu_{X_1} \\ X_2 - \mu_{X_2} \end{pmatrix}
\begin{pmatrix} X_1 - \mu_{X_1} & X_2 - \mu_{X_2} \end{pmatrix}
\right] \\
&= \begin{pmatrix}
\mathrm{E}\big[(X_1 - \mu_{X_1})(X_1 - \mu_{X_1})\big] & \mathrm{E}\big[(X_1 - \mu_{X_1})(X_2 - \mu_{X_2})\big] \\
\mathrm{E}\big[(X_2 - \mu_{X_2})(X_1 - \mu_{X_1})\big] & \mathrm{E}\big[(X_2 - \mu_{X_2})(X_2 - \mu_{X_2})\big]
\end{pmatrix} \\
&= \begin{pmatrix}
\Sigma_{X_1 X_1} & \Sigma_{X_1 X_2} \\
\Sigma_{X_2 X_1} & \Sigma_{X_2 X_2}
\end{pmatrix}
\end{aligned}
\]
– Covariance matrix (cross-covariance matrix):
\[
\begin{aligned}
\Sigma_{XY} &:= \mathrm{E}\!\left[(X - \mu_X)(Y - \mu_Y)^T\right] \\
&= \begin{pmatrix}
\Sigma_{X_1 Y_1} & \Sigma_{X_1 Y_2} \\
\Sigma_{X_2 Y_1} & \Sigma_{X_2 Y_2}
\end{pmatrix}
\end{aligned}
\]
Remark: This matrix contains the covariance of each element of the first vector
with each element of the second vector.
– Relations:
\[
\begin{aligned}
\mathrm{E}\!\left[X X^T\right] &= \Sigma_{XX} + \mu_X \mu_X^T \\
\mathrm{E}\!\left[X Y^T\right] &= \Sigma_{XY} + \mu_X \mu_Y^T
\end{aligned}
\]
Remark: This result is not too surprising once you know the corresponding result for the scalar case; a numerical check of these matrix relations is sketched below.
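To make these definitions concrete, here is a minimal NumPy sketch (the sample size, the particular distributions, and the variable names are illustrative assumptions, not part of the original cheat sheet). It estimates $\Sigma_{XX}$ and $\Sigma_{XY}$ from samples of two 2-dimensional random vectors and confirms both relations numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative example: N samples of two correlated 2-dimensional
# random vectors X and Y (the particular distributions are arbitrary).
N = 200_000
X = rng.multivariate_normal(mean=[1.0, -2.0],
                            cov=[[2.0, 0.5], [0.5, 1.0]], size=N)  # shape (N, 2)
Y = 0.3 * X + rng.normal(size=(N, 2))                              # correlated with X

mu_X = X.mean(axis=0)
mu_Y = Y.mean(axis=0)

# Auto-covariance matrix: Sigma_XX = E[(X - mu_X)(X - mu_X)^T]
Sigma_XX = (X - mu_X).T @ (X - mu_X) / N

# Cross-covariance matrix: Sigma_XY = E[(X - mu_X)(Y - mu_Y)^T]
Sigma_XY = (X - mu_X).T @ (Y - mu_Y) / N

# Relations: E[X X^T] = Sigma_XX + mu_X mu_X^T  (and analogously for X, Y)
E_XXT = X.T @ X / N
E_XYT = X.T @ Y / N
print(np.allclose(E_XXT, Sigma_XX + np.outer(mu_X, mu_X)))  # True
print(np.allclose(E_XYT, Sigma_XY + np.outer(mu_X, mu_Y)))  # True
```

Note that the identities hold exactly for the sample moments (not only in the limit of many samples), which is why the `allclose` checks succeed up to floating-point precision.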
3 Gaussian Random Variables
• A Gaussian RV $X$ with mean $\mu_X$ and variance $\sigma_X^2$ is a continuous random variable with a Gaussian pdf, i.e., with
\[
p_X(x) = \frac{1}{\sqrt{2\pi\sigma_X^2}} \cdot \exp\!\left(-\frac{(x - \mu_X)^2}{2\sigma_X^2}\right)
\]
The often used symbolic notation
\[
X \sim \mathcal{N}(\mu_X, \sigma_X^2)
\]
may be read as: $X$ is (distributed) Gaussian with mean $\mu_X$ and variance $\sigma_X^2$.
• A Gaussian distribution with mean zero and variance one is called a normal distribution:
\[
p(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.
\]
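In the same spirit, a small sketch of the Gaussian pdf (the helper name `gaussian_pdf` and the chosen parameter values are assumptions for illustration only): it implements the formula above and checks numerically that the pdf integrates to one and reproduces the stated mean and variance; the normal distribution is recovered for $\mu = 0$, $\sigma^2 = 1$.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    """Gaussian pdf with mean mu and variance sigma2 (formula above)."""
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

# Illustrative parameters: X ~ N(mu_X, sigma2_X)
mu_X, sigma2_X = 1.5, 4.0

# Numerical checks on a fine grid (trapezoidal integration)
x = np.linspace(mu_X - 10 * np.sqrt(sigma2_X), mu_X + 10 * np.sqrt(sigma2_X), 100_001)
p = gaussian_pdf(x, mu_X, sigma2_X)

print(np.trapz(p, x))                    # ~ 1.0       (pdf integrates to one)
print(np.trapz(x * p, x))                # ~ mu_X      (mean)
print(np.trapz((x - mu_X)**2 * p, x))    # ~ sigma2_X  (variance)

# Normal distribution: the special case mu = 0, sigma2 = 1
print(gaussian_pdf(0.0, 0.0, 1.0), 1.0 / np.sqrt(2.0 * np.pi))  # identical values
```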