1.1 Abstract Setup
Recap (Probability space): A probability space is a triple (Ω, F, P), where:
- Ω is a non-empty set
- F is a σ-algebra on Ω
- P is a probability measure on (Ω, F)

We can think of an element ω ∈ Ω as randomly chosen in Ω, i.e. as the outcome of a random experiment. For an event A ∈ F, the quantity P(A) represents the probability that this random element lies in A.
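For instance, a roll of a fair die is modelled by Ω = {1, 2, 3, 4, 5, 6}, F = P(Ω) and P(A) = |A|/6; the event "the roll is even" is A = {2, 4, 6} with P(A) = 1/2.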
Proposition 1.1 (Stability under countable intersections): If F is a σ-algebra, then whenever A_1, A_2, … ∈ F, one has

⋂_{n∈N} A_n ∈ F

(This follows from De Morgan's law, since ⋂_{n∈N} A_n = (⋃_{n∈N} A_n^c)^c and σ-algebras are stable under complements and countable unions.)
Proposition 1.2 (Complement rule): For every A ∈ F, one has

P(A^c) = 1 − P(A)
Proposition 1.3 (Continuity of probability measures): Let (A_n)_{n∈N} be a sequence of events.
- If A_n ⊂ A_{n+1} for every n ∈ N, then P is continuous from below, i.e.

P(⋃_{n∈N} A_n) = lim_{n→∞} P(A_n)

- If A_{n+1} ⊂ A_n for every n ∈ N, then P is continuous from above, i.e.

P(⋂_{n∈N} A_n) = lim_{n→∞} P(A_n)
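As an illustration, take P to be the uniform measure on Ω = [0, 1]. For the increasing sequence A_n = [0, 1 − 1/n] we get

P(⋃_{n∈N} A_n) = P([0, 1)) = 1 = lim_{n→∞} (1 − 1/n) = lim_{n→∞} P(A_n)

and for the decreasing sequence A_n = [0, 1/n] we get P(⋂_{n∈N} A_n) = P({0}) = 0 = lim_{n→∞} P(A_n).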
Proposition 1.4 (Union bound): For any sequence of events (A_n)_{n∈N}, one has

P(⋃_{n∈N} A_n) ≤ ∑_{n∈N} P(A_n)

From now on and until the end of the chapter we fix an arbitrary probability space (Ω, F, P).
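Note that the inequality in Proposition 1.4 can be strict: for a fair coin toss with A_1 = A_2 = {heads} (and A_n = ∅ for n ≥ 3), the left-hand side is P(A_1 ∪ A_2) = 1/2 while the right-hand side is P(A_1) + P(A_2) = 1.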
1.2 Generated σ-Algebra
Let C = {A_i ∣ i ∈ I} be a collection of events. The σ-algebra generated by C is, intuitively, the collection of all events whose occurrence is determined once we know all events in C.
Definition 1.5 (Generated σ-algebra): Let C ⊂ P(Ω). The σ-algebra generated by C, denoted σ(C), is defined by

σ(C) = ⋂ {F′ ∣ F′ is a σ-algebra on Ω with C ⊂ F′}

Note:
- σ(C) is the smallest σ-algebra containing C.
- If F is a σ-algebra, then σ(F)=F.
- C⊂C′⟹σ(C)⊂σ(C′).
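For instance, for a single event A with ∅ ⊊ A ⊊ Ω, one has σ({A}) = {∅, A, A^c, Ω}: this is the smallest family containing A that is stable under complements and countable unions.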
1.3 Borel Sets
Definition 1.6 (Borel sets): Let (E, T) be a topological space, where T is the set of open sets of E. The Borel σ-algebra on E is defined by

B(E) = σ(T)

The sets contained in B(E) are called Borel sets.
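In particular, every open set is a Borel set by definition, and every closed set K is a Borel set as well: its complement K^c is open, hence K^c ∈ T ⊂ B(E), and B(E) is stable under complements.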
Proposition 1.7 (Borel sets of reals): For the real numbers R equipped with the standard topology (generated by the open intervals (a, b)), we have

B(R) = σ({(a, b] ∣ a, b ∈ R})

Note that equivalently

B(R) = σ({(a, b) ∣ a, b ∈ R}) = σ({(−∞, a] ∣ a ∈ R}) = σ({(−∞, a) ∣ a ∈ R})
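The equivalences follow from expressing one type of interval through countable operations on the other, e.g.

(a, b) = ⋃_{n∈N} (a, b − 1/n]  and  (a, b] = ⋂_{n∈N} (a, b + 1/n)

so each generating family is contained in the σ-algebra generated by the other.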
1.4 π- and λ-Systems
Definition 1.8 (λ-system): A family of sets D ⊂ P(Ω) is called a Dynkin system or λ-system if
- Ω ∈ D
- A ∈ D ⟹ A^c ∈ D
- A_1, A_2, … ∈ D pairwise disjoint ⟹ ⋃_{n∈N} A_n ∈ D

The notion of a λ-system is very similar to that of a σ-algebra. The only difference lies in the third item, where a weaker condition is asked: it suffices to be stable under disjoint countable unions. In particular, it is easier to be a λ-system than a σ-algebra. We always have

D is a σ-algebra ⟹ D is a λ-system

The converse does not hold in general. To see this, consider Ω = {1, 2, 3, 4} and D = {∅, Ω, {1, 2}, {3, 4}, {1, 3}, {2, 4}}. Then D is a λ-system but not a σ-algebra, since for example {1, 2} ∩ {1, 3} = {1} ∉ D.
Definition 1.9 (π-system): A family of sets C ⊂ P(Ω) is called a π-system if it is stable under finite intersections, i.e.

A, B ∈ C ⟹ A ∩ B ∈ C

Proposition 1.10 (π-λ implies σ): Let D ⊂ P(Ω) be a π-system. Then

D is a σ-algebra ⟺ D is a λ-system

(Indeed, an arbitrary countable union can be rewritten as a disjoint one: ⋃_{n∈N} A_n = ⋃_{n∈N} B_n with B_n = A_n ∩ A_1^c ∩ … ∩ A_{n−1}^c, and each B_n lies in D when D is both a π-system and a λ-system.)

1.5 Dynkin Theorem
Definition 1.11 (Generated λ-system): Let C ⊂ P(Ω). The λ-system generated by C is the family defined by

λ(C) = ⋂ {D ∣ D is a λ-system on Ω with C ⊂ D}

As for generated σ-algebras, it follows from the definitions that λ(C) is the smallest λ-system containing C. In particular, since σ(C) is a λ-system, we always have

λ(C) ⊂ σ(C)

The converse inclusion does not hold in general. For example, consider Ω = {1, 2, 3, 4} and C = {∅, Ω, {1, 2}, {3, 4}, {1, 3}, {2, 4}}. Then λ(C) = C because C is already a λ-system, but σ(C) = P(Ω).
The Dynkin Theorem asserts that the inclusion above is an equality when C is a π-system.

Theorem 1.12 (Dynkin π-λ Theorem): Let C be a π-system. Then, we have

σ(C) = λ(C)

The Dynkin Theorem is useful because in probability one often wants to prove statements valid for all events in a generated σ-algebra: if P is a property, we are interested in ∀A ∈ σ(C) : P(A). The family σ(C) is generally difficult to access, and showing that a property is stable under general countable unions may be delicate. Probability statements often involve probability measures, and it is usually easier to prove that a statement is stable under disjoint countable unions, i.e. ∀A ∈ λ(C) : P(A) is often easier to show. If C is stable under finite intersections, the Dynkin Theorem allows us to deduce the validity of the statement on σ(C).
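A typical application is the uniqueness of measures: if two probability measures P and Q agree on a π-system C, then the family D = {A ∈ σ(C) ∣ P(A) = Q(A)} is a λ-system containing C (stability under disjoint countable unions follows from countable additivity), so by the Dynkin Theorem D ⊃ λ(C) = σ(C), i.e. P = Q on σ(C).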
1.6 Independence of σ-Algebras
The language of probability theory relies on measure theory, together with the concept of independence. We first recall the definition of independence for a collection of events.
Definition 1.13 (Independence of events): Let I be an arbitrary, not necessarily finite index set. A collection {A_i ∣ i ∈ I} of events is said to be independent if

∀J ⊂ I finite : P(⋂_{j∈J} A_j) = ∏_{j∈J} P(A_j)
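Note that the condition is required for every finite J, not only for pairs. For two fair coin tosses with A = {first toss is heads}, B = {second toss is heads} and C = {both tosses agree}, any two of these events are independent, but P(A ∩ B ∩ C) = 1/4 ≠ 1/8 = P(A)P(B)P(C), so the collection {A, B, C} is not independent.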
The concept of independence extends to families of events, and in particular to σ-algebras.

Definition 1.14 (Independence of families of events): Let I be an arbitrary index set. For every i ∈ I, let E_i ⊂ F be a family of events. The collection {E_i ∣ i ∈ I} is said to be independent if

∀J ⊂ I finite, ∀(A_j)_{j∈J} with A_j ∈ E_j : P(⋂_{j∈J} A_j) = ∏_{j∈J} P(A_j)

Note:
- If each family is made of a single event, E_i = {A_i}, then the independence of the families E_i is equivalent to the independence of the events A_i.
- {E_i ∣ i ∈ I} independent ⟺ ∀J ⊂ I finite : {E_i ∣ i ∈ J} independent.
- Let n ≥ 1. The families E_1, …, E_n ⊂ F are independent if and only if

∀A_1 ∈ E_1 ∪ {Ω}, …, ∀A_n ∈ E_n ∪ {Ω} : P(⋂_{j=1}^{n} A_j) = ∏_{j=1}^{n} P(A_j)

(adjoining Ω lets one drop factors, since intersecting with Ω changes nothing and P(Ω) = 1, so no separate quantification over finite subsets is needed).
To establish independence of σ-algebras, it suffices to prove independence of generating π-systems:
Proposition 1.15 (Independence of σ-algebras): Let I be an arbitrary index set. For a collection of π-systems {C_i ⊂ F ∣ i ∈ I}, the following equivalence holds:

{C_i ⊂ F ∣ i ∈ I} independent ⟺ {σ(C_i) ⊂ F ∣ i ∈ I} independent
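For instance, two events A and B are independent if and only if the σ-algebras σ({A}) = {∅, A, A^c, Ω} and σ({B}) are independent: the one-element families {A} and {B} are trivially π-systems, so Proposition 1.15 applies.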
Proposition 1.16 (Independence of complements): Let I be an arbitrary index set and let {A_i ∣ i ∈ I} be a collection of events. Then

{A_i ∣ i ∈ I} independent ⟺ {A_i^c ∣ i ∈ I} independent
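For two events the computation is a one-liner: if A and B are independent, then

P(A ∩ B^c) = P(A) − P(A ∩ B) = P(A)(1 − P(B)) = P(A) P(B^c)

so A and B^c are independent as well; the general case follows by replacing one event at a time.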