Markov Chain Note
Date: 2024/05/29
Last Updated: 2024-08-16T15:42:43.994Z
Categories: Probability
Tags: Probability, Markov Chain
Read Time: 8 minutes
Contents
- Stochastic Process
- Markov Property
- Markov Chain
- Chapman-Kolmogorov Equation
- Hitting Probability
- Communication
- Recurrence
- Periodicity
- Stationary Distribution
- Discrete Renewal Process
- Branching Process
- Random Walk
Stochastic Process
A stochastic process is a collection of random variables $\{X_t : t \in T\}$ indexed by time $t \in T$.
Markov Property
A stochastic process $\{X_n\}_{n \ge 0}$ has the Markov property if for any $n \ge 0$ and any states $i_0, \dots, i_{n-1}, i, j$, we have
$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i).$$
Markov Chain
A Markov chain is a stochastic process with the Markov property.
Transition Probability
The transition probability at time $n$ of a Markov chain is defined as
$$p_{ij}(n) = P(X_{n+1} = j \mid X_n = i).$$
Transition Probability Matrix
The transition probability matrix at time $n$ of a Markov chain is a matrix $P(n) = (p_{ij}(n))$ where the $(i, j)$ entry is the transition probability from state $i$ to state $j$.
n-Step Transition Probability
The n-step transition probability from time $m$ to $m + n$ of a Markov chain is defined as
$$p_{ij}(m, m + n) = P(X_{m+n} = j \mid X_m = i).$$
Time-Homogeneous Markov Chain
A Markov chain is time-homogeneous if the transition probabilities are independent of time, i.e., $p_{ij}(n) = p_{ij}$ for all $n$. In that case the n-step transition probability $p_{ij}(m, m + n)$ does not depend on $m$, and we write it as $p_{ij}(n)$.
In this note, we focus on time-homogeneous Markov chains.
Chapman-Kolmogorov Equation
The Chapman-Kolmogorov equation states that for a time-homogeneous Markov chain, the transition probabilities satisfy
$$p_{ij}(m + n) = \sum_{k} p_{ik}(m) \, p_{kj}(n).$$
In matrix form, the n-step transition matrix is the nth power of the one-step matrix: $P^{(m+n)} = P^{(m)} P^{(n)} = P^{m+n}$.
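In matrix form the equation is just compatibility of matrix powers, which is easy to check numerically. A minimal sketch; the two-state matrix is an invented example:

```python
import numpy as np

# An arbitrary two-state transition matrix (rows sum to 1).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Chapman-Kolmogorov: the (m+n)-step matrix factors through any
# intermediate time, i.e. P^(m+n) = P^m @ P^n.
m, n = 2, 3
lhs = np.linalg.matrix_power(P, m + n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)
print(np.allclose(lhs, rhs))  # True
```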
Hitting Probability
The hitting probability of a state $j$ starting from state $i$ is defined as
$$h_{ij} = P(X_n = j \text{ for some } n \ge 1 \mid X_0 = i).$$
By intuition, the hitting probability can be calculated by summing, over $n$, the probability that the chain visits $j$ for the first time at step $n$:
$$h_{ij} = \sum_{n=1}^{\infty} f_{ij}(n).$$
Thus, we have the following definition.
First-Visit(Hitting) Probability
The first-visit probability of a state $j$ starting from state $i$ is defined as
$$f_{ij}(n) = P(X_n = j,\; X_k \ne j \text{ for } 1 \le k \le n - 1 \mid X_0 = i).$$
Convergence of First-Visit Probability
The first-visit probability $f_{ij}(n)$ always converges to $0$ as $n$ goes to infinity.
Proof:
If this were not the case, the series $\sum_n f_{ij}(n)$ would diverge, so the hitting probability $h_{ij}$ would be greater than $1$, which is a contradiction.
Hitting Time for a Given Path
The hitting time of state $j$ from state $i$ for a given path is defined as
$$T_{ij} = \min\{n \ge 1 : X_n = j\},$$
with $T_{ij} = \infty$ if the path never visits $j$.
Sometimes we write $T_j$ if it is clear that the starting state is $i$.
Mean Hitting Time (Mean First-Visit Time)
The average hitting time of state $j$ from state $i$ is defined as
$$\mu_{ij} = E[T_{ij} \mid X_0 = i],$$
which is infinite whenever $h_{ij} < 1$.
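For a finite chain, the mean hitting times can be computed by first-step analysis: condition on the first step and solve the resulting linear system. A sketch; the 3-state matrix is an invented example:

```python
import numpy as np

# First-step analysis: for a target state j, the mean hitting times satisfy
#   mu_i = 1 + sum_{k != j} p_ik * mu_k,
# i.e. (I - Q) mu = 1, where Q is P restricted to the non-target states.
P = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
j = 2
others = [s for s in range(len(P)) if s != j]
Q = P[np.ix_(others, others)]          # transitions among non-target states
mu = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
print(mu)  # [8. 6.]: mean time to reach state 2 from states 0 and 1
```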
Communication
We say that state $i$ communicates with state $j$ if
$$p_{ij}(n) > 0 \text{ for some } n \ge 0,$$
and this is denoted by $i \to j$.
Inter-Communication
Two states $i$ and $j$ are said to inter-communicate if $i \to j$ and $j \to i$, denoted $i \leftrightarrow j$.
Irreducible Class of States
A set of states $C$ is said to be an irreducible class of states if any state in $C$ communicates with any other state in $C$.
Closed Class of States
A set of states $C$ is said to be a closed class of states if any state in $C$ communicates only with states in $C$.
Absorbing State
A state $i$ is said to be an absorbing state if $\{i\}$ itself forms a closed class of states.
Recurrence
Recurrent and Transient State
A state $i$ is said to be recurrent if the hitting probability
$$h_{ii} = \sum_{n=1}^{\infty} f_{ii}(n) = 1.$$
A state is said to be transient if it is not recurrent.
Recurrent and Transient Class
A class of states $C$ is said to be recurrent if every state in $C$ is recurrent.
A class of states $C$ is said to be transient if every state in $C$ is transient.
Two Important Theorems
- A state $i$ is recurrent if and only if
$$\sum_{n=1}^{\infty} p_{ii}(n) = \infty.$$
Proof:
We define the following generating functions:
$$P_{ii}(s) = \sum_{n=0}^{\infty} p_{ii}(n) s^n, \qquad F_{ii}(s) = \sum_{n=1}^{\infty} f_{ii}(n) s^n,$$
where $p_{ii}(0) = 1$.
Then, the probability of hitting state $i$ at time $n$ for the k-th time is the coefficient of $s^n$ in the expansion of $F_{ii}(s)^k$.
Thus, we have the transition probability:
$$p_{ii}(n) = \sum_{k=1}^{\infty} P(\text{the k-th visit to } i \text{ occurs at time } n), \qquad n \ge 1,$$
which should equal the coefficient of $s^n$ in the expansion of $\sum_{k \ge 1} F_{ii}(s)^k$.
As the coefficient of $s^n$ for $n < k$ in $F_{ii}(s)^k$ is 0, each coefficient of this series is a finite sum, and we have
$$P_{ii}(s) = \sum_{k=0}^{\infty} F_{ii}(s)^k = \frac{1}{1 - F_{ii}(s)}, \qquad |s| < 1.$$
Thus, we have
$$\sum_{n=0}^{\infty} p_{ii}(n) = \lim_{s \to 1^-} \frac{1}{1 - F_{ii}(s)}.$$
If $i$ is recurrent, then $F_{ii}(1) = h_{ii} = 1$, and the previous sum diverges.
If $i$ is transient, then $F_{ii}(1) < 1$, and the previous sum is the sum of a geometric series, which converges.
- If $j$ is transient, then for all $i$,
$$\sum_{n=1}^{\infty} p_{ij}(n) < \infty.$$
Proof:
Suppose the condition fails for some $i$, i.e., $\sum_n p_{ij}(n) = \infty$. As $j$ is transient, let $\sum_n p_{jj}(n) = M < \infty$. Decomposing each path from $i$ to $j$ by its first visit to $j$, we have
$$p_{ij}(n) = \sum_{m=1}^{n} f_{ij}(m) \, p_{jj}(n - m).$$
Summing over $n$, we get
$$\sum_{n=1}^{\infty} p_{ij}(n) \le \left(\sum_{m=1}^{\infty} f_{ij}(m)\right) \left(\sum_{k=0}^{\infty} p_{jj}(k)\right) \le 1 \cdot (1 + M) < \infty,$$
which contradicts the assumption.
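The first criterion can be seen numerically on the simple random walk on the integers (covered later in this note), where $p_{00}(2n) = \binom{2n}{n} p^n q^n$: the partial sums grow without bound when $p = 1/2$ and converge otherwise. A sketch, computing in log space via `lgamma` so large $n$ neither overflows nor underflows:

```python
from math import lgamma, exp, log

# log of p_00(2n) = C(2n, n) p^n q^n for the simple random walk.
def log_p00(p, n):
    return lgamma(2 * n + 1) - 2 * lgamma(n + 1) + n * (log(p) + log(1 - p))

def partial_sum(p, terms):
    return sum(exp(log_p00(p, n)) for n in range(1, terms + 1))

s_sym = partial_sum(0.5, 10000)   # grows like sqrt(n): state 0 is recurrent
s_bias = partial_sum(0.8, 10000)  # converges to 1/sqrt(1 - 4pq) - 1: transient
print(round(s_bias, 4))  # 0.6667
```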
Finite State Space Must have Recurrent State
If the state space is finite, then there must be at least one recurrent state.
Proof:
If all states are transient, then by the second theorem in the previous section, we have $\sum_n p_{ij}(n) < \infty$ for all $i$ and $j$, thus $p_{ij}(n)$ tends to 0 as $n$ goes to infinity.
However, $\sum_{j \in S} p_{ij}(n)$ is constantly 1, as it is the sum of the transition probabilities from state $i$ to all states; since $S$ is finite, this sum would also tend to 0, which is a contradiction.
Intercommunication and Recurrence
If two states intercommunicate, then they are either both recurrent or both transient.
Proof:
Given two states $i$ and $j$ that inter-communicate.
Suppose $i$ is recurrent, then
$$\sum_{n=1}^{\infty} p_{ii}(n) = \infty.$$
As $i$ and $j$ inter-communicate, there exist $m$ such that $p_{ji}(m) > 0$ and $r$ such that $p_{ij}(r) > 0$.
Then we have
$$p_{jj}(m + n + r) \ge p_{ji}(m) \, p_{ii}(n) \, p_{ij}(r).$$
Thus,
$$\sum_{n} p_{jj}(m + n + r) \ge p_{ji}(m) \, p_{ij}(r) \sum_{n} p_{ii}(n),$$
which is infinite as the sum of $p_{ii}(n)$ is infinite.
Then, $j$ is recurrent.
Number of Visits to a State for a Given Path
The number of visits to a state $j$ for a given path is defined as
$$V_j = \sum_{n=1}^{\infty} \mathbf{1}\{X_n = j\}.$$
Number of Visits at Infinity
Starting from $j$: if $j$ is recurrent, then $V_j = \infty$ almost surely; if $j$ is transient, then $V_j$ is finite almost surely, following a geometric distribution with success probability $1 - h_{jj}$.
Decomposition of State Space
The state space can be decomposed into a set of irreducible closed recurrent classes and a set of transient classes.
Proof:
Given state space $S$, as the inter-communication relation is an equivalence relation, we have a partition of $S$ into inter-communicating classes. Let the classes be $C_1, C_2, \dots$.
As all states in an inter-communicating class have the same recurrence, let
$$R = \{k : \text{the states in } C_k \text{ are recurrent}\}.$$
Then, for all $k \in R$, $C_k$ is recurrent.
We then prove that $C_k$ is closed for all $k \in R$, which finishes the proof of this theorem.
If $C_k$ is not closed, then there exist $i \in C_k$ and $j \notin C_k$, and $m$ such that $p_{ij}(m) > 0$.
Also, as $i \in C_k$ with $k \in R$, $i$ is recurrent, and thus
$$h_{ii} = 1.$$
Also, in this case, $j$ cannot communicate with $i$, as if it did, $j$ would be in $C_k$; this means
$$p_{ji}(n) = 0 \text{ for all } n.$$
Then, taking a path from $i$ to $j$ that does not revisit $i$ (one exists since $i \to j$), we have
$$P(\text{never return to } i \mid X_0 = i) > 0,$$
i.e., $h_{ii} < 1$. This is a contradiction. Thus, $C_k$ is closed.
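The decomposition can be computed mechanically from the reachability relation. A sketch; the 4-state matrix is invented, with states 0, 1 forming a closed (hence recurrent) class and states 2, 3 a transient class that can leak into it:

```python
import numpy as np

# States 0,1 form a closed class; 2,3 inter-communicate but leak to 0,1.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.25, 0.25, 0.25, 0.25],
              [0.0, 0.0, 0.5, 0.5]])
n = len(P)
reach = (P > 0) | np.eye(n, dtype=bool)   # i -> j in 0 or 1 steps
for _ in range(n):                        # transitive closure
    reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
inter = reach & reach.T                   # inter-communication relation
classes = {frozenset(map(int, np.flatnonzero(inter[i]))) for i in range(n)}
closed = {c for c in classes
          if all(not reach[i, j] for i in c for j in range(n) if j not in c)}
print(sorted(map(sorted, classes)), sorted(map(sorted, closed)))
# [[0, 1], [2, 3]] [[0, 1]]
```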
Mean Recurrence Time
The mean recurrence time of a state $i$ is defined as
$$\mu_i = E[T_i \mid X_0 = i] = \sum_{n=1}^{\infty} n f_{ii}(n).$$
Mean Recurrence Time for a Transient State
The mean recurrence time for a transient state is infinite.
Positive and Null Recurrent State
A state $i$ is said to be positive recurrent if the mean recurrence time satisfies
$$\mu_i < \infty.$$
A state $i$ is said to be null recurrent if the mean recurrence time satisfies
$$\mu_i = \infty.$$
Identification of Positive and Null Recurrent State
A null recurrent state has similar properties to a transient state:
a recurrent state $i$ is null recurrent if and only if
$$\lim_{n \to \infty} p_{ii}(n) = 0.$$
State in the Same Irreducible Class have the Same Recurrence (Positive or Null)
If two states are in the same irreducible class, then they are either both positive recurrent or both null recurrent.
Finite State Space Must have Positive Recurrent State
If the state space is finite, then there must be at least one positive recurrent state.
Periodicity
Period of a State
The period of a state $i$ is defined as
$$d(i) = \gcd\{n \ge 1 : p_{ii}(n) > 0\},$$
where $\gcd$ is the greatest common divisor.
If $d(i)$ is $1$, then the state is said to be aperiodic.
Period of a Class
All states in the same irreducible class have the same period.
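The period can be computed directly from the definition by scanning powers of the transition matrix. A sketch; the cutoff `max_n` is a heuristic that suffices for small chains, and both example matrices are invented:

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_n=50):
    """gcd of all n <= max_n with p_ii(n) > 0 (returns 0 if no return seen)."""
    returns = []
    Pn = np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            returns.append(n)
    return reduce(gcd, returns, 0)

# Deterministic 3-cycle: every return to state 0 takes a multiple of 3 steps.
cycle = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 0., 0.]])
print(period(cycle, 0))  # 3

# A self-loop anywhere on the cycle creates returns of length 3 and 4,
# whose gcd is 1, so state 0 becomes aperiodic.
lazy = np.array([[0., 1., 0.],
                 [0., 0.5, 0.5],
                 [1., 0., 0.]])
print(period(lazy, 0))   # 1
```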
Stationary Distribution
Stationary Distribution
A distribution $\pi$ is said to be a stationary distribution of a Markov chain if
$$\pi = \pi P, \qquad \text{i.e.,} \quad \pi_j = \sum_{i} \pi_i \, p_{ij} \text{ for all } j,$$
where $P$ is the transition probability matrix.
Existence and Uniqueness of Stationary Distribution
If a Markov chain is irreducible and positive recurrent, then it has a unique stationary distribution.
And the stationary distribution is given by
$$\pi_i = \frac{1}{\mu_i},$$
where $\mu_i$ is the mean recurrence time of state $i$.
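Numerically, $\pi$ can be found by solving the singular system $(P^\top - I)\pi = 0$ with one equation replaced by the normalization $\sum_i \pi_i = 1$. A sketch with an invented two-state chain:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = len(P)
A = P.T - np.eye(n)
A[-1, :] = 1.0               # replace last equation with sum(pi) = 1
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)   # stationary distribution [5/6, 1/6]; mean recurrence times 1/pi
```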
Ergodic Markov Chain
A Markov chain is said to be ergodic if it is irreducible, positive recurrent and aperiodic.
Limiting Distribution
Given an ergodic Markov chain and any initial distribution vector $\lambda$,
the limit
$$\lim_{n \to \infty} \lambda P^n = \pi$$
holds, where $\pi$ is the stationary distribution. In particular, $p_{ij}(n) \to \pi_j$ for all $i$ and $j$.
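The limit can be checked by iterating the distribution: for an ergodic chain, $\lambda P^n$ approaches the same vector from every starting distribution. A sketch with an invented two-state chain whose stationary distribution is $[5/6, 1/6]$:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
starts = (np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.3, 0.7]))
for lam in starts:
    dist = lam @ np.linalg.matrix_power(P, 100)
    print(dist)   # every start converges to the stationary [5/6, 1/6]
```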
Discrete Renewal Process
Define a sequence $Z_1, Z_2, \dots$ of independent, identically distributed random variables with positive integer state space.
Define a stochastic process $\{T_n\}$ as
$$T_n = \sum_{i=1}^{n} Z_i,$$
where $T_0 = 0$.
Then, $\{T_n\}$ is a discrete renewal process: $T_n$ is the time of the nth renewal and the $Z_i$ are the inter-arrival times.
Other Ways to Define Discrete Renewal Process
This first definition is given in terms of the inter-arrival times $Z_i$. We can instead define the process directly by the renewal times $T_n$, which are not independent.
We can also define the process by assigning to each time $n$ a renewal probability $v_n$, which is the probability that the process is renewed at time $n$.
Or we can define the process by the number of renewals $N(n)$, which is the number of renewals in the time interval $[1, n]$.
Mean of Number of Renewals
If we are given the renewal probabilities $v_t$, then the mean number of renewals in the time interval $[1, n]$ is
$$E[N(n)] = \sum_{t=1}^{n} v_t.$$
Renewal Process and Markov Chain
If we are given a Markov chain with state space $S$, set $X_0 = i$, and say the process is renewed whenever it hits state $i$, then the process is a discrete renewal process.
And the renewal probability is given by
$$v_n = p_{ii}(n).$$
Inter-renewal Distribution of Discrete Renewal Process Defined by Markov Chain
The inter-renewal distribution of the discrete renewal process defined by a Markov chain is given by
$$P(Z = n) = f_{ii}(n),$$
which is the first-visit probability.
The Limit Renewal Theorem of Ergodic Markov Chain
Given an ergodic Markov chain with state space $S$, set $X_0 = i$, say the process is renewed whenever it hits state $i$, and let the number of renewals in the time interval $[1, n]$ be $N(n)$; then
$$\lim_{n \to \infty} \frac{E[N(n)]}{n} = \frac{1}{\mu_i},$$
where $\mu_i$ is the mean recurrence time of state $i$.
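This can be checked numerically: with $v_t = p_{ii}(t)$ we have $E[N(n)] = \sum_t p_{ii}(t)$, and $E[N(n)]/n$ should approach $1/\mu_i = \pi_i$. A sketch with an invented two-state chain for which $\pi_0 = 5/6$:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
n = 500
Pt = np.eye(2)
mean_N = 0.0                 # E[N(n)] = sum_t v_t with v_t = p_00(t)
for _ in range(n):
    Pt = Pt @ P
    mean_N += Pt[0, 0]
print(mean_N / n)            # approaches pi_0 = 1/mu_0 = 5/6 ~ 0.8333
```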
Branching Process
Offspring Distribution
An offspring distribution is a probability distribution of the number of offspring of each individual.
Discrete Branching Process
A discrete branching process with offspring distribution $\{p_k\}$ is a stochastic process $\{Z_n\}$ defined as
$$Z_0 = 1, \qquad Z_{n+1} = \sum_{k=1}^{Z_n} X_{n,k},$$
where the $X_{n,k}$ are independent random variables, each with the offspring distribution.
And $Z_n$ is called the size of the nth generation.
The Probability Generating Function of Discrete Branching Process
The probability generating function of a discrete branching process is defined as
$$G(s) = \sum_{k=0}^{\infty} p_k s^k,$$
where $\{p_k\}$ is the offspring distribution. The generating function $G_n$ of $Z_n$ satisfies $G_{n+1}(s) = G(G_n(s))$.
Mean Generation Size
If we are starting from $Z_0 = 1$, then the mean generation size of the nth generation is given by
$$E[Z_n] = \mu^n,$$
where $\mu = G'(1)$ is the mean of the offspring distribution.
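The identity $E[Z_n] = \mu^n$ can be verified by composing the generating function numerically, since $G_{n+1} = G \circ G_n$ and $G_n'(1) = E[Z_n]$. The offspring distribution below (0, 1, or 2 children with probabilities 1/4, 1/4, 1/2) is a made-up example:

```python
import numpy as np

# pgf coefficients of the assumed offspring distribution: P(X = k) = g[k]
g = np.array([0.25, 0.25, 0.5])

def compose(outer, inner):
    """Coefficients of outer(inner(s)), via Horner's scheme."""
    result = np.array([outer[-1]])
    for c in outer[-2::-1]:
        result = np.polynomial.polynomial.polymul(result, inner)
        result[0] += c
    return result

def pgf_mean(coeffs):
    return sum(k * c for k, c in enumerate(coeffs))   # G'(1)

gn = np.array([0.0, 1.0])        # G_0(s) = s, since Z_0 = 1
for _ in range(3):
    gn = compose(g, gn)          # G_{n+1} = G o G_n
print(pgf_mean(gn))              # E[Z_3] = mu**3 = 1.25**3 = 1.953125
```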
Total Progeny
The total progeny of the branching process is defined as the total number of individuals that ever exist in the process:
$$T = \sum_{n=0}^{\infty} Z_n.$$
Probability Generating Function of Total Progeny
Let $H(s)$ be the probability generating function of the total progeny $T$, then
$$H(s) = s \, G(H(s)).$$
Extinction
We say that the branching process is extinct if $Z_n = 0$ for some $n$.
Probability of Extinction
The probability of extinction is given by the hitting probability of state $0$ starting from state $1$, viewing $\{Z_n\}$ as a Markov chain on the generation sizes.
Calculation of Probability of Extinction
The probability of extinction is the least non-negative solution of the equation
$$s = G(s).$$
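The least non-negative root can be found by fixed-point iteration, since $s \mapsto G(s)$ iterated from $s = 0$ increases monotonically to the smallest fixed point. The offspring distribution (0, 1, 2 children with probabilities 1/4, 1/4, 1/2) is a made-up example:

```python
# Extinction probability as the least non-negative fixed point of s = G(s).
# Assumed offspring distribution: 0, 1, 2 children w.p. 1/4, 1/4, 1/2.
def G(s):
    return 0.25 + 0.25 * s + 0.5 * s ** 2

s = 0.0
for _ in range(200):
    s = G(s)                 # monotone convergence to the smallest root
# The roots of s = G(s) here are 0.5 and 1; since the mean offspring
# G'(1) = 1.25 > 1, the extinction probability is the smaller root, 0.5.
print(s)
```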
Random Walk
Simple Random Walk
A simple random walk is a stochastic process defined as
$$S_n = S_0 + \sum_{i=1}^{n} X_i,$$
where $\{X_i\}$ is a sequence of independent and identically distributed random variables.
In this note, we focus only on $X_i$ being a Bernoulli-type random variable with
$$P(X_i = 1) = p, \qquad P(X_i = -1) = q = 1 - p.$$
Symmetric Random Walk
If $p = q = \frac{1}{2}$, then the random walk is said to be symmetric.
Number of Possible Path
The number of possible paths of a simple random walk from $(0, a)$ to $(n, b)$ is denoted as $N_n(a, b)$. Counting up-steps $u$ and down-steps $d$ with $u + d = n$ and $u - d = b - a$ gives
$$N_n(a, b) = \binom{n}{\frac{n + b - a}{2}}$$
when $n + b - a$ is even (and $0$ otherwise).
Crossing Condition
Let the number $N_n^0(a, b)$ denote the number of possible paths of a simple random walk from $(0, a)$ to $(n, b)$ that are at $0$ at some time $0 < k \le n$.
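The quantity $N_n^0(a, b)$ can be counted by the reflection principle: for $a, b > 0$, paths from $a$ to $b$ that touch $0$ correspond bijectively to all paths from $-a$ to $b$, so $N_n^0(a, b) = N_n(-a, b)$. A brute-force check of this identity; the example values $n = 6$, $a = b = 2$ are arbitrary:

```python
from itertools import product
from math import comb

# Path counts: an n-step +-1 path from a to b has u = (n + b - a)/2 up-steps.
def N(n, a, b):
    k = n + b - a
    return comb(n, k // 2) if k % 2 == 0 and 0 <= k <= 2 * n else 0

def touches_zero(a, steps):
    pos = a
    for step in steps:
        pos += step
        if pos == 0:
            return True
    return False

# Enumerate all 2^n step sequences and count those from a to b touching 0;
# the reflection principle says this equals N(n, -a, b).
n, a, b = 6, 2, 2
crossing = sum(1 for steps in product([1, -1], repeat=n)
               if a + sum(steps) == b and touches_zero(a, steps))
print(crossing, N(n, -a, b))   # both equal 6
```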