Definition: Unlike a finite Markov chain, a countable Markov chain can have a countably infinite state space $S$.

Conceptually, this is quite similar; however, we now have situations where there are no boundaries.

Consider an infinite random walk on $\mathbb{Z}$: $p(x, x+1) = p(x, x-1) = \tfrac{1}{2}$.
This is irreducible, yet $p_n(x, y) \to 0$ as $n \to \infty$.
This means the conditions for invariant measures are more complicated.
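To see the decay concretely, here is a quick numerical sketch (Python is an assumption, not part of the notes) using the exact return probabilities $p_{2n}(0,0) = \binom{2n}{n} 2^{-2n}$ of this walk:

```python
from math import comb

def p2n(n):
    """P(simple random walk on Z is back at 0 after 2n steps)."""
    return comb(2 * n, n) / 4 ** n

for n in (10, 100, 1000):
    print(n, p2n(n))   # decays to 0: no state retains positive mass
```

So no probability distribution can be the limit of $\phi_n$ here, even though every state is reachable from every other.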

Definition: Instead of a transition matrix, we now have a transition function $p(x, y)$ where $p(x, y) \ge 0$ and $\sum_{y \in S} p(x, y) = 1$. We label the initial probabilities $\phi_0(x) = P(X_0 = x)$, and the probabilities at time $n$, $\phi_n(x) = P(X_n = x)$. We also denote $p_n(x, y) = P(X_n = y \mid X_0 = x)$.

Theorem (Chapman-Kolmogorov): $p_{n+m}(x, y) = \sum_{z \in S} p_n(x, z) \, p_m(z, y)$.
When we considered a transition matrix, the same result occurred as a natural consequence of linear algebra, since $P^{n+m} = P^n P^m$ means $(P^{n+m})_{xy} = \sum_z (P^n)_{xz} (P^m)_{zy}$.
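On any finite chain this really is just matrix multiplication; a quick check (Python with NumPy assumed; the random 5-state chain is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)   # normalize rows: a transition matrix

n, m = 3, 4
lhs = np.linalg.matrix_power(P, n + m)   # p_{n+m}(x, y)
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)
print(np.allclose(lhs, rhs))             # True
```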

Definition: Now an invariant measure is a probability distribution $\pi$ satisfying $\pi(y) = \sum_{x \in S} \pi(x) \, p(x, y)$.

If $\pi(x) \, p(x, y) = \pi(y) \, p(y, x)$ for all $x, y$, where $\sum_x \pi(x) = 1$, then $\pi$ is the invariant measure and the chain is reversible, by the same notion of reversibility we considered before.
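A small sketch of this detailed-balance condition (Python with NumPy assumed; the birth-death chain is an illustrative choice, not from the notes): for $p(x, x+1) = p$, $p(x, x-1) = 1 - p$ on $\{0, \dots, N\}$ with holding at the boundaries, $\pi(x) \propto \left(\frac{p}{1-p}\right)^x$ satisfies detailed balance and hence invariance:

```python
import numpy as np

N, p = 10, 0.3
q = 1 - p
P = np.zeros((N + 1, N + 1))
for x in range(N + 1):
    if x < N:
        P[x, x + 1] = p
    if x > 0:
        P[x, x - 1] = q
    P[x, x] = 1 - P[x].sum()        # hold remaining mass at the boundaries

pi = (p / q) ** np.arange(N + 1)    # from pi(x) p = pi(x+1) q
pi /= pi.sum()

print(np.allclose(pi[:-1] * p, pi[1:] * q))  # detailed balance: True
print(np.allclose(pi @ P, pi))               # invariance: True
```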

We can also define communication classes and periods the same way as before.

Example: Find the invariant measure of the chain , .


Definition: $x$ is a recurrent state if $P(T_x < \infty \mid X_0 = x) = 1$, where $T_x = \min\{n \ge 1 : X_n = x\}$ is the first return time. $x$ is transient otherwise.

Theorem: $x$ is recurrent $\iff \sum_{n=1}^{\infty} p_n(x, x) = \infty$.

Example: Consider the simple random walk on $\mathbb{Z}$. The chain can only return to $0$ at even times, with

$p_{2n}(0, 0) = \binom{2n}{n} 2^{-2n}$

We use Stirling’s formula to see that $p_{2n}(0, 0) \approx \frac{1}{\sqrt{\pi n}}$.
This approaches $0$ only like $n^{-1/2}$, so $\sum_n p_{2n}(0, 0) = \infty$ and the chain is recurrent.
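A numerical sanity check of the Stirling estimate and of the divergent sum (a Python sketch, not part of the notes):

```python
from math import comb, sqrt, pi

def p2n(n):
    """Exact p_{2n}(0,0) for simple random walk on Z."""
    return comb(2 * n, n) / 4 ** n

# Stirling: p_{2n}(0,0) ~ 1/sqrt(pi*n), so this ratio should be near 1
ratio = p2n(2000) * sqrt(pi * 2000)

# partial sums grow like 2*sqrt(N/pi), i.e. without bound
s100 = sum(p2n(n) for n in range(1, 101))
s400 = sum(p2n(n) for n in range(1, 401))
print(ratio, s100, s400)
```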

Now what if we are in $\mathbb{Z}^d$?
$p(x, x \pm e_i) = \frac{1}{2d}$, $i = 1, \dots, d$.

By the law of large numbers, about $\frac{2n}{d}$ of our $2n$ steps will be in each of the $d$ components. We need an even number of steps in each component to be able to return to $0$. This occurs with probability about $2^{-(d-1)}$, since each is independent, except for the last, which will be even if the first $d - 1$ are even.

Using Stirling’s formula again, we derive
$p_{2n}(0, 0) \approx c_d \, n^{-d/2}$, so our chain is recurrent if $d = 1$ or $d = 2$ but is transient otherwise.
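The $n^{-d/2}$ decay can be checked exactly for small $n$. A minimal sketch (Python; the dynamic-programming approach is an illustration, not from the notes) that computes $p_n(0,0)$ by propagating the distribution step by step:

```python
from collections import defaultdict
from math import comb

def return_probs(d, steps):
    """p_n(0,0) for n = 0..steps, simple random walk on Z^d, by exact DP."""
    origin = (0,) * d
    moves = [tuple((s if j == i else 0) for j in range(d))
             for i in range(d) for s in (1, -1)]
    dist, probs = {origin: 1.0}, [1.0]
    for _ in range(steps):
        new = defaultdict(float)
        for x, mass in dist.items():
            for mv in moves:
                new[tuple(a + b for a, b in zip(x, mv))] += mass / (2 * d)
        dist = new
        probs.append(dist.get(origin, 0.0))
    return probs

p1, p2, p3 = (return_probs(d, 12) for d in (1, 2, 3))
print(p1[12], p2[12], p3[12])   # decay like n^{-1/2}, n^{-1}, n^{-3/2}
```

For $d = 2$ there is even a closed form, $p_{2n}(0,0) = \left(\binom{2n}{n}2^{-2n}\right)^2$, which the DP reproduces.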

Here’s another method for differentiating recurrent and transient chains.

Consider a fixed state $z$ and define $\alpha(x) = P(X_n = z \text{ for some } n \ge 0 \mid X_0 = x)$, which satisfies $\alpha(z) = 1$ and $\alpha(x) = \sum_y p(x, y) \, \alpha(y)$ for $x \ne z$.

If the chain is recurrent, then $\alpha(x) = 1$ for all $x$.
If the chain is transient, then $\alpha(z) = 1$ but $\inf_x \alpha(x) = 0$, and we’ll be able to find a solution with these properties.

Example: Random walk on $\{0, 1, 2, \dots\}$ with
$p(x, x+1) = p$, $p(x, x-1) = 1 - p$ for $x \ge 1$, and $p(0, 1) = p$, $p(0, 0) = 1 - p$.
Fix $z = 0$. We use our standard methods to get the general solution of $\alpha(x) = p \, \alpha(x+1) + (1 - p) \, \alpha(x-1)$:

$\alpha(x) = c_1 + c_2 \left(\tfrac{1-p}{p}\right)^x$ if $p \ne \tfrac{1}{2}$, and $\alpha(x) = c_1 + c_2 x$ if $p = \tfrac{1}{2}$.

Since $\alpha(0) = 1$, we have $c_1 + c_2 = 1$, and we also need $0 \le \alpha(x) \le 1$ for all $x$.

For $p < \tfrac{1}{2}$, the only such solution is $\alpha \equiv 1$, so the chain is recurrent.
For $p = \tfrac{1}{2}$, the only such solution is again $\alpha \equiv 1$, so the chain is recurrent.
For $p > \tfrac{1}{2}$, $\alpha(x) = \left(\tfrac{1-p}{p}\right)^x$ works, with $\inf_x \alpha(x) = 0$, so the chain is transient.
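The transient case can be verified numerically; a small sketch (Python assumed) checking that $\alpha(x) = \left(\frac{1-p}{p}\right)^x$ solves the harmonic equation for a sample $p > \frac{1}{2}$ and vanishes at infinity:

```python
p = 0.7                              # p > 1/2: drift away from 0
q = 1 - p

def alpha(x):
    """Candidate probability of ever reaching 0 from x."""
    return (q / p) ** x

# alpha solves alpha(x) = p*alpha(x+1) + q*alpha(x-1) for x >= 1
for x in range(1, 50):
    assert abs(alpha(x) - (p * alpha(x + 1) + q * alpha(x - 1))) < 1e-12

print(alpha(0), alpha(10), alpha(100))   # equals 1 at 0, tends to 0
```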

Positive and Null Recurrence

A recurrent chain does not necessarily have a limiting distribution: we can have $\lim_{n \to \infty} p_n(x, y) = 0$ for each $y$.

Certainly, if $y$ is transient, then $p_n(x, y) \to 0$. But this can also be the case if $y$ is recurrent. For example, take the random walk on $\mathbb{Z}$.

Definition: We call a chain null recurrent if it is recurrent but $\lim_{n \to \infty} p_n(x, y) = 0$; otherwise a recurrent chain is positive recurrent.

Positive recurrent chains behave similarly to finite Markov chains, where (for irreducible, aperiodic chains) we have $p_n(x, y) \to \pi(y)$.

We can also define $\pi(x) = \dfrac{1}{E[T_x \mid X_0 = x]}$, where $T_x = \min\{n \ge 1 : X_n = x\}$ is the first return time.
If the chain is null recurrent, then $T_x < \infty$ with probability $1$; however, $E[T_x \mid X_0 = x] = \infty$, so this equation no longer gives a probability distribution (but it does make sense when the chain is transient, since then both sides are $0$).

We can find an invariant probability distribution $\iff$ the chain is positive recurrent.
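A Monte Carlo sketch of the relation $\pi(x) = 1/E[T_x]$ (Python; the specific chain and parameters are assumptions, not from the notes): for the reflecting walk on $\{0, 1, 2, \dots\}$ with $p(x, x+1) = p = 0.25$, detailed balance gives $\pi(0) = 1 - \frac{p}{1-p} = \frac{2}{3}$, so the mean return time to $0$ should be $1.5$:

```python
import random

random.seed(1)
p = 0.25                        # downward drift: positive recurrent
trials, total = 20000, 0
for _ in range(trials):
    x, t = 0, 0
    while True:
        x = x + 1 if random.random() < p else max(x - 1, 0)
        t += 1
        if x == 0:
            break               # returned to 0
    total += t
est = total / trials
print(est)                      # Monte Carlo estimate of E[T_0], about 1.5
```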

Branching Process

Let’s look at a more in-depth example, considering a population of $X_n$ individuals at times $n = 0, 1, 2, \dots$

Individuals reproduce independently, and the probability an individual produces $k$ offspring is $p_k$.

The mean number of offspring produced by an individual is $\mu = \sum_k k \, p_k$.


By conditioning, $E[X_{n+1}] = \mu \, E[X_n]$, hence $E[X_n] = \mu^n E[X_0]$. So if $\mu > 1$ then the mean number of offspring goes to $\infty$.

How do we find the probability that the population dies out? This might occur even if $\mu > 1$, and the fact that the expected value goes to $\infty$ does not really help us.

We first assume $X_0 = 1$, and let $a_n = P(X_n = 0)$.

Conditioning on the first generation, each of the $k$ offspring starts an independent copy of the process, so

$a_{n+1} = \sum_k p_k \, P(X_{n+1} = 0 \mid X_1 = k) = \sum_k p_k \, a_n^k$

This is the generating function of a random variable! We label this $\phi(s) = \sum_k p_k s^k$, yielding $a_{n+1} = \phi(a_n)$.

$s = 1$ is a solution of $\phi(s) = s$, but not the only one.
We can derive $\phi'(1) = \sum_k k \, p_k = \mu$, which will help us solve.

Inductively, we can see that $a_n \to a$, where $a$ is the smallest positive root of $\phi(s) = s$ (because $\phi$ is an increasing function).
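The iteration $a_{n+1} = \phi(a_n)$ can be run directly. A sketch (Python; the offspring distribution $p_0 = \frac14$, $p_1 = \frac14$, $p_2 = \frac12$ is a made-up example, not from the notes), for which $\phi(s) = s$ has roots $\frac12$ and $1$:

```python
# assumed offspring distribution: p0 = 1/4, p1 = 1/4, p2 = 1/2 (mu = 5/4 > 1)
def phi(s):
    return 0.25 + 0.25 * s + 0.5 * s ** 2

a = 0.0                  # a_0 = P(X_0 = 0) = 0
for _ in range(100):
    a = phi(a)           # a_{n+1} = phi(a_n), increasing toward the fixed point
print(a)                 # smallest positive root of phi(s) = s, namely 1/2
```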

Example: has solutions , and the extinction probability is

If $\mu < 1$: trivially $P(X_n \ge 1) \le E[X_n] = \mu^n \to 0$, so the population dies out with probability $1$.

If $\mu = 1$: $\phi'(s) < \phi'(1) = 1$ for $s < 1$ and therefore $\phi(s) > s$ for $s < 1$, so the smallest root is $a = 1$, and extinction is certain for any initial population size.

So the expected population size stays at $E[X_0]$, while the extinction probability is $1$. This means that the conditional population size, given survival, increases over time.

If $\mu > 1$: $\phi'(1) = \mu > 1$ and therefore there is some $s_0 < 1$ with $\phi(s_0) < s_0$ (since $\phi(s) \approx 1 + \mu(s - 1) < s$ just below $1$).

We also have $\phi(0) = p_0 \ge 0$, so there must be some $a \in [0, s_0)$ with $\phi(a) = a$. $\phi''(s) \ge 0$ for $s \ge 0$, so the curve is convex and there is only one solution as described.

So with positive probability, the population lives forever
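To illustrate, a Monte Carlo sketch (Python; the offspring distribution $p_0 = \frac14$, $p_1 = \frac14$, $p_2 = \frac12$ is an assumed example whose extinction probability is $a = \frac12$, so the observed survival fraction should be near $\frac12$):

```python
import random

random.seed(7)

def offspring():
    # assumed distribution: 0, 1, 2 children with probabilities 1/4, 1/4, 1/2
    return random.choices((0, 1, 2), weights=(1, 1, 2))[0]

def survives(generations=20, cap=1000):
    """Run one lineage; treat reaching `cap` individuals as certain survival."""
    x = 1
    for _ in range(generations):
        if x == 0:
            return False
        if x >= cap:
            return True      # supercritical: extinction from here is negligible
        x = sum(offspring() for _ in range(x))
    return x > 0

trials = 4000
frac = sum(survives() for _ in range(trials)) / trials
print(frac)                  # close to 1 - a = 1/2
```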