INTERNATIONAL CENTRE FOR MECHANICAL SCIENCES
COURSES AND LECTURES - No. 110
GIUSEPPE LONGO
UNIVERSITY OF TRIESTE
CODING FOR MARKOV SOURCES
COURSE HELD AT THE DEPARTMENT
FOR AUTOMATION AND INFORMATION
JUNE 1971
UDINE 1971
SPRINGER-VERLAG WIEN GMBH
This work is subject to copyright.
All rights are reserved,
whether the whole or part of the material is concerned,
specifically those of translation, reprinting, re-use of illustrations,
broadcasting, reproduction by photocopying machine
or similar means, and storage in data banks.
© 1972 by Springer-VerlagWien
Originally published by Springer-Verlag Wien New York in 1972
ISBN 978-3-211-81154-2 ISBN 978-3-7091-2961-6 (eBook)
DOI 10.1007/978-3-7091-2961-6
PREFACE
This course was given in June 1971 at the CISM in Udine. The notes, however, cover a wider material, since they also include the new results which I obtained later and presented at the Second International Symposium on Information Theory held at Tsahkadsor in September 1971.

The last Chapter is based entirely on those results and on further developments which will appear in the Proceedings of the Sixth Prague Conference on Information Theory, in September 1971.

I wish to thank the Italian Consiglio Nazionale delle Ricerche, Comitato Nazionale per le Scienze Matematiche, for supporting the entire research as well as my travel to Tsahkadsor.

Udine, June 1971
Chapter 0

Preliminaries

0.1. Introduction.
In this chapter we wish to summarize some fundamentals about finite Markov chains, which will be often used in the sequel.
Consider any system which can be found in $k$ different states, $\sigma_1, \sigma_2, \dots, \sigma_k$, with a probability depending on the past history of the system. Assume that the system may change its current state only at discrete times:

$$\dots,\ t_{-2},\ t_{-1},\ t_0,\ t_1,\ t_2,\ \dots \tag{0.1}$$

The past history of the system is then the description of the particular sequence of states it has taken before the current instant, say $t_0$:

$$\text{at time } t_{-n} \text{ the system was in state } \sigma_{i_{-n}},\ \dots,\ \text{at time } t_{-1} \text{ the system was in state } \sigma_{i_{-1}} \tag{0.2}$$
Once the past history of the system is given, we assume there
is a well-defined probability for the system to be found at $t_0$ in any of its possible states:

$$p_i = \mathrm{Prob}\{\text{the system is in state } \sigma_i \text{ at } t_0 \mid \text{its history}\} \qquad (1 \le i \le k) \tag{0.3}$$

Of course the following are true:

$$p_i \ge 0, \qquad \sum_{i=1}^{k} p_i = 1 \qquad (1 \le i \le k) \tag{0.4}$$
It may happen that the influence of the past history on the probability for the system to be found in state $\sigma_i$ has a finite range, in the sense that only the last $n$ states assumed, say $\sigma_{i_{-n}}, \sigma_{i_{-n+1}}, \dots, \sigma_{i_{-1}}$, influence the $p_i$ of (0.3). In other words, if two past histories differ at most in some of the states assumed before time $t_{-n}$, then the corresponding conditional probabilities $p_i$ coincide. In particular, if $n = 1$, we say that our system has a Markov behaviour: its past history influences the probability of finding it in state $\sigma_j$ at time $t_0$ only through the state $\sigma_i$ it occupied at time $t_{-1}$. The history of such a system is called a "Markov chain", and any particular history is a realization of the Markov chain.
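As an illustration, a realization can be drawn by sampling each next state from a distribution that depends only on the current state. The sketch below uses plain Python with a hypothetical two-state matrix `PI` (these conditional probabilities are introduced formally as "transition probabilities" just below); states are indexed 0 and 1 rather than $\sigma_1, \sigma_2$.

```python
import random

# Hypothetical 2-state matrix: row i gives the conditional probabilities
# of the next state, given that the current state is i.
PI = [
    [0.7, 0.3],
    [0.4, 0.6],
]

def next_state(row, rng):
    """Sample the next state from one row of conditional probabilities."""
    r, cum = rng.random(), 0.0
    for j, p in enumerate(row):
        cum += p
        if r < cum:
            return j
    return len(row) - 1  # guard against floating-point round-off

def realization(matrix, start, steps, rng):
    """One realization of the chain: each next state depends only on the
    current state, not on the earlier history."""
    path = [start]
    for _ in range(steps):
        path.append(next_state(matrix[path[-1]], rng))
    return path

path = realization(PI, start=0, steps=10, rng=random.Random(0))
```

Fixing the random seed makes the sampled realization reproducible; any other seed gives another realization of the same chain.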
In the sequel only the conditional probabilities of a Markov chain, called "transition probabilities", will be important; these probabilities are of course defined as follows:
$$p_{ij} = \mathrm{Prob}\{\text{system in state } \sigma_j \text{ at } t_0 \mid \text{system in state } \sigma_i \text{ at } t_{-1}\} \tag{0.5}$$

and apparently they depend on the choice of the current time $t_0$. If this dependence actually does not exist, then we speak of a "stationary Markov chain" and (0.5) becomes:

$$p_{ij} = \mathrm{Prob}\{\text{system in state } \sigma_j \text{ at } t_n \mid \text{system in state } \sigma_i \text{ at } t_{n-1}\}, \qquad n \text{ any integer} \tag{0.6}$$
Since in (0.6) both indexes $i$ and $j$ range between 1 and $k$, there are $k^2$ transition probabilities, which can be arranged in a matrix $\Pi$:

$$\Pi = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1k} \\ p_{21} & p_{22} & \cdots & p_{2k} \\ \vdots & \vdots & & \vdots \\ p_{k1} & p_{k2} & \cdots & p_{kk} \end{pmatrix} \tag{0.7}$$

The matrix $\Pi$ above is called "the transition matrix" of the finite stationary Markov chain we are considering. Of course

$$p_{ij} \ge 0 \qquad (1 \le i, j \le k) \tag{0.8}$$

and since from any state $\sigma_i$ a transition to some state $\sigma_j$ is necessary, also:

$$\sum_{j=1}^{k} p_{ij} = 1 \qquad (1 \le i \le k) \tag{0.9}$$

Properties (0.8) and (0.9) are briefly accounted for by saying that $\Pi$ is a "stochastic matrix".
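Properties (0.8) and (0.9) are easy to check numerically. The sketch below uses plain Python with a hypothetical 3-state matrix; `is_stochastic` is an ad-hoc helper, not a standard routine.

```python
# Hypothetical 3-state transition matrix used only for illustration.
PI = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def is_stochastic(matrix, tol=1e-12):
    """True iff every entry is non-negative (0.8) and every row sums
    to 1 (0.9), up to a small floating-point tolerance."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) <= tol
        for row in matrix
    )
```

The tolerance is needed because row sums such as 0.5 + 0.3 + 0.2 are not exactly 1 in binary floating point.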
If we think of starting the chain at a given initial time, $t_0$ say, we must add a kind of "initial probability distribution" ruling the initial state; let $\Pi_0$ be this initial distribution, and put

$$\Pi_0 = \left(p_1^{(0)},\ p_2^{(0)},\ \dots,\ p_k^{(0)}\right) \tag{0.10}$$

Remark that in particular $\Pi_0$ can be taken as a degenerate p.d.(*), i.e. it can contain $k - 1$ zeroes and one 1.
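To illustrate how $\Pi_0$ and $\Pi$ interact: the p.d. after one step is obtained by multiplying the current p.d. by $\Pi$, rows by columns. The sketch below, with a hypothetical two-state matrix and an ad-hoc `step` helper, starts from a degenerate p.d. and applies three steps.

```python
# Hypothetical 2-state transition matrix used only for illustration.
PI = [
    [0.5, 0.5],
    [0.2, 0.8],
]

def step(dist, matrix):
    """One step of the chain: (dist * matrix)_j = sum_i dist_i * p_ij."""
    k = len(matrix)
    return [sum(dist[i] * matrix[i][j] for i in range(k)) for j in range(k)]

PI0 = [1.0, 0.0]      # degenerate p.d.: the chain surely starts in state 1
dist = PI0
for _ in range(3):    # distribution after three steps
    dist = step(dist, PI)
```

Each application of `step` preserves the total probability 1, since every row of `PI` sums to 1.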
0.2. Higher order transitions.

Let $p_{ij}^{(n)}$ be the probability of finding the Markov chain in state $\sigma_j$ at time $t_m$, given it was in state $\sigma_i$ at time $t_{m-n}$ ($m$, $n$ positive integers). By (0.6) we have for $n = 1$

$$p_{ij}^{(1)} = p_{ij} \tag{0.11}$$

while for $n = 2$

$$p_{ij}^{(2)} = \sum_{t=1}^{k} p_{it}\, p_{tj} \tag{0.12}$$

and in general

$$p_{ij}^{(n+1)} = \sum_{t=1}^{k} p_{it}^{(n)}\, p_{tj} \tag{0.13}$$

(*) In the sequel p.d. will mean probability distribution.
Expression (0.13) can be further generalized as follows:

$$p_{ij}^{(n+m)} = \sum_{t=1}^{k} p_{it}^{(n)}\, p_{tj}^{(m)} \tag{0.14}$$

If $p_{ij}^{(n)}$ is considered as the $(i,j)$-th entry of a square matrix $\Pi^{(n)}$ of type $k$, equations (0.11) to (0.14) can be rewritten in terms of the successive powers of the matrix $\Pi$, the product being performed as usual rows by columns:

$$\Pi^{(1)} = \Pi \tag{0.11'}$$

$$\Pi^{(2)} = \Pi^2 \tag{0.12'}$$

$$\Pi^{(n+1)} = \Pi^{n} \cdot \Pi \tag{0.13'}$$

$$\Pi^{(n+m)} = \Pi^{n} \cdot \Pi^{m} \tag{0.14'}$$

We remark explicitly that for any positive integer $n$, $\Pi^n$ is a stochastic matrix if $\Pi$ is.
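These identities can be checked numerically. The helpers below implement the rows-by-columns product and the $n$-th power used in (0.11')-(0.14'); the 3-state matrix is a hypothetical example, not one from the text.

```python
# Hypothetical 3-state stochastic matrix used only for illustration.
PI = [
    [0.9, 0.1, 0.0],
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
]

def mat_mul(a, b):
    """Rows-by-columns product of two k x k matrices."""
    k = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def mat_pow(m, n):
    """n-th power of m, for n >= 1."""
    out = m
    for _ in range(n - 1):
        out = mat_mul(out, m)
    return out
```

With these helpers one can verify, up to round-off, that `mat_pow(PI, n + m)` equals `mat_mul(mat_pow(PI, n), mat_pow(PI, m))` as in (0.14'), and that every power has row sums equal to 1, i.e. remains stochastic.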
0.3. Closed state sets.

Let $\mathcal{A} = \{\sigma_1, \sigma_2, \dots, \sigma_k\}$ be the set of all the states of a stationary Markov chain. Then a set $\mathcal{C} \subset \mathcal{A}$ is called "closed" if, starting from a state in $\mathcal{C}$, no state outside $\mathcal{C}$ can ever be reached. Of course $\mathcal{A}$ itself is a closed state set.

Given any $\mathcal{B} \subset \mathcal{A}$, the smallest closed set
containing $\mathcal{B}$ is called the "closure" of $\mathcal{B}$.

A single state $\sigma_i$ is said to be "absorbing" if $\{\sigma_i\}$ is a closed set.
From the above definitions and from Section 0.2 it is easily seen that:

- $\mathcal{C}$ is closed iff $p_{ij}^{(n)} = 0$ for $\sigma_i \in \mathcal{C}$, $\sigma_j \notin \mathcal{C}$, and each $n = 1, 2, \dots$

- By eliminating from $\Pi^n$ the rows and columns corresponding to the states outside $\mathcal{C}$ one gets a new stochastic matrix ($n = 1, 2, \dots$), whose composition laws are again (0.11) - (0.14).

- $\sigma_i$ is absorbing iff $p_{ii} = 1$.

If $\mathcal{C}$ is a closed state set and $\mathcal{C} \neq \mathcal{A}$, the corresponding chain is said to be "reducible". Once the chain enters $\mathcal{C}$ it never gets out.
Conversely, a chain is called "irreducible" if no set $\mathcal{C} \subsetneq \mathcal{A}$ is closed. As a consequence:

- A chain is irreducible iff every state can be reached from every state.

The complement $\mathcal{A} - \mathcal{C}$ of a closed set $\mathcal{C}$ is not necessarily closed. A closed set $\mathcal{C}$ can be further reducible, i.e. it may contain smaller closed sets $\mathcal{C}', \mathcal{C}'', \dots$
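The reachability criterion behind these definitions can be made concrete: $\sigma_j$ is reachable from $\sigma_i$ iff $p_{ij}^{(n)} > 0$ for some $n$, which reduces to a graph search over the edges with $p_{ij} > 0$. A sketch in plain Python, with a hypothetical 3-state matrix whose first two states form a closed set (the helper names are ad hoc):

```python
# Hypothetical matrix: states 0 and 1 form a closed set, since from them
# state 2 can never be reached (the third column is 0 in their rows).
PI = [
    [0.5, 0.5, 0.0],
    [0.3, 0.7, 0.0],
    [0.2, 0.3, 0.5],
]

def reachable(matrix, i):
    """Set of states reachable from i (including i itself)."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for j, p in enumerate(matrix[s]):
            if p > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

def is_closed(matrix, states):
    """A set is closed iff nothing outside it is reachable from inside."""
    return all(reachable(matrix, i) <= set(states) for i in states)

def is_irreducible(matrix):
    """Irreducible iff every state can be reached from every state."""
    k = len(matrix)
    return all(reachable(matrix, i) == set(range(k)) for i in range(k))
```

Since `PI` contains the closed proper subset {0, 1}, the corresponding chain is reducible, matching the definition above.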
0.4. Classification of states.
A state $\sigma_i$ is called "periodic of period $t$"