Various Views on the Trapdoor Channel and an Upper Bound on its Capacity

Tobias Lutz

Abstract

Two novel views are presented on the trapdoor channel. First, by deriving the underlying iterated function system (IFS), it is shown that the trapdoor channel with input blocks of length n can be regarded as the nth element of a sequence of shapes approximating a fractal. Second, an algorithm is presented that fully characterizes the trapdoor channel and resembles the recursion of generating all permutations of a given string. Subsequently, the problem of maximizing an n-letter mutual information is considered. It is shown that (1/2)·log_2(5/2) ≈ 0.6610 bits per use is an upper bound on the capacity of the trapdoor channel. This upper bound, which is the tightest upper bound known, proves that feedback increases the capacity.

Index Terms

Trapdoor channel, Lagrange multipliers, convex optimization, iterated function systems, fractals, channels with memory, recursions, permutations.

Tobias Lutz is with the Lehrstuhl für Nachrichtentechnik, Technische Universität München, D-80290 München, Germany (e-mail: [email protected]).

I. INTRODUCTION

The trapdoor channel was introduced by David Blackwell in 1961 [1] and is used by Robert Ash both as a book cover and as an introductory example for channels with memory [2]. The mapping of channel inputs to channel outputs can be described as follows. Consider a box that contains a ball that is labeled s_0 ∈ {0,1}, where the index 0 refers to time 0. Both the sender and the receiver know the initial ball. In time slot 1, the sender places a new ball labeled x_1 ∈ {0,1} in the box. In the same time slot, the receiver chooses one of the two balls s_0 or x_1 at random while the other ball remains in the box. The chosen ball is interpreted as channel output y_1 at time t = 1 while the remaining ball becomes the channel state s_1. The same procedure is applied in every future channel use. In time slot 2, for instance, the sender places a new ball x_2 ∈ {0,1} in the box and the corresponding channel output y_2 is either x_2 or s_1. The transmission process is visualized in Fig. 1. Fig. 1(a) shows the trapdoor channel at time t when the sender places ball x_t in the box. In the same time slot, the receiver chooses randomly one of the balls s_{t-1} or x_t as channel output; in Fig. 1 the chosen ball is s_{t-1}. Consequently, the upcoming channel state s_t becomes x_t (see Fig. 1(b)). At time t+1 the sender places a new ball x_{t+1} in the box and the receiver draws y_{t+1} from s_t and x_{t+1}. Table I depicts the probability of an output y_t given an input x_t and state s_{t-1}.

[Fig. 1. (a) The trapdoor channel at time t. (b) The trapdoor channel at time t+1. At time t the sender places a new ball x_t in the box. The corresponding channel output y_t is s_{t-1} and the next state s_t becomes x_t.]

Despite the simplicity of the trapdoor channel, the derivation of its capacity seems challenging and is still an open problem. One feature that makes the problem cumbersome is that the distribution of the output symbols may depend on events happening arbitrarily far back in the past, since each ball has a positive probability of remaining in the channel over any finite number of channel uses. Instead of maximizing I(X;Y) one rather has to consider the multi-letter mutual information, i.e., limsup_{n→∞} (1/n)·I(X^n;Y^n).
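The ball mechanism just described is easy to simulate. The following sketch is an editorial illustration rather than part of the original text (Python, standard library only; the function name trapdoor_step is ours); it draws single channel uses at random and estimates the transition probabilities listed in Table I below.

    import random
    from collections import Counter

    def trapdoor_step(x, s):
        # One channel use: the box holds the state ball s and the new ball x.
        # The receiver draws one of the two balls at random; the other ball
        # stays in the box and becomes the next state.
        if random.random() < 0.5:
            return x, s   # (output y, next state)
        return s, x

    trials = 100000
    counts = Counter()
    for _ in range(trials):
        for x in (0, 1):
            for s in (0, 1):
                y, _ = trapdoor_step(x, s)
                counts[(x, s, y)] += 1

    for x in (0, 1):
        for s in (0, 1):
            p0 = counts[(x, s, 0)] / trials
            print(f"x={x}, s={s}:  p(y=0) ~ {p0:.3f},  p(y=1) ~ {1 - p0:.3f}")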
TABLE I
TRANSITION PROBABILITIES OF THE TRAPDOOR CHANNEL

    x_t   s_{t-1}   p(y_t=0 | x_t, s_{t-1})   p(y_t=1 | x_t, s_{t-1})
     0       0                1                          0
     0       1               0.5                        0.5
     1       0               0.5                        0.5
     1       1                0                          1

Let P_{n|s_0} denote the matrix of conditional probabilities of output sequences of length n given input sequences of length n, where the initial state equals s_0. The following ordering of the entries of P_{n|s_0} is assumed. Row indices represent input sequences and column indices represent output sequences. To be more precise, the entry [P_{n|s_0}]_{i,j} is the conditional probability of the binary output sequence corresponding to the integer j-1 given the binary input sequence corresponding to the integer i-1, 1 ≤ i,j ≤ 2^n. For instance, if n = 3 then [P_{3|s_0}]_{5,3} denotes the conditional probability that the channel input x_1 x_2 x_3 = 100 will be mapped to the channel output y_1 y_2 y_3 = 010.

It was shown in [3] that the conditional probability matrices P_{n|s_0} satisfy the recursion laws

    P_{n+1|0} = [ P_{n|0}         0
                  (1/2)P_{n|1}    (1/2)P_{n|0} ]                          (1)

    P_{n+1|1} = [ (1/2)P_{n|1}    (1/2)P_{n|0}
                  0               P_{n|1} ],                              (2)

where the initial matrices are given by P_{0|0} = P_{0|1} = [1]. A quick inspection of P_{2|0} and P_{2|1} reveals that the inputs 00 and 11 are mapped to disjoint outputs. Hence, a rate of 0.5 bits per use (b/u) is achievable from the sender to the receiver. It was shown in [4] that 0.5 b/u is indeed the zero-error capacity of the trapdoor channel.
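The recursions (1) and (2) are straightforward to evaluate numerically. The following sketch is an editorial addition (it assumes numpy is available; the helper name trapdoor_matrices is ours); it builds P_{n|0} and P_{n|1} from the initial matrices and reproduces the disjoint-output observation for the inputs 00 and 11.

    import numpy as np

    def trapdoor_matrices(n):
        # P_{n|0} and P_{n|1} built from the recursions (1) and (2),
        # starting from the initial matrices P_{0|0} = P_{0|1} = [1].
        P0 = P1 = np.array([[1.0]])
        for _ in range(n):
            P0, P1 = (np.block([[P0, np.zeros_like(P0)], [0.5 * P1, 0.5 * P0]]),
                      np.block([[0.5 * P1, 0.5 * P0], [np.zeros_like(P1), P1]]))
        return P0, P1

    P20, P21 = trapdoor_matrices(2)
    # Rows are the inputs 00, 01, 10, 11; columns are the outputs 00, 01, 10, 11.
    for P in (P20, P21):
        outputs_00 = set(np.nonzero(P[0])[0])   # outputs reachable from input 00
        outputs_11 = set(np.nonzero(P[3])[0])   # outputs reachable from input 11
        assert not (outputs_00 & outputs_11)    # disjoint, hence 0.5 b/u error-free
    print(P20)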
Permuter et al. [5] considered the trapdoor channel under the additional assumption of having a unit delay feedback link available from the receiver to the sender. The sender is thus able to determine the state of the channel in each time slot. They established that the capacity of the trapdoor channel with feedback is equal to the logarithm of the golden ratio. One can already deduce from this quantity that the achievability scheme involves a constrained coding scheme in which certain sub-blocks are forbidden.

In this paper, we propose two different views on the trapdoor channel. Based on the underlying stochastic matrices (1) and (2), the trapdoor channel can be described geometrically as a fractal or algorithmically as a recursive procedure. We then consider the problem of maximizing the n-letter mutual information of the trapdoor channel for any n ∈ N. We relax the problem by permitting distributions that are not probability distributions. The resulting optimization problem is convex but the feasible set is larger than the probability simplex. Using the method of Lagrange multipliers via a theorem presented in [2], we show that (1/2)·log_2(5/2) ≈ 0.6610 b/u is an upper bound on the capacity of the trapdoor channel. Specifically, the same absolute maximum (1/2)·log_2(5/2) ≈ 0.6610 b/u results for all trapdoor channels which process input blocks of even length n, and the sequence of absolute maxima corresponding to trapdoor channels which process inputs of odd lengths converges to (1/2)·log_2(5/2) b/u from below as the block length increases. Unfortunately, the absolute maxima of our relaxed optimization are attained outside the probability simplex, otherwise we would have established the capacity. Nevertheless, (1/2)·log_2(5/2) ≈ 0.6610 b/u is, to the best of our knowledge, the tightest upper bound on the capacity. Moreover, this bound is less than the feedback capacity of the trapdoor channel.

The organization of this paper is as follows. Section II interprets the trapdoor channel as a fractal and derives the underlying iterated function system (IFS). Section III introduces a recursive algorithm which fully characterizes the trapdoor channel; comments on the permuting nature of the trapdoor channel are provided. Section IV presents a solution to the optimization problem outlined above and derives various recursions. The paper concludes with Section V.

A. Notation

The symbols N_0 and N refer to the natural numbers with and without 0, respectively. The canonical basis vectors of R^3 are denoted by e_x, e_y and e_z. They are assumed to be row vectors. The n-fold composition of a function, say Φ, is denoted as Φ^{∘n}. The input corresponding to the ith row of P_{n|s_0} is denoted as x_i^n. Further, I_n denotes the 2^n × 2^n identity matrix, Ĩ_n is a 2^n × 2^n matrix whose secondary diagonal entries are all equal to 1 while the remaining entries are all equal to 0, and 1_n denotes a column vector of length 2^n consisting only of ones. The vector 1_n^T is the transpose of 1_n. For the sake of readability we use exp_2(·) instead of 2^{(·)}. If the logarithm log_2(·) or the exponential function exp_2(·) is applied to a vector or a matrix, we mean that log_2(·) or exp_2(·) of each element of the vector or matrix is taken. Finally, the symbol ◦ refers to the Hadamard product, i.e., the entrywise product of two matrices.

II. THE TRAPDOOR CHANNEL AND FRACTAL GEOMETRY

A. Prerequisites

We briefly introduce the idea of iterated function systems and fractals. For a comprehensive introduction to the subject, see for instance [6]. In a nutshell, a fractal is a geometric pattern which exhibits self-similarity at every scale. A systematic way of generating a fractal starts with a complete metric space (M,d). The space to which the fractal belongs is, however, not M but the space of non-empty compact subsets of M, denoted as H(M). A suitable choice of a metric for H(M) is the Hausdorff distance h_d(A,B) := max{d(A,B), d(B,A)}, where d(A,B) := max_{x∈A} min_{y∈B} d(x,y), A,B ∈ H(M), and analogously for d(B,A). It is then guaranteed that (H(M), h_d) is a complete metric space and that every contraction mapping¹ ϕ : M → M on (M,d) becomes a contraction mapping ϕ : H(M) → H(M) on (H(M), h_d) defined by ϕ(A) = {ϕ(x) : x ∈ A} for all A ∈ H(M). The following definition and theorem provide a method for generating fractals.

¹ Let (M,d) be a metric space. Recall that a mapping ϕ : M → M is a contraction if there exists a 0 < s < 1 such that d(ϕ(x),ϕ(y)) ≤ s·d(x,y) for all x,y ∈ M.

Definition II.1. [6, Chapter 3.7] A hyperbolic iterated function system (IFS) consists of a complete metric space (M,d) together with a finite set of contraction mappings ϕ_n : M → M, with respective contractivity factors s_n for n = 1,2,...,N. The notation for the IFS is {M; ϕ_n, n = 1,2,...,N} and its contractivity factor is s = max{s_n : n = 1,2,...,N}.

The fixed point of a hyperbolic IFS, also called the attractor or self-similar set of the IFS, is a (deterministic) fractal and results from iterating the IFS with respect to any A ∈ H(M). This is the content of the following theorem.

Theorem II.2. [6, Chapter 3.7] Let {M; ϕ_n, n = 1,2,...,N} be an iterated function system with contractivity factor s. Then the transformation Φ : H(M) → H(M) defined by

    Φ(A) = ⋃_{n=1}^{N} ϕ_n(A)                                             (3)

for all A ∈ H(M), is a contraction mapping on the complete metric space (H(M), h_d) with contractivity factor s. Its unique fixed point, A⋆ ∈ H(M), obeys

    A⋆ = Φ(A⋆) = ⋃_{n=1}^{N} ϕ_n(A⋆),

and is given by A⋆ = lim_{k→∞} Φ^{∘k}(A) for any A ∈ H(M).

Many well-known fractals, e.g., the Koch snowflake, the Cantor set, the Mandelbrot set, etc., can be generated using Definition II.1 and Theorem II.2.
Indeed, a segment of the Mandelbrot set is shown on the cover of the book by Cover and Thomas [7]. Another famous representative, the Sierpinski triangle, is introduced in the following example. We will later see that this fractal is related to the trapdoor channel.

Example II.3. (Sierpinski triangle) Consider the IFS

    { [0,1]^2;  ϕ_1(x,y) = ( (x+1)/2, y/2 ),  ϕ_2(x,y) = ( x/2, (y+1)/2 ),  ϕ_3(x,y) = ( x/2, y/2 ) }.    (4)

The affine transformations ϕ_n, n = 1,2,3, scale any A ∈ H([0,1]^2) by a factor of 0.5. Additionally, ϕ_1 and ϕ_2 introduce translations by 0.5 into the x- and y-direction, respectively. The Sierpinski triangle is approximated arbitrarily closely by iterating Φ(A) for any A ∈ H([0,1]^2). Fig. 2 shows the result after performing four iterations of (4). The initial shape A in Fig. 2(a) is a triangle with corner points (0,0), (1,0), (0,1) and in Fig. 2(b) a triangle with corner points (0,0), (1,1), (1,0). As one performs more iterations, both sets converge to the same set A⋆.

[Fig. 2. Sierpinski triangle after four iterations of the underlying IFS with two different initial shapes. (a) The initial shape is a triangle with corner points (0,0), (1,0), (0,1). (b) The initial shape is a triangle with corner points (0,0), (1,1), (1,0).]
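The deterministic iteration of Theorem II.2 can be sketched in a few lines of code. The snippet below is an editorial illustration, not part of the paper; it applies the three maps of (4) to the corner points of the triangle used in Fig. 2(a) and yields a point-cloud approximation of the Sierpinski triangle.

    # The three contractions of the IFS (4).
    MAPS = [
        lambda x, y: ((x + 1) / 2, y / 2),   # phi_1: scale by 1/2, shift by 1/2 in x
        lambda x, y: (x / 2, (y + 1) / 2),   # phi_2: scale by 1/2, shift by 1/2 in y
        lambda x, y: (x / 2, y / 2),         # phi_3: scale by 1/2
    ]

    def iterate_ifs(points, maps, k):
        # Apply Phi(A) = union of phi_n(A), cf. (3), k times to a finite point set.
        for _ in range(k):
            points = {phi(x, y) for (x, y) in points for phi in maps}
        return points

    A0 = {(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)}   # corners of the triangle in Fig. 2(a)
    A5 = iterate_ifs(A0, MAPS, 5)
    # Scatter-plotting A5 gives a point-cloud approximation of the Sierpinski triangle.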
B. The Trapdoor Channel as a Fractal

In this section, we derive a hyperbolic IFS for the trapdoor channel. Instead of working with P_{n|s_0} we take a geometric approach, i.e., P_{n|s_0} will be mapped to the unit cube [0,1]^3 ⊂ R^3.

Definition II.4. Let M denote the set {P_{n|s_0} : n ∈ N_0, s_0 = 0,1} of trapdoor channel matrices. The function ρ^{(n)} : M → [0,1]^3 represents each P_{n|s_0} as a shape in [0,1]^3 according to

    P_{n|s_0} ↦ ( x, y, [P_{n|s_0}]_{i,j} ),  for all 1 ≤ i,j ≤ 2^n,      (5)

where (j−1)·2^{−n} < x < j·2^{−n} and 1−i·2^{−n} < y < 1−(i−1)·2^{−n}.

Each entry [P_{n|s_0}]_{i,j} is identified with a square of side length 2^{−n}, which has a distance of [P_{n|s_0}]_{i,j} to the xy-plane. The alignment of the square corresponding to [P_{n|s_0}]_{i,j} with respect to the other squares in ρ^{(n)}(P_{n|s_0}) is in accordance with the alignment of [P_{n|s_0}]_{i,j} with respect to the other entries of P_{n|s_0}. Fig. 3 depicts the representations ρ^{(1)}(P_{1|0}) and ρ^{(1)}(P_{1|1}) of

    P_{1|0} = [ 1     0            P_{1|1} = [ 1/2   1/2
                1/2   1/2 ],                   0     1   ].

[Fig. 3. Color map of ρ^{(1)}(P_{1|0}) (panel (a)) and ρ^{(1)}(P_{1|1}) (panel (b)). Each of the four squares corresponds to one of the conditional probabilities 0, 0.5 and 1.]

The following lemma expresses ρ^{(n+1)}(P_{n+1|0}) and ρ^{(n+1)}(P_{n+1|1}) recursively in terms of ρ^{(n)}(P_{n|0}) and ρ^{(n)}(P_{n|1}).

Lemma II.5. The representations ρ^{(n+1)}(P_{n+1|0}) and ρ^{(n+1)}(P_{n+1|1}) of P_{n+1|0} and P_{n+1|1} satisfy the recursion laws

    ρ^{(n+1)}(P_{n+1|0}) = (1/2) · { ρ^{(n)}(P_{n|0}) + e_x,  ρ^{(n)}(2·P_{n|0}) + e_y,  ρ^{(n)}(P_{n|1}) }             (6)

    ρ^{(n+1)}(P_{n+1|1}) = (1/2) · { ρ^{(n)}(2·P_{n|1}) + e_x,  ρ^{(n)}(P_{n|1}) + e_y,  ρ^{(n)}(P_{n|0}) + e_x + e_y },  (7)

for all n ∈ N_0.

Proof: Recursions (6) and (7) are a consequence of the structure of the block matrices (1) and (2), respectively. We just outline the derivation of (6). The first term on the right-hand side of (6) represents the lower right corner of (1), i.e., those entries of P_{n+1|0} with row and column indices 2^n < i,j ≤ 2^{n+1}. Observe that each entry [P_{n+1|0}]_{i,j} is equal to (1/2)[P_{n|0}]_{i−2^n,j−2^n} where 2^n < i,j ≤ 2^{n+1}. Hence, scaling the three dimensions of ρ^{(n)}(P_{n|0}) by a factor of 1/2 and shifting the result by 1/2 into the x-direction yields a representation of the lower right corner of (1) according to Definition II.4.

Similarly, the second term of (6) represents the upper left corner of (1), i.e., entries of P_{n+1|0} which correspond to row and column indices 1 ≤ i,j ≤ 2^n. To be more precise, each entry [P_{n+1|0}]_{i,j} is equal to [P_{n|0}]_{i,j} where 1 ≤ i,j ≤ 2^n. Hence, scaling the x- and y-coordinates of ρ^{(n)}(P_{n|0}) by a factor of 1/2 and shifting the resulting figure by 1/2 into the y-direction yields a representation of the upper left corner P_{n|0} of (1) according to Definition II.4.

Finally, the last term of (6) represents the lower left corner of (1), i.e., entries of P_{n+1|0} with row indices 2^n < i ≤ 2^{n+1} and column indices 1 ≤ j ≤ 2^n, respectively. By (1), each entry [P_{n+1|0}]_{i,j} is equal to (1/2)[P_{n|1}]_{i−2^n,j} for the same index pair i,j. Hence, scaling all coordinates of ρ^{(n)}(P_{n|1}) by a factor of 1/2 yields a representation of the lower left corner of (1) according to Definition II.4.

Recursions (6) and (7) will be used below to obtain an iterated function system for the trapdoor channel. Recall from Theorem II.2 that an iterated function system is initialized with a single shape. Therefore, it is desirable that the right-hand side of (6) just depends on P_{n|0} and the right-hand side of (7) just on P_{n|1}. The following lemma introduces an affine transformation which turns ρ^{(n)}(P_{n|0}) into ρ^{(n)}(P_{n|1}) and vice versa.

Lemma II.6. Let τ : [0,1]^3 → [0,1]^3 be defined as τ(x,y,z) = (−x+1, −y+1, z). Then

    ρ^{(n)}(P_{n|1}) = τ ∘ ρ^{(n)}(P_{n|0})                                (8)
    ρ^{(n)}(P_{n|0}) = τ ∘ ρ^{(n)}(P_{n|1}),                               (9)

for all n ∈ N_0.

Proof: Equation (9) follows from (8) by noting that τ ∘ τ = id. It remains to prove (8), which we do by induction. Observe that the affine transformation τ corresponds to a counter-clockwise rotation through 180 degrees about the z-axis and a translation by one into the x- and y-direction. Using this property, (8) is readily verified from Fig. 3 for n = 1. Now assume that the assertion holds for some n ≥ 1. A direct computation of τ ∘ ρ^{(n+1)}(P_{n+1|0}) using the right-hand side of (6) and the induction hypotheses (8) and (9) shows that τ ∘ ρ^{(n+1)}(P_{n+1|0}) is equivalent to the right-hand side of (7).
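Since τ rotates the square arrangement of Definition II.4 by 180 degrees, it acts on the underlying matrix by reversing both the row and the column order, so Lemma II.6 can be spot-checked numerically. The sketch below is an editorial addition (numpy assumed) and repeats the matrix construction from the earlier snippet.

    import numpy as np

    def trapdoor_matrices(n):
        # Same construction as in the earlier sketch: recursions (1) and (2).
        P0 = P1 = np.array([[1.0]])
        for _ in range(n):
            P0, P1 = (np.block([[P0, np.zeros_like(P0)], [0.5 * P1, 0.5 * P0]]),
                      np.block([[0.5 * P1, 0.5 * P0], [np.zeros_like(P1), P1]]))
        return P0, P1

    for n in range(1, 8):
        P0, P1 = trapdoor_matrices(n)
        # tau rotates the square arrangement of Definition II.4 by 180 degrees,
        # which at the matrix level reverses both the row and the column order.
        assert np.allclose(P1, P0[::-1, ::-1])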
We can now state the final recursion law. A combination of Lemma II.5 and Lemma II.6, i.e., replacing ρ^{(n)}(P_{n|1}) in (6) with (8) and ρ^{(n)}(P_{n|0}) in (7) with (9), and using (5) yields the following theorem.

Theorem II.7. The representations ρ^{(n+1)}(P_{n+1|0}) and ρ^{(n+1)}(P_{n+1|1}) of P_{n+1|0} and P_{n+1|1} with initial matrices P_{0|0} = P_{0|1} = 1 satisfy the following recursion laws

    ρ^{(n+1)}(P_{n+1|0}) = { φ_1(x,y,z) = ( (x+1)/2, y/2, [P_{n|0}]_{i,j}/2 ),
                             φ_2(x,y,z) = ( x/2, (y+1)/2, [P_{n|0}]_{i,j} ),
                             φ_3(x,y,z) = ( −(x−1)/2, −(y−1)/2, [P_{n|0}]_{i,j}/2 ) }      (10)

    ρ^{(n+1)}(P_{n+1|1}) = { ψ_1(x,y,z) = ( (x+1)/2, y/2, [P_{n|1}]_{i,j} ),
                             ψ_2(x,y,z) = ( x/2, (y+1)/2, [P_{n|1}]_{i,j}/2 ),
                             ψ_3(x,y,z) = ( −x/2+1, −y/2+1, [P_{n|1}]_{i,j}/2 ) },        (11)

where (j−1)·2^{−n} < x < j·2^{−n} and 1−i·2^{−n} < y < 1−(i−1)·2^{−n} for 1 ≤ i,j ≤ 2^n.

Remark II.8. The restrictions of φ_1, φ_2, φ_3 and ψ_1, ψ_2, ψ_3 to the x- and y-dimensions are contraction mappings. They form two hyperbolic IFS with a unique attractor each. Moreover, (10) and (11) are initialized with P_{0|0} = 1 and P_{0|1} = 1, respectively. Hence, lim_{n→∞} ρ^{(n)}(P_{n|s_0}), s_0 ∈ {0,1}, can be approximated arbitrarily closely by iterating (10) and (11), respectively (according to Theorem II.2), for any initial shape A ∈ H([0,1]^3) such that the restriction of A to the z-dimension equals 1. Both IFS follow directly from (10) and (11) and read

    { [0,1]^3;  φ_1 = ( (x+1)/2, y/2, z/2 ),  φ_2 = ( x/2, (y+1)/2, z ),  φ_3 = ( −(x−1)/2, −(y−1)/2, z/2 ) }    (12)

    { [0,1]^3;  ψ_1 = ( (x+1)/2, y/2, z ),  ψ_2 = ( x/2, (y+1)/2, z/2 ),  ψ_3 = ( −x/2+1, −y/2+1, z/2 ) }.       (13)

There is also a relation to the Sierpinski triangle. Observe that φ_1, φ_2 and ψ_1, ψ_2, respectively, restricted to the xy-plane are equal to ϕ_1, ϕ_2 in (4).

[Fig. 4. The result of running 4 iterations (panels (a), (b)) and 11 iterations (panel (c)) of the IFS (12). The initial shape A has been chosen to be {(x,y,z) ∈ [0,1]^3 : z = 1}. (a) The z-dimension is visualized by means of gray colors; the gray scale is the one used in Fig. 3. (b) Restriction of panel (a) to the x- and y-dimensions. (c) A more accurate approximation of the fractal where the IFS (12) is restricted to the x- and y-dimensions.]
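The IFS (12) can be iterated with the same point-set approach as in Example II.3. The following sketch is an editorial addition (the sampling of the initial shape is illustrative); it starts from a coarse sample of {(x,y,z) ∈ [0,1]^3 : z = 1} and produces a point cloud in the spirit of Fig. 4(a).

    # The three maps of the IFS (12).
    PHI = [
        lambda x, y, z: ((x + 1) / 2, y / 2, z / 2),          # phi_1
        lambda x, y, z: (x / 2, (y + 1) / 2, z),              # phi_2
        lambda x, y, z: (-(x - 1) / 2, -(y - 1) / 2, z / 2),  # phi_3
    ]

    def iterate_ifs(points, maps, k):
        # Apply the transformation (3) induced by the maps k times to a finite point set.
        for _ in range(k):
            points = {phi(x, y, z) for (x, y, z) in points for phi in maps}
        return points

    # Coarse sample of the initial shape {(x, y, z) in [0,1]^3 : z = 1}.
    grid = [i / 4 for i in range(5)]
    A0 = {(x, y, 1.0) for x in grid for y in grid}
    A4 = iterate_ifs(A0, PHI, 4)
    # The z-values of A4 follow the nonzero entries of P_{4|0} (zero entries are
    # simply not represented); plotting the points shaded by z resembles Fig. 4(a).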
III. ALGORITHMIC VIEW OF THE TRAPDOOR CHANNEL

A. Remarks on the Permutation Nature

The trapdoor channel has been called a permuting channel [4], where the output is a permutation of the input [5]. We point out that in general not all possible permutations of the input are feasible and that not every output is a permutation of the input. The reason that not all permutations are feasible is that the channel actions are causal, i.e., an input symbol at time n cannot become a channel output at a time instance smaller than n. Consider, for instance, a vector 101 which, when applied to a trapdoor channel with initial state 0, cannot give rise to an output 110. Next, not every output is a permutation of the input because at a certain time instance the initial state might become an output symbol and, therefore, the resulting output sequence might not be compatible with a permutation of the input. For illustration purposes, consider again the previous example, i.e., a vector 101 and initial state 0. Two of the feasible outputs are 010 and 001, which are not permutations of the input 101.

B. The Algorithm

The following recursive procedure GENERATEOUTPUTS computes the set of feasible output sequences and their likelihoods given an input sequence and an initial state.

    procedure GENERATEOUTPUTS(in, out, state, prob)
        if in = ∅ then
            set ← {(out, prob)}
        else if in[0] = state then
            out ← out + in[0]
            set ← GENERATEOUTPUTS(in.substr(1), out, state, prob)
        else
            out ← out + in[0]
            set ← GENERATEOUTPUTS(in.substr(1), out, state, 0.5·prob)
            out[out.length()−1] ← state          ⊲ the trailing in[0] is replaced by state
            set ← set ∪ GENERATEOUTPUTS(in.substr(1), out, in[0], 0.5·prob)
        end if
        return set
    end procedure

The four variables in, out, state and prob have the following meaning: in denotes the part of the input string that has not been processed yet; out indicates the part of one particular output string that has been generated so far; state refers to the current channel state; prob denotes the likelihood of out. The procedure is initialized with the complete input string and the initial state of the channel; out is initially empty while prob equals 1. The first if statement checks the simple case of the recursion, i.e., whether the input string has been processed completely. If yes, then the corresponding output out and its likelihood prob are stored and returned in set. Otherwise, we distinguish whether the next input symbol in[0] is equal to the current state. If yes, then the next output takes the value of in[0] (or of state, but both are equal), i.e., out ← out + in[0], with probability 1, and the procedure GENERATEOUTPUTS is applied recursively to the unprocessed part of the input string, i.e., to in.substr(1), the substring of in with indices greater than 0. Clearly, state and prob do not change and, therefore, are passed unmodified to the recursive call. In the other case, i.e., when in[0] is not equal to the current state, the next output symbol will have a probability of 0.5 to be either in[0] or state. If in[0] becomes the channel output, the following state remains the same. Then the remaining input string in.substr(1) is processed by the recursive call GENERATEOUTPUTS(in.substr(1), out, state, 0.5·prob). However, if state becomes the channel output, then the following state will be in[0] and the remaining input string is processed by GENERATEOUTPUTS(in.substr(1), out, in[0], 0.5·prob). Note that a recursive implementation of