
Digital Communications 1: Source and Channel Coding PDF

312 Pages · 2015 · 11.691 MB · English

Preview: Digital Communications 1: Source and Channel Coding

Table of Contents

Cover
Title
Copyright
Preface
List of Acronyms
Notations
Introduction
1: Introduction to Information Theory
  1.1. Introduction
  1.2. Review of probabilities
  1.3. Entropy and mutual information
  1.4. Lossless source coding theorems
  1.5. Theorem for lossy source coding
  1.6. Transmission channel models
  1.7. Capacity of a transmission channel
  1.8. Exercises
2: Source Coding
  2.1. Introduction
  2.2. Algorithms for lossless source coding
  2.3. Sampling and quantization
  2.4. Coding techniques for analog sources with memory
  2.5. Application to the image and sound compression
  2.6. Exercises
3: Linear Block Codes
  3.1. Introduction
  3.2. Finite fields
  3.3. Linear block codes
  3.4. Decoding of binary linear block codes
  3.5. Performances of linear block codes
  3.6. Cyclic codes
  3.7. Applications
  3.8. Exercises
4: Convolutional Codes
  4.1. Introduction
  4.2. Mathematical representations and hardware structures
  4.3. Graphical representation of the convolutional codes
  4.4. Free distance and transfer function of convolutional codes
  4.5. Viterbi’s algorithm for the decoding of convolutional codes
  4.6. Punctured convolutional codes
  4.7. Applications
  4.8. Exercises
5: Concatenated Codes and Iterative Decoding
  5.1. Introduction
  5.2. Soft input soft output decoding
  5.3. LDPC codes
  5.4. Parallel concatenated convolutional codes or turbo codes
  5.5. Other classes of concatenated codes
  5.6. Exercises
Appendix A: Proof of the Channel Capacity of the Additive White Gaussian Noise Channel
Appendix B: Calculation of the Weight Enumerator Function IRWEF of a Systematic Recursive Convolutional Encoder
Bibliography
Index
End User License Agreement

List of Illustrations

Introduction
  Figure I.1. Binary symmetric channel
  Figure I.2. Image at the input and output of a binary symmetric channel
  Figure I.3. Block diagram of a transmission chain
1: Introduction to Information Theory
  Figure 1.1. Entropy of a binary source
  Figure 1.2. Relations between entropies and average mutual information
  Figure 1.3. Probability density and the set of typical sequences
  Figure 1.4. Block diagram of the studied chain for source coding
  Figure 1.5. Source coding
  Figure 1.6. Tree associated with the source code of example 3
  Figure 1.7. Kraft inequality
  Figure 1.8. Entropy rate
  Figure 1.9. Block diagram of the coder-decoder
  Figure 1.10. Shannon distortion-rate for a Gaussian source of unitary variance
  Figure 1.11. Uniform and non-uniform quantization L = 8 for a Gaussian source with variance σ_x = 1
  Figure 1.12. Binary symmetric channel
  Figure 1.13. Conditional entropy H(X|Y) versus q and p
  Figure 1.14. Discrete channel without memory
  Figure 1.15. Erasure channel
  Figure 1.16. Case C = H_max(X)
  Figure 1.17. Case C = 0
  Figure 1.18. Communication system with channel coding
  Figure 1.19. Mutual information I(X;Y) versus q and p
  Figure 1.20. Capacity of the binary symmetric channel versus p
  Figure 1.21. Capacity of the additive white Gaussian noise channel
  Figure 1.22. Spheres of noise illustration
  Figure 1.23. Maximum spectral efficiency of an additive white Gaussian noise channel
  Figure 1.24. Spectral efficiency versus E_b/N_0
2: Source Coding
  Figure 2.1. Example of Huffman’s encoding
  Figure 2.2. Probabilities of occurrence of the characters
  Figure 2.3. Example of partitioning
  Figure 2.4. Example of arithmetic coding
  Figure 2.5. Tree associated with the strings memorized in the dictionary
  Figure 2.6. Tree of the prototype strings
  Figure 2.7. Sampling and reconstruction
  Figure 2.8. Uniform quantization L = 8
  Figure 2.9. Uniform quantization L = 8
  Figure 2.10. Non-uniform quantization L = 8
  Figure 2.11. Non-uniform quantization L = 8
  Figure 2.12. Block diagram of the PCM coder
  Figure 2.13. A-law and μ-law. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 2.14. Example of realization of a source with memory as a function of time
  Figure 2.15. Example of realization of a source with memory projected on a plane
  Figure 2.16. Example of scalar quantization. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 2.17. Example of vector quantization. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 2.18. Block diagram of the order-P predictor
  Figure 2.19. Delta modulator
  Figure 2.20. Delta demodulator
  Figure 2.21. Example of behavior of a delta modulator
  Figure 2.22. Ideal block diagram of a DPCM transmission chain
  Figure 2.23. Block diagram of the DPCM coder
  Figure 2.24. DPCM decoder
  Figure 2.25. Block diagram of the transform coding
  Figure 2.26. Example of scalar quantization after a Karhunen–Loève transform. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 2.27. Subband coding
  Figure 2.28. Zigzag serialization
  Figure 2.29. Example of speech signal (“this is”) in the time domain
  Figure 2.30. Simplified block diagram of the LPC coder
  Figure 2.31. Block diagram of the CELP coder
  Figure 2.32. Another version of the CELP coder
  Figure 2.33. Absolute threshold curve
  Figure 2.34. Masking level in frequency. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 2.35. Block diagram of an MPEG audio coder
  Figure 2.36. Impulse response of the prototype filter
  Figure 2.37. Frequency response of the filterbank in the bandwidth [0; 22 kHz]
  Figure 2.38. Probability density
  Figure 2.39. Probability density
  Figure 2.40. Block diagram of the predictive coding system
  Figure 2.41. Block diagram of the modified coder
3: Linear Block Codes
  Figure 3.1. Parity check code C_2(3, 2)
  Figure 3.2. Hamming sphere
  Figure 3.3. Bounds on the minimum distance for linear block codes with q = 2
  Figure 3.4. Bounds on the minimum distance for linear block codes with q = 256
  Figure 3.5. Error exponent function versus rate
  Figure 3.6. Poltyrev bound: rate versus length N
  Figure 3.7. Sphere packing bound: ratio E_b/N_0 versus N
  Figure 3.8. Two versions of the Tanner graph for the Hamming code (7, 4)
  Figure 3.9. Branch of a trellis section
  Figure 3.10. Trellis diagram obtained from the parity check matrix of the code C_3
  Figure 3.11. Trellis diagram of Hamming code (7, 4)
  Figure 3.12. Block diagram of a transmission chain with an additive white Gaussian noise channel
  Figure 3.13. Branch metric calculation
  Figure 3.14. Cumulated metric calculation after the reception of the 1st bit of the word r. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 3.15. Cumulated metric calculation after the reception of the 4th bit of the word r. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 3.16. Cumulated metric calculation
  Figure 3.17. Determination of the estimated sequence. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 3.18. Branch metric calculation
  Figure 3.19. Cumulated metric calculation. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 3.20. Cumulated metric calculation. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 3.21. Determination of the estimated sequence. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 3.22. Iterative decoding on the Tanner graph
  Figure 3.23. Decision regions associated with the codewords
  Figure 3.24. Example
  Figure 3.25. Coding gain of different codes
  Figure 3.26. Performance comparison of hard and soft input decoding
  Figure 3.27. a) Stop-and-wait protocol b) go-back-N protocol with N_f = 4
  Figure 3.28. Hardware structure of the division by 1 + p + p^2
  Figure 3.29. Sequencing of the division by 1 + p + p^2
  Figure 3.30. Hardware structure of the Hamming encoder with g(p) = 1 + p + p^3
  Figure 3.31. Hardware structure of the Hamming encoder with premultiplication for g(p) = 1 + p + p^3
  Figure 3.32. Complete hardware structure of the Hamming encoder for g(p) = 1 + p + p^3
  Figure 3.33. Trellis diagram of the Hamming code defined by the polynomial g(p) = 1 + p + p^3
  Figure 3.34. Hardware structure of the decoder for the Hamming code with g(p) = 1 + p + p^3
  Figure 3.35. Hardware structure of the decoder for the Hamming code (7, 4)
  Figure 3.36. SER_O = f(SER_I) for the (255, 239, 17) Reed-Solomon code
  Figure 3.37. Trellis diagram of the (8, 4) Reed-Muller code
  Figure 3.38. Trellis diagram of the (7, 4) code
  Figure 3.39. Trellis diagram of the (3, 2) code
  Figure 3.40. Trellis diagram
4: Convolutional Codes
  Figure 4.1. Convolutional encoder
  Figure 4.2. Non-recursive convolutional coder of rate 1/2, M = 2, g_1 = 7, g_2 = 5
  Figure 4.3. Non-recursive convolutional encoder of rate 1/2, M = 6, g_1 = 133, g_2 = 171
  Figure 4.4. Systematic recursive convolutional encoder of rate 1/2, M = 2
  Figure 4.5. Generic non-recursive convolutional encoder of rate 1/n
  Figure 4.6. State transition diagram for the non-recursive convolutional code g_1 = 7, g_2 = 5
  Figure 4.7. Elementary trellis of a non-recursive convolutional encoder g_1 = 7, g_2 = 5
  Figure 4.8. Trellis diagram for the non-recursive convolutional coder g_1 = 7, g_2 = 5
  Figure 4.9. TWL graph of a systematic convolutional coder of rate 1/2
  Figure 4.10. Trellis diagram of the non-recursive
  Figure 4.11. Modified state diagram of a non-recursive
  Figure 4.12. Viterbi decoding for a non-recursive convolutional encoder g_1 = 7, g_2 = 5
  Figure 4.13. Trellis diagram of a punctured convolutional encoder
5: Concatenated Codes and Iterative Decoding
  Figure 5.1. Factor graph of the parity check code (3, 2)
  Figure 5.2. Factor graph of the repetition code (3, 1)
  Figure 5.3. Message flows for the calculation of μ_c→T(c) and μ_T→c(c)
  Figure 5.4. Level curves of μ_T→c(c) as a function of … and …: a) sum-product algorithm and b) minimum-sum algorithm
  Figure 5.5. Tanner graph of the considered code (7, 4)
  Figure 5.6. Sum-product algorithm applied to the factor graph of the code (7, 4)
  Figure 5.7. Minimum-sum algorithm for the code (7, 4)
  Figure 5.8. Quantities α_t(m), γ_t(m′, m) and β_{t+1}(m) on the trellis of a convolutional code
  Figure 5.9. Trellis of the coder
  Figure 5.10. Calculation of α
  Figure 5.11. Calculation of β
  Figure 5.12. Calculation of ln α
  Figure 5.13. Calculation of ln β
  Figure 5.14. Communication system
  Figure 5.15. Example of a factor graph for a recursive convolutional encoder of rate 1/2 used for the forward-backward algorithm
  Figure 5.16. Message passing when performing the forward-backward algorithm
  Figure 5.17. Tanner graph of a (N, d_c, d_t) regular LDPC code. The rate of an LDPC code is R = 1 - d_c/d_t
  Figure 5.18. Tanner graph of a (12, 3, 4) regular LDPC code
  Figure 5.19. Bipartite graph with girth of 4
  Figure 5.20. Parity check matrix of a (638, 324) QC-LDPC code
  Figure 5.21. Parity matrix H with a lower triangular form
  Figure 5.22. Graphical illustration of the evolution of the erasure probability as a function of the iterations for a regular LDPC code with d_c = 3, d_t = 6. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 5.23. Graphical illustration of the evolution of the erasure probability as a function of the iterations for an irregular LDPC code. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 5.24. Factor graph of the (N, d_c, d_t) regular LDPC code
  Figure 5.25. Example of local tree for a (d_c, d_t) LDPC code
  Figure 5.26. Message exchange for the calculation of … and …
  Figure 5.27. Bit error rate versus E_b/N_0 for the (4320, 3557) regular LDPC code with d_t = 17 and d_c = 3
  Figure 5.28. Comparison of the mean and variance calculated using the Gaussian approximation and using the probability density evolution. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 5.29. Parallel concatenated convolutional encoder
  Figure 5.30. TWL graph of a turbo encoder
  Figure 5.31. Parity check matrix of a turbo coder composed of two RSC coders (7,5) with R_t = 1/3
  Figure 5.32. Comparison of the average performance of turbo codes with rate R_t = 1/3 composed of RSC (7,5) encoders (dashed lines) and RSC (15,17) encoders (continuous line) and a uniform interleaver of size K = 100 and K = 1000
  Figure 5.33. Contribution of the input sequences of low weight on the bit error rate of a turbo coder of rate R_t = 1/3 composed of RSC (15,17) encoders and a uniform interleaver of size K = 100. For a color version of the figure, see www.iste.co.uk/leruyet/communications1.zip
  Figure 5.34. Contribution of the input sequences of low weight on the bit error rate of a turbo coder of rate R_t = 1/3 composed of RSC (15,17) encoders and a uniform interleaver of size K = 1000
  Figure 5.35. Structure of the iterative decoder
  Figure 5.36. Example of factor graph for a turbo code of rate 1/3
  Figure 5.37. BER = f(E_b/N_0) for a turbo code of rate 1/2 composed of two RSC encoders and an S-random interleaver of size K = 1024
  Figure 5.38. Curves BER = f(E_b/N_0) for a turbo code of rate 1/2 composed of two RSC encoders and a pseudo-random interleaver of size K = 65536
  Figure 5.39. EXIT charts for an RSC encoder (13,15) of rate R = 1/2
  Figure 5.40. Evolution of the average mutual information as a function of the iterations for a turbo code
  Figure 5.41. Example of primary cycle
  Figure 5.42. Interleavers a) L-random K = 320 and b) QPP K = 280
  Figure 5.43. PCBC code
  Figure 5.44. TWL graph of the PCBC codes
  Figure 5.45. Serial concatenated convolutional encoder
  Figure 5.46. RA coder R_t = 1/2
  Figure 5.47. Product code

List of Tables

1: Introduction to Information Theory
  Table 1.1. Variable length code of example 1
  Table 1.2. Variable length code of example 2
  Table 1.3. Variable length code of example 3
  Table 1.4. Probabilities Pr(X, Y), Pr(Y) and Pr(X|Y) for the binary symmetric channel
  Table 1.5. Probabilities Pr(X, Y), Pr(Y) and Pr(X|Y) for the erasure channel
2: Source Coding
  Table 2.1. Huffman’s encoding table
  Table 2.2. Probabilities of occurrence of characters
  Table 2.3. Dictionary of strings


