
Recursive Source Coding: A Theory for the Practice of Waveform Coding

Recursive Source Coding
A Theory for the Practice of Waveform Coding

G. Gabor
Department of Mathematics, Statistics and Computing Science
Dalhousie University
Halifax, Nova Scotia, Canada

Z. Gyorfi
Institute of Telecommunication Electronics
Technical University of Budapest
Budapest, Hungary

Springer-Verlag: New York Berlin Heidelberg London Paris Tokyo

Library of Congress Cataloging in Publication Data
Gabor, G. Recursive source coding.
Bibliography: p.
1. Data compression (Telecommunication) 2. Coding theory. I. Gyorfi, Z. II. Title.
TK5102.5.G33 1986 005.74'6 86-6564

© 1986 by Springer-Verlag New York Inc.
Softcover reprint of the hardcover 1st edition 1986
All rights reserved. No part of this book may be translated or reproduced in any form without written permission from Springer-Verlag, 175 Fifth Avenue, New York, New York 10010, U.S.A.
The use of general descriptive names, trade names, trademarks, etc. in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.
Typeset by Asco Trade Typesetting Ltd., Hong Kong.
ISBN-13: 978-1-4613-8651-3
e-ISBN-13: 978-1-4613-8649-0
DOI: 10.1007/978-1-4613-8649-0

To Our Wives, Julie and Eva

Preface

The spreading of digital technology has resulted in a dramatic increase in the demand for data compression (DC) methods. At the same time, the appearance of highly integrated elements has made more and more complicated algorithms feasible. It is in the fields of speech and image transmission and the transmission and storage of biological signals (e.g., ECG, Body Surface Mapping) where the demand for DC algorithms is greatest.

There is, however, a substantial gap between the theory and the practice of DC: an essentially nonconstructive information-theoretical attitude and the attractive mathematics of source coding theory are contrasted with a mixture of ad hoc engineering methods. The classical Shannonian information theory is fundamentally different from the world of practical procedures. Theory places great emphasis on block-coding, while practice is overwhelmingly dominated by theoretically intractable, mostly differential predictive coding (DPC), algorithms.

A dialogue between theory and practice has been hindered by two profoundly different conceptions of a data source: practice, mostly because of speech compression considerations, favors nonstationary models, while the theory deals mostly with stationary ones.

The work of R.M. Gray started a slow process of reconciliation between the theory and practice of DC in the beginning of the 1970s, when the theoretical investigation of the recursive methods began. Constructive results, however, are yet to be seen. The first attempts to apply block-coding methods took place at the same time, and the revision of the nonstationary model of a data source has also started.

Classical information theory so far has had very little to say about the most widely used DPC's. The purpose of this work is to present a thorough criticism of the field of DPC by creating a general model of recursive coding. It will be shown that the existing DPC algorithms are burdened with unnecessary structural restrictions.
Moreover, even if the ideas behind these methods are accepted, the usual design routines are based on a strange misunderstanding, namely, the idea of the minimization of the prediction-error.

In the first chapter we introduce a generalization of the DPC which is capable of dealing with arbitrary structural constraints and does not exclude, at least in theory, reaching the theoretical limits established by the theory of information.

The second chapter deals with structural and design problems of the general model and investigates whether the two apparently natural properties of the most popular methods, the so-called "equimemory" (EM) and "minimum search" (MS) properties, are theoretically necessary conditions of optimality. An equimemorized, or EM, recursive quantizer (RQ), in which the memories of the coder and the decoder are identical, ignores its coder's access to the past of the source. The MS property simply means that, given the input, the coder's choice of code-word is always the one whose decoded reproduction is closest to the input.

The third chapter returns to the investigation of the DPC's, and the fourth chapter illustrates the results on real-life design examples.

Contents

CHAPTER 1  The Fine-McMillan Recursive Quantizer Model
1.1  Source, Channel, Reproduction
1.2  The Linear Deltamodulator
1.3  The Definition of a Fine-McMillan Recursive Quantizer
1.4  The Design Problem
1.5  The Simple Quantizer
1.6  Theoretical Limits with Given Channel Capacity

CHAPTER 2  Structural and Design Problems of a Recursive Quantizer
2.1  The McMillan Structure Problem
2.2  Fine's Principle of Minimum Search
2.3  The Principle of Minimum Search and the Property of Equimemory
2.4  Optimality and the EM Property-the McMillan Structure Theorem
2.5  Strong-optimality, MS and EM Properties-the Reformulation of the McMillan Structure Theorem
2.6  The Proof of the Structure Theorem
2.7  Feed-forward Design for the Causal Case
2.8  Trellis Coders in Delayed Recursive Quantizers

CHAPTER 3  Differential Predictive Quantizers
3.1  Additive Decoding
3.2  Additive Decoding, MS and EM Properties-the Definition of the Differential Predictive Quantizer
3.3  A Misunderstanding Concerning the Predictor
3.4  Additive Decoding and the Feed-forward Principle

CHAPTER 4  Design Examples-Speech Compression
4.1  The Stationary Model of Speech
4.2  The Design of a DPC
4.3  The Design of a Fine-McMillan Type RQ

References
Appendix 1
Appendix 2
Appendix 3

CHAPTER 1  The Fine-McMillan Recursive Quantizer Model

In this chapter we shall formulate a source coding model which, we hope, will shed new light on the waveform coding techniques presently in use. Before we give the definition of the general model, however, we would like to look briefly (using the well-known deltamodulator as an example) at the type of problems and concepts that will be the main subject of this work. An important special case, the simple quantizer, will also be discussed. An analysis of the theoretical limits in terms of our model will close this chapter.

1.1 Source, Channel, Reproduction

We assume that the doubly infinite sequence of random variables (r.v.) $\{X_n\}_{-\infty}^{\infty} = \{\ldots, X_{-1}, X_0, X_1, \ldots\} = \{X_n\}$, the message-to-be-coded or source, is stationary and $E|X_0| < +\infty$. The coded message $\{P_n\}_{-\infty}^{\infty} = \{P_n\}$ is a sequence of $B$-valued r.v.'s, where $B = \{b^{(1)}, \ldots, b^{(M)}\}$ is the channel alphabet. The reproduction of $\{X_n\}$, or decoded message, $\{\hat{X}_n\}_{-\infty}^{\infty} = \{\hat{X}_n\}$ is also a sequence of r.v.'s.
The channel is assumed to be noiseless. The fidelity of the reproduction is defined and measured by the sequence $\{E[\psi(X_{n-d}, \hat{X}_n)]\}_{-\infty}^{\infty}$, where $\psi: R^1 \times R^1 \to R^+$ is the distortion function and $d$ is the delay of the system. If the sequence $\{(X_n, \hat{X}_n)\}_{-\infty}^{\infty}$ is stationary, then the single number $E[\psi(X_{n-d}, \hat{X}_n)]$, $n = 0, \pm 1, \ldots$, which by stationarity does not depend on $n$, qualifies the system. Finally, we assume that there is an $x^* \in R^1$ such that

$E[\psi(X_0, x^*)] < +\infty.$   (1.1.1)

1.2 The Linear Deltamodulator

The linear deltamodulator (LDM) may be considered as the prototype of the family of differential predictive coding (DPC) techniques which almost exclusively rule the field of waveform coding. A brief look at the LDM will give us an opportunity to point out, without going into unnecessary details, those, sometimes quite restrictive, properties of the DPC techniques which have, in our view, rigidified into a kind of dogma and prompted us to formulate a more general model.

Let the channel alphabet be $B = \{1, -1\}$, i.e., let the capacity of the noiseless channel be 1 bit. The coder of the LDM, defined as

$P_n = \mathrm{sgn}(X_n - c\hat{X}_{n-1}), \qquad n = 0, \pm 1, \ldots,$

where $c$ is some constant, codes the difference of the actual input and its "prediction" into 1 bit. The decoder, defined as

$\hat{X}_n = c\hat{X}_{n-1} + LP_n, \qquad n = 0, \pm 1, \ldots,$

where $L$ is also some constant, corrects the prediction according to the coded message (see Figure 1). The convention that in the first step the "previous reproduction" in the coder and the decoder are arbitrary but identical is an organic, though not explicitly stated, part of the definition of the LDM.

The DPC procedures are more general only insofar as they apply more complex rules of quantization for the prediction error as well as for the prediction. They are variants of the LDM with somewhat weaker constraints on their structure. The LDM, which is a causal Fine-McMillan RQ (see Section 1.3) with $\log_2 M = 1$ bit channel capacity, used to play an important role in the practice of data compression (see [1], [4], [7], [48], [60]). We will now point out a few distinctive features of, and formulate some questions about, the LDM.

(i) Note that the codeword sent by the coder of the LDM is always the one for which the decoder's answer is closest to the actual input. Although

Figure 1. The coder and decoder of the linear deltamodulator; each contains a one-step delay element that feeds back the previous reproduction $c\hat{X}_{n-1}$.
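To make the recursion above concrete, here is a minimal sketch in Python of the LDM coder and decoder, together with an empirical version of the fidelity measure of Section 1.1. The AR(1) source model, the squared-error choice of $\psi$, the constants c = 0.9 and L = 0.1, and all function names are illustrative assumptions, not taken from the book; the sketch only mirrors the two defining equations and the convention of a shared, arbitrary initial state.

```python
# Minimal simulation of the linear deltamodulator (LDM) recursion described
# above: the coder sends P_n = sgn(X_n - c*Xhat_{n-1}) over a noiseless binary
# channel, and the decoder forms Xhat_n = c*Xhat_{n-1} + L*P_n.
# The AR(1) source, the squared-error distortion and the values of c and L
# are illustrative choices, not taken from the book.
import random


def ldm_encode_decode(source, c=0.9, L=0.1, xhat0=0.0):
    """Run the LDM coder and decoder over a list of source samples.

    Returns (channel_symbols, reproductions). Because the channel is
    noiseless and both sides start from the same arbitrary state xhat0,
    the coder can track the decoder's reproduction exactly.
    """
    xhat_prev = xhat0          # shared initial "previous reproduction"
    symbols, reproductions = [], []
    for x in source:
        p = 1 if x - c * xhat_prev >= 0 else -1   # 1-bit channel symbol
        xhat = c * xhat_prev + L * p              # decoder's reproduction
        symbols.append(p)
        reproductions.append(xhat)
        xhat_prev = xhat
    return symbols, reproductions


def average_distortion(source, reproductions, delay=0):
    """Empirical estimate of E[psi(X_{n-d}, Xhat_n)] with psi = squared error."""
    pairs = zip(source[:len(source) - delay], reproductions[delay:])
    errs = [(x - xhat) ** 2 for x, xhat in pairs]
    return sum(errs) / len(errs)


if __name__ == "__main__":
    # Synthetic stationary AR(1) source, used only to exercise the recursion.
    random.seed(0)
    x, source = 0.0, []
    for _ in range(10_000):
        x = 0.95 * x + 0.1 * random.gauss(0.0, 1.0)
        source.append(x)

    symbols, reproductions = ldm_encode_decode(source)
    print("rate: 1 bit/sample, distortion:",
          average_distortion(source, reproductions))
```

Because the coder reconstructs the decoder's state exactly, it never uses the past of the source beyond the current sample; this is precisely the equimemory (EM) behaviour whose necessity is questioned in Chapter 2.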
