ebook img

Multi-Block Interleaved Codes for Local and Global Read Access PDF

0.09 MB·
Save to my drive
Quick download
Download
Most books are stored in the elastic cloud where traffic is expensive. For this reason, we have a limit on daily download.

Preview Multi-Block Interleaved Codes for Local and Global Read Access

Multi-Block Interleaved Codes for Local and Global Read Access Yuval Cassuto∗, Evyatar Hemo∗, Sven Puchinger† and Martin Bossert† ∗Viterbi Electrical Engineering Department, Technion – Israel Institute of Technology, Israel †Institute of Communications Engineering, Ulm University, Germany [email protected],[email protected], {sven.puchinger,martin.bossert}@uni-ulm.de Abstract—We define multi-block interleaved codes as codes Feature2takescareofthefast-accessrequirement,andfeature 7 that allow reading information from either a small sub-block or 3 improves reliability with only a minor adjustment of read 1 from a larger full block. The former offers faster access, while performance in rare unfortunate instances. Note that focusing 0 the latter provides better reliability. We specify the correction on random sub-unit read performance is in line with real 2 capability of the sub-block code through its gap t from optimal minimumdistance,andlooktohavefull-blockminimumdistance applications that are much more sensitive to delayed reads n that grows with the parameter t. We construct two families of than writes. a suchcodeswhenthenumberofsub-blocksis3.Thecodesmatch The formal model that fits the above three features now J the distance properties of known integrated-interleaving codes, follows. Define a write block as N =mn symbols, where m is 5 but with the added feature of mapping the same number of 2 information symbols to each sub-block. As such, they are the the number of sub-blocks and n is the number of symbols in first codesthatprovideread accessin multiplesizegranularities each sub-block. A write unit of K =mk information symbols ] and correction capabilities. divides into m sub-units of size k each. In the write path, T K information symbols are encoded into N code symbols I . I. Introduction and written to the memory. In the read path, we need to s c return a sub-unit of k symbols by reading a sub-block of n Thetwo centralfeaturessoughtindata-storageapplications [ symbols. This operation is divided into decoding and reverse are extreme data reliability and fast data access. In data mapping, where the former corrects the errors/erasures in the 1 reliability we wish to avoid data loss in all conceivable v nsub-blocksymbols,andthelatterretrievesthek information circumstances, and in fast data access we want to read data 4 symbols from the corrected sub-block. If sub-block decoding with high throughput and low latency. Access considerations 8 failed to correct the errors/erasures, we allow reading the 1 often dictate implementing an error-correcting code over a entire write block and decode it as [N,K] code. We define an 7 fixed (and not too large) data unit, which degrades the cod- hN,K,mi multi-block interleaved code as a code supporting 0 ing performance and the resulting reliability. An especially these operationswith the parametersN,K,m. Ourobjectivein . 1 interesting instance of this happens in Flash storage, where a this paper is to construct hN,K,mi codes with good distance 0 group of multi-level memory cells is divided to smaller data properties for both individual sub-block decoding and full 7 units called pages. write-block decoding. 1 It is well known that coding over large block lengths gives : The first place to look for multi-block interleaved codes is v the best reliability for a given coding rate. But adding access within the literature on integrated-interleaving(I-I) codes[6], i performance to the considerations, block lengths are tightly X [5] (see also [1]), which provide one minimum distance for constrained by the granularity required for the read/write r sub-block codewords and a larger one for full codewords. a interface of the storage device. The standard approach of the While I-I codes give good (sometimes provably optimal) storage industry is to fix the coding block length to be the tradeoffsbetweensub-blockandfull-codeworddistances,they basic access unit of the device (e.g. a page of a few KB), cannot be readily used as multi-block interleaved codes. The and come up with the best code for this block length. A issue with I-I codes is that it is not known how to reverse- conflict between coding efficiency and access performance map a sub-block codeword to k information symbols. The thusemergeswheneverthestorageisrequiredtodeliverdataat encoder presented for I-I codes [1] maps different numbers fine accessgranularitiesthatimplyshortcodeblocks.Solving of information symbols to different sub-blocks, and it is not this conflict needs a multi-level read/write scheme with the clearhowtore-organizetheresultingmk×mngeneratormatrix following features: togetmfull-rankk×nsub-matrices.Becauseofthat,I-Icodes 1) Jointly writing m data (sub-)units(e.g. pages), each with fail to support the critical feature 2 in the read/write model. k symbols, to a single write block. Anotherrelatedcodingframeworkislocallyrecoverablecodes 2) Allowingrandomreadaccesstoasub-unit,i.e.,returning (LRC) [4], [7], [10]. LRC arefocusedonefficientlocalrepair its k symbols by physically reading a sub-block 1/m of individual code symbols, while here we are interested in smaller than the write block. local decoding of larger sub-blocks. The primary reason why 3) In cases where a sub-unit read fails due to excess errors knownLRCsarenotdirectlyapplicabletotheproposedmodel in the sub-block,readingthe fullwrite blocksucceedsin is that they partition the code coordinates to disjoint sets that retrieving the data. areeachanMDSlocalcode.Makingeachofthemsub-blocks an[n,k]MDSlocalcodedegeneratestheproblem,becauseall code, which we denote C′. Next we define the distances of parity symbols are local in this case. multi-block interleaved codes. Our main contributions in the paper are two code con- Definition2.[sub-block minimum distance] An hN,K,mi structions of multi-block interleaved codes providing flexible multi-blockcodeChassub-blockminimumdistanceδifthe tradeoffs between sub-block and full-codeword correction ca- sub-blockprojectedcodeC′hasminimumdistanceδ. pabilities.Theflexibilityisachievedbyintroducinganinteger The(full-block)minimumdistanceofanhN,K,mimulti-block parameter t specifying the gap of the sub-block code from code will be defined to be the usual minimum distance as minimum-distance optimality. As t grows, the constructed [N,K] block code. codesenjoybetter minimumdistancesfor the fullwrite-block codewords. This new view of the local vs. global correction A. Boundonthe minimumdistance ofmulti-blockinterleaved tradeoff through the t parameter allows their explicit joint- codes optimization, while in I-I codes this relation is much less Thefollowingisarestatementofaknowntheoremfrom[5], transparent (in I-I codes there is no notion of sub-block but posed in different notation (using the t parameter) that is dimension k, so t is not even defined). One family of codes, helpful for the constructions in the sequel. presentedin Section III, allowsimprovementin the full-block Theorem1. LetC beanhN = mn,K = mk,mi multi-block minimum distance (by growingt) reaching d =1.5(n−k+1). interleavedcodewithsub-blockminimumdistanceδ=n−k− An improvedconstruction, presented in Section IV, can grow t+1.Ift 6 (k−1)/(m−1)thentheminimumdistanceofCis thefull-blockminimumdistancefurtheruntild =1.6(n−k+1). boundedby The proposed constructions are given for m = 3 sub-blocks, d 6n−k+(m−1)t+1. (1) but can be extended to additional m with similar techniques. An optimal multi-block interleaved code is one meeting the The key idea toward having sub-blockreverse mappingis the bound(1)with equality.We notetwo interestingspecialcases specification of the codes through their generator matrices. of Theorem 1. For m = 1 we get the usual Singleton bound; This enables to carefully populate the generator matrix with for m=2 we get d6n−k+t+1 when t6k−1. The upper constituent Reed-Solomon (RS) generators such that all dis- bound reveals the tradeoff between the sub-block minimum tance properties are fulfilled simultaneously. Coincidentally, distanceandtheminimumdistanceofthecode.Theparameter both constructions achieve the same distance properties as t prescribesthegapofthesub-blockdistancefromoptimality, known I-I codes: the former meets 2-level I-I and the latter and in return increasing t improves the upper bound on the meets 3-level I-I. However, the constructed codes are not distance of the full-block code. reformulations of these known I-I codes: the resulting codes are different,andthere isno apparenttransformationfromthe B. Short review of Reed-Solomon codes I-I codes to our codes that adds the desired sub-block access The key building block for our constructions is Reed- features. A previous use of constituent RS generators was Solomon (RS) codes, which are now defined. in [3] to construct partial unit memory (PUM) convolutional Definition3.[Reed-Solomoncode]Letα beanelementofa codes [9], [8] (see also [2]). finitefieldGF(q)ofordern.Thendefinea[n,ℓ]Reed-Solomon II. DefinitionsandNotations codeasallpolynomialsc(x)obtainedas Throughout the paper we denote the set {a,a + 1,...,b} c(x)=i(x)g(x), by [a : b]. Similarly, [a : b]n denotes the same set, but wherei(x)isanarbitraryinformationpolynomialofdegree6 with element indices taken modulo n. For example, if n = 7, ℓ−1,and then [5 : 9] = [5 : 2] = {5,6,0,1,2}. The operation n n A \ B represents set difference between A and B. We use g(x)=(x−α−s)(x−α−(s+1))···(x−α−(s+n−ℓ−1)), (2) the term distance to refer to the Hamming distance between andsissomeintegerin[0:n−1]. vectors over the field GF(q). For notational convenience, the In simple words, an RS code is obtained by a generatorpoly- statementsinthepaperoncodeminimumdistancesarewritten nomialg(x)withn−ℓrootswhosereciprocalsareconsecutive as some d equalsomenumber,eventhoughin some caseswe powersof α. As a usefulcompactnotation,givenn andq, we only prove (the important part) that d is at least that number. define a RS code through its (cyclically) consecutive power We first define our principal object of study, which we call set. In this representation, the code defined by the generator multi-block interleaved code. in (2) is Definition1.[multi-blockinterleavedcode]An [N,K] linear RS([s: s+r−1] ), n codeCisahN,K,mimulti-blockinterleavedcodeifitscode where r = n−ℓ is called the redundancy of the code. It is coordinatesare partitionedto m disjoint sub-blocksof size well known that the minimum distance of RS([s: s+r−1] ) n = N/m each,andfromeachsub-blockwecanrecovera n is r+1. It will often be convenient to represent an RS code disjointsize-k= K/msub-unitofinformationsymbols. by the complement of its root power set, that is, For decoding a multi-block interleaved code we assume m RS([a:b] ),RS([0:n−1] \[a:b] ). projectionsofthecoordinates[1:N]todisjointsubsets[1:n], n n n [n+1 : 2n],... up to [(m−1)n+1 : mn]. We assume that the The dimension of the code RS([a : a+ℓ−1] ) is ℓ, and its n projectionofC ontoanyofthese subsetsgivesthesame [n,ℓ] minimum distance is n−ℓ+1. III. ConstructionofMulti-BlockInterleavedCodes the dimension of C is 3k, and its length is 3n. We defer the 3 discussion on encoding and decoding till after we prove the Our focus in the paper is on codes with m = 3, because minimum distances in the following. this is the most useful case for Flash-based storage devices with 8 = 23 representation levels. The constructions can be Proposition2. When t < k/2, C hassub-blockminimum 3 generalized to larger m values. For the specific case of m=3 distanceδ=n−k−t+1. we add the following definitions. Proof:Lookingatanylength-nsub-block,weobservefrom Definition4.[Di-minimum-distances] For i∈{1,2,3}, an the structure of G that the codeword projected on the sub- hN,K,3i multi-blockinterleaved code C has Di-minimum- block is a linear combination of codewords generated by distancediifthelowestweightofanycodewordwithexactlyi G1,G2,GI,GE. Noting the root power sets of these codes non-zerosub-blocksisdi. (the complements of the sets listed in Construction 1), we get: G1 → RS([t : n − 1] ), G2 → RS([2t : t − 1] ), Note that according to Definition 4, the minimum distance n n of a hN,K,3i code is mini∈{1,2,3}di. In addition to serving GI → RS([k : 2t − 1]n), and GE → RS([k + t : k − 1]n). If a power index lies in the intersection of these sets, then as upper bounds on the code minimum distance, the D- i every polynomial of a codeword in the sub-block code has minimum-distances have important operational meanings: the this power index as root. Taking the intersection of these sets D -minimum-distance gives the correction capability when 1 1 sub-blocksuffersmany errors/erasures,and the D -minimum- we obtain that the projected sub-block code is a 2 distance does the same when 2 sub-blocks suffer many er- RS([k+t:n−1] ) n rors/erasures. code, and thus its minimum distance is n−k−t+1. Construction1.LetC bedefinedbyageneratormatrixG = 3 Given the sub-block minimum distance of Proposition 2, the  G  1  GG2 ,whereG1,G2andG3arethek×3nmatricesgivenin tnieoxnt1lemismoaptsimhoawlswtihthatrtehsepDec1t-mtointhimeubmou-dnids1taonfceTohfeoCroenmstr1u.c- 3 Fig.1.NowwedefinethecomponentmatricesofFig.1;blank Lemma3.AcodeC fromConstruction1hasD -minimum- 3 1 rectanglesrepresentall-zerosub-matrices. distanced =n−k+2t+1,forallt<k/2. 1 • G1isageneratormatrixforthecodeRS([0:t−1]n). Proof: Denote the length 3k information vector as v, and • G2isageneratormatrixforthecodeRS([t:2t−1]n). partition it as • GIisageneratormatrixforthecodeRS([2t:k−1]n). v=(v ,v ,v ). • GEisageneratormatrixforthecodeRS([k:k+t−1]n). 1 2 3 Each sub-vectorv has length k, and we further partition it to j n n n v =(v ,v ,v ). j j,I j,1 j,2 k−2t GI The lengths of the m = 3 constituent vectors are from left to G = 1 right: k −2t,t,t. Consider a codeword of C with one non- t G1 GE 3 t G2 GE zero sub-block. Assume (wlog due to symmetry) that the left sub-block is non-zero. It is clear from the structure of G in Fig. 1 that all sub-vectorsexcept v must be all-zero. In this 1,I k−2t GI case, the codeword weight is at least the minimum weight of a non-zero codeword from RS([2t:k−1] ) (spanned by GI), G = n 2 t GE G1 which is n−k+2t+1 as required by the lemma statement. t G2 GE The behavior of the D2-minimum-distance as a function of t is quite different, as we now see. Lemma4.AcodeC fromConstruction1hasD -minimum- k−2t GI 3 2 distanced =2(n−k−t+1). G = 2 3 t GE G1 The proof of Lemma 4 is trivial from the fact that δ = t GE G2 n−k−t+1. Unfortunately,the construction does not exclude codewords with two minimum-weight sub-block codewords. Figure1. Generatormatricesforah3n,3k,3imulti-blockinterleaved The problem lies in the fact that both v and v are code C . 1,1 2,2 3 multiplied by the same matrixGE in the right column,which First note that the dimensions of the codes specified for may cancel their contribution to the right sub-block. We now G1,G2,GI,GE fit their correspondingsub-matrices in Fig. 1. summarize the distance properties in the following theorem It is straightforward to write a generator matrix for any code (proof immediate). of the type RS([a : a+ℓ −1] ), because the codewords of n this code are evaluations of polynomials of the form xaU(x), 1The bound is given on the minimum distance, but from the proof of whereU(x)isapolynomialofdegreeℓ−1orless.Asrequired, Theorem1itreadily applies tothe D1-minimum-distance aswell. Theorem5. A code C fromConstruction1 hassub-block Construction1)mapsk+tinformationsymbolstoeachoftwo 3 minimumdistanceδ = n − k − t + 1,D -minimum-distance sub-blocks, and k−2t to the third. 1 d =n−k+2t+1,andD -minimum-distanced =2(n−k−t+1). Fo1rt6(n−k+1)/4,C h2asminimumdistance2d =d (optimal), IV. AnImprovedConstructionwithLargerMinimum 3 1 Distance andfor(n−k+1)/4 < tithasminimumdistanced = d (not 2 optimal). To get codes hN,K,mi with larger minimum distances beyond what Construction 1 can achieve, we present the Similar to the case here, all the known I-I constructions following improved construction. In the sequel we define in the literature with optimal D -minimum-distance (given 1 s=t/2, assuming even t. t) suffer from the same problem of having codewords with two minimum-weightsub-blockcodewords.In[5]the authors Construction2.LetK bedefinedbyageneratormatrixG = 3 address this exact problem,but give no explicit constructions.  G  1 Tdheanimdp−li2ctatiinondo,fiTshtehoartemthe5re, iins paarrtaitciuolaorfth1ebteetrwmesen2tthine  GG2 ,whereG1,G2 andG3 arek × 3n matricesspecified 1 2 3 loss in d to the gain in d as we increase t. This high ratio in Fig. 2 as follows. The figure specifies 4s rows of each 2 1 compromises the code’s ability to address high error/erasure Gi,whicharepopulatedbythegeneratormatricesspecified eventsinbothoneandtwosub-blocks.Inournextconstruction below,orcombinationsthereof.Thefigureomitsthetopk−4s wesolvethisbyprovingamuchlowerratioof2/3.Butbefore rowsofeachGi thathosttheGI matrixgivenbelow,inthe that, we want to show the good property of Construction 1 correspondingsub-block2. having an extremely efficient reverse mapping from the sub- • G1isageneratormatrixforthecodeRS([0: s−1]n). block codeword to the k information symbols of the input • G2isageneratormatrixforthecodeRS([s:2s−1]n). sub-unit vj. We now write the encoding and reverse-mapping • G3isageneratormatrixforthecodeRS([2s:3s−1]n). functions explicitly. • G4isageneratormatrixforthecodeRS([3s:4s−1]n). Encoding: The encoder input is an information vector v = • GIisageneratormatrixforthecodeRS([4s:k−1]n). (v1,v2,v3), where each vj is a vector of k GF(q) symbols • GEisageneratormatrixforthecodeRS([k:k+s−1]n). holding the information sub-unit for sub-block j. The output • GFisageneratormatrixforthecodeRS([k+s:k+2s− of the encoderis simply the vector v·G, whichis a codeword 1] ). n ofC .ThematricesG1,G2,GI,GEarechosenasthestandard 3 polynomial-evaluation matrices of the respective RS codes n n n (mappingℓ polynomialcoefficients to n code symbols). For a v1,1 : G1 GF code RS([a : a+ℓ−1]n, the corresponding generator matrix v1,2 : G2 GE 4s mapsaninformationsub-vectorutoacodewordbyevaluating v1,3 : G3 GE thepolynomialU˜(x)= xaU(x)onn elementsofGF(q),where v1,4 : G4+G3 GE U(x) is the polynomialwhose coefficientsare the elements of v2,1 : G1 GF u, and it has degree at most ℓ−1. Note that this is not the v2,2 : G2 GE 4s way the RS code was defined in Definition 3, but it is true by v2,3 : GE G3 an equivalent definition of RS codes. v2,4 : GE G4+G3 Reversemapping:Thereverse-mappingoperationreturnsthe v3,1 : GF G1 information sub-unit vj from the codeword of sub-block j. v3,2 : GE G2 4s Denote by u(x) the polynomial whose coefficients are the v3,3 : GE G3 codeword symbols corresponding to u. From the property v3,4 : GE G4+G3 given in Definition 3, u(α−i) is zero for all i except those in left center right [a:a+ℓ−1] . Since these sets are not overlapping between n Figure2. Generatormatricesforanimprovedh3n,3k,3imulti-block G1,G2,GI,GE, we can take the combined sub-block code- interleaved code K . 3 wordpolynomialc(x)(whichisasumofdifferentpolynomials like u(x) above), and get that for each i∈[0 : k − 1], that Asinthepreviousconstructionthedimensionsofthecodes c(α−i) = u(α−i) for one of the sub-vectors u in v . Now we specified for G1,G2,G3,G4,GI,GE,GF fit their correspond- j can use the well known property of RS codes (inverse DFT) ing sub-matrices in Fig. 2. The code dimension of K3 is 3k, and obtain and its length is 3n. We prove the minimum distances in the following. U =n−1u(α−(a+l))=n−1c(α−(a+l)), l∈[0:ℓ−1]. l Proposition6. When t < k/2, K hassub-blockminimum 3 Since c(x) is given, we can find the coefficients of all the distanceδ=n−k−t+1. polynomials U(x) and thus all the vectors u. Proof: The proof is identical to Proposition 2: at each sub- Neither the I-I codes from [6] nor the ones in [5] have blockthereisacodewordofanRScodewithn−k−2s=n−k−t a known way to map k information symbols to each sub- roots with consecutive powers. block codeword, such that it is possible to retrieve these k symbols back from the sub-block codeword. The encoder specified forI-Icodes[1] (correspondingto the parametersof 2sameasinFig.1. Lemma7.ThecodeK hasD -minimum-distanced =n−k+ properties by lower bounding the minimum distances. How- 3 1 1 3t/2+1. ever, an alternative way is by giving an explicit erasure- decodingalgorithm,whichwedonow.First,supposethatone Proof:Recallthe partitionv=(v ,v ,v ).Nowwe partition 1 2 3 sub-blockhasn−k+3t/2erasuresandtheothertwohaveeach each v to j n−k−t erasures or less. From symmetry assume that the left vj =(vj,I,vj,1,vj,2,vj,3,vj,4). sub-block has the largest number. Then decoding the center sub-blockwitha[n,k+t]codegivesv ,...,v ,andsimilarly 2,1 2,4 The lengths of the constituent sub-vectors are from left to the right block gives v ,...,v . In addition, we get v as 3,1 3,4 1,1 right: k −4s,s,s,s,s. Consider a codeword of K with one 3 the reverse map of GF in the center sub-block, and then v 1,2 non-zero sub-block, for which we seek a lower bound on the by canceling v and v from the reverse map of GE in the 3,3 3,4 Hamming weight. Assume (wlog due to symmetry) that the center sub-block. Finally, we can find v +v by canceling 1,3 1,4 one non-zero sub-block is the left one. Then it is clear from v from the reverse map of GE in the right sub-block. Now 2,2 Fig. 2 that the only sub-vectors that can be non-zero are v , 1,I we can decode the left sub-block with a [n,k − 3t/2] code v ,andv . Settinganyothersub-vectorto non-zeroimplies 1,3 1,4 having contributions from only GI and G4. Second, suppose a non-zero codeword in at least one other sub-block. Further, thattwosub-blockshaveeachn−k−t/2erasuresandthethird ifv orv arenon-zero,thentheymustsatisfyv +v =0. 1,3 1,4 1,3 1,4 one hasn−k−t erasuresor less. From symmetryassume that Otherwisethecodewordintherightsub-blockwillbenon-zero the right sub-block has the smaller number. Then decoding aswell.Itcanbeseenthatthislastconditionimpliesthatonly the right sub-block with a [n,k+t] code gives v , and v 2,1 3,1 G4 contributes to the non-zero codeword, and together with (amongallv ).Nowaftercancelingthesetwowecandecode 3,l the contributionofGI fromv , we getn−k+3s consecutive 1,I each of the left and center sub-blockswith a [n,k+t/2] code roots and d =n−k+3t/2+1. 1 and recover their erased symbols. Lemma8.ThecodeK3hasD2-minimum-distanced2 = 2(n− V. Conclusion k−t/2+1). The constructions in this paper are the first known ones Proof: We use the definition of v from the proof of to offer read access in both sub-blocks and full blocks. The Lemma 7. Consider a codeword of K3 with two non-zero first construction has optimal D1-minimum-distance given t, sub-blocks, for which we seek a lower bound on the total but a limiting reduction in the D2-minimum-distance as t Hamming weight. Assume (wlog due to symmetry) that the grows. The second construction has slower growth of the D1- two non-zero sub-blocks are the left and center ones. Then minimum-distance, but in return also slower reduction in D2, it is clear from Fig. 2 that the only sub-vectors that can be and altogether a higher maximal minimum distance. Future non-zeroarev ,v ,v ,v ,v fromv ,andv ,v ,v , work is needed to extend these constructions, and find others 1,I 1,1 1,2 1,3 1,4 1 2,I 2,2 2,3 v from v . Other non-zero sub-vectors would imply a non- withsimilar tradeoffs,toadditionalmvalues.While thecodes 2,4 2 zero codeword on the right sub-block. The above restriction suffice to use finite fields with sizes as small as the sub-block on the sub-vectors that can be non-zero implies that there is size, further reducing the field size is an important future no contribution of F in the left sub-block codeword and no direction, e.g. through using BCH codes as the constituents. contributionofG1 in the center sub-blockcodeword.So each VI. Acknowledgement of the sub-block codewords has n−k− s consecutive-power roots, and thus d =2(n−k−s+1). This work was supported in part by the German-Israel 2 Foundation and by the Israel Science Foundation. CombiningProposition6 andLemmas7,8, we summarizethe distancepropertiesofConstruction2inthefollowingtheorem References (proof omitted). [1] M. Blaum and S. Hetzler, “Integrated interleaved codes as locally Theorem9. Acode K fromConstruction2hassub-block recoverable codes: properties and performance,” Int. J. Inf. Coding 3 Theory,vol.3,no.4,pp.324–344,January2016. minimumdistanceδ=n−k−t+1,D1-minimum-distanced1 = [2] M.Bossert,ChannelCodingforTelecommunications. Chichester,UK: n−k+3t/2+1,andD -minimum-distanced =2(n−k−t/2+1). Wiley,1999. 2 2 Fort 6 2(n−k+1)/5,K hasminimumdistanced = d ,and [3] U. Dettmar and U. Sorger, “New optimal partial unit memory codes 3 1 basedonextendedBCHcodes,”Electronicsletters,vol.29,no.23,pp. for2(n−k+1)/5<t 6(n−k+1)/2ithasminimumdistance 2024–2025, November1993. d =d . [4] P. Gopalan, C. Huang, H. Simitci, and S. Yekhanin, “On the locality 2 of codeword symbols,” IEEE Trans. Inf. Theory, vol. 58, no. 11, pp. Theorem9demonstratesthevalueofConstruction2:itallows 6925–6934, November2012. [5] J. Han and L. Lastras-Montano, “Reliable memories with subline ac- to use large values of t that give minimum distances higher cesses,”inIEEEInternationalSymposiumonInformationTheory(ISIT), than Construction 1 can reach. It can be calculated that when Nice,France, June2007. t > 2(n − k + 1)/7, Construction 2 has superior minimum [6] M. Hassner, K. Abdel-Ghaffar, A. Patel, R. Koetter, and B. Trager, “Integrated interleaving – a novel ECC architecture,” IEEE Trans. distance compared to Construction 1. From t=2(n−k+1)/7 Magn.,vol.37,no.3,pp.773–775,February2001. we can increase t in Construction 2 up to t = 2(n−k+1)/5, [7] G.Kamath,N.Prakash,V.Lalitha,andP.V.Kumar,“Codeswithlocal and getminimumdistance ashigh as d =1.6(n−k+1),while regeneration anderasure correction,” IEEETrans.Inf. Theory, vol. 60, no.8,pp.4637–4660,August2014. Construction 1 has a cutoff at d =1.5(n−k+1). [8] G. Lauer, “Some optimal partial-unit-memory codes (corresp.),” IEEE Erasure decoding: We chose to prove the code correction Trans.Inf.Theory,vol.25,no.2,pp.240–243,March1979. [9] L.-n.Lee,“Shortunit-memorybyte-orientedbinaryconvolutionalcodes having maximal free distance (corresp.),” IEEE Trans. Inf. Theory, vol.22,no.3,pp.349–352, May1976. [10] I.TamoandA.Barg, “Afamilyofoptimal locally recoverable codes,” IEEETrans.Inf.Theory,vol.60,no.8,pp.4661–4676, August2014.

See more

The list of books you might like

Most books are stored in the elastic cloud where traffic is expensive. For this reason, we have a limit on daily download.