DTIC ADA233041: Auditory-Acoustic Basis of Consonant Perception. Attachments A thru I
REPORT DOCUMENTATION PAGE (Standard Form 298, OMB No. 0704-0188)

2. Report Date: 2 January 1991
3. Report Type and Dates Covered: Final Technical, 30 Sept. 1986 - 31 Dec. 1989
4. Title: Auditory-Acoustic Basis of Consonant Perception
5. Funding Numbers: G - AFOSR-86-0335
6. Author(s): James D. Miller
7. Performing Organization: Central Institute for the Deaf, 818 S. Euclid Avenue, St. Louis, MO 63110
9. Sponsoring/Monitoring Agency: Air Force Office of Scientific Research, Bolling Air Force Base, District of Columbia 20332

13. Abstract: New facts about the auditory-acoustic basis of consonant perception were discovered, as follows. 1) Plosive consonants can be distinguished from fricative consonants by the peak rate of rise of intensity at their onsets. 2) The acoustic characteristics that serve to identify plosive bursts and voiceless fricatives by place of articulation can be usefully described in terms of formants defined by a novel algorithm. 3) A connectionist software model that examines fourteen psychophysically relevant acoustic measures can classify any acoustic segment of speech by the location of its source in the talker's vocal tract. 4) Preliminary studies of sonorant and nasal consonants have identified the putative acoustic cues for their identification by human listeners and/or machines. 5) New methods for formant tracking were developed. 6) An important set of software tools was developed that allows further studies of the auditory-acoustic basis of consonant perception. 7) These tools have also aided in studies of vowels and diphthongs, whose characteristics are being elucidated under primary support from the National Institutes of Health.

15. Number of Pages: 20
17.-19. Security Classification (Report, Page, Abstract): Unclassified
20. Limitation of Abstract: UL
AUDITORY-ACOUSTIC BASIS OF CONSONANT PERCEPTION

James D. Miller
Central Institute for the Deaf
818 S. Euclid Avenue
St. Louis, MO 63110

22 January 1991

Final Technical Report for Period 30 September 1986 - 31 December 1989

Prepared for
Department of the Air Force
Air Force Office of Scientific Research (AFSC)
Bolling Air Force Base, DC 20332-6448

Final Technical Report
Grant AFOSR 86-0335
For the period 30 September 1986 - 31 December 1989

I. SUMMARY

In the three years of this grant, several projects were undertaken. All projects aimed at examining the acoustic characteristics of American English consonants and developing algorithms that would uniquely and correctly discriminate consonants, either as groups (e.g., stops vs. fricatives) or as single phonetic units (e.g., bilabial vs. alveolar stop).
In the process of carrying out these projects, large data sets of natural speech, produced by native speakers of American English, were collected, analyzed, and evaluated. On the basis of these results, two algorithms were developed: one for characterizing the spectra of voiceless stops and fricatives, and one for discriminating between stops and fricatives regardless of voicing. In another project, a connectionist software system was developed that classifies each acoustic segment (12 msec) of speech by source in the talker's vocal tract; this classification is essential for the appropriate spectral description of speech waveforms. Methods of automatic formant tracking were developed that perform well for certain sets of syllables; it is believed that, with further effort, these methods could be generalized to running speech. Additionally, preliminary descriptions of the defining acoustic characteristics of stop consonants (p,t,k,b,d,g), voiceless fricatives (s,sh,f,th,h), liquids and glides (w,l,r,j), and nasals (m,n,ng) were developed. Furthermore, in conjunction with work done under Grant NIDCD-5 R01 DC00296-06, several software packages were developed for data visualization within the model of the auditory-perceptual theory of speech perception proposed by Miller (1989) (section III.D.4). Parallel studies of vowels and diphthongs, supported primarily by the NIH, advanced our knowledge of their acoustic correlates and have led to the notion of auditory-acoustic target zones for steady states and for spectral glides.

II. RESEARCH OBJECTIVES

The overall objective of the research under this grant, and under Grant NIDCD-5 R01 DC00296-06, is to provide a detailed description of the process by which a listener decodes the incoming acoustic signal and generates an internal phonetic representation of the speaker's utterance. This description, within the auditory-perceptual theory of phonetic recognition, should provide the basis for automatic, speaker-independent recognition of continuous speech. To this end, naturally produced utterances of the consonant sounds of English, divided into classes according to their source characteristics, were subjected to various analyses using computer-implemented techniques. In addition, a significant effort has been made to refine and further develop the visual aids which illustrate the hypothesized structure within the three-dimensional space of the auditory-perceptual theory.

III. STATUS OF THE RESEARCH

Introduction

The goal of this research project was to develop a detailed and comprehensive account of exactly how the human listener decodes the acoustic speech signal into a string of phonetic elements that then provide a basis for the recognition of the words of a spoken language. The eventual goal was to arrive at a description of this process so detailed that computer programs could exactly simulate it and then serve as a phonetic typewriter. The original proposal to the AFOSR presented a comprehensive theory describing this process and outlined an extensive five-year plan to accomplish the essentials of these objectives. The plan was conservatively developed, and an average annual budget of $320,000 was requested. The AFOSR approved our proposal, but annual funding was sharply reduced to about 40% of the requested amount, and the period of funding was reduced from five years to three years.
Furthermore, a requested extension after completion of the first three years of work was not granted, which severely damaged the success of the project: it became necessary to eliminate personnel, making it impossible to fully capitalize on the important gains made during the first three years. In spite of these problems, very significant advances were made during the three-year grant period. In sum, all of our research was consistent with our initial projection that a conceptual model of the processes whereby the human listener converts the acoustic signal into a string of phonetic elements could be successfully implemented. The actualization of such a system requires only a few more years of adequate effort.

The main achievements of the three-year grant period were as follows:

1) An algorithm was developed which allows automatic description of burst-friction sounds, that is, the bursts of plosive consonants, the aspirations of voiceless plosive consonants, and the voiced and voiceless fricatives, in terms of formant center frequencies in a manner consistent with the description of formants of vocalic sounds. A description of this algorithm is in a manuscript (Jongman and Miller - section III.A.2 of this report) and its application has been described in two presentations at scientific meetings (Miller and Jongman (1987), section III.B.4, and Jongman, A. (1987), "Locations of burst onsets of stops in the auditory-perceptual space," J. Acoust. Soc. Am. 82(S1), S82(A)). This algorithm marks the first systematic attempt at a simple description of burst-friction sounds in terms that are directly comparable to the description of vocalic sounds. Although experience with this algorithm suggests bases for its improvement, it has served very well to allow classification of the phonetic correlates of stop bursts and of fricatives.

2) Preliminary target zones were developed for voiceless fricatives and the bursts of voiceless stops. Using the algorithm described above, Weigelt and Miller conducted numerous preliminary analyses of the bursts of voiceless stops (P,T,K) and the spectra of voiceless fricatives (F,TH,S,SH,HH). Preliminary spectral target zones performed well in classifying these tokens.

3) An algorithm was developed that correctly distinguishes plosive consonants from fricative consonants. This work is described in a publication (III.A.3) and in a manuscript submitted for publication (III.A.4). This algorithm finds the peak rate of rise in intensity at the onset of the consonant and uses this to classify the consonant as a plosive or a fricative. The performance level of the algorithm is about 96% correct.

4) Essential to the correct analysis of speech is the ability to classify each acoustic segment of speech by the location of the sound source in the talker's vocal tract. Each segment of speech falls into one of four categories: no source (silence), glottal source, supraglottal source, or both glottal and supraglottal sources. These four categories are referred to as Silence, Glottal Source, Burst-Friction, and Mixed. In a completed dissertation (III.A.5), Sadoff described a connectionist software system that correctly classified the segments in continuous speech with about 90% accuracy. This accomplishment marks an important step forward in the progress toward automatic speech analysis and recognition.

5) Preliminary target zones were developed for the nasal consonants [M,N,NG] and for the glides and liquids [L,W,R,Y].
Based on measurements of 48 tokens of the nasals, produced by two male and two female talkers following four different vowels, it was possible to develop target zones in the auditory-perceptual space for the steady-state portions of the nasal consonants. Based on measurements of 96 tokens produced by one male and one female talker, preliminary target-zone locations were developed for L, W, R, and Y. In each case these target zones overlap to some degree with those of other phonetic elements, and it appears that additional criteria need to be used to distinguish these. The criteria suggested by these preliminary studies appear to be rather easily defined. To distinguish L's from the vowel [UH] and the consonant [W], which sometimes have similar spectral shapes, one notes the deep spectral valley around 2200 Hz and the associated narrowing of the bandwidth of F3 in L's as opposed to UH's and W's. W's, which are often spectrally similar to the vowels [UW] and [UH] and to the consonant [L], exhibit characteristic patterns of formant change (spectral glides) which serve to distinguish them from the vowel sounds, and they do not have the spectral dip and narrowing of F3 characteristic of L's. The consonant [Y] is often similar in spectral shape to the vowels [IY] and [IH]. However, it can be distinguished from these by characteristic patterns of formant change that result in a "loop" in the spectral path as it travels through the auditory-perceptual space. The consonant R appears to be spectrally nearly identical to the vowel ER. The distinction sometimes appears to be based on factors such as position within the syllable and duration, and at other times appears to be irrelevant. Some of these results have been reported at a scientific meeting (Miller, 1988 - section III.B.1 of this report).

6) Important initiatives were made in a) automatic formant tracking (III.B.3) and b) the calculation of the loudness of speech and the effects of spectral goodness and loudness in determining the perceived characteristics of speech sounds (III.B.2).

7) A whole set of software necessary for our approach to the study of speech was developed. For descriptions, see section III.D of this report.

8) Support from AFOSR assisted in the completion of several studies of vowels and diphthongs - see sections III.E and III.F of this report.

In summary, under AFOSR support we were able to develop methods and gather data that gave validity to the PI's auditory-perceptual approach to human speech perception and automatic speech recognition. Only limitations of time and funding seem to have slowed the attainment of the objectives of the original proposal.

Organization of the Report

This report is organized into sections pertaining to research conducted under funding from this grant, under joint funding from NIDCD-5 R01 DC00296-06, and research in general by the participants. The sections also are divided into projects completed (published or submitted), projects reported at scientific meetings, and projects in progress, where applicable.

A. Projects completed and published, accepted for publication, or submitted. (AFOSR Funding)

1. Chang, H.M. (1987). "SWIS: See what I say, a speaker-independent word recognition system by phoneme-oriented mapping on a phonetically-encoded auditory-perceptual speech map," Ph.D. dissertation, Washington University, St. Louis, MO.
This dissertation reports the development of a prototype speaker-independent word recognition system based on a new model of phoneme perception proposed in the auditory-perceptual theory of phonetic recognition (see Appendix A). The system is able to map acoustical parameters directly to phonetic events by means of a phonetically-encoded auditory-perceptual map designed for a selected set of phonemes derived from all the stressed syllables in an 11-word digit vocabulary. For recordings of the digit vocabulary from 21 new talkers (11 males and 10 females), a 61.5% score for phoneme recognition is obtained, while the phoneme insertion rate is 4.5%. A 53% score is obtained for word recognition by using only the phonetic information derived from the vocalic segments of a stressed syllable in a word utterance. After reviewing the sources of success and failure, it is argued that a phoneme-oriented system based on the broad framework established in the theory and implemented in this project could be generalized to provide a solution to speaker-independent isolated word recognition for a medium-sized vocabulary of 50 to 200 words.
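To give a concrete flavor of phoneme-oriented word recognition over a small digit vocabulary, the following Python sketch matches a recognized phoneme string to the closest of the 11 digit words by edit distance. This is an illustration only, not a reconstruction of Chang's system: the ARPAbet-style phoneme templates and the distance-based matcher are assumptions introduced here.

    # Hypothetical sketch: match a recognized phoneme string to an
    # 11-word digit vocabulary. Templates and matcher are illustrative.
    DIGIT_TEMPLATES = {
        "zero":  ["Z", "IY", "R", "OW"],
        "one":   ["W", "AH", "N"],
        "two":   ["T", "UW"],
        "three": ["TH", "R", "IY"],
        "four":  ["F", "AO", "R"],
        "five":  ["F", "AY", "V"],
        "six":   ["S", "IH", "K", "S"],
        "seven": ["S", "EH", "V", "AH", "N"],
        "eight": ["EY", "T"],
        "nine":  ["N", "AY", "N"],
        "oh":    ["OW"],
    }

    def edit_distance(a, b):
        """Levenshtein distance between two phoneme sequences."""
        d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
             for i in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return d[len(a)][len(b)]

    def recognize_word(phonemes):
        """Pick the digit whose template is closest to the phoneme string."""
        return min(DIGIT_TEMPLATES,
                   key=lambda w: edit_distance(phonemes, DIGIT_TEMPLATES[w]))

    # A noisy recognition of "five" (V misheard as B) still maps correctly.
    print(recognize_word(["F", "AY", "B"]))   # -> "five"

In SWIS itself the mapping runs through a phonetically-encoded auditory-perceptual map rather than string matching; the sketch only illustrates how a small vocabulary can tolerate imperfect phoneme recognition.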
2. Jongman, A. and Miller, J.D. "Method for the location of burst-onset spectra in the auditory-perceptual space: A study of place of articulation in voiceless stop consonants," J. Acoust. Soc. Am. - in press.

A method for distinguishing burst onsets of voiceless stop consonants in terms of place of articulation is described. Four speakers produced the voiceless stops in word-initial position in six vowel contexts. A metric was devised to extract the characteristic burst-friction components at burst onset. The burst-friction components, derived from the metric as sensory formants, were then transformed into log frequency ratios and plotted as points in an auditory-perceptual space (APS). In the APS, each place of articulation was seen to be associated with a distinct region, or target zone. The metric was then applied to a test set of words with voiceless stops preceding ten different vowel contexts as produced by eight new speakers. The present method of analyzing voiceless stops in English enabled us to distinguish place of articulation in these new stimuli with 70% accuracy.
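The following Python sketch illustrates the shape of this procedure under stated assumptions: burst-onset sensory formants are converted to log frequency ratios (the coordinates x = log(BF3/BF2) and y = log(BF2/SR) given later in section III.B.4) and the resulting point is tested against rectangular target zones. The sensory reference value, the use of base-10 logarithms, and all zone boundaries are invented for illustration; the published metric and zones are not reproduced here.

    import math

    # Hypothetical rectangular target zones in the (x, y) plane of the APS;
    # the study's actual preliminary zones are not reproduced here.
    PLACE_ZONES = {
        "labial":   ((0.00, 0.15), (0.40, 0.80)),
        "alveolar": ((0.10, 0.30), (0.90, 1.30)),
        "velar":    ((0.25, 0.55), (0.60, 1.00)),
    }

    def burst_to_aps(bf2_hz, bf3_hz, sr_hz=168.0):
        """x = log(BF3/BF2), y = log(BF2/SR); base-10 logs and the SR
        value are assumptions for this sketch (cf. section III.B.4)."""
        return math.log10(bf3_hz / bf2_hz), math.log10(bf2_hz / sr_hz)

    def classify_place(bf2_hz, bf3_hz):
        """List every zone containing the burst-onset point; more than
        one hit means the zones overlap and need further criteria."""
        x, y = burst_to_aps(bf2_hz, bf3_hz)
        return [place for place, ((xl, xh), (yl, yh)) in PLACE_ZONES.items()
                if xl <= x <= xh and yl <= y <= yh]

    # A made-up alveolar-like burst: BF2 near 2800 Hz, BF3 near 4400 Hz.
    print(classify_place(2800.0, 4400.0))   # -> ['alveolar']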
While the exact conditions for this activation are not yet known, it appears that certain aspects of the behavior of a perceptual path in relation to the perceptual target zones can determine the phonetic transcription of a syllable. [Work supported by NINCDS and AFOSR. ] 2. Miller, J.D., Sadoff, S.J., and Veksler, M.R. (1988). "Sensory-perceptual transformations in speech analysis." J. Acoust. Soc. Am., 83, S70(A). In Miller's auditory-perceptual theory of phonetic recognition, the sensory effects of sequences of glottal-source spectra, burst-friction spectra, and silences are integrated into a unitary perceptual response by the sensory-perceptual transformations. The mathematical and computational implementations of the hypothesized transformations will be described. One transformation is applied to spectral pattern or shape and is implemented formant by formant. For example, the center frequency of a perceptual formant is calculated by second-order difference equations from the center frequencies, loudness, and bandwidths of the corresponding sensory formants of glottal-source and burst-friction spectra, when either or both are present, and from similar values for a neutral vocal tract during silence. The loudness of the perceptual response, which is calculated as the integrated loudness of the burst-friction and glottal- source inputs, decays slowly after the cessation of sensory input and, in this way, an audible perceptual response can be maintained during brief Page 6 periods of silence. Examples of application of these concepts to a variety of speech sounds, such as stops and fricatives, will be illustrated. [work supported by NINCDS and AFOSR. ] 3. Fingerhut, J.A. and Miller, J.D. (1989). "Automatic correction of formant tracks," J. Acoust. Soc. Am., 86($1), S124 (ASA meeting, Fall 1989). Since commercially available formant trackers are often inconsistent, much hand-editing is required to correct their outputs. A software package is currently being developed that may reduce this problem. Two female and two male speakers were recorded producing ten vowels in S-vowel-T context, each vowel twice. Boundaries between burst-friction, glottal-source, and silent segments are located using zero crossing and rms energy measurements. Pitch and formant-frequency values are extracted using the API and SGM commands of ILS. Our software corrects the pitch contour and calculates a sensory reference from it (J.D. Miller, J. Acoust. Soc. Am. 85, 2114-2134 (1989)]. Rules based on the relation between peak frequencies and this sensory reference are utilized to label the peaks as formants and the resulting formant tracks are then low-pass filtered. In this way, satisfactory tracking is obtained for all 80 syllables. (Work supported by AFOSR and NIDCD. ] 4. Miller, J.D. and Jongman, A. (1987). "Auditory-perceptual approach to stop consonants." J. Acoust. Soc. Am., 82(Sl), S82(A). Using natural tokens and the synthetic stimuli of Abramson and Lisker [6th Internat. Congr. Phonet. (1970)], the auditory-perceptual theory shall be applied to syllable-initial stop consonants. The burst-friction components of each token are analyzed for the locations of their second and third sensory formants, BF2 and BF3. These are located in the auditory-perceptual space (APS) by the formulas: x - log(BF3/BF2) and y - log(BF2/SR), where SR is the sensory reference. The glottal-source components are then analyzed for their sensory formants: SF1, SF2, and SF3. 
5. Sadoff, S.J. (1990). "Classifying speech into silence, glottal source, burst friction, or mixed categories," Sc.D. dissertation, Washington University, St. Louis, MO.

The primary goal of this research is to automatically segment speech into four acoustic categories based on the location of the sound sources in the human vocal tract: silence (S), glottal source (G), burst friction (B), and mixed (M). A multidisciplinary approach has been taken in an attempt to solve this interesting and important problem in speech processing. In addressing this problem, knowledge and techniques from many fields (including, but not limited to, digital signal processing, artificial intelligence, pattern recognition, neural networks, psychophysics, speech perception, speech production, and the acoustics of speech) have been applied. A connectionist system has been implemented and trained on 34 seconds of continuous speech from a female speaker. When tested on an additional 87 seconds of speech from the same speaker, the classification was 90.0 percent correct, and when tested on 106 seconds of speech from a male speaker, the accuracy was 88.8 percent.
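As an illustration of how such a four-way connectionist classifier might be structured, here is a minimal forward-pass sketch in Python. The fourteen-measure input follows the report's abstract, but the single hidden layer, its size, and the random weights are hypothetical; Sadoff's actual architecture and training procedure are not described here.

    import numpy as np

    CLASSES = ["Silence", "Glottal Source", "Burst-Friction", "Mixed"]

    rng = np.random.default_rng(0)
    # 14 acoustic measures per 12-ms segment (per the report's abstract);
    # one hidden layer of 16 units is assumed, and the untrained random
    # weights merely stand in for a trained network.
    W1, b1 = rng.normal(size=(16, 14)), np.zeros(16)   # input -> hidden
    W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)     # hidden -> output

    def classify_segment(features):
        """Map one 14-element feature vector to a source category."""
        h = np.tanh(W1 @ features + b1)        # hidden activations
        logits = W2 @ h + b2
        p = np.exp(logits - logits.max())      # softmax over 4 classes
        p /= p.sum()
        return CLASSES[int(np.argmax(p))]

    # A random vector stands in for real measurements of one segment.
    print(classify_segment(rng.normal(size=14)))

In practice the weights would be fitted to hand-labeled segments; the report states about 90 percent accuracy on held-out speech.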

B. Projects completed and reported at scientific meetings (other than those under "A"). (AFOSR Funding)

1. Miller, J.D. (1988). "Auditory-perceptual analysis of selected syllables," J. Acoust. Soc. Am., 84, S154(A).

Analyses of consonant-vowel syllables (CVs) in terms of the auditory-perceptual theory of phonetic recognition will be presented. Examples of CVs will include a voiceless stop, a voiceless fricative, a nasal, and an approximant paired with monophthongal vowels. Spectral analyses are used to locate the formant peaks and to track these during the course of the syllable, producing a sequence of spectra, one for each ms of waveform. Formant and F0 information from these sequences is then converted into sensory and perceptual paths in the theory's auditory-perceptual space. This space contains subspaces, called perceptual target zones. The activation of a zone results in the output of a phonetic code. While the exact conditions for this activation are not yet known, it appears that certain aspects of the behavior of a perceptual path in relation to the perceptual target zones can determine the phonetic transcription of a syllable. [Work supported by NINCDS and AFOSR.]

2. Miller, J.D., Sadoff, S.J., and Veksler, M.R. (1988). "Sensory-perceptual transformations in speech analysis," J. Acoust. Soc. Am., 83, S70(A).

In Miller's auditory-perceptual theory of phonetic recognition, the sensory effects of sequences of glottal-source spectra, burst-friction spectra, and silences are integrated into a unitary perceptual response by the sensory-perceptual transformations. The mathematical and computational implementations of the hypothesized transformations will be described. One transformation is applied to spectral pattern or shape and is implemented formant by formant. For example, the center frequency of a perceptual formant is calculated by second-order difference equations from the center frequencies, loudnesses, and bandwidths of the corresponding sensory formants of glottal-source and burst-friction spectra, when either or both are present, and from similar values for a neutral vocal tract during silence. The loudness of the perceptual response, which is calculated as the integrated loudness of the burst-friction and glottal-source inputs, decays slowly after the cessation of sensory input; in this way, an audible perceptual response can be maintained during brief periods of silence. Examples of the application of these concepts to a variety of speech sounds, such as stops and fricatives, will be illustrated. [Work supported by NINCDS and AFOSR.]

3. Fingerhut, J.A. and Miller, J.D. (1989). "Automatic correction of formant tracks," J. Acoust. Soc. Am., 86(S1), S124 (ASA meeting, Fall 1989).

Since commercially available formant trackers are often inconsistent, much hand-editing is required to correct their outputs. A software package is currently being developed that may reduce this problem. Two female and two male speakers were recorded producing ten vowels in S-vowel-T context, each vowel twice. Boundaries between burst-friction, glottal-source, and silent segments are located using zero-crossing and rms energy measurements. Pitch and formant-frequency values are extracted using the API and SGM commands of ILS. Our software corrects the pitch contour and calculates a sensory reference from it [J.D. Miller, J. Acoust. Soc. Am. 85, 2114-2134 (1989)]. Rules based on the relation between peak frequencies and this sensory reference are used to label the peaks as formants, and the resulting formant tracks are then low-pass filtered. In this way, satisfactory tracking is obtained for all 80 syllables. [Work supported by AFOSR and NIDCD.]
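A toy Python sketch of this correction step follows: spectral peaks are labeled as formants by rules tied to a sensory reference (SR), and the labeled tracks are then smoothed. The SR-relative band edges and the three-point smoother are assumptions for illustration, not the rules or filter of the actual package.

    import numpy as np

    def label_formants(peaks_hz, sr_hz):
        """Assign the lowest unused peak above each SR-relative band edge
        to F1-F3. The band edges are invented stand-ins for the rules."""
        edges = [1.0 * sr_hz, 4.0 * sr_hz, 10.0 * sr_hz]
        labeled = {}
        for name, lo in zip(("F1", "F2", "F3"), edges):
            candidates = [p for p in sorted(peaks_hz)
                          if p >= lo and p not in labeled.values()]
            if candidates:
                labeled[name] = candidates[0]
        return labeled

    def smooth_track(track):
        """Three-point moving average as a crude low-pass over a track
        (note: this simple version zero-pads the endpoints)."""
        return np.convolve(np.asarray(track, float), np.ones(3) / 3.0,
                           mode="same")

    peaks = [300.0, 900.0, 2200.0, 3100.0]   # peaks from one analysis frame
    print(label_formants(peaks, sr_hz=180.0))
    # -> {'F1': 300.0, 'F2': 900.0, 'F3': 2200.0}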

4. Miller, J.D. and Jongman, A. (1987). "Auditory-perceptual approach to stop consonants," J. Acoust. Soc. Am., 82(S1), S82(A).

Using natural tokens and the synthetic stimuli of Abramson and Lisker [6th Internat. Congr. Phonet. (1970)], the auditory-perceptual theory shall be applied to syllable-initial stop consonants. The burst-friction components of each token are analyzed for the locations of their second and third sensory formants, BF2 and BF3. These are located in the auditory-perceptual space (APS) by the formulas x = log(BF3/BF2) and y = log(BF2/SR), where SR is the sensory reference. The glottal-source components are then analyzed for their sensory formants SF1, SF2, and SF3. These are then located in the APS by the equations x = log(SF3/SF2), y = log(SF1/SR), and z = log(SF2/SF1). In this way, a sensory path comprised of burst-friction points and glottal-source points is generated. Next, a sensory-perceptual transformation is applied to generate a unitary perceptual path. Since the sensory-perceptual transformation is reactive, the perceptual path overshoots the burst-friction onset of the token and enters a physically unrealizable octant of the APS, wherein the perceptual target zones for the stops are located. Distinct target zones are estimated for the stops [p,t,k,b,d,g], for h-like aspiration, and for voice bars. [Supported by AFOSR and NINCDS.]
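To illustrate how a reactive second-order transformation can produce such overshoot (cf. the difference equations in III.B.2), here is a minimal Python sketch with invented coefficients: an underdamped second-order tracker with unity DC gain overshoots an abrupt step in a sensory formant track before settling.

    def perceptual_track(sensory, a1=1.60, a2=-0.68):
        """y[n] = a1*y[n-1] + a2*y[n-2] + (1 - a1 - a2)*x[n]; with these
        invented coefficients the poles are complex with magnitude about
        0.82, so the tracker is stable but underdamped."""
        gain = 1.0 - a1 - a2            # unity DC gain: y settles on x
        y = [sensory[0], sensory[0]]    # start at rest on the input
        for x in sensory[2:]:
            y.append(a1 * y[-1] + a2 * y[-2] + gain * x)
        return y

    # A step in a sensory formant track, like an abrupt burst onset:
    step = [1000.0] * 5 + [2500.0] * 25
    track = perceptual_track(step)
    print(max(track) > 2500.0)   # True: the path overshoots the target

Because the response is underdamped, an abrupt onset carries the tracked value past the sensory target before it settles, consistent with the abstract's description of the perceptual path entering regions beyond the sensory points.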

C. Software development. (Joint AFOSR & NIDCD Funding)

We have made considerable progress in developing software for the implementation of the theory on computers, using both the Evans and Sutherland three-dimensional graphics terminal and regular two-dimensional terminals. Below we report the work of the last two years: the first year sponsored by the NIH Grant (5 R01 DC00296-06), and the second year jointly supported by the NIH Grant and the AFOSR Grant that is the subject of this report.