Brigham Young University
BYU ScholarsArchive

Theses and Dissertations

2016-03-01

The Effect of Prompt Accent on Elicited Imitation Assessments in English as a Second Language

Jacob Garlin Barrows
Brigham Young University - Provo

BYU ScholarsArchive Citation
Barrows, Jacob Garlin, "The Effect of Prompt Accent on Elicited Imitation Assessments in English as a Second Language" (2016). Theses and Dissertations. 5654. https://scholarsarchive.byu.edu/etd/5654

The Effect of Prompt Accent on Elicited Imitation Assessments in English as a Second Language

Jacob Garlin Barrows

A thesis submitted to the faculty of Brigham Young University in partial fulfillment of the requirements for the degree of Master of Arts

Troy Cox, Chair
Wendy Baker-Smemoe
Dan P. Dewey

Department of Linguistics and English Language
Brigham Young University
March 2016

Copyright © 2016 Jacob Garlin Barrows
All Rights Reserved

ABSTRACT

The Effect of Prompt Accent on Elicited Imitation Assessments in English as a Second Language

Jacob Garlin Barrows
Department of Linguistics and English Language, BYU
Master of Arts

Elicited imitation (EI) assessment has been shown to have value as an inexpensive method for low-stakes tests (Cox & Davies, 2012), but little has been reported on the effect L2 accent has on test-takers' ability to understand and process the test items they hear. Furthermore, no study has investigated the effect of accent on EI test face validity. This study examined how the accent of the input audio files affected EI test difficulty, as well as test-takers' perceptions of such an effect. To investigate, self-reports of students' exposure to different varieties of English were obtained from a pre-assessment survey. A 63-item EI test was then administered in which English language learners in the United States listened to test items in three varieties of English: American English, Australian English, and British English. A post-assessment survey was then administered to gather information regarding the perceived difficulty of the accented prompts. A many-facet Rasch analysis found that accent affected item difficulty in the EI test (separation reliability coefficient = .98), with British English the most difficult and American English the easiest. Survey results indicated that students perceived this increase in difficulty, and ANOVAs between the survey and test results indicated that student perceptions of an increase in difficulty aligned with actual test performance. Specifically, accents that students were "Not at all Familiar" with resulted in significantly lower EI test scores than accents with which the students were familiar. These findings suggest that prompt accent should be carefully considered in EI test development.

Keywords: elicited imitation, language testing, accent

ACKNOWLEDGMENTS

I would like to thank my committee for their excellent feedback and support, especially Troy Cox for his mentoring and endless patience.
I would also like to thank Judson Hart and the other employees of the English Language Center who facilitated this research, as well as the other BYU students who volunteered their time to help. Finally, special thanks go to my wife, Sarah, for her support; her sister, Elizabeth, for her assistance with this research; and Sybil Lewis and David, Moira, and Marjorie Barrows, who paved the way.

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
1. Introduction
2. Review of Literature
2.1 Accent and Listening Comprehension
2.1.1 Effect of accent on listening comprehension.
2.1.2 Accent.
2.1.3 Accent and test face validity.
2.1.4 Listening comprehension.
2.2 Elicited imitation
3. Methodology
3.1 The EI Test
3.1.1 Selection of speakers.
3.1.2 Test design.
3.3 The Pre-test Survey
3.4 The Post-test Survey
3.6 Test Administration
3.7 Test Scoring
3.8 Data Analysis
4. Results
4.1 Research Question 1
4.2 Research Question 2
5. Discussion
5.1 Review of Findings
5.2 Implications
5.3 Limitations
5.4 Future Research
5.5 Conclusion
References
Appendix A

LIST OF TABLES

Table 1. Accent Rating of Speakers who Volunteered for this Study
Table 2. Pre-Test Survey Questions
Table 3. Demographic Information for Research Participants
Table 4. Number of Participants per Test Form
Table 5. MFRM Report for Accent
Table 6. MFRM Report for Test Form (Accent Order) Ordered by Logit
Table 7. Separation Reliability Statistics for Examinees, Accent, Items, and Test Form
Table 8. Ease of Understanding Accent Descriptive Statistics
Table 9. Descriptive Statistics of EI Observed Average Test Score by Accent Familiarity

LIST OF FIGURES

Figure 1. Sample Question from the Strength of Accent Survey
Figure 2. Geolocations of the 42 Australian Participants in the Strength of Accent Survey
Figure 3. Geolocations of the 42 American Participants in the Strength of Accent Survey
Figure 4. Geolocations of the 42 British Participants in the Strength of Accent Survey
Figure 5. Diagram of EI Test Designs
Figure 6. Vertical Scale of the Results of Many Facets Rasch Measurement
Figure 7. Results of Pre-Test Survey
Figure 8. Mean Scores with 95% Confidence Intervals of EI Test Accented Portions Based on Self-Reported Familiarity of Accent

1. Introduction

Globalization and modern communication and media technology are bringing new challenges to the field of language assessment. While students are increasingly likely to travel and study a major foreign language, they are also more likely to encounter new, challenging varieties of their target language. This is especially true of world languages, such as Spanish or English, which enjoy the privileged status of being studied and spoken internationally in many different contexts, including business, entertainment, and academics. English in particular has developed robust L1 and L2 varieties both regionally and nationally, such that a speaker or learner might be exposed to only one or two varieties. A student of Spanish in Europe, for example, might never have to communicate with a Mexican speaker and might never be exposed to media from Mexico. How, then, would a speaking or listening assessment accurately measure that student's ability if the prompts contained a Mexican variety of Spanish? Likewise, an Indian speaker of English might never have the opportunity to speak with a native speaker of an Inner Circle variety of English. How fairly would a test designed with British, American, or Australian English assess that speaker's ability?

This raises potential difficulties for assessing listening and speaking, as those who design tests for an international audience cannot make broad assumptions about a learner's background with a given variety of the language. Care must be taken to ensure that a listening or speaking assessment measures actual ability and is not affected by a learner's familiarity (or lack thereof) with the assessment variety. This problem is compounded by the mobility of many language students (who can come from practically any place or language background) and by innovative assessments that can be administered anywhere in the world via the internet.

While this is a challenge for all types of language assessment, it is a particular challenge for speaking and listening assessments, which typically require an interlocutor or audio prompts. Any audio prompt (or interlocutor's speech) is by necessity colored by the speaker's accent, which may add an additional layer of difficulty for those unfamiliar with that variety. As most interlocutors speak only a single variety, oral proficiency interviews risk putting some test-takers at a disadvantage; but even if an enterprising test designer included audio recordings from multiple varieties, they would still face the impossible task of selecting the perfect cocktail of varieties that would fairly assess all test-takers. To address this challenge, this research seeks to better understand the interaction between a learner's familiarity with regional varieties and their results on an elicited imitation (EI) speaking assessment.
EI is a relatively new testing technique that has the potential to expedite the assessment process and reduce cost (Cox, Bown, & Burdis, 2015). The design of EI assessments is fairly simple: students listen to a number of audio prompts and attempt to repeat verbatim what they hear. Their repetition is recorded and later graded for accuracy (see the scoring sketch below). With this type of assessment, many students can complete the test simultaneously in a computer laboratory setting and receive rapid feedback; alternatively, they can complete the test from nearly any location using a web-based EI test. The time and cost benefits of EI testing, as well as the flexibility of test administration, make it an attractive alternative to traditional oral proficiency interviews (OPIs) in some low-stakes situations, and a well-designed EI test can accurately predict OPI outcomes (Cox, Bown, & Burdis, 2015).

Another challenge of EI testing is face validity (Graham, Lonsdale, Kennington, Johnson, & McGhee, 2008; Moulton, 2012; Van Moere, 2012; Vinther, 2002), meaning that some language testing professionals and test-takers have difficulty seeing how a sentence repetition task could measure language proficiency. Anecdotes and comments from students suggest that
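To make the repeat-and-grade design concrete, the following is a minimal sketch of one common way a verbatim repetition can be scored once it has been transcribed: the proportion of prompt words the test-taker reproduces in order, computed from a longest-common-subsequence alignment. This is an illustrative assumption rather than the scoring procedure used in this study (that procedure is described in section 3.7), and names such as score_ei_item are invented for the example.

    # Hypothetical EI scoring sketch in Python. It assumes the test-taker's
    # repetition has already been transcribed to text; the LCS-based metric
    # is one common choice in the literature, not this study's rubric.

    def lcs_length(a, b):
        """Length of the longest common subsequence of two token lists."""
        # dp[i][j] holds the LCS length of a[:i] and b[:j].
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, tok_a in enumerate(a, 1):
            for j, tok_b in enumerate(b, 1):
                if tok_a == tok_b:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[-1][-1]

    def score_ei_item(prompt, response):
        """Score one EI item as the share of prompt words repeated in order."""
        prompt_words = prompt.lower().split()
        response_words = response.lower().split()
        if not prompt_words:
            return 0.0
        return lcs_length(prompt_words, response_words) / len(prompt_words)

    # A partially correct repetition earns partial credit:
    print(score_ei_item("the cat sat on the mat", "the cat sat on a mat"))  # ~0.83

An ordered-match metric of this kind rewards syntactically faithful repetition rather than isolated word recall, which is consistent with the rationale for EI as a proficiency measure; a production implementation would also need to normalize punctuation, spelling variants, and transcription conventions before comparison.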