Information and Communications Technology for Language Teachers. Module 4. Additional module PDF

114 Pages·2016·1.69 MB·English
DAVIES G. INFORMATION AND COMMUNICATIONS TECHNOLOGY FOR LANGUAGE TEACHERS

ICT4LT Module: Computer Aided Assessment (CAA) and language learning

Contents
- Aims
- Authors of this module
- 1. Introduction: What is Computer Aided Assessment? Which skills can be assessed?
- 2. Types of Computer Aided Assessment and the Common European Framework of Reference for Languages
- 3. Using a word-processor for marking and giving feedback
- 4. Reporting and recording students' progress
- 5. Using the Web to manage assessment
- 6. Modern Language Aptitude Testing (MLAT)
- 7. Plagiarism: detection, deterrence and avoidance
- Bibliography and references
- Websites
- Feedback and blog

Aims

The aims of this module are for the user to consider key issues in assessing language skills through ICT in order to be able to:
i. assess language learning outcomes when ICT is involved in the learning process,
ii. use appropriate technologies to assess progress and provide feedback,
iii. use Computer Aided Assessment (CAA) as an integral part of the language learning process.

This Web page is designed to be read from the printed page. Use File / Print in your browser to produce a printed copy. After you have digested the contents of the printed copy, come back to the onscreen version to follow up the hyperlinks.

Authors of this module

Terry Atkinson, Freelance Educational Consultant, UK/France.
Graham Davies, Editor-in-Chief, ICT4LT Website.

1. Introduction

Contents of Section 1
1.1 What do we mean by Computer Aided Assessment?
1.2 Formative and summative assessment
1.3 Which skills can be assessed?
1.4 Exercise or test?

1.1 What do we mean by Computer Aided Assessment?

Computer Aided Assessment (CAA) covers a range of assessment procedures and is a rapidly developing area as new technologies are harnessed. In essence, CAA refers to any instance in which some aspect of computer technology is deployed as part of the assessment process.
Some of the principal examples of CAA in language learning are:

- interactive exercises and tests completed on a computer: see Section 2.1
- use of computers to produce coursework, e.g. using a word-processor
- onscreen marking of students' word-processed writing: see Section 3
- using a spreadsheet or database to keep a record of students' marks: see Section 4.2
- use of email to send coursework to students and (for students) to receive marks and feedback: see Section 14, Module 1.5, headed Computer Mediated Communication (CMC)
- use of Web pages to set tasks for students and to provide tutor support: see Module 1.5 and Module 2.3
- use of plagiarism detection software: see Section 7

CAA is more than just a list of possible applications, however. Its importance is intimately bound up with raising achievement, since it can be argued that the role of ICT in raising achievement cannot be fully measured unless ICT is also used in the assessment process. Just as oral skills cannot easily be assessed by a written test, so there are ICT-specific language skills that cannot easily be assessed through pencil-and-paper exercises and tests. Consider the following examples:

- reading and replying to an email from an exchange partner
- planning a journey using a Web browser
- writing an essay using a word-processor

If students spend time practising the above activities on computers, the intention is to raise achievement in language learning at a general level, and this might well be picked up in a conventional test or examination. However, as the balance shifts, so that most of students' reading is online and most of their writing is computer-based, paper technology is unlikely to enable students to demonstrate their skills fully, particularly ICT-related ones such as using writing tools - spellchecker, thesaurus, etc. An integrated approach to teaching, learning and assessment is always likely to be more successful than a random approach.
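The spreadsheet/database idea in the list above can be sketched in a few lines. The following Python snippet (student names, task labels and the CSV layout are all invented for illustration, not any real mark-book format) records marks in CSV form - which any spreadsheet program can open - and computes a running average per student:

```python
import csv
import io
import statistics

def record_marks(rows):
    """Write (student, task, mark) rows to CSV text and return per-student averages."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["student", "task", "mark"])
    totals = {}
    for student, task, mark in rows:
        writer.writerow([student, task, mark])
        totals.setdefault(student, []).append(mark)
    averages = {s: statistics.mean(ms) for s, ms in totals.items()}
    return buf.getvalue(), averages

csv_text, averages = record_marks([
    ("Anna", "Listening test 1", 7),
    ("Anna", "Essay 1", 9),
    ("Ben", "Listening test 1", 6),
])
print(averages["Anna"])  # 8.0
```

In practice a teacher would more likely use the built-in functions of a spreadsheet package, but the principle - one row per assessed task, with summaries derived automatically - is the same.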
Thus, teaching methods, learning methods and assessment methods need to cohere if learners are to learn successfully and if valid and reliable results are to be achieved from assessment procedures. This is true whether or not ICT is used - extensive oral practice with no oral exam is still a problem in many classes, as is extensive use of computers with no opportunity to use computers as part of the assessment process. Moreover, the incentive for teachers to incorporate learning technologies into classwork is reduced if the examinations do not use these technologies or expressly prohibit their use.

1.2 Formative and summative assessment

Formative assessment: In general, CAA is used mainly for formative assessment rather than summative assessment because it is excellent for giving immediate feedback, e.g. in tests designed to measure students' progress in specific areas, either for self-assessment purposes or for the teacher, e.g. as in placement tests (see Section 1.4).

Summative assessment: There is a good deal of discussion at present regarding the use of CAA - also dubbed e-assessment - in examinations at the end of a course and in national examinations such as the GCSE examinations in England. It's a controversial topic and has been subjected to a good deal of media hype, with outrageous claims being made regarding its possible uses. To what extent do you think CAA can be used to carry out summative assessment? Jot down your immediate thoughts on this question and the concerns that you might have if your own students were to be assessed in this way. Then read Section 1.3, Which skills can be assessed? Go back to your list of concerns and see which ones have been answered fully, partly or not at all. Send a message to us, using our Feedback Form, if you still have concerns that have not been answered. Use of CAA for summative assessment is increasing, especially in higher education.
Trainee language teachers in England and Wales are now expected to pass computer-based skills tests in literacy, numeracy and ICT, so some of the readers of this module have already experienced summative CAA and will be familiar with the procedures that make it possible. These include:

- a supervised computer laboratory
- a log-in system
- an appointment system
- timed tests
- computer-based practice materials

The above measures can ensure that there is as much security involved as in paper-based exams, and the process is not too dissimilar to that used for carrying out orals. Perhaps a more important question is whether computer-based tests can really assess language skills and, if so, which skills are best assessed through CAA and which through other formats. These questions are addressed in the next section. Summative assessment is not likely to be widely implemented in the near future as there are still a number of concerns about its reliability, but it is likely that ingenious solutions and new technologies will bring about a much greater degree of summative assessment than is currently possible. Placement testing and adaptive testing are more likely to be introduced in some areas: see Section 2.3.

1.3 Which skills can be assessed?

Skill | Assessment by computer | Assessment of electronic output by a human being
Listening | The computer can assess a limited range of different types of responses to test comprehension. | Listening tests can be presented on a computer; students' answers can be stored electronically and assessed by a teacher. Self-assessment and peer assessment are also possible.
Speaking | Very limited as yet. Automatic Speech Recognition (ASR) software is developing rapidly but it is still too unreliable to be used in accurate testing. | Students can record their own voices on a computer for assessment by a teacher. Self-assessment and peer assessment are also possible.
Reading | The computer can assess a limited range of different types of responses to test comprehension. | Reading tests can be presented on a computer; students' answers can be stored electronically and assessed by a teacher. Self-assessment and peer assessment are also possible.
Writing | Very limited as yet, but spellchecking, grammar checking and style checking are possible, and some progress is being made in the development of programs that can assess continuous text. | Students' answers can be stored electronically and assessed by a teacher. Self-assessment and peer assessment are also possible.

Listening

At a basic level it is simple to assess listening comprehension in much the same way as it is possible to assess reading comprehension, e.g. with multiple-choice, drag-and-drop and fill-in-the-blank tests. If well designed, this form of assessment works effectively, and instant feedback can be offered to the student, which has a beneficial effect on learning. The main ways of assessing listening skills can be summarised as follows:

i. Multiple-choice, drag-and-drop and fill-in-the-blank tests with single words or very short sentences; but these types of tests cannot easily assess more open-ended aspects such as the ability to infer, and in multiple-choice tests students can get the answers right by guesswork.
ii. Completely open-ended answers cannot be assessed. Single-word answers or answers consisting of very short sentences can be assessed to a limited extent.

Despite these limitations, assessment of listening comprehension by computer can be of great value to students, offering a form of comprehensible input (Krashen 1985). Moreover, computer-based listening comprehension can combine sound with text, still images, video, animation and onscreen interactivity, thereby creating a much richer environment than is otherwise possible: see Module 2.2, Introduction to multimedia CALL, and Module 3.2, CALL software design and implementation.
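The "limited extent" to which single-word answers can be marked automatically usually comes down to normalising the student's input before comparing it with a list of accepted answers. A minimal Python sketch (the French item and the accepted answers are invented examples, not taken from any real test):

```python
import unicodedata

def normalise(answer):
    """Lower-case, trim, and strip accents so '  Ecole ' matches 'école'."""
    text = answer.strip().lower()
    text = unicodedata.normalize("NFD", text)          # split letters from accents
    return "".join(ch for ch in text if not unicodedata.combining(ch))

def mark_item(response, accepted):
    """Return (correct?, instant feedback) for one short-answer item."""
    if normalise(response) in {normalise(a) for a in accepted}:
        return True, "Correct!"
    return False, "Not quite - the expected answer was: " + accepted[0]

ok, feedback = mark_item("  Ecole ", ["école", "ecole"])
print(ok)  # True
```

Even this crude matching illustrates the limits the text describes: it recognises a known answer in a forgiving way, but it cannot judge an open-ended response it has never been told about.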
A measure of student control - ease of navigation, options to retry or move to a different section, to attempt different tasks or roles, etc. - is vital to ensure active participation. Good equipment is also vital: headphones, fast network/Internet access and/or networked CD-ROMs.

Speaking

Limited assessment of speaking skills is possible. Self-assessment and peer assessment can be managed if facilities are available (e.g. microphone and headphones) to allow students to record themselves and listen to the playback. A number of multimedia CD-ROMs have this feature: see Module 2.2. The Encounters series of CD-ROMs, for example, allows students to take part in a role-play by recording their own voices - and re-recording them until they are satisfied with the results - and then saving the whole role-play on disc, with their own voices slotted into the appropriate positions in the role-play, for assessment by a teacher. The Encounters series of CD-ROMs was produced by the TELL Consortium, University of Hull, and is available from Camsoft. To assess speaking skills solely by computer, using Automatic Speech Recognition (ASR), is a very complex task, and research in this area is developing rapidly. ASR can be motivating for students working independently, but computers are still not completely reliable as assessors. For further information on ASR see:

- Section 3.4.7, Module 2.2, headed CD-ROMs incorporating Automatic Speech Recognition (ASR)
- Section 4, Module 3.5, headed Speech technologies
- the software for automated testing of spoken English produced by Versant

Reading

At a basic level it is simple to assess reading comprehension in much the same way as it is possible to assess listening comprehension, e.g. with multiple-choice, drag-and-drop and fill-in-the-blank tests. If well designed, this form of assessment works effectively, and instant feedback can be offered to the student, which has a beneficial effect on learning.
The main ways of assessing reading skills can be summarised as follows:

i. Multiple-choice, drag-and-drop and fill-in-the-blank tests with single words or very short sentences; but these types of tests cannot easily assess more open-ended aspects such as the ability to infer, and in multiple-choice tests students can get the answers right by guesswork.
ii. Completely open-ended answers cannot be assessed. Single-word answers or answers consisting of very short sentences can be assessed to a limited extent.

More extended reading tasks are harder to set on computer. Onscreen reading of longer texts is in any case inadvisable. Research indicates that people read around 25%-30% more slowly from a computer screen. Web guru Jakob Nielsen writes:

Reading from computer screens is about 25% slower than reading from paper. Even users who don't know this human factors research usually say that they feel unpleasant when reading online text. As a result, people don't want to read a lot of text from computer screens: you should write 50% less text and not just 25% less since it's not only a matter of reading speed but also a matter of feeling good. We also know that users don't like to scroll: one more reason to keep pages short. [...] Because it is so painful to read text on computer screens and because the online experience seems to foster some amount of impatience, users tend not to read streams of text fully. Instead, users scan text and pick out keywords, sentences, and paragraphs of interest while skipping over those parts of the text they care less about. (Source: Be succinct! Writing for the Web, Alertbox, 15 March 1997.)

More recent research by Nielsen, in which the iPad and Kindle were examined, showed that the iPad measured at 6.2% lower reading speed than the printed book, whereas the Kindle measured at 10.7% slower than print. However, the difference between the two devices was not statistically significant because of the data's fairly high variability.
Thus, the only fair conclusion is that we can't say for sure which device offers the faster reading speed. In any case, the difference would be so small that it wouldn't be a reason to buy one over the other. But we can say that tablets still haven't beaten the printed book: the difference between Kindle and the book was significant at the p<.01 level, and the difference between iPad and the book was marginally significant at p=.06. (Source: iPad and Kindle reading speeds, Alertbox, 2 July 2010.) See Nielsen's other articles on Writing for the Web.

Extended texts are more likely to be print-based unless they are in hypertext format, i.e. separate pages linked together as on the Web, or CD-ROM reference materials. In the case of hypertext, the computer may be a suitable medium for assessing information-gathering techniques. The skills needed to track down documents, follow links within and between them and find specific extracts are of increasing importance in academic life, in commercial settings and in leisure time. These skills are, to an extent, generic rather than specific to any given language, although learners do need to know the key terms involved in searching for information. Many teachers are aware of the need to ensure that learners are equipped with the appropriate language skills for Web browsing, e.g. foreign-language terms for help, search, next page, OK, etc.

Writing

Limited assessment of writing skills is possible. It is fairly straightforward to program computers to assess the accuracy of single words and short sentences typed at the keyboard, and work on parsing students' typed responses, diagnosing errors and providing appropriate feedback is in progress: see Section 8 in Module 3.5, headed Parser-based CALL, and see Heift & Schulze (2003).
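To give a flavour of what error diagnosis involves - and why it is hard - here is a deliberately crude Python sketch that pattern-matches a single error type (article-noun gender agreement in French) against a toy lexicon. The vocabulary and feedback messages are invented for illustration; a real parser-based CALL system builds a full grammatical analysis rather than scanning adjacent word pairs:

```python
# Toy lexicon mapping nouns to their expected definite article (illustrative only).
KNOWN_GENDERS = {"livre": "le", "maison": "la", "table": "la"}

def diagnose(sentence):
    """Return feedback messages for article-noun gender errors we can spot."""
    words = sentence.lower().rstrip(".").split()
    feedback = []
    for article, noun in zip(words, words[1:]):   # scan adjacent word pairs
        expected = KNOWN_GENDERS.get(noun)
        if expected and article in ("le", "la") and article != expected:
            feedback.append(f"Gender: '{noun}' takes '{expected}', not '{article}'.")
    return feedback

print(diagnose("La livre est rouge."))
```

The sketch catches exactly one error type and nothing else - misspelt nouns, separated articles and every other construction sail past it - which is a fair miniature of why reliable automatic marking of free writing remains an open problem.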
There are also features in modern computer software that can be used within the assessment process, such as spellcheckers to enable self-assessment of spelling, and also grammar and style checkers, which - although still imperfect - do pick up many errors that students can use to self-correct, such as errors of gender or number: see the example below, which is a screenshot from Microsoft Word. See Section 6.1, Module 1.3, headed Spellcheckers, grammar checkers and style checkers. As for assessing continuous pieces of text, some progress has been made in developing programs that can roughly grade short essays.

Figure 1: Screenshot, Microsoft Word Spellchecker

The main use of computers in the assessment of writing is, currently, the use of onscreen marking as described in Section 3. You can read more about grammar and style checkers in:

- Section 6, Module 3.5, headed Human Language Technologies and CALL
- Section 7, Module 3.5, headed Linguistics and CALL

1.4 Exercise or test?

Computer-based exercises and tests often take the same kind of format. The essential difference between an exercise and a test is the purpose to which it is put. An exercise usually offers instant feedback to the learner and an opportunity to correct any errors that are made, whereas a test may offer little feedback to the learner apart from a raw score at the end of the test, or no feedback at all, e.g. where the results of the test might be stored for analysis by a teacher or examination body. Exercises are usually designed to offer the learner practice in specific areas and to motivate and encourage, whereas tests are usually designed to assess the learner's progress in specific areas, i.e. for self-assessment purposes, for the teacher or for an examination body. But sometimes these distinctions become blurred. The main kinds of tests include:

Placement tests: These are designed mainly to sort students into teaching groups so that they are approximately at the same level when they join the group.
Placement tests may take the form of adaptive tests (see below).

Diagnostic tests: These are designed to enable the learner or teacher to identify specific strengths and weaknesses so that remedial action can be taken. See Section 2.2.1 on DIALANG. Diagnostic tests may take the form of adaptive tests (see below).

Adaptive tests: See Section 2.3 on WebCAPE.

Achievement / attainment tests: These are usually more formal, designed to show mastery of a particular syllabus rather than as a means of motivating the learner or reinforcing specific language skills.

Proficiency tests: These are designed to measure learners' achievements in relation to a specific task which they are later required to perform, e.g. following a university course delivered in a language other than their mother tongue. Proficiency tests do not normally take account of any particular syllabus that has been followed. The driving test is a typical example of a proficiency test, i.e. it assesses whether you are competent to be in control of a car on public highways.

Aptitude tests: Such tests aim to predict how a student might perform in a specific subject or specific areas of a subject. See Section 6.

See the Linguanet Worldwide website: this project has recently undergone expansion to incorporate an interface in a number of new languages and addresses in particular the needs of adult learners and independent learners. The site includes advice on ways of assessing and improving one's current ability in different languages (including links to websites that offer diagnostic tests and placement tests), communicating electronically with other language learners and finding appropriate resources. A substantial online catalogue of language learning resources is also being built up. The Goethe-Institut offers an online placement test for learners of German. The BBC Languages website offers basic placement tests: choose the language first, then try the placement test.

2.
Types of Computer Aided Assessment and the Common European Framework of Reference for Languages

Contents of Section 2
2.1 Interactive exercises and tests
2.1.1 Feedback
2.1.1.1 Learning task
2.1.2 Exercise types
2.1.3 Sources of exercises
2.2 The Common European Framework of Reference for Languages
2.2.1 The DIALANG diagnostic testing project
2.2.2 WebCEF: collaborative assessment of oral language skills through the Web
2.3 Adaptive testing: WebCAPE
2.4 Other examples of tests

2.1 Interactive exercises and tests

When reading this section bear in mind the distinction that was made earlier between exercises and tests: see Section 1.4, headed Exercise or test?

2.1.1 The feedback loop

The feedback loop refers to the process through which the learner is given an opportunity to produce an answer and to receive feedback on his/her input. Feedback is essential to learning, but the nature of the feedback is also important, and the computer has certain advantages and disadvantages in providing that feedback. See:

- Section 7.2, Module 1.1, headed Feedback
- Section 1.2, Module 1.4, headed Interactivity
- Section 8, Module 2.5, How to factor feedback into your authoring

See also Bangs (2003). The two main advantages of computer-generated feedback are that it can be given (i) instantly and (ii) in a non-judgemental way. The major disadvantage is that the feedback given may not necessarily be adequately discriminatory or differentiated. It is possible to provide detailed, comprehensive feedback, but this is very time-consuming. Consider a multiple-choice test in which four answers are given for each question. There are lots of software packages that allow the authoring of multiple-choice tests of this sort and for specific feedback to be written in for each answer. However, constructing such a test takes a significant amount of time, both in thinking through the details of the feedback and in typing it all in: see Module 2.5, Introduction to CALL authoring programs.
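The per-answer feedback just described can be sketched in a few lines of Python. The question, the distractors and all the feedback texts below are invented examples of the kind of material an author would have to write for every single item - which is exactly where the time goes:

```python
# One multiple-choice item; every option carries its own specific feedback.
QUESTION = {
    "prompt": "'Je suis allé ___ cinéma.' Choose the missing word:",
    "options": {
        "a": ("au", "Correct: à + le contracts to 'au'."),
        "b": ("à le", "'à le' always contracts to 'au'."),
        "c": ("en", "'en' is not used with places like 'cinéma'."),
        "d": ("dans", "'dans le cinéma' would change the meaning."),
    },
    "answer": "a",
}

def check(question, choice):
    """Return (correct?, option-specific feedback) for one selection."""
    _, feedback = question["options"][choice]
    return choice == question["answer"], feedback

print(check(QUESTION, "b"))
```

The program logic is trivial; the authoring effort lies entirely in the four feedback strings, and that effort multiplies by the number of items in the test.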
Despite this disadvantage, the two main advantages of feedback do bring excellent learning opportunities which can be authored by anyone with basic computer literacy. Heather Rendall has shown how a very simple program in which jumbled sentences have to be re-ordered can be used to develop grammatical insight: see the report on her research project in Section 5, Module 1.4. See also Rendall (2001). The immediate assessment that the learner receives provides the key to learning. Consider the following example:

the talks boys teacher to the

The task is to enter the text in the correct order, i.e.

The teacher talks to the boys

Consider what the learner must know in order to get this correct: Meaning? Word order? Verb forms? If the learner gets the answer wrong, he/she will be presented with the correct answer immediately. Whilst this is not the most sophisticated form of feedback, Rendall's research points towards the power of this type of feedback in developing an instinctive grasp of the workings of the language. Her subjects justify their thinking by saying "it sounds right", so the effect of the feedback is to develop a sense of what is linguistically correct. This takes lots of practice, but with sentence forms like the one in the above example, banks of exercises can be quickly generated. In fact, it works best at short sentence or phrase level because that eliminates ambiguity. Here are some grammar points that can be tested through this simple procedure:

- word order in German
- gender agreement in most languages
- verb forms in most languages
- pronoun order

2.1.1.1 Learning task

Can you draft some examples of the above points for the language(s) that you teach? Can you think of other points that could be tested in this way?
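Banks of jumbled-sentence items really can be generated quickly, as the text notes: all the program needs is word-level shuffling plus whole-answer checking with the immediate show-the-correct-answer feedback Rendall describes. A minimal Python sketch (the fixed seed is just to make the example reproducible):

```python
import random

def make_jumble(sentence, seed=0):
    """Shuffle the words of a sentence for a re-ordering exercise."""
    words = sentence.split()
    rng = random.Random(seed)
    jumbled = words[:]
    while jumbled == words:          # make sure the order really changed
        rng.shuffle(jumbled)
    return jumbled

def check_order(attempt, sentence):
    """Instant feedback: correct, or show the right answer immediately."""
    if " ".join(attempt) == sentence:
        return "Correct - it sounds right!"
    return "Not yet. The correct order is: " + sentence

target = "The teacher talks to the boys"
print(check_order(make_jumble(target), target))
```

Because any list of sentences can be fed through `make_jumble`, a whole exercise bank is only as much work as typing the model sentences themselves.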
2.1.2 Exercise types

The most common exercise types are listed below:

- true or false
- multiple-choice
- gap-filling and Cloze: see Section 4.6, Module 1.3, headed Cloze procedure
- text reconstruction (including total Cloze): see Section 8.3, Module 1.4, headed Total text reconstruction: total Cloze
- matching / Pelmanism
- re-ordering jumbled words
- re-ordering jumbled sentences
- free text entry
- crosswords
- wordsearch exercises

It should be borne in mind that, since the advent of the multimedia computer, tests and exercises can contain a variety of different kinds of presentation of the stimulus, student input and feedback. For example, a multiple-choice test could consist of:

- text and/or audio/video presentation of the stimulus
- student input by selecting a number or letter, by clicking on an icon or by drag-and-drop
- text and/or audio/video feedback - or no feedback at all in some kinds of tests

The possibilities are enormous. See Section 5, Module 3.2, headed Template examples.

2.1.3 Sources of exercises

Collections of ready-made exercises exist for the more commonly taught languages. These can be found in software catalogues and are increasingly available online via websites: see the Websites list (below). Many exercises have been developed by teachers and are free for non-commercial use. As an alternative to buying ready-made exercises, it is possible to use authoring programs to develop your own exercises: see Module 2.5, Introduction to CALL authoring programs. The following authoring programs are all in common use.

Hot Potatoes: An authoring tool that was created by Martin Holmes and Stewart Arneil at the University of Victoria, Canada, and launched in 1998. It enables the speedy creation of Web-based exercises for language learners, including multiple-choice, gap-filling, matching, jumbled sentences, crosswords and short text entry.
This authoring tool has proved extremely popular with language teachers and it continues to be used extensively for the creation of interactive exercises and tests on the Web. Visit the Hot Potatoes website to find out more, download the software and see lots of examples: http://hotpot.uvic.ca. See Winke & MacGregor (2001) for a review of Hot Potatoes.

Quia: An online provider of exercises. There are many language learning exercises which are free to use. It is also possible to develop your own exercises. It is very easy to do this and to store the exercises on the Quia website, although subscription charges may apply: http://www.quia.com

Fun With Texts: One of the most widely used authoring packages for Modern Foreign Languages in UK secondary schools; there are many ready-made exercises available, as well as full authoring functions within the program. Its use is primarily in total text reconstruction - so-called total Cloze - but it includes other exercise types too. Version 4.0 includes multimedia enhancements that allow you to integrate images and audio and video recordings into the exercises. See Section 8, Module 1.4, headed Text manipulation, and see http://www.camsoftpartners.co.uk/fwt.htm

Question Mark: The Question Mark company develops software for presenting interactive exercises and gathering/collating test information: http://www.questionmark.com

TaskMagic: An authoring tool, produced by mdlsoft, for the creation of a variety of exercise types, including: text match, picture match, sound match, picture-sound match, grid match, mix and gap, and exercises based on dialogues.

Websites: See the Websites list (below).
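Gap-filling exercises of the Cloze kind listed above are among the easiest to generate automatically. A minimal Python sketch of the classic fixed-interval Cloze, which blanks out every nth word (authoring tools such as those described here normally let the author choose the gaps by hand instead; the sample sentence is just a placeholder):

```python
import re

def make_cloze(text, every=5):
    """Blank out every nth word, returning (gapped text, answer key)."""
    words = text.split()
    answers = {}
    for i in range(every - 1, len(words), every):
        answers[i] = words[i]
        # Replace the word with underscores of the same letter count.
        blank = "_" * len(re.sub(r"\W", "", words[i]))
        words[i] = blank or words[i]
    return " ".join(words), answers

gapped, key = make_cloze("The quick brown fox jumps over the lazy dog today", every=5)
print(gapped)
```

The answer key returned alongside the gapped text is what allows the same routine to mark the learner's attempts and give instant feedback.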
2.2 The Common European Framework of Reference for Languages

The Council of Europe's Common European Framework of Reference for Languages (usually known as the CEFR or CEF) provides a common basis for the elaboration of language syllabuses, curriculum guidelines, examinations and textbooks across Europe, based on research conducted by the Council of Europe dating back to the 1970s. It describes in detail what language learners have to learn to do in order to use a language for communication and what knowledge and skills they have to develop in order to be able to act effectively. The description also covers the cultural context in which language is set. The CEFR defines six reference levels of proficiency, which allow learners' progress to be measured at each stage of learning and on a life-long basis. The Council of Europe's publications specify the number of guided learning hours (GLH) for the six levels, e.g.

The Waystage publication (1990 edition): "The learning-load involved is estimated to be about half of that required by Threshold Level 1990, which means that we think that, with proper guidance, the average learner should be able to master it in some 180-200 guided learning-hours, including independent work." (p. 2 of this 154-page publication: ISBN 92-871-2002-1)

The Threshold publication (1990 edition): "If pressed to give a general indication, nevertheless, we can only say, at this stage, that we assume the learning-load for Threshold Level 1990 to be similar to that for its predecessor [i.e. the earlier publication on Threshold] and that there is some evidence that, with adequate guidance, absolute beginners need an average of c. 375 guided learning hours - including independent work - to reach the older objective." (p.
6 of this 252-page publication: ISBN 92-871-2003-X)

The following table shows the correspondences between the six CEFR levels, Cambridge General English Exams, the Languages Ladder, the National Curriculum (England), National Examinations (England) and the number of guided learning hours required for each level. Note: "England", not Wales, Scotland and Northern Ireland, which have different curricula and examination schemes - although they are broadly similar in most respects.

CEFR Level | Cambridge General English Exams | Languages Ladder | National Curriculum (England) | National Exams (England) | Guided Learning Hours
A1 Breakthrough | - | Breakthrough 1-3 | 1-3 | NQF Entry Level 1-3 | approx 90-100 hours
A2 Waystage | Key English Test (KET) | Preliminary 4-6 | 4-6 | Foundation GCSE | approx 180-200 hours
B1 Threshold | Preliminary English Test (PET) | Intermediate 7-9 | 7-8 & EP* | Higher GCSE | approx 350-400 hours
B2 Vantage | First Certificate in English (FCE) | Advanced 10-12 | - | AS / A Level / AEA | approx 500-600 hours
C1 Proficiency | Certificate in Advanced English (CAE) | Proficiency 13 | - | University Degree Level and above | approx 700-800 hours
C2 Mastery | Certificate of Proficiency in English (CPE) | Mastery 14 | - | University Degree Level and above | approx 1000-1200 hours

The guided learning hours required may vary, of course, depending upon other factors such as the learner's mother tongue, other languages already learned, the intensity of the course being followed, the inclination and age of the learner, and the amount of exposure to the language outside lesson times, particularly if the course takes place in a country where the language is spoken. The CEFR document can be downloaded from the Council of Europe's CEFR website. A Self-assessment Grid relating to the CEFR is available as a Word DOC file. See also the Council of Europe's Language Policy Division website. Many European countries have already adopted the CEFR as a standard for assessing language proficiency.
In the UK the Languages Ladder leans heavily upon the CEFR. Although the Languages Ladder appears to duplicate to some extent what the CEFR has already produced, it offers a new approach to measuring proficiency in the four skills. The Asset Languages assessment scheme is related closely to the Languages Ladder and is designed to provide accreditation options for learners of all ages and abilities from primary to further, higher and adult education. Like the CEFR, the Asset Languages scheme offers sets of "Can Do" Statements that describe what the learner is capable of doing in the different skills. The Cambridge University ESOL website is very informative regarding the CEFR, especially these pages:
