STUDENT EVALUATION IN HIGHER EDUCATION: A COMPARISON BETWEEN COMPUTER ASSISTED ASSESSMENT AND TRADITIONAL EVALUATION

By YARON GHILAY* and RUTH GHILAY**
* Lecturer, Neri Bloomfield School of Design and Education, Haifa.
** Educational Counsellor in Primary Education.

ABSTRACT
The study examined the advantages and disadvantages of computerised assessment compared to traditional evaluation. It was based on two samples of college students (n=54) who took computerised tests instead of paper-based exams. Students were asked to answer a questionnaire focused on test effectiveness, experience, flexibility and integrity. For each characteristic, respondents were asked to rate both kinds of evaluation (computerised and traditional). Furthermore, students were asked to evaluate home and classroom computerised exams. The research reveals a significant advantage of computerised assessment over paper-based evaluation; the strongest advantage of computer-assisted assessment found throughout the research is test flexibility. The findings indicate that it is worthwhile to adopt computerised assessment technologies in higher education, including home exams. Such a method of evaluation can significantly improve institutional educational administration.

Keywords: Computerised Assessment, Traditional Evaluation, Classroom Computerised Exams, Home Computerised Exams, Test Flexibility, Computer Assisted Assessment.

INTRODUCTION
The Department of Management at the Neri Bloomfield School of Design and Education prepares students to teach management and accounting at high schools. The department's pedagogical aim is to give students relevant tools so they can deal effectively with the needs of high schools. The department works at different levels, covering both theoretical and technological knowledge.

In the year 2009-10, a new Computer Assisted Assessment (CAA) system was used for the first time. The system, part of the existing Learning Management System (LMS), was intended to replace traditional assessment. The first experiment with the new system was undertaken in the Department of Management, in the following courses:
· Strategic management (fourth year).
· Entrepreneurship (fourth year).
· Scientific and technological literacy (third year).
· Statistical analysis via SPSS (third year).

In the year 2010-11, the new system was examined again in the same courses, except "scientific and technological literacy", which was replaced by "management of technology" (third year).

In order to examine the effectiveness of the computerised tests, a research question was worded, focused on the advantages and disadvantages of computer-assisted assessment in comparison to traditional evaluation. The intention was to reach general conclusions about the differences between computerised and paper-based exams, according to the attitudes of students in a teacher-training college.

General Background
Assessment is a critical catalyst for student learning (Brown, Bull & Pendlebury, 1997) and there is considerable pressure on higher-education institutions to measure learning outcomes more formally (Farrer, 2002; Laurillard, 2002). This has been interpreted as a demand for more frequent assessment.
The potential for Information and Communications Technology (ICT) to automate aspects of learning and teaching is widely acknowledged, although promised productivity benefits have been slow to appear (Conole, 2004; Conole & Dyke, 2004). Computer Assisted Assessment (CAA) has considerable potential both to ease the assessment load and to provide innovative and powerful modes of assessment (Brown et al., 1997; Bull & McKenna, 2004), and as the use of ICT increases there may be 'inherent difficulties in teaching and learning online and assessing on paper' (Bull, 2001; Bennett, 2002a).

CAA is a common term for the use of computers in the assessment of student learning. It encompasses the use of computers to deliver, mark and analyse assignments or examinations, and also includes the collation and analysis of optically captured data gathered from machines such as Optical Mark Readers (OMR). An additional term is 'Computer Based Assessment' (CBA), which refers to an assessment in which the questions or tasks are delivered to a student via a computer terminal. Other terms used to describe CAA activities include computer based testing, computerised assessment, computer aided assessment and web based assessment. The term screen based assessment encompasses both web based and computer based assessment (Bull & McKenna, 2004).

The most common format for items delivered by CAA is objective test questions (such as multiple-choice or true/false), which require a student to choose or provide a response to a question whose correct answer is predetermined. However, other types of questions can also be used with CAA.

CAA can also provide academic staff with rapid feedback about their students' performance. Assessments which are marked automatically can offer immediate and evaluative statistical analysis, allowing academics to assess quickly whether their students have understood the material being taught, both at an individual and a group level. If students have misconceptions about a particular theory or concept, or gaps in their knowledge, these can be identified and addressed before the end of the course or module.

Comparisons of Traditional Evaluation and CAA
The format of an assessment affects validity, reliability and student performance, and paper and online assessments may differ in several respects. Studies have compared paper-based assessments with computer-based assessments to explore this (Ward, Frederiksen & Carlson, 1980; Outtz, 1998; Fiddes, Korabinski, McGuire, Youngson & McMillan, 2002). In particular, the Pass-IT project conducted a large-scale study of schools and colleges in Scotland, across a range of subject areas and levels (Ashton, Schofield & Woodgar, 2003; Ashton, Beavers, Schofield & Youngson, 2004). Findings vary according to item type, subject area and level. Potential causes of mode effect include the attributes of the examinees, the nature of the items, item ordering, local item dependency and the test-taking experience of the student. Additionally, there may be cognitive differences and different test-taking strategies adopted for each mode. Understanding these issues is important for developing item-development strategies, as well as for producing guidelines for appropriate administrative procedures or for statistically adjusting item parameters.

Limitations and Advantages of CAA
In contrast to marking essays, marking objective test scripts is a simple repetitive task, and researchers are exploring methods of automating assessment. Objective testing is now well established in the United States and elsewhere for standardized testing in schools, colleges, professional entrance examinations and psychological testing (Bennett, 2002b; Hambrick, 2002).

The limitations of item types are an ongoing issue. A major concern about the nature of objective tests is whether Multiple-Choice Questions (MCQs) are really suitable for assessing higher-order learning outcomes in higher-education students (Pritchett, 1999; Davies, 2002), and this is reflected in the opinions of both academics and quality assurance staff (Bull, 1999; Warburton & Conole, 2003). The most optimistic view is that item-based testing may be appropriate for examining the full range of learning outcomes in undergraduates and postgraduates, provided sufficient care is taken in item construction (Farthing & McPhee, 1999; Duke-Williams & King, 2001).
MCQs and multiple response questions are still the most frequently used question types (Boyle, Hutchison, O'Hare & Patterson, 2002; Warburton & Conole, 2003), but there is steady pressure towards 'more sophisticated' question types (Davies, 2001). Work is also being conducted on computer-generated items (Mills, Potenza, Fremer & Ward, 2002). This includes the development of item templates precise enough to enable the computer to generate parallel items that do not need to be individually calibrated. Research suggests that some subject areas are easier to replicate than others - lower-level mathematics, for example, in comparison with higher-level content domain areas.

CAA is not exactly a new approach. Over the last decade, it has been developing rapidly in terms of its integration into schools, universities and other institutions. Its educational and technical sophistication, and its capacity to offer elements such as simulations and multimedia-based questions, are not feasible with paper-based assessments (Bull & McKenna, 2004). When student numbers are increasing and resources are decreasing, objective tests may offer a valuable addition to the assessment methods available to lecturers.

Possible advantages of using CAA include the following (Bull & McKenna, 2004):
· To increase the frequency of assessment, thereby motivating students to learn and encouraging them to practise skills.
· To broaden the range of knowledge assessed.
· To increase feedback to students and lecturers.
· To extend the range of assessment methods.
· To increase objectivity and consistency.
· To decrease marking loads.
· To aid administrative efficiency.

Method
The Research Questions
The research questions were derived from the need to examine the advantages and disadvantages of computerised assessment in comparison to traditional evaluation (paper-based exams) in higher education. Another aim was to examine whether there are differences between home tests and classroom computerised exams. The following research questions were worded, relating to a teacher-training college:
· What are the advantages and disadvantages of CAA in comparison to traditional assessment methods, according to students' views?
· Are there advantages or disadvantages to computerised exams taken at home in comparison to classroom tests, according to students' views?

Population and Samples
Population: The population addressed by the study included all students of the Neri Bloomfield School of Design and Education.
Samples: Two samples included 54 students overall: 33 in the year 2010 and 21 in 2011. Students in the third and fourth year were examined via Moodle computerised tests during the whole year. They were asked to answer a questionnaire at the end of the first semester of each academic year, concerning their perceptions of computerised versus traditional exams.
The computerised exams, all of which allowed open material, related to the following courses:
· Strategic management (2010/2011).
· Entrepreneurship (2010/2011).
· Scientific and technological literacy (2010).
· Management of technology (2011).
· Statistical analysis via SPSS (2010/2011).

Each computerised exam included 25 multiple-choice questions with four or five answers each, except SPSS, which included different types of questions (multiple choice, calculated number and matching lists). Students were allowed to use any support material and had to finish the computerised exam within a fixed time (110 minutes). When the time was over, the exam was submitted automatically, with no chance to start over.

The questionnaires were anonymous, and the response rate was 90% (54 out of 60).

The traditional exams related to the other courses given in 2010/2011 (research methods, marketing, accounting, sociology, economics, psychology, management and organizational behaviour).

Tools
In order to examine the effectiveness of computerised evaluation in comparison to traditional assessment, a questionnaire including 48 closed questions was prepared: 24 items related to computerised assessment and 24 equivalent items related to traditional assessment. The questionnaires were given to all the students who had been examined in at least one computerised test. Most students were examined in two tests, and some took part in three or even four exams.

For each question, respondents were requested to state their view on the following five-point Likert scale:
· Strongly disagree
· Mostly disagree
· Moderately agree
· Mostly agree
· Strongly agree

The questionnaire was built upon the literature review, in order to identify the main variables relating to CAA. During the review, the following areas were recognized as principal characteristics of CAA:
· The frequency of evaluation.
· The level of coverage of the knowledge areas being assessed.
· Providing feedback.
· Diversity of methods and tools of evaluation.
· Objectivity and consistency.
· Workload linked to preparing, running and marking exams.

In addition to the closed questions, the questionnaire included two open-ended questions, designed to complement the data gathered by the quantitative part of the questionnaire:
· Are there any additional strengths or weaknesses of computerised assessment beyond what has been mentioned earlier?
· Are there any additional strengths or weaknesses of paper-based assessment beyond what has been mentioned earlier?
Data Analysis
In order to examine the validity of the questionnaire, the reliability of the factors was calculated (Cronbach's alpha). Item analysis was undertaken as well, in order to improve reliability. Based on the reliability found, the following 12 factors were built (2010 and 2011 together):
· Test Effectiveness - CAA and Traditional Assessment: coverage of the taught material, accuracy, objectivity and consistency (two factors).
· Test Experience - CAA and Traditional Assessment: convenience, pleasure/anxiety, ability to concentrate, real-time feedback (two factors).
· Test Flexibility - CAA and Traditional Assessment: based on decreasing the load linked to preparation, transfer and marking; many opportunities to be examined; flexibility of examination dates (two factors).
· Test Integrity - CAA and Traditional Assessment: accepting forbidden assistance, leakage of test questions, strictness of test discipline (two factors).
· Satisfaction with Home and Classroom Computerised Tests: place preference, getting support from the lecturer, concentration, time convenience, flexibility, confidence in the technical operation (two factors).
· Test Integrity - Home and Classroom Computerised Tests: maintenance of test integrity, forbidden assistance, reflection of true knowledge (two factors).

Every factor showed a high reliability value (ranging from 0.649 to 0.891). Each factor was determined by calculating the mean value of the items composing it.
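For illustration, here is a minimal Python sketch of the reliability and factor-score computations described above (the study itself used SPSS). The item names and response values are hypothetical, not the study's data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]                              # number of items in the factor
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses on the 1-5 Likert scale for the three
# "test flexibility" items (CAA version), one row per student.
flexibility_caa = pd.DataFrame({
    "multiple_opportunities": [5, 4, 5, 3, 4],
    "improve_grade":          [4, 4, 5, 3, 5],
    "flexible_dates":         [5, 5, 4, 4, 4],
})

print(f"Cronbach's alpha = {cronbach_alpha(flexibility_caa):.3f}")

# As in the study, each factor score is the mean of its items per respondent.
factor_scores = flexibility_caa.mean(axis=1)
print(factor_scores)
```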
Table 1 summarizes the eight factors (four for CAA and four for traditional assessment), the items composing them and their reliability values. Table 2 summarizes the other four factors (student satisfaction with computerised tests undertaken at home and in the classroom, and test integrity in both settings), the items composing them and their reliability values.

Table 1. Factors Relating to Computerised and Traditional Assessment, Including the Questionnaire's Questions*
1. Test effectiveness (Computerised: alpha=0.702; Traditional: alpha=0.891):
· The test measures the level of my knowledge accurately.
· The test covers well the course material required.
· The test assesses basic learning objectives (knowledge and understanding).
· The test assesses high learning objectives (implementation, analysis, etc.).
· The test covers broad areas of the course.
· The test is objective and consistent.
2. Test experience (Computerised: alpha=0.769; Traditional: alpha=0.882):
· I enjoy the exam.
· I feel comfortable during the exam.
· The test score given at the end of the test is an advantage.
· I'm sure my answers would properly reach the lecturer.
· It is convenient for me to give answers on a computer screen.
· It is convenient to update answers I want to change prior to submission.
· I'm not worried about the exam.
· It is easy to concentrate while questions are displayed on a computer screen/paper.
· The test includes a variety of assessment methods.
· The time limit does not disturb my concentration.
· I can appeal against examination results.
3. Test flexibility (Computerised: alpha=0.649; Traditional: alpha=0.889):
· I can get multiple opportunities to be tested.
· There are many opportunities to improve my grade.
· The lecturer can be flexible concerning the dates of exams.
4. Test integrity (Computerised: alpha=0.670; Traditional: alpha=0.853):
· It is difficult to get help from other examinees.
· There is no chance of a leak of exam questions.
· Examinees receive different test questionnaires.
· Test integrity is carefully maintained.
*Each question was written twice in the questionnaire - once for CAA and once for traditional assessment.

Table 2. Factors Relating to Student Satisfaction with Computerised Tests Undertaken at Home and in the Classroom, and Test Integrity, Including the Questionnaire's Questions*
1. Satisfaction with home/classroom computerised tests (Home: alpha=0.873; Classroom: alpha=0.878):
· I prefer to have a computerised test at home / in the classroom.
· A computerised test at home / in the classroom has an advantage over other examining alternatives.
· It is easy to get support from the lecturer during a computerised test at home / in the classroom.
· It is easy for me to concentrate on a home/classroom computerised test.
· A home/classroom computerised test allows me to be examined at a convenient time.
· The flexibility of a home/classroom computerised test has a significant advantage.
· I feel confident concerning the technical operation of a home/classroom computerised test.
2. Computerised test integrity - home/classroom (Home: alpha=0.865; Classroom: alpha=0.840):
· Test integrity is carefully maintained during a computerised test at home / in the classroom.
· In a home/classroom computerised test, I do not receive assistance from others.
· In home/classroom computerised tests I get scores that reflect the true level of my knowledge.
*Each question was written twice in the questionnaire - once for the home test and once for the classroom test.

The following statistical tests were undertaken (α ≤ 0.05); a sketch of both tests appears after this list:
· Independent Samples t-test: undertaken to check for significant differences between each factor for the year 2010 in comparison to 2011.
· Paired Samples t-test: conducted to check for significant differences between computerised and traditional tests, as well as between additional pairs of factors.
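The following sketch shows how the two tests could be reproduced with SciPy (the study used SPSS); the score arrays are hypothetical factor scores, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical factor scores on the 1-5 scale.
scores_2010 = np.array([4.3, 4.6, 4.1, 4.8, 4.2])  # one factor, 2010 sample
scores_2011 = np.array([4.4, 4.5, 4.0, 4.7, 4.6])  # same factor, 2011 sample

# Independent samples t-test: does the factor differ between the two years?
t_ind, p_ind = stats.ttest_ind(scores_2010, scores_2011)
print(f"2010 vs 2011: t = {t_ind:.3f}, p = {p_ind:.3f}")

# Paired samples t-test: computerised vs traditional scores of the same students.
caa         = np.array([4.5, 4.2, 4.7, 4.0, 4.4])
traditional = np.array([3.9, 3.6, 4.1, 3.2, 3.8])
t_pair, p_pair = stats.ttest_rel(caa, traditional)
print(f"CAA vs traditional (paired): t = {t_pair:.3f}, p = {p_pair:.3f}")
```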
Results
There was no significant difference between the years 2010 and 2011 in the mean scores of all questions and factors, relating to both CAA and traditional assessment (ANOVA, α ≤ 0.05). This means that the results found in the first year (2010) were replicated in the second year (2011), which strengthens the findings and gives them more validity. Mean factor scores for both years together are presented in Table 3.

Table 3. A Comparison Between Computerised and Traditional Assessment
Factor | Mean | N | Std. Deviation
Test effectiveness - computerised | 4.5462 | 52 | 0.47104
Test effectiveness - traditional | 4.1942 | 52 | 0.73786
Test experience - computerised | 4.1173 | 53 | 0.52112
Test experience - traditional | 3.7046 | 53 | 0.81239
Test flexibility - computerised | 4.3333 | 52 | 0.57923
Test flexibility - traditional | 3.3782 | 52 | 1.16228
Test integrity - computerised | 4.2010 | 51 | 0.65004
Test integrity - traditional | 3.6650 | 51 | 0.96256

Table 3 shows that, for all four factors, there is a significant advantage to CAA in comparison to traditional assessment.

Table 4 presents the gaps between all the pairs introduced in Table 3 (computerised mean score minus traditional mean score). Comparison of these gaps shows that the test-flexibility gap (mean gap = 0.9551) differs significantly from each of the three other gaps (t(50) = -3.777, p < 0.01; t(51) = -3.444, p < 0.01; t(49) = 2.666, p < 0.01). On the other hand, there is no significant difference among the gaps of the other three factors.

Table 4. Factors' Gaps: Computerised Test Mean Scores Minus Traditional Test Mean Scores*
Factor | N | Mean gap | Std. Deviation
Test flexibility | 52 | 0.9551 | 1.17718
Test integrity | 51 | 0.5359 | 0.95703
Test experience | 53 | 0.4127 | 0.90288
Test effectiveness | 52 | 0.3519 | 0.77464
*Factors' mean gaps are sorted in descending order.

The meaning of these findings is that, with regard to every single one of these four gaps, there is a significant advantage to computerised tests in comparison to traditional ones. Furthermore, relating to test flexibility, the benefit of CAA is significantly greater than its advantage concerning the other three factors.
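To illustrate the gap analysis behind Table 4, one could compute each student's computerised-minus-traditional difference per factor and then compare the flexibility gap against another factor's gap with a paired t-test. The arrays below are hypothetical scores, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student factor scores (1-5 scale) for two factors.
flex_caa  = np.array([4.6, 4.1, 4.8, 4.3, 4.5])
flex_trad = np.array([3.2, 3.5, 3.6, 3.1, 3.4])
eff_caa   = np.array([4.6, 4.4, 4.7, 4.5, 4.6])
eff_trad  = np.array([4.2, 4.1, 4.3, 4.0, 4.2])

# Gap = computerised score minus traditional score, per student (as in Table 4).
flex_gap = flex_caa - flex_trad
eff_gap  = eff_caa - eff_trad
print(f"flexibility gap:   mean {flex_gap.mean():.4f}, SD {flex_gap.std(ddof=1):.4f}")
print(f"effectiveness gap: mean {eff_gap.mean():.4f}, SD {eff_gap.std(ddof=1):.4f}")

# Is the flexibility gap significantly larger than the effectiveness gap?
t, p = stats.ttest_rel(flex_gap, eff_gap)
print(f"flexibility vs effectiveness gap: t = {t:.3f}, p = {p:.3f}")
```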
Table 5 presents a comparison between home and classroom computerised tests, regarding satisfaction with the tests and the level of integrity. The findings show a significant advantage in satisfaction with home tests compared to classroom ones (both computerised). However, relating to test integrity, there was no significant difference. Therefore, it can be confidently concluded that, with regard to test integrity, home tests are at least not inferior to classroom exams.

Table 5. A Comparison Between Home and Classroom Computerised Tests
Factor | Mean | N | Std. Deviation | Significance of difference
Satisfaction with home computerised tests | 4.0388 | 54 | 0.83652 | t(53) = 2.242, p = 0.029
Satisfaction with classroom computerised tests | 3.6238 | 54 | 0.91126 |
Computerised test integrity - home | 4.0303 | 54 | 1.06205 | t(53) = 0.123, p = 0.903
Computerised test integrity - classroom | 4.0123 | 54 | 0.64659 |

The open-ended questions strengthened the closed ones, as shown in the following quotes:
"The computerised test has no weaknesses - all the questions are clear, accurate and understood. I have no complaints whatsoever."
"I enjoyed the computerised tests and in my opinion, they are definitely preferable to paper-based exams. A computerised test is much more convenient and interesting. In my view, computerised exams have only advantages."

The results summarized in Tables 3-5, the answers to the open-ended questions and the statistical significance tests were the basis for answering the research questions, as detailed below.

i. What are the advantages and disadvantages of CAA in comparison to traditional assessment methods, according to students' views?
The results show that, in students' view, computerised assessment has a significant advantage over traditional assessment concerning the following factors:
· Test Flexibility: Test flexibility is expressed in the number of opportunities available for being examined, including chances to improve grades, as well as the lecturer's ability to adjust personal test times. Concerning this characteristic, the computerised exam has a decisive, statistically significant advantage, larger than for any of the other factors (the mean gap between computerised and traditional assessment is 0.9551). Relating to the other factors, the computerised exam also has a significant advantage, although a weaker one.
· Test Effectiveness: This factor describes how accurately the exam measures relevant knowledge, the coverage of the material, the evaluation of learning objectives, and the objectivity and consistency of the test. Relating to this factor, the computerised exam has a significant advantage over the paper-based one; the mean gap between computerised and traditional assessment is 0.3519.
· Test Experience: This factor relates to students' convenience and enjoyment of the test, the amount of anxiety, the ability to concentrate, the influence of the time limit and the possibility to appeal. Concerning this factor, the computerised exam has a significant advantage over the traditional one; the mean gap is 0.4127.
· Test Integrity: This factor describes how personal honesty is maintained, including receiving forbidden help, leakage of questions and exam discipline. Relating to this factor, the computerised exam has a significant advantage over the traditional one; the mean gap is 0.5359.

ii. Are there advantages or disadvantages to computerised exams taken at home in comparison to classroom tests, according to students' views?
The results show that, in students' view, home computerised exams are at least as good as classroom ones. Two factors were examined:
· Students' Satisfaction: Expressed by the preferred place, the level of support given by the lecturer, the ability to concentrate, time convenience, flexibility and confidence in the technical operation of the computerised test. Concerning this characteristic, students' satisfaction with home computerised exams is higher than with classroom tests. This means that students have no difficulty operating home tests on their own and feel confident receiving remote help when needed.
· Test Integrity: Expressed by the ability to maintain test integrity, the extent to which unauthorized assistance is given and the extent to which tests reflect real knowledge. One of the greatest concerns about home tests is a hypothetical fear of compromised test integrity. The findings show that home tests' integrity is well maintained, at least equal to that of classroom exams.
Discussion
The literature review points out many advantages of CAA in comparison to traditional assessment. These benefits are mainly focused on organisational and managerial factors. When evaluating in a computerised form, it is much easier to prepare, transfer and mark tests. Therefore, it is possible to cover a lot of material while reducing the burden on faculty and administrative staff. The worthiness of adopting a new computerised system of evaluation depends on its reliability and on the ability to impart the necessary technological knowledge to lecturers.

Assuming that there is a significant advantage to computerised assessment for institutions of higher education, another critical question arises: whether in the "customers' view", namely that of students, computerised assessment is appropriate, or at least does not cause difficulties in comparison to the usual assessment. As such, it was necessary to examine the properties of the two methods of assessment from the students' perspective, in order to learn whether computerised assessment is inferior or, on the contrary, superior.

Since the organisational and administrative advantages are clear, it was enough to conclude that computerised assessment has at least no disadvantage for the examinees in order to make it worthwhile to adopt the new technology. The study shows not only that there is no disadvantage with respect to the variety of assessment criteria examined, but that, according to students' perspectives, information technology has significant advantages for them. The highlight is the better service given to students, owing to the great flexibility of the computer system. If so, the worthiness of adopting computerised assessment technology increases significantly, and that might be a great contribution to the educational administration process.

Another important conclusion resulting from the research is that it is feasible to deliver computerised tests that allow the use of open material at the student's home instead of at the academic institution. This method has distinct organisational and managerial advantages, but it is also advantageous from the students' perspective: it allows great flexibility in terms of test dates, as well as not having to travel to the institution of higher education. According to the research results, the transfer to home computerised exams involves neither disadvantages nor problems relating to test integrity.
References
[1]. Ashton, H. S., Schofield, D. K., & Woodgar, S. C. (2003). Piloting summative web assessment in secondary education. In Proceedings of the 7th International Computer-Assisted Assessment Conference (ed. Christie, J.). Loughborough University, Loughborough.
[2]. Ashton, H. S., Beavers, C. E., Schofield, D. K., & Youngson, M. A. (2004). Informative reports - experiences from the Pass-IT project. In Proceedings of the 8th International Computer-Assisted Assessment Conference (eds Ashby, M. & Wilson, R.). Loughborough University, Loughborough.
[3]. Bennett, R. E. (2002a). Inexorable and inevitable: the continuing story of technology and assessment. Journal of Technology, Learning and Assessment, 1, 100-108.
[4]. Bennett, R. E. (2002b). Using Electronic Assessment to Measure Student Performance. The State Education Standard, National Association of State Boards of Education.
[5]. Boyle, A., Hutchison, D., O'Hare, D., & Patterson, A. (2002). Item selection and application in higher education. In 6th International CAA Conference (ed. Danson, M.). Loughborough University, Loughborough.
[6]. Brown, G., Bull, J., & Pendlebury, M. (1997). Assessing Student Learning in Higher Education. Routledge, London.
[7]. Bull, J. (1999). Update on the National TLTP3 Project: The Implementation and Evaluation of Computer-assisted Assessment. In 3rd International CAA Conference (ed. Danson, M.). Loughborough University, Loughborough.
[8]. Bull, J. (2001). TLTP85 Implementation and Evaluation of Computer-Assisted Assessment: Final Report.
[9]. Bull, J., & McKenna, C. (2004). Blueprint for Computer-Assisted Assessment. RoutledgeFalmer, New York.
[10]. Conole, G. (2004). Report on the effectiveness of tools for e-learning. Report for the JISC-commissioned Research Study on the Effectiveness of Resources, Tools and Support Services used by Practitioners in Designing and Delivering E-Learning Activities.
[11]. Conole, G., & Dyke, M. (2004). What are the affordances of information and communication technologies? ALT-J, 12(2), 111-123.
[12]. Davies, P. (2001). CAA must be more than multiple-choice tests for it to be academically credible? In 5th International CAA Conference (eds Danson, M. & Eabry, C.). Loughborough University, Loughborough.
[13]. Davies, P. (2002). There's no confidence in multiple-choice testing. In 6th International CAA Conference (ed. Danson, M.). Loughborough University, Loughborough.
[14]. Duke-Williams, E., & King, T. (2001). Using computer-aided assessment to test higher level learning outcomes. In 5th International CAA Conference (eds Danson, M. & Eabry, C.). Loughborough University, Loughborough.
[15]. Farrer, S. (2002). End short contract outrage, MPs insist. Times Higher Education Supplement.
[16]. Farthing, D., & McPhee, D. (1999). Multiple choice for honours-level students? In 3rd International CAA Conference (ed. Danson, M.). Loughborough University, Loughborough.
[17]. Fiddes, D. J., Korabinski, A. A., McGuire, G. R., Youngson, M. A., & McMillan, D. (2002). Are mathematics exam results affected by the mode of delivery? ALT-J, 10(6), 1-9.
[18]. Hambrick, K. (2002). Critical issues in online, large-scale assessment: An exploratory study to identify and refine issues. Capella University, Minneapolis.
[19]. Laurillard, D. (2002). Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies. RoutledgeFalmer, London.
[20]. Mills, C., Potenza, M., Fremer, J., & Ward, C. (2002). Computer-Based Testing: Building the Foundation for Future Assessment. Lawrence Erlbaum Associates, New York.
[21]. Outtz, J. L. (1998). Testing medium, validity and test performance. In Beyond Multiple Choice: Evaluating Alternatives to Traditional Testing for Selection (ed. Hakel, M. D.). Lawrence Erlbaum Associates, New York.
[22]. Pritchett, N. (1999). Effective question design. In Computer-Assisted Assessment in Higher Education (eds Brown, S., Race, P. & Bull, J.). Kogan Page, London.
[23]. Warburton, W., & Conole, G. (2003). CAA in UK HEIs: the state of the art. In 7th International CAA Conference (ed. Christie, J.). Loughborough University, Loughborough.
[24]. Ward, W. C., Frederiksen, N., & Carlson, S. B. (1980). Construct validity of free-response and machine-scorable forms of a test. Journal of Educational Measurement, 7(1), 11-29.

ABOUT THE AUTHORS
Yaron Ghilay, Ph.D., is a Lecturer at the Neri Bloomfield School of Design and Education, Haifa, Israel, and at the Jerusalem College. He is also a tutor in professional specialization in educational technology at the Mofet Institute in Tel-Aviv. Previously, he worked in secondary and higher education. His current research interests are associated with educational technology, school effectiveness, assessment and evaluation, and teacher training.
Ruth Ghilay, Ph.D., is an Educational Counsellor in Primary Education. Previously, she worked in educational roles in the military and in secondary education. Her current research interests are associated with educational technology, school effectiveness, assessment and evaluation, and career transitions.
