OCCASIONAL PAPER

Improving the Effectiveness of Army Distributed Learning: A Research and Policy Agenda

Susan G. Straus, Jolene Galegher, Michael G. Shanley, Joy S. Moini

© RAND 2006

This product is part of the RAND Corporation occasional paper series. RAND occasional papers may include an informed perspective on a timely policy issue, a discussion of new research methodologies, essays, a paper presented at a conference, a conference summary, or a summary of work in progress. All RAND occasional papers undergo rigorous peer review to ensure that they meet high standards for research quality and objectivity.

Many thanks to John Bondanella and Derek Davison for their contributions to this research and to Walter Baer, James Crowley, Colonel Steven Hartman, Matthew Lewis, and Mr. Michael Faughnan for their helpful feedback on this report.

As part of an ambitious plan to transform its training practices, the U.S. Army has established a large distributed learning (DL) program. This program is intended to enable the Army to provide decentralized training using one or more information and communication technologies or a combination of classroom training and technology-mediated training. The Army Distributed Learning Program, or TADLP, is a comprehensive program that is implementing DL through digital training facilities, courseware, learning management systems, and other strategies and supporting mechanisms. Under this program, the Army is implementing a plan to convert 525 training courses to DL by the year 2010.[1]

[1] The projected rate of courseware redesign was based on an annual target of 31 courses per year through FY02 and 47 courses per year thereafter (U.S. Department of the Army, TRADOC, 2001).

DL in the Army involves (1) the delivery of individual, collective, and self-development training to soldiers and units at any time and any place, (2) through multiple means and technologies, (3) using special techniques for course design, instruction, methods of communication, and organizational and administrative arrangements (Moore and Kearsley, 1996; U.S. Department of the Army, Training and Doctrine Command [TRADOC], 2003). Through DL, the Army aims to achieve five goals: (1) train to one standard, (2) improve unit readiness, (3) reduce costs, (4) improve morale, and (5) improve training efficiency (U.S. Department of the Army, TADLP, 2003). DL offers additional benefits by enabling self-paced learning, opportunities for life-long learning, and automated tracking of student performance.

At the same time, DL poses numerous challenges (e.g., Young, 1998; Welsh et al., 2003). Creating and maintaining DL courses is costly. It may be difficult to train some kinds of skills and to establish rapport between instructors and trainees in distributed settings. Further, DL environments require that instructors develop new skills and modify their teaching strategies. Students may also need to learn to use new technologies and to take a more active approach to learning than is typical in a lecture-type classroom.

The purpose of this report is to examine the benefits and challenges of DL for learning. In particular, we review research that compares DL to classroom or residential learning (RL), which is the current standard for training. We examine the effect of DL on learning and recommend strategies for the Army to evaluate and enhance its own DL efforts.
Previous reviews of learning in DL have found that this research consists largely of anecdotal reports and studies that lack comparison groups, strong experimental or quasi-experimental designs, or grounding in theoretical principles (e.g., Phipps and Merisotis, 1999; Wisher et al., 1999).[2] At the request of the Combined Arms Support Command of the U.S. Army, Straus et al. (2004) reviewed empirical studies published in 2000–2002 that compared DL and RL for adult learners in military and university settings. Our review focused on what students learn, or objective learning outcomes, as opposed to other aspects of effectiveness, such as student satisfaction or the costs of development or delivery.[3] It was restricted to studies that used quantitative empirical methods, because this approach is most likely to yield reliable assessments of the relative effectiveness of DL and RL. We identified relevant studies by reviewing the contents of 27 journals and four sets of conference proceedings in education, technology, psychology, and management, as well as reports published by the Army Research Institute. To supplement our analysis, we interviewed training developers, instructors, and other training staff at five Army installations.

[2] The majority of studies reviewed in these reports were published between 1986 and 1998.
[3] Many studies of DL measure satisfaction with instruction. Although satisfaction is a component of training effectiveness, it is not a measure of learning.

Key findings of this work are as follows:
• Although hundreds of studies were examined, we identified only 13 studies that measured objective learning outcomes and had unambiguous research findings.[4]
• The course topics and media used for DL varied considerably, making it difficult to draw systematic conclusions about instructional media. For example, media included one-way and two-way video-teletraining (VTT), Web-based instruction, computer-based training, chat, and self-study. There also were a number of different combinations of technologies used in DL (e.g., VTT alone or VTT plus chat).
• Results were consistent with other reviews that found no clear advantage for DL or RL on learning.
• Many of the studies we identified were subject to methodological limitations, including the use of convenience samples rather than random assignment, small sample sizes, low response rates or differential response rates in DL and RL conditions, and insufficient evidence about the reliability and validity of measures. These shortcomings are consistent with those reported by Phipps and Merisotis and by Wisher et al. in their 1999 reviews.

[4] We originally identified 33 studies that appeared to fit our criteria. After careful review, we eliminated 20 of them because they measured outcomes other than learning (e.g., satisfaction, graduation rates), consisted of the same data published in different journals or conference proceedings, or were subject to methodological weaknesses that rendered the results ambiguous. For instance, in several studies, differences in the characteristics of students in the DL and RL conditions (e.g., in pre-test scores, experience, or age) were not controlled for in the analyses of study outcomes.
These findings of no advantage for DL or RL were inconsistent with recent results reported by Sitzmann et al. (2005), who conducted a meta-analysis of studies that compared classroom instruction with Web-based instruction for adult learners. Using research published from 1996 to 2005, they found that Web-based instruction was superior to RL for training declarative knowledge (knowledge about facts) (77 studies) but that there was no difference between media for training procedural knowledge (knowledge about processes) (12 studies). However, like the studies reviewed by Straus et al. (2004), only a small number of studies (11) in the Sitzmann et al. review were true experiments in which students were randomly assigned to training conditions. Moreover, in the experimental studies, the effects of medium were reversed, such that students performed better on tests of declarative knowledge in RL than in DL. Thus, in the quasi-experimental studies in Sitzmann et al., as well as in other reviews, results showing no differences between RL and DL or better performance in DL may have been due to factors such as selection biases. That is, these differences may be due to systematic differences in the types of students who chose RL and DL, rather than to instructional medium. The implications of these findings are particularly important for the Army to consider, given that for some courses, trainees may not have a choice about the instructional medium.

Although studies such as Sitzmann et al. (2005) are promising, internal validity in most research on learning outcomes in DL remains problematic. In other words, the experimental designs used in the preponderance of studies do not allow conclusions about whether the treatment (DL or RL) caused the observed effects on learning (see Cook and Campbell, 1979, for a discussion of threats to internal validity). This state of affairs is of increasing concern because technological capabilities are evolving rapidly, and new technologies are being deployed without an empirical foundation to inform decisions about their use. Without the knowledge that might be derived from such research, it is difficult to determine which technologies are most likely to be beneficial for particular kinds of training and how those technologies might best be employed.

Thus, we present here three sets of recommendations to help the Army meet its goals for DL. The first set focuses on the kinds of questions that should be addressed in a well-defined program of research on the effectiveness of DL. The second set focuses on improving the quality of research by using rigorous methods that yield credible results. Addressing the research questions we propose and using the research techniques we describe in future analyses of DL programs will strengthen the Army's knowledge base and enable the Army to increase training effectiveness. Finally, we recommend a set of policies that are needed to support the design, implementation, and evaluation of DL in the Army.

Establish a Research Agenda Focused on the Effects of Variations in the Design of Distributed Learning

Many studies of DL have compared DL and RL, but we believe that focusing on DL alone, and the factors that influence its effectiveness, would be more valuable. This recommendation is based on two arguments. First, it is often impractical or too costly to conduct rigorous field experiments that compare results for the same course given in both RL and DL formats, which means that studies of this sort are likely to suffer from such methodological problems as lack of random assignment and small sample sizes, which undermine their value. Second, the Army is already moving to DL, in part because it may offer such advantages as the potential for increased personnel readiness and the ability to train at any time and at any place. These advantages may justify investments in DL even if performance is no better than in RL. Thus, research focused on enhancing the quality of DL, rather than contrasting RL with DL, is more likely to yield results that are of practical value to the Army.
Below, we identify specific questions for a research agenda on DL. We regard these questions as particularly important, but they should be seen as exemplars of the kinds of questions that might be asked. There are, no doubt, other specific issues that might be addressed in a comprehensive analysis of DL. Regardless of the particular issue addressed, the design of specific DL configurations, as well as corresponding research questions and measures, should be guided by theory in education and other relevant disciplines.

Use a Student-Centered Approach to Frame Research Questions

Some researchers have discussed the importance of using a student-centered approach rather than an instructor-centered approach to learning in DL. Student-centered approaches, such as the constructivist model, assume that students learn more effectively if they take responsibility for their learning and discover information on their own rather than being told what to do by an instructor or machine. Indeed, Sitzmann et al. (2005) found that students in Web-based courses with high levels of learner control acquired more declarative knowledge, relative to students in RL, than did students in Web-based courses with low levels of learner control. In comparisons of classroom instruction with and without computer-assisted instruction, Kulik (1994) found that classes with assisted instruction produced positive effects on learning when compared with traditional instruction, especially when the computer acted as a tutor—that is, when it divided material into a sequence of small steps, sought learner responses at each step, and provided immediate feedback after each response (a pattern sketched after the list of questions below). Because many technologies used in DL permit or require self-guided work, a model of training and the assessment of training that focuses on what students are doing as they learn seems appropriate.

Examples of research questions pertaining to self-guided learning include the following:
• What types of student-centered approaches are appropriate for different forms of DL, and what are their effects on learning?
• How do variations in course structure—for example, number of assignments, frequency of deadlines, and requirements for collaboration—affect self-paced learning and students' success in DL?
• What student characteristics influence success in self-guided learning, and how should information about student individual differences affect development and delivery of DL? For instance, should the degree of course structure or learner control afforded to students be based on such characteristics as locus of control or mastery (learning) versus performance (outcome) orientations (e.g., Wang and Newlin, 2000)?
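The tutoring pattern Kulik describes can be made concrete with a minimal sketch. The following Python fragment is purely illustrative (the items, answers, and hints are invented for this example, not drawn from any Army courseware), but it shows the loop in question: small steps, a learner response at each step, and immediate feedback after each response.

```python
# Minimal sketch of the "computer as tutor" pattern described by Kulik (1994):
# material divided into small steps, a learner response sought at each step,
# and immediate feedback after each response. All item content is a
# hypothetical placeholder.
items = [
    {"prompt": "Convert 25 (decimal) to binary:", "answer": "11001",
     "hint": "25 = 16 + 8 + 1."},
    {"prompt": "Convert 10 (decimal) to binary:", "answer": "1010",
     "hint": "10 = 8 + 2."},
]

def run_tutor(items):
    for item in items:                      # one small step at a time
        while True:
            response = input(item["prompt"] + " ").strip()
            if response == item["answer"]:
                print("Correct.")           # immediate confirmatory feedback
                break
            # immediate corrective feedback, then retry the same small step
            print("Not quite. Hint:", item["hint"])

if __name__ == "__main__":
    run_tutor(items)
```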
Examine Specific Technology Configurations

A wide variety of technologies, including videotapes, computer-based training, Web-based instruction, and VTT, may be used in DL. Technologies vary with respect to such characteristics as whether the instruction occurs in real time (e.g., VTT) or asynchronously (e.g., computer-based training), the depth of communication they permit between the instructor and the student (e.g., voice versus text-based media), and the degree of interactivity between the information source and the student (e.g., watching a videotape versus playing a role in a constructive or virtual simulation). Examining the effectiveness of different technology configurations for DL should provide knowledge that can be used to improve the design and outcomes of Army training programs.

Questions of this sort include the following:
• What technologies or combinations of technologies are most effective for DL, and how do these technologies interact with characteristics of the course topic, students, instructors, and aspects of the training environment?
• How can the Army make the most effective use of advanced DL technologies, such as intelligent tutoring and virtual and constructive simulations?

Examine Strategies for Facilitating Interaction in Distributed Learning

One challenge of DL is that it restricts communication between instructors and students and among students. Studies of collaborative learning show that interaction among students is an important source of knowledge acquisition for a variety of learning tasks (e.g., Slavin, Hurley, and Chamberlain, 2003). A number of aspects of communicating at a distance are likely to interfere with this process (Straus and Olivera, 2000). For instance, communication via such modes as telephone, e-mail, and online chat can require more planning or effort and is typically less elaborate or detailed than the interactions that occur when students and instructors are collocated. These features of technologies for DL can create obstacles to achieving a shared understanding of course material and may disrupt opportunities for students to learn from instructors and each other.

Examples of questions that address communication in DL include the following:
• What aspects of communication (e.g., frequency, media used) contribute to better outcomes?
• Under what conditions can and should collaboration between students be fostered in DL, and what mechanisms can be used to support collaboration?
• What strategies (e.g., peer tutors) can be used to overcome some of the problems of mentoring at a distance?
Examine Incentives, Training, and Support Systems for Distributed Learning Developers and Instructors

DL instruction requires different methods of designing training, delivering material, and interacting with students than does traditional classroom training. We have seen little research that delineates what skills are needed and how to train instructors to teach in DL environments. In addition, our interviews with training staff suggested that current incentive schemes are inconsistent with the requirements for the design and delivery of effective DL courses.

Examples of questions that address the development and application of these skills include the following:
• What is the effect of providing training to instructors for teaching in DL environments?
• What compensation and incentive structures are needed to motivate trainers to use appropriate instructional practices for DL?
• How do different incentive structures for organic training developers and external contractors affect efficiency in course development (e.g., through sharing and reuse of training content) and course quality?

Improve the Quality of Research on Distributed Learning

In addition to shifting the focus of its research on DL, the Army should strive to improve the quality of this research.

Use Random Assignment Whenever Possible and Measure Relevant Covariates of Learning

In studies that examine different interventions or ways of implementing DL (for instance, different technology configurations or different types of training for DL instructors), randomly assigning participants to conditions is the best way to ensure that any differences in performance between the groups can be reliably attributed to the intervention. When random assignment is impractical (e.g., in field studies of training in existing groups), it is especially important to measure, and control statistically for, the effects of variables that typically affect performance in training; such variables include student cognitive ability, personality traits, experience with course material, and demographic characteristics. In the context of DL, variables such as previous experience with DL and computer self-efficacy (one's confidence in using computers) may also affect student performance. One standard way to implement such statistical control is sketched below.
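As one concrete illustration of the statistical control described above, the following Python sketch fits an ANCOVA-style regression in which a post-test score is modeled as a function of training condition plus measured covariates. The data are simulated, and the covariate names (pre_test, self_efficacy) are hypothetical stand-ins for whatever a study actually measures; this is a sketch of the general technique, not a prescribed Army analysis.

```python
# ANCOVA-style covariate adjustment with simulated data: estimate the effect
# of training condition (DL vs. RL) on a post-test while statistically
# controlling for measured covariates. Variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
condition = rng.choice(["DL", "RL"], size=n)   # in the field, often self-selected;
pre_test = rng.normal(50, 10, size=n)          # hence the need to adjust
self_efficacy = rng.normal(0, 1, size=n)

# Simulated outcome: driven mostly by the covariates plus a small condition effect.
post_test = (
    10 + 0.8 * pre_test + 2.0 * self_efficacy
    + np.where(condition == "DL", 1.5, 0.0)
    + rng.normal(0, 5, size=n)
)
df = pd.DataFrame({"condition": condition, "pre_test": pre_test,
                   "self_efficacy": self_efficacy, "post_test": post_test})

# The coefficient on C(condition) estimates the DL/RL difference net of the
# covariates, rather than confounding it with pre-existing group differences.
model = smf.ols("post_test ~ C(condition) + pre_test + self_efficacy", data=df).fit()
print(model.summary())
```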
Use Large Sample Sizes or More Powerful Study Designs

Findings of "no difference" between DL and RL cannot be reliably interpreted to mean that DL and RL are equally effective; in the language of research, one cannot confirm the "null hypothesis." To be confident that a finding of no difference is meaningful, one needs sufficient statistical power, which is determined, in part, by sample size.[5] For example, if one is planning a study that compares two groups of students—one that receives RL and one that receives DL—393 students are needed in each group to conclude that an absence of difference between groups is meaningful (Cohen, 1988).[6] Many previous studies comparing RL and DL lack sufficient power to conclude that a lack of difference between conditions is "real."

Thus, when comparing alternative interventions, such as different technology configurations for DL, and when anticipating a small effect, it is important to use sufficiently large samples or research designs that provide adequate power. In contrast, if one expects a moderate (or larger) effect of the intervention, a much smaller sample is required.[7] Researchers also can use study designs that require smaller samples. For instance, fewer study participants typically are needed to determine whether a single intervention makes a difference (e.g., improves learning), or when the same participants experience multiple interventions, than when comparing alternative interventions among different groups of participants.[8] These sample-size requirements can be computed directly, as shown in the sketch following the notes below.

[5] The larger the sample size, all other things being equal, the smaller the error in measurement and the greater the precision or reliability of the results (Cohen, 1988). When results are measured with greater precision, there is a higher probability of detecting the phenomenon of interest.
[6] This assumes conventional standards of a small effect size (d = .2), alpha = .05, and power = .80 (Cohen, 1988).
[7] For instance, if one expects a moderate effect size (d = .5) with alpha = .05 and power = .80, 64 students are needed in each group.
[8] See Cohen (1988) for recommendations regarding sample sizes needed for different study designs.
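The figures cited above can be reproduced with a standard power analysis. The short Python sketch below uses statsmodels to solve for the per-group sample size of a two-group comparison at the conventional thresholds given in the notes; it is a generic illustration of the calculation, not code from any of the studies discussed.

```python
# Per-group sample size for a two-sample t-test at alpha = .05, power = .80.
# Matches the figures cited from Cohen (1988): ~393 per group for a small
# effect (d = .2) and 64 per group for a moderate effect (d = .5).
from math import ceil
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for d in (0.2, 0.5):
    n_per_group = power_analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: {ceil(n_per_group)} students per group")
# d = 0.2: 394 students per group (Cohen's tables give 393)
# d = 0.5: 64 students per group
```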
Use a Multivariate Approach to Measuring Training Success

The effectiveness of training can be measured in many ways. Using multiple measures can help training developers and instructors determine whether and what trainees learn from training. It is particularly important to use objective measures of learning (e.g., performance on skills tests) and to measure knowledge retention and job performance after students complete training; in particular, few studies have measured job performance following training. Other indicators of learning from training include self-efficacy (confidence in one's ability to perform a task), commitment to training goals, and motivation (Kraiger, Ford, and Salas, 1993).

Use Appropriate Statistical Tests to Analyze Empirical Results

We found that in a number of published studies, incorrect statistical tests were used, compromising the meaning and value of the results. Bonk and Wisher (2000) provide a useful guide to e-learning research intended for military training developers and planners, instructional designers, and program evaluators. They describe instructional design issues, appropriate research methods, and suggested topics for research.

Policy Recommendations to Support Design, Implementation, and Evaluation of Distributed Learning in the Army

To reap the benefits of an expanded research effort on the quality of DL, we recommend a number of policies for TADLP and other organizations that are responsible for training in the Army. These recommendations are based on the interviews that we conducted with training developers, instructors, and other training staff.

Use Empirically Tested Measures of Learning and Make Results Available Within the Training Community

Comprehensive assessment should become a routine procedure in every DL course. We suggest that TRADOC identify empirically tested measures of learning and provide a means for training developers and quality assurance personnel to access them online. These measures can then be used to create course evaluation forms that are customized to particular courses (Straus et al., 2003). In addition, making the results of course evaluations available to the training community at large will provide data to support DL efforts across proponent schools.

Develop In-House Expertise in the Development and Assessment of Distributed Learning Courses

We suggest that the Army develop in-house expertise in the development and assessment of DL courses to avoid becoming overly dependent on contractors and to make more cost-effective use of their services. The more Army personnel understand about how to develop and evaluate DL products, the better they will be at writing appropriate contracts, conducting oversight of product development, and providing appropriate quality-assurance testing. Key practices the Army should adopt to become an informed consumer are described in Shanley et al. (2005, Chapter 3).

One way for the Army to develop this expertise is to use public-private partnerships in which Army training personnel work alongside contractors and eventually assume responsibility for the training function—an approach that state governments have found effective for implementing and managing information technology projects (Anderson et al., 2003).

Enhance Opportunities for Developers and Trainers to Communicate

Course development and evaluation are generally decentralized in the proponent schools. Our discussions with personnel at various schools suggest that the training development community would benefit from additional opportunities for formal or informal interaction. For instance, we found that DL modules developed at one school were applicable to courses being developed at other schools, but the training developers were not aware of these potential synergies. Likewise, useful information about the effectiveness of DL may not be routinely disseminated in the Army training community.
TRADOC should also develop metrics to funding for the delivery of DL courses—which is assess the broader goals of DL specified above at the based on the instructor contact hours metric—does proponent school level. not take into account all the required supporting activities, such as off-the-platform interaction between Summary instructors and students (see also U.S. Department Research on DL has not kept pace with the imple- of the Army, 2002). Recently, the method of calcu- mentation of DL efforts. We recommend that the lating instructor contact hours for DL courses has Army continue to conduct research on DL, empha- been modified to better reflect the amount of inter- sizing how to improve DL rather than comparing DL action required between instructors and students and RL. Such research might focus, for instance, on (U.S. Department of the Army, TRADOC, 2005). the effects of student-centered learning approaches, Resources must be Resources also must be available to update train- specific configurations of DL technologies, and strat- available to update ing to keep pace with the Army’s rapidly changing egies for facilitating interaction among instructors training to keep operating environment. DL courseware maintenance and students. Many other factors that could contrib- pace with the and improvement has been persistently underfunded. ute to learning in DL could be examined in this way. Army’s rapidly As noted in the U.S. Army Audit Agency's (AAA) We also encourage the Army to improve the quality changing operating review of courseware development for DL, many of this research by conducting studies that are based environment. courses are or will be obsolete before they are imple- on more rigorous research designs, use larger sample mented (U.S. Department of the Army, 2002). sizes, and examine multiple aspects of learning. Creating and maintaining DL is essentially a soft- Finally, we recommend providing support for effec- ware development process (Shanley et al., 2005); tive DL development, implementation, maintenance, software life-cycle and financial support models and improvement through organizational resources might inform TRADOC on how to resource DL for and personnel practices. Such support should include the longer term. In addition, the AAA recommended establishing mechanisms to make systematic evalua- that the Army establish a maintenance strategy that tion a standard practice in DL courses. These recom- specifies who is responsible for updating courseware mendations can help the Army use DL to realize its and what procedures to follow if additional financial vision of transforming training and providing life- resources are needed for courseware maintenance. long learning for multiskilled soldiers. To motivate training staff to support DL efforts, performance evaluation systems must be aligned with incentives. Increased personnel readiness and train- ing to one standard are goals for DL that might be 9 SCORM, or Shareable Content Object Reference Model, refers to stan- dards that specify ways to catalog course objects to enable interoperability, considered as a basis for evaluating the performance accessibility, and reusability of learning content. – 7 – Bibliography Anderson, Robert H., Tora K. Bikson, Rosalind Slavin, Robert. E., Eric A. Hurley, and Ann Lewis, Joy S. Moini, and Susan G. Straus, Effective Chamberlain, “Cooperative Learning and Use of Information Technology: Lessons about State Achievement: Theory and Research,” in W. M. 
Bonk, Curtis J., and Robert A. Wisher, Applying Collaborative and e-Learning Tools to Military Distance Learning: A Research Framework, Alexandria, Va.: U.S. Army Research Institute for the Behavioral and Social Sciences, September 2000.

Cohen, Jacob, Statistical Power Analysis for the Behavioral Sciences (2nd edition), Hillsdale, N.J.: Lawrence Erlbaum Associates, Inc., 1988.

Cook, Thomas D., and Donald T. Campbell, Quasi-Experimentation: Design and Analysis Issues for Field Settings, Boston, Mass.: Houghton-Mifflin, 1979.

Kraiger, Kurt, Kevin J. Ford, and Eduardo Salas, "Application of Cognitive, Skill-Based, and Affective Theories of Learning Outcomes to New Methods of Training Evaluation," Journal of Applied Psychology, Vol. 78, No. 2, 1993, pp. 311–328.

Kulik, Chen-Lin C., and James A. Kulik, "Effectiveness of Computer-Based Instruction: An Updated Analysis," Computers in Human Behavior, Vol. 7, Nos. 1 and 2, 1991, pp. 75–94.

Kulik, James A., "Meta-Analytic Studies of Findings on Computer-Based Instruction," in E. L. Baker and H. F. O'Neil, Jr. (eds.), Technology Assessment in Education and Training, Hillsdale, N.J.: Lawrence Erlbaum Associates, Inc., 1994, pp. 9–33.

Moore, Michael G., and Greg P. Kearsley, Distance Education: A Systems View, Belmont, Calif.: Wadsworth Publishing Company, 1996.

Phipps, Ronald A., and Jamie Merisotis, What's the Difference? A Review of Contemporary Research on the Effectiveness of Distance Learning in Higher Education, Washington, D.C.: The Institute for Higher Education Policy, 1999.
Shanley, Michael G., James C. Crowley, Matthew W. Lewis, Ralph Masi, Susan G. Straus, Kristin J. Leuschner, Steven Hartman, and Sue Stockly, Transformation and The Army School System, Santa Monica, Calif.: RAND Corporation, MG-328-A, 2005.

Sitzmann, Traci M., Kurt Kraiger, David W. Stewart, and Robert A. Wisher, The Comparative Effectiveness of Web-Based and Classroom Instruction: A Meta-Analysis, manuscript under review, 2005.

Slavin, Robert E., Eric A. Hurley, and Ann Chamberlain, "Cooperative Learning and Achievement: Theory and Research," in W. M. Reynolds and G. E. Miller (eds.), Handbook of Psychology, Vol. 7: Educational Psychology, New York: Wiley, 2003, pp. 177–198.

Straus, Susan G., Derek Davison, John R. Bondanella, Jolene Galegher, Joy Moini, and Michael G. Shanley, "The Effectiveness of Distributed Learning: A Review of Research from 2000–2002 and Implications for the Army," unpublished report, 2004.

Straus, Susan G., and Fernando Olivera, "Knowledge Acquisition in Virtual Teams," in T. A. Griffith (ed.), Research on Managing Groups and Teams, Vol. 3, Greenwich, Conn.: JAI Press, 2000, pp. 257–282.

Straus, Susan G., Valerie L. Williams, John R. Bondanella, David M. Oaks, and Emile Ettedgui, Designing Assessments of Microworld Training for Combat Service Support Staff, Santa Monica, Calif.: RAND Corporation, MR-1759-A, 2003.

U.S. Department of the Army, U.S. Army Audit Agency, "Courseware Development for Distance Learning," December 3, 2002.

U.S. Department of the Army, TADLP, "Army Distributed Learning, TADLP Overview, Training to Win!" March 5, 2003. Online at http://www.tadlp.monroe.army.mil/tadlpcgbrief(5MAR03).pps (accessed March 9, 2003).

U.S. Department of the Army, TRADOC, "Distance Learning Course Implementation Procedures," June 20, 2001. Online at http://www.tadlp.monroe.army.mil/new_page_2.htm (accessed December 21, 2005).

U.S. Department of the Army, TRADOC, "Training Distributed Learning: Managing Courseware Production and Implementation," Fort Monroe, Va.: TRADOC Pamphlet 350-70-12, July 2003.

U.S. Department of the Army, TRADOC, "DL ICH Policy & Implementation Procedures," May 19, 2005. Online at http://www.tadlp.monroe.army.mil/ICH%20Documents%20Page.htm (accessed December 21, 2005).

Wang, Alvin, and Michael Newlin, "Characteristics of Students Who Enroll and Succeed in Psychology Web-Based Classes," Journal of Educational Psychology, Vol. 92, No. 1, 2000, pp. 137–143.
