Informatics in Education, 2010, Vol. 9, No. 1, 91–114
© 2010 Institute of Mathematics and Informatics, Vilnius

Learning Content and Software Evaluation and Personalisation Problems

Eugenijus KURILOVAS, Silvija SĖRIKOVIENĖ

Institute of Mathematics and Informatics
Akademijos 4, LT-08663 Vilnius, Lithuania

Vilnius Gediminas Technical University
Saulėtekio 11, LT-10223 Vilnius, Lithuania

e-mail: [email protected]; [email protected]

Received: July 2009

Abstract. The paper aims to analyse several scientific approaches to evaluating, implementing or choosing learning content and software suitable for personalised users'/learners' needs. The learning objects metadata customisation method, as well as the Method of multiple criteria evaluation and optimisation of learning software represented by the experts' additive utility function, are analysed in more detail. The value of the experts' additive utility function depends on the learning software quality evaluation criteria, their ratings and weights. The Method is based on the software engineering Principle which claims that one should evaluate learning software using two different groups of quality evaluation criteria: 'internal quality' criteria defining the general software quality aspects, and 'quality in use' criteria defining software personalisation possibilities. The application of the Method and Principle to the evaluation and optimisation of learning software is innovative in technology enhanced learning theory and practice. The application of the method of minimising the experts' (decision makers') subjectivity, analysed in the paper, is also a new aspect in technology enhanced learning science. All the aforementioned approaches offer an efficient practical instrumentality for evaluating, designing or choosing learning content and software suitable for personalised learners' needs.

Keywords: e-learning systems, learning objects, repositories, virtual learning environments, personalisation, multiple criteria evaluation, optimisation, reusability.

1. Introduction: The Basic Notions, Principles and Methods Used in the Research

The problems of customisation/personalisation of Learning Object (LO) metadata, as well as of Learning Object Repositories (LORs) and Virtual Learning Environments (VLEs) and their technological quality evaluation and optimisation, are high on the agenda of the European research and education systems.

Different scientific methods are used for the quality evaluation of learning software packages such as LORs and VLEs. The paper aims to consider the problems of expert evaluation and personalisation of LORs' and VLEs' technological quality criteria.

The basic notions, principles and methods applied in the paper are as follows.

A learning object is referred to as any digital resource that can be reused to support learning (Wiley, 2000). LORs are considered here as properly constituted systems (i.e., organised LO collections) consisting of LOs, their metadata and tools/services to manage them (Kurilovas, 2007). Metadata is referred to here as structured data about data (Duval et al., 2002). VLEs are considered here as specific information systems which provide the possibility to create and use different learning scenarios and methods (IMI, 2005). In ISO/IEC 14598-1:1999, quality evaluation is defined as the systematic examination of the extent to which an entity (part, product, service or organisation) is capable of meeting specified requirements.

Different scientific methods are used for the quality evaluation of software. The multiple criteria evaluation Method is referred to as the experts' additive utility function, presented further in Section 4.3, including the alternatives' evaluation criteria, their ratings (values) and weights.
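For illustration, the following is a minimal Python sketch of how such an additive utility function can be computed; the criteria names, ratings and weights used here are hypothetical placeholders, not values from the paper, whose actual function is presented in Section 4.3.

def additive_utility(ratings, weights):
    # Experts' additive utility: the weighted sum of the alternative's
    # criteria ratings. Weights are assumed to be normalised to sum to 1.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[criterion] * rating for criterion, rating in ratings.items())

# Hypothetical ratings (0..1) of two learning software alternatives
# against three illustrative quality criteria.
weights = {"architecture": 0.4, "metadata": 0.3, "user_interface": 0.3}
alternatives = {
    "LOR_A": {"architecture": 0.8, "metadata": 0.6, "user_interface": 0.9},
    "LOR_B": {"architecture": 0.7, "metadata": 0.9, "user_interface": 0.6},
}

# Score-ranking: the optimal alternative is the one maximising the utility.
for name, ratings in alternatives.items():
    print(name, additive_utility(ratings, weights))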
Expert evaluation is referred to as the multiple criteria evaluation of learning software aimed at the selection of the best alternative based on score-ranking results. According to Dzemyda and Saltenis (1994), if the set of decision alternatives is assumed to be predefined, fixed and finite, then the decision problem is to choose the optimal alternative or, maybe, to rank them. But usually the experts (decision makers) have to deal with the problem of optimal decision in a multiple criteria situation where the objectives are often conflicting. In this case, according to Dzemyda and Saltenis (1994), "an optimal decision is the one that maximises the decision maker's utility".

The authors apply here one of the software engineering principles, which claims that one should evaluate software using two different groups/types of evaluation criteria: 'internal quality' and 'quality in use' criteria (further referred to as the Principle). According to Gasperovic and Caplinskas (2006), 'internal quality' is a descriptive characteristic that describes the quality of software independently of any particular context of its use, while 'quality in use' is an evaluative characteristic of software obtained by making a judgment based on criteria that determine the worthiness of software for a particular project or user/group (Gasperovic and Caplinskas, 2006).

The rest of the paper is organised as follows. Section 2 presents the comprehensive LORs technological quality evaluation model; Section 3, the comprehensive VLEs technological quality evaluation model; and Section 4, the implementation of the customisable LOs metadata schema and the method for the multiple criteria evaluation and optimisation of LORs and VLEs for particular learner needs. Conclusions and results are provided in Section 5.

2. Learning Object Repositories Quality Evaluation Criteria

The learning software multiple criteria evaluation Method proposed by the authors is based on software quality criteria, their ratings and weights. The most difficult problem here is the analysis and proposal of a suitable evaluation model (set of criteria).

First of all, let us review and shortly analyse the literature on existing well-known LORs technological quality evaluation models (i.e., sets of evaluation criteria) and methods. The main attention is paid to the sets of evaluation criteria (i.e., the models), but several evaluation methods concerning the application of ratings (values) and weights of the evaluation criteria are also provided.

2.1. SWITCH Learning Object Repository Quality Evaluation Grid

The first LOR quality evaluation tool presented here is the SWITCH project tool (SWITCH collection, 2008), developed while evaluating the DSpace and Fedora LORs in 2008 (see Table 1).
Table 1. SWITCH LOR quality evaluation criteria

Architecture
- Flexible and modular system: a flexible and modular architecture that allows various extensions, unknown now, to be included in the future; possibility to use the LOR system as part of a federation.
- API for storage engine: basic operations accessible through a well-defined, documented and designed API (SOAP, REST or comparable). Basic operations: create new, edit existing, search, query (needed for LMS-LOR integration).
- API for user access rights: user access rights can be assigned programmatically.
- API for federation functions: APIs for metadata harvesting and federated search.
- Metadata search with heterogeneous schemas: metadata search works with heterogeneous schemas across a LOR federation and within a LOR; applies metadata mapping or another best-effort approach.
- Full-text search: full-text search in a federation and in objects of the most common formats: HTML, XML, TXT, PDF, PPT, DOC, IMS-CP, SCORM.
- Performance: a common single-server installation is able to reasonably deal with at least 50,000 objects.
- Scalability: possibility to support up to 1 million objects, if necessary with clustering etc.
- Security: no intrinsic vulnerabilities; bullet-proof access rights system.
- Interoperability: OAI-PMH, federated search.
- Persistent links: possibility to add a persistent link system like OAI, handle, URN.
- Internationalisation: possibility to support at least EN, IT, DE, FR.

Metadata
- Minimal metadata schema: possibility to define a minimal metadata schema with overall mandatory items.
- Predefined sets of metadata: possibility to pre-define common metadata schemas (IMS, MPEG7, DC etc.) that can be used when necessary.
- Customisable metadata schema: institutions, groups of interest or individuals can extend the minimal metadata schema according to their needs.
- Metadata mapping for metadata search: possibility to map metadata items between schemas, for example "contributor" of the mandatory schema to "author" of the IMS schema.
- Unicode support.
- Social tagging: possibility to support dynamically defined social tags.

Graphical user interface
- Complete standard UI: a complete standard UI is available that covers all important functions for administrators and end-users.
- Extensible standard UI: the standard UI can be customised and extended.
- Multiple standard UIs: multiple standard UIs can be configured to run on the same repository system (needed to support multiple institutions).
- Custom UIs: possibility to add special-purpose UIs as needed, like a stripped-down query interface, a special video portal, etc.

- AAI authentication: possibility to add an AAI authentication system.
- Associate copyright license: pre-defined copyright licenses (like Creative Commons) can be easily associated.
- Direct distribution: objects can be directly accessed through a URL.
- Direct streaming: video/audio objects can be directly streamed.
- Alternative protocols for data upload: access through https, WebDAV, (s)ftp, ...

Storage
- Object can be of any format.
- Multi-part objects: one LO may consist of multiple elements, e.g., a CD consists of audio tracks and scanned booklet images.
- Access rights: possibility to define read and write access rights on 4 levels: world, institution, self-defined group, private.
- Hierarchical organization: LOs can be stored in a self-defined hierarchical structure like Institution–Domain–Department–Teacher.
- Property and metadata inheritance: object properties (access rights) and metadata items can be inherited within the hierarchy; support for predefined properties and the possibility to override them.
- Versioning system: re-uploading an object with the same ID does not overwrite the original, but creates a new version.
- Large objects: support for large objects like videos of several GBytes.

Other
- Strength of development community.
- Strength of users community.
- Code quality.
- Documentation quality.
- Ease of installation.

The results of the short analysis of this model are as follows.

There is no clear division of the criteria into 'internal quality' and 'quality in use' criteria in this tool. According to the Principle (see the introductory section), 'internal quality' criteria should mainly be the area of interest of the software engineers, while 'quality in use' criteria should mostly be analysed by the programmers and users, taking into account the users' feedback on the usability of the software.

However, we can notice that the 'Architecture' group's sub-criteria are mainly engineering criteria, and therefore they could be analysed as 'internal quality' criteria, while all the other criteria are mainly user-related, and therefore could be analysed as 'quality in use' criteria.

No accessibility and/or design-for-all criteria are mentioned in this model.

2.2. Catalyst IT Technical Evaluation Tool for Open Source Repositories

The second LOR quality evaluation model presented here is the model developed by Catalyst IT while evaluating the DSpace, EPrints and Fedora LORs in the Technical Evaluation of Selected Open Source Repository Solutions (2006). The model is quite complex and combines several types of criteria: scalability, ease of working on the code base, security, interoperability, ease of deployment, system administration, internationalisation, open source, workflow tools, and community knowledge base (see Table 2).

Table 2. Catalyst IT LOR quality evaluation criteria

Scalability
- Scale up: ability for the repository to scale higher by adding more resources (CPU, RAM, etc.).
- Scale out: the repository supports caching, adding more instances, and other mechanisms to scale higher.
- Architecture: the repository can be separated into different local parts put onto different machines (e.g., separate the database, data directory and components from the repository to distribute to different machines).

Ease of working on code base
- Add/change digital object type: the work involved in adding or changing a digital object type, such as adding or changing metadata.
- Documentation of code and code consistency & style.

Security
- Data encryption: supports encryption of data while transmitting the content, such as using SSL/https.
- Server security: what does the repository require for installation? Does it follow good security practices, e.g., proper file permissions, secure database connection?
- Authentication: the authentication used by the repository to authenticate users.
- Authorisation/access rights: support for different roles to properly manage the content and administer the system.
- Ability to restrict access at repository item level: e.g., view metadata but not content.

Interoperability
- OAI-PMH compliant (essential).
- SOAP, UDDI.
- SRU/SRW.
- Bulk import and export: support for batch/bulk import and export of digital objects.
- Institution exit mechanism to withdraw their content from the repository farm (essential).
- Authentication: use of an external authentication mechanism (e.g., LDAP).
- Standard metadata: Dublin Core, METS, etc.
Ease of deployment
- Software and hardware requirements: the repository only requires common/basic software and hardware.
- Packaging and installation steps.
- Separate repository and branding for each institution (essential).

System administration
- Ability to customise look and feel: change the header, theme, footer.
- Ease of publishing: inexperienced users of the repository can easily publish content.

Internationalisation
- Localisable UI.
- Unicode text editing and storage.

Open source
- Open source license (required).
- Defined roadmap for the future.

Workflow tools
- Workflow integration: support for using different workflow tools.
- Support for different workflows.

Community knowledge base
- Quality and completeness of information on the product's website.
- Size of and level of activity in the developer community.
- Size of and level of activity in the user community.
- Availability and use of a range of communication channels (email, forums, IRC, wiki, etc.).
- Software release history, for evidence of sustainability and vitality.
- Documentation on how to set up and manage a repository farm: one code base, many independent repositories.

This LOR evaluation model also proposes a set of ratings (values) to assess the evaluation criteria. Each criterion in this tool is given a rating (value) to be used when evaluating LORs. Major criteria (if needed) have to be broken down into sub-criteria, with each sub-criterion also having a rating. The rating range is 0–4, with 0 being the lowest and 4 the highest value.

The results of the short analysis of this model are as follows. There is no division of the criteria into 'internal quality' and 'quality in use' criteria in this tool. We can notice that the 'Scalability', 'Security', 'Interoperability' and 'Ease of deployment' criteria are mainly engineering criteria, and therefore could be analysed as 'internal quality' criteria, while the other criteria are mainly user-related, and therefore could be analysed as 'quality in use' criteria. Many useful evaluation criteria are missing from this model.
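As an illustration of this rating scheme, the short Python sketch below rolls sub-criterion ratings on the 0–4 scale up into a score for each major criterion; the sub-criteria and ratings shown are hypothetical, not Catalyst IT's actual figures, and the simple mean used for the roll-up is an assumption rather than the tool's prescribed aggregation.

def criterion_score(sub_ratings):
    # Each sub-criterion is rated 0-4 (0 lowest, 4 highest); one simple
    # roll-up to the major criterion is the mean of its sub-criteria.
    assert all(0 <= r <= 4 for r in sub_ratings.values())
    return sum(sub_ratings.values()) / len(sub_ratings)

# Hypothetical sub-criterion ratings for two major criteria.
scalability = {"scale_up": 3, "scale_out": 2, "architecture": 4}
security = {"data_encryption": 4, "server_security": 3, "authentication": 3}

print("Scalability:", criterion_score(scalability))  # 3.0
print("Security:", criterion_score(security))        # ~3.33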
2.3. OMII Software Repository Evaluation Criteria

The next model presented here is "Software Repository – Evaluation Criteria and Dissemination", prepared by Newhouse (2005) from the Open Middleware Infrastructure Institute (OMII). This document specifies the three critical phases of the software repository process:

1. The information that must be captured when a product is created within the repository and a specific release is submitted to the repository.
2. The assessment criteria that should be used to review the software contribution.
3. How the product and release information, coupled with the evaluation results, are presented within the LOR.

The model combines three types of criteria: documentation, technical and management (see Table 3).

Table 3. OMII LOR quality evaluation criteria

Documentation. A high overall score here indicates that the user can be reasonably confident that the supporting documentation will answer the majority of their queries. How comprehensive and useful is the provided documentation? Ideal: a high score indicates that the documentation provides sufficient depth and coverage to be useful for those trying to utilise the product for its main purpose/function.
- Introductory docs: a concise summary of the software. Ideal: information as to what the software does and how to quickly get started with it.
- Pre-requisite docs: information relating to the environment required for running this software. Ideal: is the environment required for this software well described?
- Installation docs: information on how to install the software. Ideal: clear instructions on how to install the software.
- User docs: information on the API. Ideal: a clear, simple user manual with usage scenarios and sample code.
- Admin docs: information on how to administer the software. Ideal: clear instructions on how to configure the software and maintain it in operation.
- Tutorials: details of how to use the software. Ideal: a clear, simple step-by-step description of how to use the software, with code samples if appropriate.
- Functional specification: functional specification of the software. Ideal: a clear, simple description of the product's functionality.
- Implementation specification: implementation details of the functional specification. Ideal: this document should contain not only the implementation details but also the reasons.
- Test documents: details of product testing. Ideal: details of the test plans, test code and results (with known issues) from running on various platforms and scenarios; also a description of how the user can repeat the same tests.

Technical. The evaluator will use the provided documentation to try to use the software. Their success (or otherwise) in using the software will demonstrate whether the contributed software provides useful functionality. An examination of the technical components of the software. Ideal: can the product be deployed, and does it run successfully from the provided documentation?
- Pre-requisites: software and environment changes necessary to support the installation of the software. Ideal: are the pre-requisites accurately described and sufficient to install and run the software?
- Deployment: deployment of the product into the server or client environment. Ideal: how easy is the deployment of this software into the required environment?
- Verification: evaluating the correct operation of the product. Ideal: is it clear how you can verify that the software has been successfully deployed and is operating correctly, e.g., post-installation tests?
- Stability: determination as to the stability of the product. Ideal: does the software run reliably under reasonable usage, and are there tests to support this?
- Scalability: an assessment of the scalability of the software. Ideal: how well does the software respond to high levels of utilisation and concurrent client activity, and are there tests to support this?
- Coding: an inspection of the code within the software.

Management. A decision to use software within a project is driven by many non-functional issues relating to the product, such as support, adoption, etc. Ideal: does the product appear to have a supported, sustainable future?
- Support: the support model provided by the contributors. Ideal: is the software supported through the community or dedicated resources?
- Sustainability: information about the product's future. Ideal: is there a roadmap with supporting funding defining the product's technical future?
- Standards: Ideal: does the software support mainstream specifications that are standards or are becoming standards?

The results of the short analysis of the tool are as follows. There is also no clear division of the criteria into 'internal quality' and 'quality in use' criteria in this tool. We can notice that the 'Technical' group's sub-criteria are mainly engineering criteria, and therefore could be analysed as 'internal quality' criteria, while the 'Documentation' and 'Management' criteria are mainly user-related, and therefore could be analysed as 'quality in use' criteria. Many useful evaluation criteria are also missing from this model.

2.4. Comprehensive Technical Evaluation Model for Learning Object Repositories

The Principle presented in the introductory section claims that there exist both 'internal quality' and 'quality in use' evaluation criteria for software packages (such as LORs).
The analysis shows that none of the models presented in Sections 2.1–2.3 clearly divides the LORs quality evaluation criteria into two separate groups: LORs 'internal quality' evaluation criteria and 'quality in use' criteria. Therefore it is hard to understand which criteria reflect the basic LORs quality aspects suitable for all software package alternatives, and which are suitable only for a concrete project or user and therefore need the users' feedback.

While analysing the LOR quality evaluation criteria presented in Sections 2.1–2.3, we can notice that several models pay more attention to the general software 'internal quality' evaluation criteria (such as the 'Architecture' group criteria), and several to the 'customisable' 'quality in use' evaluation criteria groups suitable for a concrete project or user: 'Metadata', 'Storage', 'Graphical user interface' and 'Other'. In conformance with the Principle, a comprehensive LOR quality evaluation model should include both general software 'internal quality' evaluation criteria and 'quality in use' evaluation criteria suitable for the particular project or user.

The authors' proposed LOR quality evaluation model is presented in Table 4. This tool is most similar to the SWITCH tool (see Table 1) in comparison with the other tools presented in Tables 2 and 3, but it also includes criteria from the other presented tools (incl. the authors' own research). The main ideas behind the constitution of this model are to clearly divide the LORs quality evaluation criteria in conformance with the Principle, as well as to ensure the comprehensiveness of the model and to avoid overlap of the criteria.

Table 4. LORs technological quality evaluation criteria

Internal quality evaluation criteria

Architecture:
1. Flexibility and modularity of the LOR system
2. Possibility to use the LOR system as part of a federation
3. Performance and scalability
4. Security
5. Interoperability
6. Stability
7. Ease of deployment
8. API for storage engine, user access rights and federation functions
9. Coding: an inspection of the code within the software
10. Full-text search
11. Internationalisation

Quality in use evaluation criteria

Metadata:
12. Minimal metadata schema
13. Predefined sets of metadata
14. Customisable metadata schema
15. Metadata mapping for metadata search
16. Unicode support
17. Social tagging

Storage:
18. Object can be of any format
19. Access rights
20. Hierarchical organization
21. Property and metadata inheritance
22. Large objects

Graphical user interface:
23. Complete standard UI
24. Customisable and extensible standard UI
25. Multiple standard UIs
26. Direct distribution

Other:
27. Strength of development community
28. Strength of users community
29. LOs retrieval quality: the user is able to retrieve LOs in different ways
30. Ease of installation
31. Accessibility, design for all
32. Sustainability
33. System administration: ability to customise look and feel
34. Documentation quality

The overlapping criteria could be 'Accessibility: access for all' (it could also be included in the 'Architecture' group, but this criterion also needs the users' evaluation, therefore it is included in the 'Quality in use' criteria group), 'Full-text search' (which could also be included in the 'Quality in use' criteria group), and 'Property and metadata inheritance' (it could also be included in the 'Metadata' group, but it also deals with the 'Storage' issues).

We have mentioned 34 different evaluation criteria in this model (set of criteria), of which 11 criteria deal with 'Internal quality' (or 'Architecture') and 23 criteria deal with 'Quality in use'. The 23 'Quality in use' criteria are divided into four groups for the probably higher quality of practical evaluation and for convenience reasons. There could be different experts (programmers and users) for the different groups of 'Quality in use' criteria: the 'Metadata', 'Storage' and 'Graphical user interface' criteria need different kinds of evaluators' expertise, as illustrated in the sketch below.
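To make this division of labour concrete, the following Python sketch assigns each criteria group to its own expert panel and averages the panel's ratings per criterion; the panel sizes, criteria and ratings are illustrative assumptions, and the 0–4 scale is borrowed from the Catalyst IT tool rather than prescribed by the proposed model.

def panel_mean(ratings):
    # Average of one expert panel's ratings for a single criterion.
    return sum(ratings) / len(ratings)

# 'Internal quality' criteria are rated by software engineering experts;
# 'quality in use' criteria by panels of programmers and users.
internal_quality = {
    "security": [4, 3, 4],
    "interoperability": [3, 3, 2],
}
quality_in_use = {
    "customisable_metadata_schema": [4, 4, 3],  # 'Metadata' panel
    "complete_standard_UI": [2, 3, 3],          # 'GUI' panel
}

for group_name, group in [("Internal quality", internal_quality),
                          ("Quality in use", quality_in_use)]:
    for criterion, ratings in group.items():
        print(group_name, criterion, round(panel_mean(ratings), 2))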
All new models need validation. It is planned to perform the proposed LOR quality evaluation model's validation procedure in autumn 2009 in Lithuania, involving three researchers and software engineering experts to validate the 'Internal quality' criteria, and 12 programmers and users (3 for each of the 4 groups) to validate the 'Quality in use' criteria.

We expect that the advantages of the proposed model could be its comprehensiveness and the clear division of the criteria. Therefore, this model could provide the experts of the e-learning sector with a clear instrumentality as to who (i.e., what kind of experts) should analyse what kind of LOR quality criteria in order to select the LOR software package best suited to their needs.

3. Virtual Learning Environments Quality Evaluation Criteria

Now let us review and shortly analyse the literature on existing well-known VLE evaluation tools and methods. The main attention is paid to the sets of evaluation criteria, but several evaluation methods concerning the application of ratings (values) and weights of the evaluation criteria are also provided.

3.1. Methodology of Technical Evaluation of Learning Management Systems

The Methodology of Technical Evaluation of Learning Management Systems (LMSs, or VLEs) is a part of the Evaluation of Learning Management Software activity undertaken as part of the New Zealand Open Source LMS project (Technical Evaluation Criteria, 2004). The evaluation criteria here expand on a subset of the criteria, focusing on the technical aspects of VLEs (Kurilovas, 2005):

1. Overall architecture and implementation (suitable for technical evaluation): scalability of the system; system modularity and extensibility; possibility of multiple installations on a single platform; reasonable performance optimisations; look and feel is configurable; security; modular authentication; robustness and stability; installation, dependencies and portability.

2. Interoperability (suitable for technical evaluation): integration is straightforward; LMS/VLE standards support.
