Methods article. Published: 21 November 2017. doi: 10.3389/fncom.2017.00103

Classification of EEG Signals Based on Pattern Recognition Approach

Hafeez Ullah Amin*, Wajid Mumtaz, Ahmad Rauf Subhani, Mohamad Naufal Mohamad Saad and Aamir Saeed Malik*
Centre for Intelligent Signal and Imaging Research (CISIR), Department of Electrical and Electronic Engineering, Universiti Teknologi Petronas, Seri Iskandar, Malaysia

Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detailed and approximate coefficients as well as relative wavelet energy, were computed. The extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high-density (128-channel) EEG dataset validated the proposed method by identifying two classes: (1) EEG signals recorded during a complex cognitive task using Raven's Advance Progressive Metric (RAPM) test; and (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via the SVM classifier for the approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for the detailed coefficients (D5), derived from the sub-band range 3.90–7.81 Hz, were 98.57 and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable, at 97.11–89.63% and 91.60–81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied to a public dataset for the classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance using machine learning classifiers compared to extant quantitative feature extraction. These results suggest the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a higher degree of accuracy.

Keywords: feature extraction, feature selection, machine learning classifiers, electroencephalogram (EEG)

Edited by: Dan Chen, Wuhan University, China
Reviewed by: Duan Li, University of Michigan, United States; Jamie Sleigh, University of Auckland, New Zealand
*Correspondence: Hafeez Ullah Amin, [email protected]; Aamir Saeed Malik, [email protected]

Received: 02 May 2017; Accepted: 01 November 2017; Published: 21 November 2017
Citation: Amin HU, Mumtaz W, Subhani AR, Saad MNM and Malik AS (2017) Classification of EEG Signals Based on Pattern Recognition Approach. Front. Comput. Neurosci. 11:103. doi: 10.3389/fncom.2017.00103

INTRODUCTION

Clinicians use the electroencephalogram (EEG) as a standard neuroimaging tool for the study of neuronal dynamics within the human brain. Data extracted from EEG results reflect the process of an individual's information processing (Grabner et al., 2006). Recent technological advances have increased the scope of EEG recording abilities by using dense groups of electrodes, including arrays of 128, 256, and 512 electrodes attached to the cranium. Visual inspection of these
massive datasets is cumbersome (Übeyli, 2009) for existing EEG-based analysis techniques. Hence, optimized extraction of relevant EEG features is essential to improve the quality of cognitive performance evaluations, especially since it directly impacts a classifier's performance (Iscan et al., 2011). In addition, more expressive features enhance classification performance; hence, feature extraction has become the most critical step in EEG data classification.

Several researchers have investigated "Quantitative EEG" (QEEG) for the evaluation of neural activity during cognitive tasks. They used time- and frequency-domain features such as entropy, power spectrum, autoregressive coefficients, and individual frequency bands (Doppelmayr et al., 2005; Zarjam et al., 2012). Frequency characteristics depend on neuronal activity and are grouped into several bands (delta, theta, alpha, beta, and gamma; Amin and Malik, 2013) that have been linked to cognitive processes.

Several feature extraction methods have been reported in the literature. These include time-frequency domain methods and the wavelet transform (WT) (Iscan et al., 2011). WT-based analysis is highly effective for non-stationary EEG signals compared to the short-time Fourier transform (STFT). Moreover, wavelet-based features, including wavelet entropy (Rosso et al., 2001), wavelet coefficients (Orhan et al., 2011), and wavelet statistical features (mean, median, and standard deviation), have been reported for the evaluation of normal EEG patterns and for clinical applications (Yazdani et al., 2009; Garry et al., 2013). However, significant gaps in the literature exist regarding cognitive load studies and approaches to pattern recognition. Many studies employed multiple cognitive tasks, such as multiplication, mental object rotation, mental letter composing, and visual counting (Xue et al., 2003). These tasks are simple in nature and may not induce a high enough load to activate the correlative cognitive neuronal networks that generate detectable electrical potentials. Furthermore, from a pattern recognition perspective, several studies excluded the "feature normalization" and "feature selection" steps (Xue et al., 2003; Daud and Yunus, 2004), while many others used very few instances (observations) as input for the classifiers that produce the output of the classification algorithms (Lin and Hsieh, 2009; Guo et al., 2011). In addition, some studies failed to cross-validate their proposed classification procedures.

The present study utilized "Raven's Advance Progressive Metric" (RAPM), commonly used to measure inter-individual differences in cognitive performance, as its standard psychometric cognitive task to stimulate EEG data (Raven, 2000; Neubauer and Fink, 2009). RAPM requires higher cognitive processing resources and reasoning ability to perform its tasks, which are non-verbal psychometric tests requiring inductive reasoning, considered an indicator of cognitive performance (Raven, 2000). The authors propose a "pattern recognition" approach comprising feature extraction, feature normalization, feature selection, feature classification, and cross-validation (Figure 5). Wavelet coefficients were extracted using the discrete wavelet transform (DWT), along with relative sub-band energies, which were then standardized to zero mean and unit variance. The feature selection process utilized statistical methods, including Fisher's discriminant ratio (FDR) and principal component analysis (PCA), to eliminate non-significant features, which, in turn, optimized the "features" data set. For classification, we employed K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) on the optimized "features" set for our purpose of EEG pattern classification. Results were then compared with existing quantitative methods to confirm rather robust outcomes. Two EEG datasets were used to validate the proposed method. Dataset I comprises a complex cognitive task (class 1) and a baseline eyes-open task (class 2), while dataset II comprises two cognitive tasks: mental multiplication (class 1) and mental letter composing (class 2).

Section Materials and Methods describes the materials and methods, followed by section Experimental Results and Discussion, which presents the results and discussion, followed by section Limitations and the concluding remarks.

MATERIALS AND METHODS

This section provides details of the experimental tasks and the datasets used in this study. In addition, we briefly describe the classification algorithms employed and the DWT, as well as the computations used for wavelet and relative energy.

Raven's Advance Progressive Matric Test (RAPM)
Raven's Advance Progressive Matric Test (Raven, 2000) is a non-verbal tool that measures an individual's level of intellectual ability. It is commonly used to explicitly measure two components of general cognitive ability (Raven, 2000): "the ability to draw meaning out of confusion, and the ability to recall and reproduce information that has been made explicit and communicated from one to another." The RAPM test comprises 48 patterns (questions) divided into two sets (I and II). Set-I contains 12 practice patterns and Set-II contains 36 patterns used to assess cognitive ability. As shown in Figure 1, each pattern contains a 3 x 3-cell structure where each cell contains a geometrical shape, except for the empty right-bottom cell. Eight multiple-choice options are presented as solutions for the empty cell. A score of "1" is assigned for each correct answer and a score of "0" for an incorrect answer. The processing time is 10 and 40 min for sets I and II, respectively.

[FIGURE 1 | A simple Raven's style pattern (Amin et al., 2015b).]

Description of Dataset I
Procedures adopted for the recording and preprocessing of the EEG data are found in our previous studies [7, 8]. After preprocessing, we divided the dataset into two classes as follows: Class 1 included EEG data from eight experimental subjects recorded during their RAPM task performances; Class 2 included EEG data recorded with eyes open (EO) from the same eight subjects. As mentioned in the RAPM task description, each subject completed 36 patterns within a specified period. Hence, these EEG recordings were time-marked for the onset of RAPM pattern display, and again at the end of a subject's response, specifically when pressing a button indicating a solution. Each subject's EEG recording was
segmented according to the number of patterns solved. Each EEG segment was also considered an observation, thus producing 36 observations per subject. However, a few patterns went unanswered (unsolved) by some subjects, and these EEG segments were excluded from the analysis. We obtained 280 observations for Class 1. To maintain a balance between classes, the EO and EC (eyes-closed) data for each subject were segmented according to the number of attempted RAPM patterns. As a result, the EO data also included 280 observations for Class 2.

Description of Dataset II
The EEG dataset utilized in this study was originally reported by Keirn and Aunon (1990) and is publicly available for reuse. The EEG data were recorded by placing electrodes over the C3, C4, P3, P4, O1, and O2 positions according to the 10–20 montage and referenced to the linked mastoids, A1 and A2. The impedances of all electrodes were kept below 5 kΩ. The data were digitized at 250 Hz with a Lab Master 12-bit A/D converter mounted on a computer. Seven participants, 21–48 years old, recorded EEG data during cognitive tasks. The data were recorded for a duration of 10 s in each trial, and each task was repeated five times (for more details on the dataset, see the original work of Keirn and Aunon, 1990). For this study, we tested our proposed approach on the two cognitive tasks from Keirn and Aunon's dataset described below.

Mental Multiplication Task
The participants were given non-trivial multiplication problems, such as multiplying two multi-digit numbers, and were asked to solve them without vocalizing or making any other physical movements.

Mental Letter Composition Task
In this task, the participants were asked to mentally compose a letter to a friend or relative without vocalizing.

The Discrete Wavelet Transform (DWT)
Discrete Wavelet Transform decomposition consists of successive high-pass and low-pass filtering of a time series with a down-sampling rate of 2. The high-pass filter [g(n)] is the discrete "mother" wavelet and the low-pass filter [h(n)] is its mirror version (Subasi, 2007). The "mother" wavelet [Daubechies wavelet (db4)] and the corresponding scaling function are shown in Figure 2. The outputs from the initial high-pass and low-pass filters are called the "detailed" and "approximation" coefficients (D1 and A1), respectively. A1 is decomposed further and the procedure is repeated until a specified number of decomposition levels is reached (see Figure 3; Jahankhani et al., 2006; Subasi, 2007).

[FIGURE 2 | Mother wavelet and scaling function (db4).]
[FIGURE 3 | DWT sub-band decomposition.]

The scaling function [\phi_{j,k}(n)] and the wavelet function [\psi_{j,k}(n)] depend on the low-pass and high-pass filters, respectively. They are denoted as follows:

    \phi_{j,k}(n) = 2^{-j/2} \, h(2^{-j} n - k)    (1)

    \psi_{j,k}(n) = 2^{-j/2} \, g(2^{-j} n - k)    (2)

where n = 0, 1, 2, ..., M - 1; j = 0, 1, 2, ..., J - 1; k = 0, 1, 2, ..., 2^{j} - 1; J = 5; and M is the length of the signal (Gonzalez and Woods, 2002).

The approximation coefficients (A_i) and detailed coefficients (D_i) at the i-th level are determined as follows (Orhan et al., 2011):

    A_i = \frac{1}{\sqrt{M}} \sum_{n} x(n) \, \phi_{j,k}(n)    (3)

    D_i = \frac{1}{\sqrt{M}} \sum_{n} x(n) \, \psi_{j,k}(n)    (4)

Relative and Total Wavelet Sub-band Energy
The wavelet energy for each decomposition level (i = 1, ..., l) is determined as follows:

    E_{D_i} = \sum_{j=1}^{N} |D_{ij}|^{2}, \quad i = 1, 2, 3, ..., l    (5)

where l = 5 is the number of decomposition levels, and

    E_{A_i} = \sum_{j=1}^{N} |A_{ij}|^{2}, \quad i = l    (6)

Therefore, from Equations (5) and (6), the total energy can be defined as:

    E_{Total} = \sum_{i=1}^{l} E_{D_i} + E_{A_l}    (7)

The normalized energy values represent the relative wavelet energy (see Figure 4 for an example of total and relative sub-band energy):

    E_r = \frac{E_j}{E_{Total}}    (8)

where E_j = E_{D_i} (i = 1, ..., 5) or E_{A_5}.

[FIGURE 4 | EEG signal energy and relative sub-band energy.]
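The decomposition and energy computations above can be expressed compactly in code. The following is a minimal sketch, not the authors' implementation: it assumes PyWavelets and NumPy are available, uses the db4 wavelet at level 5 as described, and computes the relative wavelet energy of each sub-band for a single channel segment following Equations (5) to (8). The function name and the surrogate test signal are ours.

```python
# Minimal sketch (not the authors' code): level-5 db4 DWT of one EEG channel segment
# and the relative wavelet energy of each sub-band, following Equations (5)-(8).
import numpy as np
import pywt

def relative_wavelet_energy(signal, wavelet="db4", level=5):
    # pywt.wavedec returns [A5, D5, D4, D3, D2, D1] for level=5
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    labels = ["A5"] + [f"D{i}" for i in range(level, 0, -1)]
    # Sub-band energy: sum of squared coefficients (Equations 5 and 6)
    energies = {lab: float(np.sum(np.square(c))) for lab, c in zip(labels, coeffs)}
    total = sum(energies.values())                          # Equation (7)
    return {lab: e / total for lab, e in energies.items()}  # Equation (8)

# Surrogate 10 s signal at 250 Hz (the sampling rate of dataset II), for illustration only
t = np.arange(0, 10, 1 / 250)
eeg = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
print(relative_wavelet_energy(eeg))
```

For the datasets described above, such a function would be applied to every channel of every segmented observation, yielding one relative-energy value per channel and sub-band.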
Classification Algorithms
The machine learning classifiers used in this study are now briefly described.

A classifier uses the values of independent variables (features) as input to predict the corresponding class to which an instance belongs (Pereira et al., 2009). A classifier has a number of parameters that require training from a training dataset. A trained classifier models the association between the classes and the corresponding features and is capable of identifying new instances in an unseen testing dataset. We employed the following classification methods to demonstrate the effectiveness of this study's proposed technique.

Support Vector Machine (SVM)
The SVM is a supervised learning algorithm that uses a kernel trick to transform the input data into a higher-dimensional space, after which it separates the data via a hyperplane with maximal margins. Due to its ability to manage large datasets, the algorithm is widely used for binary classification problems in machine learning. For more details on SVM, see Hsu et al. (2003).

Multilayer Perceptron (MLP)
The MLP is a non-linear, neural network based method comprising three sequential layers, namely input, hidden, and output, where the hidden layer transmits the input data to the output layer. However, the MLP model can over-fit due to an insufficient or excessive number of neurons. For our purposes, we employed the MLP model with five hidden neurons.

Naïve Bayes (NB)
The NB classifier provides simple and efficient probabilistic classification based on Bayes' theorem, which assumes that the extracted features are independent. The NB model uses (i) a maximum probability algorithm to determine the class prior probabilities, and (ii) a feature's probability distribution from a training dataset. These are then combined with a maximum a posteriori decision rule to find the specific class label for a new test instance (Han et al., 2011).

k-Nearest Neighbor (k-NN)
The k-nearest neighbor is a supervised learning algorithm that identifies a testing sample's class according to the majority class of the k nearest training samples; i.e., a class label is allocated to a new instance from the most common class amongst its k nearest neighbors in the "feature" space. In this study, the k value was set to three. See Pereira et al. (2009) and Han et al. (2011) for details on the k-NN, SVM, MLP, and NB machine learning classifiers.

K-fold Cross Validation
All classification models in the present work were trained and tested with EEG data and then confirmed using k-fold cross-validation, a commonly used technique that (i) compares the performances of two classification algorithms, or (ii) evaluates the performance of a single classifier on a given dataset (Wong, 2015). It has the advantage of using all instances in a dataset for either training or testing, where each instance is employed for validation exactly once. For our purposes, we used 10-fold cross-validation to train and test the extracted features for all classifiers.
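A sketch of how the four classifiers and the 10-fold protocol could be wired together with scikit-learn is shown below. This is an assumed setup rather than the authors' code: the library defaults (e.g., the MLP optimizer and the SVM regularization constant) are not specified in the paper, and the feature matrix here is a random placeholder with the shape of dataset I.

```python
# Assumed scikit-learn setup (not the authors' implementation): the four classifiers
# described above, evaluated with 10-fold cross-validation on a placeholder feature matrix.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
X = rng.randn(560, 128)          # placeholder: 2 classes x 280 observations, 128 channels
y = np.repeat([0, 1], 280)       # 0 = eyes open, 1 = RAPM

classifiers = {
    "KNN (k=3)": KNeighborsClassifier(n_neighbors=3),
    "SVM (RBF)": SVC(kernel="rbf"),
    "MLP (5 hidden neurons)": MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000),
    "Naive Bayes": GaussianNB(),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    # z-score standardization is fitted inside each fold to avoid leakage
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```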
The Proposed Scheme
Figure 5 summarizes the proposed feature extraction scheme, which comprises the following steps (a code sketch of the full pipeline is given after the list):

[FIGURE 5 | Proposed scheme for feature extraction and classification of EEG signals.]

a. Feature Extraction
   1. Decomposition of the EEG signal into sub-bands using the DWT
   2. Computation of each sub-band's relative energy
   Repeat steps 1 and 2 for all channels, for each subject, and for each segment (question).

b. Feature Visualization and Standardization
   1. Standardization of the extracted features to zero mean and unit variance

c. Feature Selection
   1. Application of FDR to the "features" set, followed by sorting the features in descending order according to their power of discrimination
   2. Selection of the subset of features above the FDR median of the sorted features listed in step 1
   3. Transformation of the selected subset of features into principal components, followed by sorting in descending order according to principal value
   4. Selection of principal components up to 95% variance
   5. Reconstruction of the corresponding feature vectors
   Repeat steps 1 to 5 for all sub-bands in the "features" set.

d. Feature Classification
   1. Classification of the selected feature sets via KNN, SVM, MLP, and NB
   2. Evaluation of classifier performance with 10-fold cross-validation, assessing accuracy, sensitivity, and specificity
   Repeat steps 1 and 2 for each sub-band of the "features" set.
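To make the flow of steps (a) to (d) concrete, the sketch below strings the stages together. It is illustrative only: relative_wavelet_energy is the helper from the DWT sketch earlier, fdr_scores is defined in the feature-selection sketch that follows the next sections, binary integer labels are assumed, and, for brevity, scaling and selection are fitted on the full feature set, mirroring the scheme description, rather than nested inside the cross-validation folds.

```python
# Illustrative pipeline only: a sketch of steps (a)-(d) above, not the authors' code.
# `segments` is an iterable of observations, each a list of per-channel 1-D arrays;
# `labels` is a binary integer array; data shapes follow Dataset I (280 per class, 128 channels).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def run_pipeline(segments, labels, band="A5"):
    # (a) Feature extraction: relative wavelet energy of one sub-band per channel
    X = np.array([[relative_wavelet_energy(ch)[band] for ch in seg] for seg in segments])

    # (b) Standardization to zero mean and unit variance
    X = StandardScaler().fit_transform(X)

    # (c) Feature selection: keep features at or above the median FDR score, then PCA to 95% variance
    scores = fdr_scores(X, labels)
    X = X[:, scores >= np.median(scores)]
    X = PCA(n_components=0.95).fit_transform(X)

    # (d) Classification with 10-fold cross-validation (SVM shown; KNN/MLP/NB are swapped in likewise)
    return cross_val_score(SVC(kernel="rbf"), X, labels, cv=10, scoring="accuracy")
```

In a stricter evaluation, the standardization, FDR selection, and PCA steps would be fitted on the training folds only and applied to the held-out fold.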
Feature Extraction
The EEG signal was decomposed into sub-band frequencies by using the DWT with the Daubechies 4 wavelet to level 5. Approximation and detailed coefficients were then computed (see Figure 6 for an example). Table 1 presents one channel's sub-band relative energy percentages and frequency ranges for a single experimental subject. Total and relative sub-band energies were then computed from the extracted wavelet coefficients. We calculated the relative wavelet energies (E_{rD1}, E_{rD2}, ..., E_{rA5}) by using Equation (8).

[FIGURE 6 | A5 and D1–D5 components of an experimental subject's EEG signal during a cognitive task.]

TABLE 1 | Frontal lobe F3 channel: relative energy sub-band percentages and frequency ranges.

Level | Wavelet energy (%) | Wavelet coefficients | Frequency band (Hz)
1     | 0.66               | D1                   | 62.50–125
2     | 2.31               | D2                   | 31.25–62.50
3     | 7.63               | D3                   | 15.62–31.25
4     | 10.77              | D4                   | 7.81–15.62
5     | 21.55              | D5                   | 3.90–7.81
5     | 57.12              | A5                   | 0–3.90

We computed relative energy features for all experimental subjects and for all data collected from all channels. Accordingly, for dataset I, the feature matrix describing the relative energy for each EEG condition (class) and for each sub-band (detailed or approximated) becomes:

    Relative Energy Feature Matrix (\vec{F_r}) = [E_{rA5}(280 \times 128)]

where the number of channels is 128, the number of instances in each class is 280, and E_{rA5} represents the relative energy at the approximation coefficients A5 (0–3.90 Hz). Feature matrices for the D2–D5 coefficients were represented identically for each class and each experimental subject.

Feature Visualization and Normalization
Feature visualization is an important step before the application of any normalization method. The features should be visualized to check the distribution of feature values (Figure 7). In this study, the features were standardized as follows:

    \acute{x} = \frac{x - \mu}{\sigma}    (9)

where x is the original feature value, \mu is the mean and \sigma is the standard deviation of the "features" set, and \acute{x} is the normalized feature value.

[FIGURE 7 | Feature visualization of one sample within a dataset using normal probability and histogram.]

Feature Selection
The main objective of the feature selection step in pattern recognition is to select, from a large number of available features, a subset that discriminates more robustly for purposes of classification (Theodoridis et al., 2010). In this study, FDR and PCA were used to optimize the feature selection.

FDR is independent of the type of class distribution and quantifies the discriminating power of individual features between classes. If m_1 and m_2 are the mean values, and \sigma_1^2 and \sigma_2^2 the respective variances, of a feature in the two classes, FDR is defined as:

    FDR = \frac{(m_1 - m_2)^2}{\sigma_1^2 + \sigma_2^2}    (10)

PCA (Abdi and Williams, 2010) selects mutually uncorrelated features; hence, it avoids redundancy in the feature set. For a feature set (X) containing l features of N examples each, such that x_i \in R^l, i = 1, 2, ..., N, PCA aims to linearly transform the feature set and thus obtain a new set of samples (y) in which the components of y are uncorrelated, as in Equation (11):

    y = A^T x    (11)

where A is a transformation matrix, i.e., an arrangement of the singular vectors that correspond to the significant singular values. The PCA process requires a number of steps:

Step 1: The covariance matrix (S) of the feature matrix (X) is estimated:

    S = \sum_{i=1}^{N} x_i x_i^T    (12)

Step 2: Singular value decomposition (SVD) is applied to S, and the l singular values (\lambda_i) and singular vectors (a_i \in R^l), i = 1, 2, ..., l, are computed. The singular values are organized in descending order (\lambda_1 \ge \lambda_2 \ge ... \ge \lambda_l), after which the correct pairing of singular values with their corresponding singular vectors is ensured. In addition, the m largest singular values are selected. Normally, m is selected to include a certain percentage of the total energy (95%). The singular values \lambda_1 \ge \lambda_2 \ge ... \ge \lambda_m are called the m principal components. The respective (column) singular vectors (a_i, i = 1, 2, ..., m) are then used to construct the transformation matrix:

    A = [a_1 \; a_2 \; a_3 \; ... \; a_m]    (13)

Each l-dimensional vector in the original space X is transformed to an m-dimensional vector y via the transformation y = A^T x. In other words, the i-th element of y, i.e., y(i), is the projection of x on a_i (y(i) = a_i^T x).
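A minimal sketch of this normalization and selection stage is given below. It is not the authors' code: the FDR score is implemented in the common squared-difference-over-summed-variance form consistent with Equation (10), the median threshold follows step c.2 of the proposed scheme, scikit-learn's PCA is used to retain the components explaining 95% of the variance, and the data are random placeholders.

```python
# Minimal sketch of the normalization and selection stage (Equations 9-13);
# not the authors' implementation. `X` is (instances x features), `y` holds binary labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fdr_scores(X, y):
    """Fisher's discriminant ratio per feature: (m1 - m2)^2 / (s1^2 + s2^2)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12   # guard against zero variance
    return num / den

rng = np.random.RandomState(0)
X = rng.randn(560, 128)            # placeholder for a 280-per-class relative-energy matrix
y = np.repeat([0, 1], 280)

# Equation (9): zero mean, unit variance
Xz = StandardScaler().fit_transform(X)

# FDR ranking; keep the features at or above the median score (step c.2 of the scheme)
scores = fdr_scores(Xz, y)
Xsel = Xz[:, scores >= np.median(scores)]

# PCA retaining components up to 95% of the variance (steps c.3-c.5)
Xpca = PCA(n_components=0.95).fit_transform(Xsel)
print(Xz.shape, Xsel.shape, Xpca.shape)
```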
Feature Classification
The optimized "features" set was visualized by using probability distribution functions (PDFs) and the ROC curve to check for any overlap in each selected feature for both classes (see Figure 8: EO vs. RAPM). Each selected feature yields a partial overlap and ROC values that are >0.9 or close to 1.0, which confirm the discriminating power of the selected "features" set. Finally, the classifiers KNN, SVM, MLP, and NB were employed for the discrimination of both classes.

[FIGURE 8 | (A) Distributions with partial overlap between the pdfs of both classes (eyes open and RAPM); (B) Corresponding ROC curve, where 0 denotes complete overlap and 1 indicates complete separation.]

EXPERIMENTAL RESULTS AND DISCUSSION

Experimental Setup
Following the 10-fold cross-validation process, we divided the dataset into 10 subsets with equal numbers of instances, with nine subsets employed for classifier training and one subset for classifier testing. The process was repeated 10 times so that each subset was tested for classification. Classifier performance was measured in terms of accuracy, sensitivity, specificity, precision, and the Kappa statistic (Amin et al., 2015a), each defined as follows:

    Accuracy = \frac{\text{Total no. of correctly classified instances}}{\text{Total number of instances}} \times 100    (14)

    Sensitivity = \frac{\text{True Positive}}{\text{True Positive} + \text{False Negative}} \times 100    (15)

    Specificity = \frac{\text{True Negative}}{\text{True Negative} + \text{False Positive}} \times 100    (16)

    Precision = \frac{\text{True Positive}}{\text{True Positive} + \text{False Positive}} \times 100    (17)

    Kappa\,(k) = \frac{P_o - P_e^C}{1 - P_e^C}    (18)

where P_o represents the probability of overall agreement between the label assignments of the classifier and the true process, and P_e^C denotes the chance agreement for all labels, i.e., the sum over classes of the proportion of instances assigned to a class multiplied by the proportion of true labels of that class in the dataset.
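The measures in Equations (14) to (18) can be computed directly from a binary confusion matrix, as in the hedged sketch below; the helper name and the toy labels are ours, and Cohen's kappa is taken from scikit-learn rather than re-derived from P_o and P_e^C.

```python
# Sketch of the evaluation metrics in Equations (14)-(18) for a binary problem,
# computed from true and predicted labels; not tied to the authors' code.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

def performance_metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy":    100 * (tp + tn) / (tp + tn + fp + fn),   # Eq. (14)
        "sensitivity": 100 * tp / (tp + fn),                    # Eq. (15)
        "specificity": 100 * tn / (tn + fp),                    # Eq. (16)
        "precision":   100 * tp / (tp + fp),                    # Eq. (17)
        "kappa":       cohen_kappa_score(y_true, y_pred),       # Eq. (18)
    }

# toy example
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 1, 0])
print(performance_metrics(y_true, y_pred))
```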
Classification Results
We used the DWT to extract relative wavelet energy features for D1–D5 and A5 for both EEG datasets, where dataset I contained two conditions (EO and RAPM) and dataset II contained two cognitive tasks, i.e., the mental multiplication task and the mental letter composing task. The extracted features were reduced to an optimum number of features by using FDR and PCA. We then applied the machine learning algorithms, i.e., SVM with an RBF kernel, MLP with five hidden neurons, KNN with k = 3, and NB, to classify the extracted features at all decomposition levels (D2–D5 and A5). The detail coefficient D1 reflects high-frequency (62.5–125 Hz) components; it was therefore considered noise and excluded from classification.

Results of Dataset I
The SVM and KNN classifiers showed 99.11 and 98.21% accuracy, respectively, for the A5 approximation coefficients, and 98.57 and 98.39% accuracy for the D5 detailed coefficients, as shown in Tables 2, 3. This impressive performance at level 5 reflects cognitive task domination at low frequencies (0–3.90 Hz) and just above low frequencies (3.90–7.81 Hz). The MLP and NB classifier accuracies were 97.14 and 89.63%, respectively, for the approximation coefficients, and 91.60 and 81.07% for the detailed coefficients at level 5. Results for the other performance parameters, i.e., sensitivity, specificity, precision, and the Kappa statistic, were also impressive. Subject-wise classification accuracies for the complex cognitive task (RAPM) vs. the eyes-open (EO) baseline are presented in Tables 4, 5. The mean and standard deviation of the accuracies with the KNN, SVM, MLP, and Naïve Bayes classifiers are 96.25 ± 2.47, 97.86 ± 1.60, 97.14 ± 1.43, and 96.07 ± 1.19 for the approximation coefficients (0–3.90 Hz), and 94.10 ± 2.69, 95.35 ± 2.62, 97.32 ± 1.42, and 93.92 ± 2.83 for the detailed coefficients (3.90–7.81 Hz), respectively.

TABLE 2 | Approximation coefficients (0–3.90 Hz) for cognitive tasks. Relative wavelet energy classification results for level 5.

Classifier | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUC | Precision (%) | Kappa statistic
KNN, k=3   | 98.21 | 96.88 | 99.63 | 0.98 | 99.64 | 0.96
SVM, RBF   | 99.11 | 99.28 | 98.93 | 0.99 | 98.93 | 0.97
MLP, N=5   | 97.14 | 96.48 | 97.83 | 0.98 | 97.86 | 0.94
Naïve      | 89.63 | 88.24 | 90.77 | 0.94 | 91.07 | 0.78

TABLE 3 | Detailed coefficients (3.90–7.81 Hz) for cognitive tasks. Relative wavelet energy classification results for level 5.

Classifier | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUC | Precision (%) | Kappa statistic
KNN, k=3   | 98.39 | 98.22 | 98.56 | 0.98 | 98.56 | 0.96
SVM, RBF   | 98.57 | 97.89 | 99.28 | 0.98 | 99.28 | 0.96
MLP, N=5   | 91.60 | 88.70 | 94.98 | 0.92 | 95.36 | 0.89
Naïve      | 81.07 | 82.71 | 79.59 | 0.82 | 78.57 | 0.79

TABLE 4 | Approximation coefficients (0–3.90 Hz): subject-wise classification accuracy (%). Relative wavelet energy classification results for level 5.

Subject   | KNN   | SVM   | MLP   | NB
Subject 1 | 97.14 | 100   | 97.14 | 97.14
Subject 2 | 98.57 | 98.57 | 98.57 | 95.71
Subject 3 | 94.28 | 97.14 | 94.28 | 95.71
Subject 4 | 92.85 | 95.71 | 95.71 | 94.28
Subject 5 | 92.85 | 97.14 | 98.57 | 95.71
Subject 6 | 100   | 98.57 | 97.14 | 95.71
Subject 7 | 97.14 | 95.71 | 97.14 | 95.71
Subject 8 | 97.14 | 100   | 98.57 | 98.57
Mean      | 96.25 | 97.86 | 97.14 | 96.07
STD       | 2.47  | 1.60  | 1.43  | 1.19

TABLE 5 | Detailed coefficients (3.90–7.81 Hz): subject-wise classification accuracy (%). Relative wavelet energy classification results for level 5.

Subject   | KNN   | SVM   | MLP   | NB
Subject 1 | 90.00 | 91.42 | 97.14 | 91.42
Subject 2 | 95.71 | 97.14 | 95.71 | 98.57
Subject 3 | 94.28 | 95.71 | 95.71 | 95.71
Subject 4 | 90.00 | 98.57 | 98.57 | 92.85
Subject 5 | 94.28 | 95.71 | 97.14 | 91.42
Subject 6 | 95.71 | 91.42 | 100   | 91.42
Subject 7 | 95.71 | 95.71 | 97.14 | 92.85
Subject 8 | 97.14 | 97.14 | 97.14 | 97.14
Mean      | 94.10 | 95.35 | 97.32 | 93.92
STD       | 2.69  | 2.62  | 1.42  | 2.83

Results of Dataset II
For the classification of the two cognitive tasks, i.e., the mental multiplication task and the mental letter composing task, the MLP and SVM classifiers achieved 89.17 and 86.67% accuracy, respectively, for the A5 approximation coefficients, and the KNN and MLP classifiers achieved 93.33 and 92.50% accuracy for the D5 detailed coefficients, respectively, as shown in Tables 6, 7. These results indicate that the proposed approach is strong enough to perform on a public dataset for the classification of cognitive tasks. A comparison of the proposed approach with previous work on the same dataset is presented in Table 8.

TABLE 6 | Approximation coefficients (0–3.90 Hz) for cognitive tasks (mental multiplication vs. mental letter composing). Relative wavelet energy classification results for level 5.

Classifier | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUC | Precision (%) | Kappa statistic
KNN, k=3   | 82.50 | 80.95 | 84.21 | 0.85 | 85    | 0.84
SVM, RBF   | 86.67 | 87.93 | 85.48 | 0.89 | 85    | 0.87
MLP, N=5   | 89.17 | 89.83 | 88.52 | 0.90 | 88.33 | 0.89
Naïve      | 78.33 | 77.42 | 79.31 | 0.79 | 80    | 0.77

TABLE 7 | Detailed coefficients (3.90–7.81 Hz) for cognitive tasks (mental multiplication vs. mental letter composing). Relative wavelet energy classification results for level 5.

Classifier | Accuracy (%) | Sensitivity (%) | Specificity (%) | AUC | Precision (%) | Kappa statistic
KNN, k=3   | 93.33 | 91.94 | 94.83 | 0.92 | 95    | 0.93
SVM, RBF   | 90    | 87.50 | 92.86 | 0.92 | 93.33 | 0.90
MLP, N=5   | 92.50 | 90.48 | 94.74 | 0.94 | 95    | 0.91
Naïve      | 84.17 | 82.54 | 85.96 | 0.85 | 96.67 | 0.83
However, SVM,RBF 86.67 87.93 85.48 0.89 85 0.87 alargepublicdatasetcanvalidatetherobustnessoftheproposed MLP,N=5 89.17 89.83 88.52 0.90 88.33 0.89 methodforEEGsignalsclassification.Inaddition,EEGrecorded Naïve 78.33 77.42 79.31 0.79 80 0.77 during a cognitive task is relatively easy to separate from a baseline EEG than EEG signals recorded in two different Relativewaveletenergyclassificationresultsforlevel5. cognitivetasks.Infuturestudy,wewillextendtheapplicationof theproposedmethodtoclinicaldatasets,suchas,classificationof TABLE7|Detailedcoefficients(3.90–7.81Hz)forcognitivetasks(mental ictalvs.inter-ictalornormalEEGpatterns. multiplicationvs.mentallettercomposing). Classifier Accuracy Sensitivity SpecificityAUC Precision Kappa CONCLUSION (%) (%) (%) (%) statistic Theauthorspresentedapatternrecognitionbasedapproachfor KNN,k 3 93.33 91.94 94.83 0.92 95 0.93 = classification of cognitive tasks using EEG signals. The DWT SVM,RBF 90 87.50 92.86 0.92 93.33 0.90 was applied to EEG signals for decomposition. Classification MLP,N 5 92.50 90.48 94.74 0.94 95 0.91 = results were superior to our previous study (Amin et al., Naïve 84.17 82.54 85.96 0.85 96.67 0.83 2015a) for dataset I as well as previous work done on dataset Relativewaveletenergyclassificationresultsforlevel5. II. The experimental results validated the proposed scheme. These outcomes suggest promising potential for the method’s application to clinical datasets as a beneficial adjunct for the use of the proposed feature selection approach minimized discriminationbetweennormalandabnormalEEGpatterns,as non-significantfeaturesfroma“features”setbeforefeedingitto it is able to cope with variations in non-stationary EEG signals classifiers,whichreducescomputationalcost. viathelocalizationcharacteristicoftheWT(Rossoetal.,2001). ResultsofDatasetIarenotexactlycomparablewithprevious Finally, in this study, the combination of DWT with FDA and work except the authors’ own work published in 2015 in PCAtechniquesprovidearobustfeatureextractionapproachfor which the same EEG data was used as described in dataset classificationofcognitivetasksusingEEGsignals. I. However, dataset II is a publicly available dataset, which is used to compare with previous work done on the same dataset.Thenotionreflectedbythestudiesisthediscriminatory ETHICS STATEMENT performance levels (accuracy) achieved by various quantitative analyticalapproachestoclassifytwodifferentcognitiveactivities ThisresearchstudywasapprovedbytheResearchCoordination using EEG signals, for example mental multiplication and Committee of Universiti Teknologi PETRONAS, Malaysia. All mental letter composing(Liang et al., 2006; Zhang et al., the participants had signed the informed consent form before 2010; Vidaurre et al., 2013; Dutta et al., 2018). These studies starting the experiment. The consent document had a brief employedtheEEGdataoriginallyreportedbyKeirnandAunon descriptionofthisresearchstudyconcerninghumans. FrontiersinComputationalNeuroscience|www.frontiersin.org 10 November2017|Volume11|Article103
