Geomech. Geophys. Geo-energ. Geo-resour. (2016) 2:97–109
DOI 10.1007/s40948-016-0025-3

ORIGINAL ARTICLE

Comparative analysis of intelligent models for prediction of Langmuir constants for CO2 adsorption of Gondwana coals in India

A. K. Verma · Abhinav Sirvaiya
Department of Mining Engineering, Indian School of Mines, Dhanbad 826004, Jharkhand, India
e-mail: [email protected]; [email protected]

Received: 20 October 2015 / Accepted: 22 March 2016 / Published online: 29 March 2016
© Springer International Publishing Switzerland 2016

Abstract  An artificial neural network (ANN) is an artificial intelligence technique whose performance can be improved by adapting to changes in the environment. The efficient manipulation of large amounts of data and the ability to generalize results are the main advantages of neural networks. Considering these advantages, the present paper performs a comparison between a linear method, multivariate regression analysis (MVRA), and different ANN techniques: back propagation with regression analysis (BPNN), layer recurrent neural network (LRNN), generalized regression neural network (GRNN) and radial basis neural network (RBNN). This comparison was performed to predict approximate values of the Langmuir volume constant (LVC) and Langmuir pressure constant (LPC) for CO2 adsorption in coal, using proximate and maceral properties of India's major coalfields as input parameters. The RMSE value for RBNN is found to be the lowest, followed by GRNN, LRNN, BPNN and MVRA, for both the LVC and LPC models. Based on the best network, the coal seams of the Narayankuri coal mine have the highest CO2 adsorbing capacity (0.0019791 mol/g) compared to the other coal seams of this study.

Keywords  ANN · Sequestration · CO2 · Gondwana coals · Langmuir volume constant · Langmuir pressure constant

1 Introduction

Coalbed methane (CBM) production combined with CO2 injection is presently a worldwide topic of research. The role of CO2 injection is not only to enhance CBM production (ECBM) but also to store huge amounts of CO2 in the subsurface, contributing to the reduction of the greenhouse gas percentage in the atmosphere. Storing CO2 in deep unminable coalbeds has become one of the promising technologies of geological sequestration (Busch and Gensterblum 2011).

The methane gas in a CBM reservoir is stored in adsorbed form in the micropores and cleats within the coal matrix. For enhanced recovery, the captured CO2 is injected into the subsurface coalbeds, where it adsorbs onto the coal pore surfaces. CO2 can move into the finest pores and adsorb onto the coal at a near-liquid density, firmly and with only a nominal chance of later release (Krooss et al. 2002). Coal has a higher affinity for CO2 gas; thus CO2 displaces adsorbed methane from the coal, accelerating methane recovery in the ECBM process. CO2 injection for ECBM is therefore a value-added choice. The relative sorption affinity of coal for methane and CO2 is the primary parameter that needs to be considered in the site suitability analysis of CO2-ECBM (Dutta et al. 2011). A good understanding of methane and CO2 sorption behavior on coal is an underlying requirement for the estimation of the methane recovery and CO2 storage capacity of a coal.

In the coal matrix, CO2 storage occurs mainly by physical sorption (e.g. Harpalani and Chen 1997). Many factors need to be considered to understand the complexity of the interaction of CO2 with coal in the cleat system and the coal matrix. Laboratory-determined properties such as moisture, volatile matter percentage, ash content, depth of the coal formation, liptinite, inertinite and mineral matter are the minimum requirement for dealing with CO2 adsorption selectivity.

During the CO2-ECBM process, the injected CO2 flows through the network of cleats by a complex sorption/diffusion mechanism (Mazzotti et al. 2009). It adsorbs onto the coal's inner surface by replacing CH4 from the pore walls. This replacement occurs either (1) by a reduction of the CH4 partial pressure or (2) by the higher selective sorption of CO2 over CH4. The concentration gradient between CH4 in the matrix and in the cleat system causes CH4 to diffuse from the coal matrix into the cleat system, where it is produced by pressure drawdown towards a production well.
Many methods have been put forward to model sorption isotherms on coal, and the Langmuir isotherm is among the simplest. It is used to quantify the amount of gas adsorbed on an adsorbent as a function of partial pressure or concentration at a given temperature. The Langmuir volume constant and the Langmuir pressure constant are the two constants that define the Langmuir isotherm model. It is a well-known approach in the ECBM industry and related reservoir simulations, and it provides a reasonable fit to most experimental data:

$$n_{ads} = \frac{V_L \, p}{P_L + p}$$

Here $V_L$ is the Langmuir volume, representing the amount of gas sorbed at infinite pressure, and $P_L$ is the Langmuir pressure (Fig. 1), equivalent to the pressure at which half of the Langmuir volume $V_L$ is reached (Crosdale et al. 1998a, b).

The adsorption of CO2 in coalbeds can be explained by the variation in the Langmuir volume and Langmuir pressure constants (for CO2). These two constants help determine the CO2 adsorption tendency of coals. Generally, a coal with a higher Langmuir pressure constant shows very high sorption capacity in the low pressure range. It was observed that if the adsorption isotherms of the coals increase steadily and remain undersaturated within the experimental pressure range, then these coals have a comparable or even higher Langmuir volume constant.

This paper applies linear and nonlinear mathematical models, namely multivariate regression analysis (MVRA), artificial neural network (ANN), generalized regression neural network (GRNN), radial basis neural network (RBNN) and the layer recurrent method, to calculate the Langmuir constants (for CO2) using coal proximate and maceral properties, namely moisture, volatile matter, ash content, fixed carbon composition, vitrinite, semi-vitrinite, liptinite, inertinite, mineral matter, mean Ro and depth (Dutta et al. 2011), as input parameters. The main objective of this study is to develop intelligent models to identify the coal with the best adsorption capacity for CO2 gas for the ECBM process.
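To make the role of the two constants concrete, the following minimal Python sketch evaluates the Langmuir isotherm; the $V_L$ and $P_L$ values are taken from the Bogra sample in Table 1 for illustration, and the pressure points are arbitrary.

```python
# Minimal sketch of the Langmuir isotherm n_ads = V_L * p / (P_L + p).

def langmuir_adsorption(p_mpa, v_l_ml_per_g, p_l_mpa):
    """Adsorbed gas volume (mL/g) at pressure p_mpa (MPa)."""
    return v_l_ml_per_g * p_mpa / (p_l_mpa + p_mpa)

v_l, p_l = 74.0, 2.732            # Bogra sample constants from Table 1
for p in (0.5, p_l, 5.0, 50.0):   # arbitrary illustrative pressures
    n = langmuir_adsorption(p, v_l, p_l)
    print(f"p = {p:6.3f} MPa -> n_ads = {n:6.2f} mL/g")

# At p = P_L the output is exactly V_L / 2, and n_ads -> V_L as p grows.
```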
2 Study area and data set

A previous study is the source of this study's data set (Dutta et al. 2011). Fourteen coal samples were taken from India's major coal mining and CBM exploration regions. Eight samples are from the Raniganj coalfield, four are from the Jharia coalfield, and the remaining two are from the South Karanpura coalfield (Fig. 2). Dried samples were used for the analysis, so the moisture content of a coal sample represents its residual moisture. The ash content is high in the samples, which strongly reduces the adsorption of CO2 gas by a coal. The Raniganj formation coals were found to have higher moisture content and lower ash content than the other samples. The volatile matter content of the coal samples is 32–45 %. Other parameters such as carbon percentage, liptinite and inertinite, which help establish the organic percentage in coal, were also obtained from experiments to understand the sorption capacity of coal for CO2 gas (Verma and Sirvaiya 2015) (Table 1).

Fig. 1 Typical Langmuir volume and pressure curve

Fig. 2 Location of coal samples from coalfields in India (Dutta et al. 2011)

Table 1 Petrophysical properties and Langmuir constants of coal samples for the study

| Location | Coal mine | Moisture (%) (m) | Volatile matter (%) (v) | Ash (%) (a) | Fixed carbon (%) (f) | Vitrinite (%) (k) | Semi-vitrinite (%) (s) | Liptinite (%) (l) | Inertinite (%) (i) | Mineral matter (%) (z) | Mean Ro (%) (w) | Depth (m) (d) | Langmuir volume constant, VL (mL/g) | Langmuir pressure constant, PL (MPa) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Raniganj formation of Raniganj coalfield | Bogra | 7 | 38 | 10 | 45 | 73.9 | 0.4 | 4.2 | 15.8 | 5.7 | 0.64 | 160 | 74 | 2.732 |
| Raniganj formation of Raniganj coalfield | Kenda | 4 | 40 | 12 | 44 | 65.3 | 1 | 9.7 | 16.6 | 7.4 | 0.62 | 107 | 68 | 2.406 |
| Raniganj formation of Raniganj coalfield | Narayankuri | 7 | 41 | 15 | 37 | 73.8 | 0.8 | 6 | 11.4 | 7.9 | 0.64 | 46 | 89 | 2.586 |
| Raniganj formation of Raniganj coalfield | Satgram | 4 | 40 | 17 | 39 | 49.7 | 0.8 | 11.3 | 27.2 | 11 | 0.61 | 190 | 58 | 1.976 |
| Barakar formation of Raniganj coalfield | Kalimati | 1.2 | 36 | 28 | 34.8 | 12 | 1.3 | 7.3 | 61.3 | 18.1 | 0.96 | 172 | 25 | 1.074 |
| Barakar formation of Raniganj coalfield | Local II | 2.3 | 41.5 | 22 | 34.2 | 2.9 | 1.8 | 5.8 | 62.1 | 17.4 | 1.01 | 118 | 18 | 0.8 |
| Barakar formation of Raniganj coalfield | Mehaladih | 1.1 | 35 | 30 | 33.9 | 16.7 | 2.3 | 8.3 | 64.6 | 8.1 | 0.96 | 201 | 31 | 1.271 |
| Barakar formation of Raniganj coalfield | Mugma special | 2.2 | 38 | 27 | 32.8 | 7.7 | 0.7 | 12.8 | 61.9 | 16.9 | 1.06 | 109 | 26 | 1.167 |
| South Karanpura coalfield | SKAC1 | 1.4 | 34 | 31 | 33.6 | 9.3 | 0.5 | 0.3 | 53.4 | 36.5 | 1.94 | 914 | 33 | 0.841 |
| South Karanpura coalfield | SKAC3 | 1.8 | 32 | 48 | 18.2 | 0.2 | 0 | 0 | 18.6 | 81.2 | 1.39 | 996 | 41 | 0.984 |
| Jharia coalfield | 15th Seam | 1.2 | 45 | 20 | 33.8 | 86.1 | 0.2 | 0.2 | 5.5 | 8 | 1.22 | 520 | 35 | 1.082 |
| Jharia coalfield | 16th Top seam | 1.6 | 36.5 | 24 | 37.9 | 62 | 1.4 | 0.3 | 29.5 | 6.8 | 1.21 | 650 | 42 | 1.418 |
| Jharia coalfield | 16th Bottom seam | 6 | 35 | 34 | 25 | 58.4 | 2.3 | 0.3 | 33.7 | 5.3 | 1.17 | 450 | 42 | 1.231 |
| Jharia coalfield | 18th Seam | 0.5 | 38 | 22 | 39.5 | 84.2 | 0.5 | 1 | 9.8 | 4.5 | 0.9 | 380 | 39 | 1.161 |

3 Multivariate regression analysis

Multivariate regression analysis is a statistical method used to estimate the relationship between a dependent variable and one or more independent variables (Alexopoulous 2010). It comprises many techniques for analysing and modelling the dependent and independent variables. In other words, the purpose of the regression analysis is to calculate Y on the basis of X, i.e. to define the dependency of Y on X:

$$X_1, X_2, \ldots, X_k \Rightarrow Y$$

The $X_i$ ($X_1, X_2, \ldots, X_k$) are "independent" variables, while $Y$ is a "dependent" variable.

A linear regression estimates the coefficients of a linear equation involving one or more independent variables, so that the best quantitative prediction of the value of the dependent variable can be made. In the multivariate linear regression model, Y has a normal distribution with mean

$$Y = b_0 + b_1 X_1 + \cdots + b_q X_q + \sigma(\varepsilon)$$

The parameters $b_0, b_1, \ldots, b_q$ and $\sigma$ are estimated from data:

$b_0$ = intercept
$b_1, \ldots, b_q$ = regression coefficients
$\sigma = \sigma_{res}$ = residual standard deviation

In this equation, $b_1$ is the mean increase in Y per unit increase in $X_1$ when the other X's are kept fixed. In general, $b_i$ is the influence of $X_i$ corrected (adjusted) for the other X's. The estimation method follows the least squares criterion. If $b_0, b_1, \ldots, b_q$ are the estimates of $\beta_0, \beta_1, \ldots, \beta_q$, then the "fitted" value of Y is

$$Y_{fit} = b_0 + b_1 X_1 + \cdots + b_q X_q$$

The $b_0, b_1, \ldots, b_q$ are computed such that $\sum (Y - Y_{fit})^2$ is minimal. Since $(Y - Y_{fit})$ is called the residual, one can also say that the sum of squared residuals is minimized.
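A minimal Python sketch of this least-squares criterion follows; random placeholder data stand in for the Table 1 samples, and the column-of-ones trick that estimates the intercept $b_0$ is an implementation detail, not part of the paper's description.

```python
import numpy as np

# X is an (n_samples x 11) matrix of the predictors of Table 1
# (m, v, a, f, k, s, l, i, z, w, d); y is the measured LVC (or LPC).
# Illustrative random data stand in for the real samples here.
rng = np.random.default_rng(0)
X = rng.random((14, 11))
y = rng.random(14)

# Append a column of ones so the intercept b0 is estimated as well.
X1 = np.column_stack([X, np.ones(len(X))])

# lstsq minimizes sum((y - X1 @ b)**2), the criterion stated in the text.
b, residuals, rank, _ = np.linalg.lstsq(X1, y, rcond=None)
y_fit = X1 @ b
print("coefficients b1..bq and intercept b0:", b)
print("sum of squared residuals:", np.sum((y - y_fit) ** 2))
```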
4 Back propagation neural network

The desire to understand the functioning of the human brain, and to build a machine capable of solving complex problems, resulted in the artificial neural network. It functions similarly to the way a human brain works, with an architecture as complex as any biological neural network, and neural networks are used to learn from large data sets.

An ANN consists of an input layer, an output (target) layer and one or more hidden layers (Cilimkovic 2015; Santos et al. 2013). The input layer, hidden layers and output (target) layer are connected to each other by nodes. The number of hidden layers and nodes varies with the required training and the desired result. Every connection has a weight associated with it. The input parameters, whose values remain the same throughout the network, are assigned to the input layer. The hidden layer accepts data from the input layer, modifies the values using the connection weights, and transfers the new values to the output layer, where they are again modified by the weights of the connections between the hidden and target layers. The target layer processes the values from the hidden layer and produces new values, called the output. These outputs are then processed by an activation function to give the results. The nonlinearity is introduced by the activation function. There are three types of activation function: (1) linear, (2) threshold and (3) sigmoid. In this paper we use the sigmoid activation function. The architecture is shown in Fig. 3.

If each neuron has $n$ inputs $x_i$, its output $y$ can be calculated by the equation

$$y = f\left(\sum_{i=1}^{n} w_i x_i + b\right)$$

where $x_i$ is the ith input, $w_i$ is the ith weight, $b$ is the bias and $f$ is the activation function of the neuron.

New information is obtained only when the network is trained. In this paper we use the back propagation algorithm, the most robust and versatile of these techniques; it results in an efficient learning method for multilayer perceptron (MLP) neural networks. When the received inputs are forwarded through all the succeeding layers to obtain the output, the network is known as a feed forward back-propagation neural network (BPNN). The learning capability of back propagation networks depends on the internal mapping of the characteristic signal features onto the hidden layer in the process of network training. The mappings stored in this layer during the training phase are automatically retrieved during the application phase.

When the network is trained, data are processed from the input to the hidden layer and from the hidden to the output layer. At the output layer, the output is compared to the measured values (target values) and the error is propagated back through the network, updating the weights and biases of the individual neurons. The network is trained until the minimum network error is reached (Verma and Sirvaiya 2015). The error is calculated from the root mean square error (RMSE).

In this paper, the transfer functions are the nonlinear sigmoid functions (LOGSIG, TANSIG) and the linear functions (POSLIN, PURELIN). The logarithmic sigmoid function (LOGSIG) is defined as

$$f = \frac{1}{1 + e^{-e_x}}$$

whereas the tangent sigmoid function (TANSIG) is defined as

$$f = \frac{e^{e_x} - e^{-e_x}}{e^{e_x} + e^{-e_x}}$$

where $e_x$ is the weighted sum of the inputs of a processing unit.

The sample values of the proximate and maceral parameters are taken as input data, and the sample values of the Langmuir volume constant and Langmuir pressure constant for CO2 are taken as target data (Table 2; Fig. 3).

Fig. 3 A schematic diagram of a neural network

Table 2 Ranges of the input and output parameters used to develop the intelligent models

| Parameters | Unit | Range | SD |
|---|---|---|---|
| Input | | | |
| Moisture | % | 0.5–7.0 | 2.26 |
| Volatile matter | % | 32–45 | 3.44 |
| Ash | % | 10–48 | 9.92 |
| Fixed carbon | % | 18.2–45 | 6.92 |
| Vitrinite | % | 0.2–86.1 | 32.90 |
| Semi-vitrinite | % | 0–2.30 | 0.73 |
| Liptinite | % | 0–12.8 | 4.56 |
| Inertinite | % | 5.5–64.6 | 22.31 |
| Mineral matter | % | 4.5–81.2 | 20.37 |
| Mean | Ro % | 0.61–1.94 | 0.35 |
| Depth | m | 46–996 | 309.25 |
| Target | | | |
| Langmuir volume constant | mL/g | 18–89 | 20.5 |
| Langmuir pressure constant | MPa | 0.8–2.732 | 0.66 |
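A minimal Python sketch of a single neuron's output under the two sigmoid transfer functions defined above; the inputs, weights and bias are illustrative placeholders, not trained values from this study.

```python
import numpy as np

# Sketch of one neuron: y = f(sum_i w_i * x_i + b), with the LOGSIG and
# TANSIG transfer functions as defined in the text.

def logsig(e_x):
    return 1.0 / (1.0 + np.exp(-e_x))

def tansig(e_x):
    return (np.exp(e_x) - np.exp(-e_x)) / (np.exp(e_x) + np.exp(-e_x))

x = np.array([0.7, 4.0, 38.0])   # e.g. three (scaled) input parameters
w = np.array([0.2, -0.1, 0.05])  # illustrative weights
b = 0.1                          # illustrative bias

e_x = np.dot(w, x) + b           # weighted sum of the inputs
print("logsig output:", logsig(e_x))
print("tansig output:", tansig(e_x))  # numerically equal to np.tanh(e_x)
```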
5 Generalized regression neural network

A generalized regression neural network (GRNN) is a variation of the radial basis neural network that does not require an iterative training procedure, as back propagation networks do (Hannan et al. 2010). GRNN is based on kernel regression networks. Any arbitrary function between the input and output vectors can be approximated by a GRNN; as the training set size grows, the function estimation error approaches zero, with only minor constraints on the function.

A GRNN has an input layer, a pattern layer, a summation layer and an output layer, as shown in Fig. 4. The number of observation parameters defines the number of input units. In the pattern layer, each neuron presents a training pattern and its output. The summation layer follows the pattern layer and, together with the output layer, performs a normalization of the output set. Radial basis and linear activation functions are used in the hidden and output layers while training the network. The structure of the GRNN is shown in Fig. 4.

The predicted value $Y'_i$ generated from an unknown input vector $x$ is given by

$$Y'_i = \frac{\sum_{i=1}^{n} y_i \exp\left(-D(x, x_i)\right)}{\sum_{i=1}^{n} \exp\left(-D(x, x_i)\right)}, \qquad D(x, x_i) = \sum_{k=1}^{m} \left( \frac{x_k - x_{ik}}{\sigma} \right)^2$$

where $y_i$ is the weight of the connection to the ith neuron in the pattern layer, $n$ is the number of training patterns, $D$ is the Gaussian function, $m$ is the number of elements of an input vector, and $x_k$ and $x_{ik}$ are the kth elements of $x$ and $x_i$, respectively.

Fig. 4 A schematic diagram of a generalized regression neural network
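A minimal Python sketch of this prediction rule follows: a kernel-weighted average of the training targets, with the spread $\sigma$ playing the role of the spread constant varied later in Sect. 8.2.2. The training data here are illustrative placeholders.

```python
import numpy as np

# GRNN prediction: weighted average of training targets y_i, with
# D(x, x_i) = sum_k ((x_k - x_ik) / sigma)**2 as defined in the text.

def grnn_predict(x, X_train, y_train, sigma=1.0):
    d = np.sum(((x - X_train) / sigma) ** 2, axis=1)  # D(x, x_i), one per pattern
    w = np.exp(-d)                                    # pattern-layer outputs
    return np.sum(w * y_train) / np.sum(w)            # summation/output layers

rng = np.random.default_rng(1)
X_train = rng.random((14, 11))   # 14 samples, 11 input parameters (placeholder)
y_train = rng.random(14)         # measured LVC (or LPC) values (placeholder)
x_new = rng.random(11)
print("predicted value:", grnn_predict(x_new, X_train, y_train, sigma=5.0))
```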
6 Radial basis neural network

RBF networks are similar to generalized regression neural networks. A generalized regression neural network has one neuron for each point in the training file, whereas an RBF network has a variable number of neurons, usually much less than the number of training points (Hannan et al. 2010).

An RBFN comprises three layers: (1) an input layer, (2) a hidden layer and (3) an output layer, all fully connected by nodes. The input layer holds the assigned input parameters, and the hidden layer neurons use Gaussian transfer functions as the radial basis function. The outputs of the hidden layer are inversely proportional to the distance from the center of the neuron. The input layer feeds the hidden layer directly, without any weights. The transfer function used is the RBF, which is symmetrical about a given mean or center point in a multidimensional space. The hidden nodes with RBF activation functions are connected in a feed forward parallel architecture.

The optimization of the parameters associated with the RBFs occurs in network training. When the training parameters are accurately determined, a linear combination of RBFs can reproduce the training vectors with no error. This fitting of RBFs to data for function approximation is closely related to distance-weighted regression.

Moody and Darken (1989) gave a radial basis neural network in which they selected an exponential activation function for their radial basis function networks. The exponential activation function is similar to the Gaussian density function centered at $c_i$:

$$F_i = \exp\left(-\frac{\|x - c_i\|^2}{\sigma_i^2}\right)$$

where the function spread $\sigma_i$ around the centre determines the rate at which the function decays with distance from the centre. The spread constant value should be selected small enough to restrict the spreading of the basis function (Palit and Popovic 2005).

7 Layer recurrent neural network

A layer recurrent neural network (LRNN) is a type of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network which permits it to exhibit dynamic temporal behavior. In this network, bi-directional data flow occurs. Unlike feedforward neural networks, RNNs use their internal memory to process arbitrary sequences of inputs. There are feedback connections between units of different layers in an RNN. Consequently, the output of the network depends both on the external inputs and on the state of the network in the previous time step, as shown in Fig. 5. The model employs full feedback and interconnections between all nodes. Advantages such as the capability to retain values from previous cycles of processing, which can be used in current computations, allow RNNs to produce complex, time-varying outputs in response to simple static inputs (Nkoana 2011).

Fig. 5 A simple recurrent network
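A minimal Python sketch of the feedback described above, using a single Elman-style recurrent layer as a stand-in for the LRNN; the weights are illustrative random values, not the trained network of this study.

```python
import numpy as np

# One recurrent layer: the state h_t feeds back into the next step, so
# the output depends on the previous network state as well as the input.

rng = np.random.default_rng(2)
n_in, n_hidden = 11, 15
W_x = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden feedback
b = np.zeros(n_hidden)

def step(x_t, h_prev):
    # logsig activation over current input plus the fed-back previous state
    return 1.0 / (1.0 + np.exp(-(W_x @ x_t + W_h @ h_prev + b)))

h = np.zeros(n_hidden)                           # initial internal state
for t, x_t in enumerate(rng.random((3, n_in))):  # a short input sequence
    h = step(x_t, h)
    print(f"step {t}: first hidden activations {h[:3]}")
```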
8 Result and discussion

8.1 Multivariate regression analysis (MVRA) to predict LVC and LPC

The maceral parameters are taken as the independent ('x') variables and LVC/LPC as the dependent ('y') variables to develop the MVRA model. A correlation was generated for each Langmuir constant with the given maceral parameters (moisture, volatile matter, ash, fixed carbon, vitrinite, semi-vitrinite, liptinite, inertinite, mineral matter, mean Ro % and depth).

Multivariate linear equations for LVC and LPC were obtained:

$$\begin{aligned} LVC = {} & 4.553826842\,m + 0\,v - 0.627388513\,a + 1.345670812\,f \\ & + 2.603845223\,k + 17.11085324\,s + 2.997470493\,l + 1.931219754\,i \\ & + 3.490123182\,z + 22.0153635\,w - 0.030223317\,d \end{aligned}$$

and

$$\begin{aligned} LPC = {} & 0.119074233\,m + 0\,v - 0.039505654\,a + 0.030893488\,f \\ & + 0.090233506\,k + 0.404806961\,s + 0.09012357\,l + 0.084394968\,i \\ & + 0.112784163\,z - 0.213303844\,w - 0.000263957\,d \end{aligned}$$

8.2 Artificial neural network (ANN)

Several researchers have used ANNs to predict rock parameters such as reservoir oil flow rate, permeability, dynamic elastic constants, compressive strength, creep parameters and ground vibration (Tahmasebi and Hezarkhani 2012; Ahmadia et al. 2013; Verma and Singh 2013; Singh and Verma 2010; Verma and Singh 2009; Singh and Verma 2011).

In this paper, different types of neural network are considered to predict the LVC and LPC governing the nature of the Langmuir isotherms, which relate the adsorption of carbon dioxide gas molecules on a coal surface to the gas pressure or concentration at a fixed temperature. The architecture of each of the different neural networks is optimized.

ANN applications also have some issues, the most common being the choice of the optimum number of hidden layers, the number of neurons in these hidden layers, the functional relations between input and output parameters, the learning algorithm, and the avoidance of overfitting. Therefore, different neural networks were considered in this study and a simulation was performed on the same problem.

The RMSE method was used to determine the optimal model parameters. The RMSE is more sensitive to the larger relative errors that result from low values, so it offers a balanced evaluation of the goodness of fit of the model. A perfect model has an RMSE value approaching zero. The formula for determining the RMSE is:

$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left(O_i - T_i\right)^2}$$

where $T_i$ is the measured output (target), $O_i$ is the predicted output and $N$ is the number of input-output data pairs.

8.2.1 Back propagation feed forward neural network (BPNN)

To determine the optimal architecture, networks with two and three hidden layers were considered for the parametric simulation. In each hidden layer, the number of neurons was varied to determine the optimum model based on the minimum RMSE. 80 % of the data set is used for training, 10 % for validation and 10 % for testing. The network with architecture 11-15-10-2 and transfer functions 'logsig-logsig-purelin' was found to have the minimum RMSE, with a value of 0.0022 (Fig. 6; Table 3).

Fig. 6 Optimum ANN network with typical feed-forward back propagation

Table 3 Comparison of the different ANN architectures based on RMSE values

| S. no. | Transfer function | Model | RMSE |
|---|---|---|---|
| 1 | Tansig-purelin | 11-7-2 | 3.4058 |
| 2 | Tansig-purelin | 11-20-2 | 3.0854 |
| 3 | Logsig-purelin | 11-20-2 | 2.7874 |
| 4 | Tansig-tansig-purelin | 11-15-5-2 | 0.3633 |
| 5 | Logsig-logsig-purelin | 11-15-10-2 | 0.0022 |
| 6 | Logsig-poslin | 11-15-2 | 6.2289 |
| 7 | Tansig-tansig-purelin | 11-7-15-2 | 1.5198 |
| 8 | Tansig-tansig-poslin | 11-15-20-2 | 4.1952 |
| 9 | Logsig-logsig-purelin | 11-10-5-2 | 0.7368 |
| 10 | Tansig-logsig-purelin | 11-10-20-2 | 4.8785 |

8.2.2 Generalized regression neural network (GRNN)

For the generalized regression neural network, the input parameters are assigned to the input data and the measured LVC and LPC values to the target data. The input was simulated with spread constants of 1, 5 and 10. The outputs were obtained and the RMSE value was calculated from them to find the optimum network. The network view for the generalized regression neural network is shown in Fig. 7, and the results in Table 4.

Fig. 7 Optimum GRNN network

Table 4 Comparison of the different GRNN architectures based on RMSE values

| Spread constant | RMSE |
|---|---|
| 5 | 0.0099 |
| 10 | 0.4431 |

8.2.3 Radial basis neural network (RBNN)

For the radial basis network, the input parameters are assigned to the input data and the measured LVC and LPC values to the target data. The input was simulated with spread constants of 1, 10 and 100, and the output for both Langmuir isotherm constants was obtained. There is no change in the output with the change in the value of the spread constant. The network view for the radial basis neural network is shown in Fig. 8.

Fig. 8 Optimum RB network
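Before the remaining network comparisons, a minimal Python sketch of the RMSE criterion used to rank the models in Tables 3–7; the target and predicted values below are hypothetical, not outputs of the trained networks.

```python
import numpy as np

# RMSE = sqrt(mean((O_i - T_i)**2)) over the N input-output pairs.

def rmse(predicted, target):
    predicted, target = np.asarray(predicted), np.asarray(target)
    return np.sqrt(np.mean((predicted - target) ** 2))

target = np.array([74.0, 68.0, 89.0, 58.0])     # measured LVC values (mL/g)
predicted = np.array([73.2, 68.9, 88.1, 59.4])  # hypothetical predictions
print("RMSE:", rmse(predicted, target))
```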
8.2.4 Layer recurrent neural network (LRNN)

The input data were simulated with different training functions and adaption learning functions in the layer recurrent neural network. The training functions considered are TRAINLM and LEARNGDM, whereas the adaption learning functions considered are LEARNGDM and LEARNGD. The performance function is MSE. The number of layers is 2, with 15 neurons in the first layer and logsig-purelin transfer functions. The network view for the recurrent neural network is shown in Fig. 9, and the results in Table 5. The optimized model is the network with training function TRAINLM and adaption learning function LEARNGDM (Table 5).

Fig. 9 Optimum recurrent network

Table 5 RMSE values for different networks

| S. no. | Training function | Adaption learning function | Performance function | RMSE |
|---|---|---|---|---|
| 1 | TRAINLM | LEARNGDM | MSE | 3.346640106 |
| 2 | LEARNGDM | LEARNGDM | MSE | 16.73320053 |
| 3 | TRAINLM | LEARNGD | MSE | 4.949747468 |

Table 6 RMSE of different prediction models for LVC

| Model | Root mean square error (RMSE) |
|---|---|
| MVRA | 279.6838 |
| BPNN | 1.2570 |
| GRNN | 0.0140 |
| RBNN | 1.4e-20 |
| LRNN | 1.23 |

Table 7 RMSE of different prediction models for LPC

| Model | Root mean square error (RMSE) |
|---|---|
| MVRA | 8.1305 |
| BPNN | 0.0970 |
| GRNN | 0.00064 |
| RBNN | 2.3e-23 |
| LRNN | 0.00443 |
