Appl Nanosci (2017) 7:933–945
https://doi.org/10.1007/s13204-017-0633-4

ORIGINAL ARTICLE

Modeling electrical properties for various geometries of antidots on a superconducting film

Sajjad Ali Haider · Syed Rameez Naqvi · Tallha Akram · Muhammad Kamran · Nadia Nawaz Qadri
Department of Electrical Engineering, COMSATS Institute of Information Technology, G.T. Road, Wah Cantt, Pakistan
Correspondence: Syed Rameez Naqvi, [email protected]

Received: 28 September 2017 / Accepted: 9 November 2017 / Published online: 17 November 2017
© The Author(s) 2017. This article is an open access publication.

Abstract  Electrical properties of a superconducting film, specifically its critical current density, carry substantial importance in superconductivity. In this work, we measure and study the current–voltage curves of a superconducting Nb film with various geometries of antidots used to tune the critical current. We carry out the measurements on a commercially available physical property measurement system to obtain these so-called transport measurements. We show that each of the geometries exhibits a vastly different critical current, due to which repeatedly performing the measurements independently for each geometry becomes indispensable. To circumvent this monotonous measurement procedure, we also propose a framework based on artificial neural networks that predicts the curves for different geometries from a small subset of measurements and facilitates extrapolation of these curves over a wide range of parameters, including temperature and magnetic field. The predicted curves are then cross-checked against the physical measurements; our results suggest a negligible mean-squared error, on the order of 10⁻⁹.

Keywords  Critical fields · Transport properties · Critical currents · Vortex pinning · Artificial neural networks

Introduction

Various geometries of artificial pinning centers with vortex lattices have been investigated to observe the behavior of vortex motion and pinning interaction (Baert 1995; Cuppens et al. 2011; He et al. 2012; Jaccard et al. 1998; Kamran et al. 2015; de Lara et al. 2010; Latimer et al. 2012; Martin et al. 1999, 1997). All these studies have contributed to improving the current-carrying properties of superconducting materials for use in applications. The critical current density is one of the most important properties of superconductors. A superconducting film patterned with an array of artificial pinning centers by nanolithography techniques can exhibit an enhanced current density. If each defect accommodates one flux quantum, the current density increases and the resistance decreases; but as the vortex lattice becomes disordered, the current density decreases and the resistance increases. In this way, the current–voltage (IV) characteristics, or curves, of the pinning centers on a superconducting film show abrupt changes. Such abrupt changes give the IV curve measurement, also called the transport measurement, substantial prominence in the field of superconductivity.

During our experimental work, it has been recognized that transport measurements are notoriously cumbersome to obtain, especially when they are repeatedly needed (Kamran et al. 2016). We believe that an approximation model, such as the one based on artificial neural networks (ANN) proposed in this work, can help avoid the repeated measurements by extrapolating the curves from a smaller subset of them to unforeseen parameters. The proposed methodology not only relieves researchers of this tedious procedure, saving time, cost, and energy, but is also applicable to various geometries of antidots, which gives this study its significance.
ANN are said to be mathematical equivalents of a human brain, in that they tend to learn in one situation and repeat in another (Guojin et al. 2007). They are supposed to recognize patterns and establish relationships between independent variables of a smaller subset of values. These relationships are then used in solving problems that need approximation and prediction over a larger sample space containing unforeseen values, where ANN are typically found extremely useful (Cybenko 1989). An ANN may simply be described as a network of interconnected nodes, each having some weight and bias, together called the network coefficients. The purpose of such a network is to provide a mapping between inputs and outputs, where the latter are obtained after a successful learning phase. The process of learning comprises training and validation. In the training phase, several pairs of inputs and outputs are iteratively provided to the network to enable the ANN to establish a mathematical relationship. The coefficients are updated in each iteration until the network converges to an optimal solution. The manner in which these coefficients are updated distinguishes one training algorithm from the rest (Hornik 1991, 1993; Hornik et al. 1989).

While ANN have been widely ratified in engineering applications (Elminir et al. 2007; Ghanbari et al. 2009; Reyen et al. 2008), their use in the field of materials science, especially for the purpose of modeling electrical properties, is inordinately constrained (Haider et al. 2017; Kamran et al. 2016; Quan et al. 2016; Zhao et al. 2016). In this work, we explore the prediction of IV curves of a superconducting film with various geometries of antidots using three commonly used ANN architectures, trained by various learning algorithms. The predicted results are then compared with the actual measurements; this step is termed validation. Although slightly different from each other, our approximated results from each training algorithm achieve high accuracy, that is, a small mean-squared error (MSE). Besides describing the approximation methodology, this work is intended to present a comparison between three ANN architectures and three training algorithms in terms of prediction accuracy (given as MSE), training time, and number of iterations taken to converge to an optimal solution.

The rest of the article is organized as follows: Section "Physical measurement system and readings" presents details of the experimental setup used to obtain the transport measurements, and a brief commentary on their characteristics. In Section "Artificial neural network model", we present the approximation methodology based on ANN, followed by the results and discussions in Section "Research methodology and simulation results". We conclude the paper in Section "Conclusion".

Physical measurement system and readings

Our experimental setup for transport measurements primarily comprises a physical properties measurement system (PPMS) from Quantum Design. An Nb film of 16 nm thickness is deposited on an SiO2 substrate for obtaining the desired arrays of antidots. The micro bridges and the nanostructured array of antidots are, respectively, fabricated by photo- and e-beam lithography techniques on a resistive layer of polymethyl methacrylate (Mostako and Alika 2012; Shahid et al. 2016). Fabrication of the microbridges and array is followed by scanning of the samples for the desired antidots using scanning electron microscopy (SEM). We subsequently mount our sample in the PPMS for transport measurements, which are carried out by the four-probe method with temperature fluctuation within ±3 mK and the external magnetic field applied perpendicular to the plane of the film. We have also swept the current from -8 to 8 mA at constant temperature. During the entire measurement process, which is always carried out in high vacuum, a small amount of liquid helium is placed inside the chamber to prevent overheating.

Figure 1 presents SEM images of the four geometries that we investigate in this work, and Fig. 2 corresponds to their respective IV curves measured at different values of temperature. The top left, top right, bottom left, and bottom right panels in both figures, respectively, correspond to the rectangular, square, honeycomb, and kagome arrays of antidots.

[Fig. 1 SEM of rectangular (top left), square (top right), honeycomb (bottom left), and kagome (bottom right) arrays]

[Fig. 2 Transport measurements at various temperatures and zero magnetic flux]

These curves may be divided into three regions according to their slopes: in the first region, the voltage is almost zero for gradually increasing current, followed by a sudden jump in the voltage in the second region; finally, in the third region, there exists a linear relationship between the two variables.
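This three-region division can be automated when post-processing the measured curves. The following sketch (Python/NumPy, illustrative only and not the authors' code; the synthetic curve and the slope thresholds are assumptions for demonstration) estimates the critical current and splits a curve by its local slope dV/dI:

```python
import numpy as np

def segment_iv_curve(current, voltage, flat_tol=1e-6):
    """Split an IV curve into the three regions described above:
    (1) V ~ 0 below the critical current, (2) the sudden jump,
    (3) the linear (ohmic) branch. Thresholds are illustrative."""
    slope = np.gradient(voltage, current)            # local dV/dI
    flat = np.abs(voltage) < flat_tol                # region 1 mask
    jump_start = int(np.argmax(~flat))               # first point leaving region 1
    ohmic_slope = np.median(slope[-10:])             # slope of the final branch
    settled = np.abs(slope - ohmic_slope) < 0.1 * abs(ohmic_slope)
    jump_end = jump_start + int(np.argmax(settled[jump_start:]))
    i_c = current[jump_start]                        # critical current estimate
    return i_c, (slice(0, jump_start),
                 slice(jump_start, jump_end),
                 slice(jump_end, len(current)))

# Synthetic curve: flat up to I_c = 4 mA, then an ohmic branch.
I = np.linspace(0.0, 8e-3, 400)
V = np.where(I < 4e-3, 0.0, 2.0 * (I - 4e-3))
i_c, (r1, r2, r3) = segment_iv_curve(I, V)
print(f"estimated I_c = {i_c * 1e3:.2f} mA")
```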
Two important observations that may be conveniently made from these figures are:

1. The IV curves show a sudden jump at the critical current (I_c) in the second region, which resembles Shapiro steps. These steps usually appear when the interstitial vortex lattice is formed; due to high vortex velocities, an instability may occur, as a result of which the system shows a step.

2. The sharpness of the curves varies significantly for each geometry: those having a larger interstitial area may accommodate a larger number of vortices, leading to increased energy conservation in those geometries. Therefore, the honeycomb and kagome arrays exhibit flatter or smoother curves, in comparison with the sharp steps for the rectangular and square arrays of antidots.

After successively performing the transport measurements, we had obtained a three-dimensional (H, T, I) data set comprising [4 × 4 × 1600] values for each film having a different geometry. Note that we have taken out one curve from this data set for each geometry and kept it isolated from the entire ANN modeling process. These four curves would later be used for cross-checking our approach on, so to speak, unforeseen data values, once the system had been completely designed using MATLAB's ANN toolbox on the modified data set (the one excluding the curves extracted for cross-checking). The toolbox, by default, divides the provided data set into three subsets: the first subset is used for the training purpose, 50% of the remaining values is used for validation, while the rest is kept strictly isolated and then used for testing purposes. The modified data set values were still copious enough to give us confidence in allocating a large data set exclusively for the training purpose. However, while performing the simulations, we realized that increasing the size of the training set beyond 70% would not give us a considerable advantage in terms of prediction accuracy. MATLAB's ANN toolbox also uses seventy percent of the data values for the training purpose by default, which further justifies our selection of training and testing data sets.
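The same 70/15/15 division is easy to reproduce outside the toolbox. Below is a minimal sketch (Python/NumPy rather than the authors' MATLAB; the seed and function name are our own) of the random division described above:

```python
import numpy as np

def split_indices(n_samples, train=0.70, val=0.15, seed=0):
    """Mimic the toolbox's default random division: 70% training,
    half of the remainder (15%) validation, the rest (15%) testing."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(train * n_samples)
    n_val = int(val * n_samples)
    return (idx[:n_train],                    # training subset
            idx[n_train:n_train + n_val],     # validation subset
            idx[n_train + n_val:])            # testing subset (kept isolated)

# 4 temperatures x 4 fields x 1600 points per geometry, minus the one
# curve reserved for cross-checking (numbers as reported in the text).
train_idx, val_idx, test_idx = split_indices(4 * 4 * 1600 - 1600)
print(len(train_idx), len(val_idx), len(test_idx))   # 16800 3600 3600
```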
In the next section, we elaborate on the ANN's operation principle, training algorithms, and architectures used in this work.

Artificial neural network model

The structure and operation of a neuron, the basic building block of ANN, have been described on several occasions (Guclu et al. 2015). Briefly, an ANN is a cascaded design, from an input layer to an output layer, where functional blocks are sandwiched either between the input and hidden, or hidden and output layers, and each layer may comprise a specific number of neurons. It is believed that the number of hidden layers in a network is directly proportional to the prediction accuracy, i.e., the greater the number of hidden layers, the more accurate the results will be, at the expense of complexity. However, problems similar to the one addressed in this work merely require up to two hidden layers for an acceptable accuracy level in most cases (Setti et al. 2014). The mapping between the input and output layers, achieved through a few hidden layers, may follow one of the three most widely adopted architectures: feed-forward, cascaded, and layer-recurrent neural nets. While the differences between the three architectures will be highlighted later, in what follows we make use of the simplest one, feed-forward, to describe the operation of an ANN. Figure 3 depicts a fully connected feed-forward neural net with a single hidden layer (δ_H). R inputs are connected to the input layer (δ_I) with S neurons, whereas the outputs generated by the input layer act as a source for the hidden layer, having T neurons. Here, δ is termed the activation or threshold function, which in a way quantizes the output of the network. The most commonly used threshold functions are step, linear, and tan-sigmoid (hyperbolic tangent sigmoid):

Y_k = \delta_k \left( \sum_{j=0}^{M} w_{kl}^{y} \, \delta_H \left( \sum_{j=0}^{M} w_{ij}^{H} P_R \right) \right)    (1)

[Fig. 3 Structure of a fully connected feed-forward neural network with a single hidden layer]

Selection of the number of hidden layers and the number of neurons per layer is a critical process, having a high impact on the system's stability. Most of the available ANN training algorithms utilize the MSE (the difference between the expected and observed responses) as their objective function:

\phi = \frac{1}{2} \sum_{k=1}^{M} (y_k - d_k)^2 = \frac{1}{2} \sum_{k=1}^{M} e_k^2    (2)

where y_k is the kth output value calculated by the network, and d_k represents the expected value. To the best of our knowledge, all ANN architectures follow the backpropagation technique to minimize their objective function: from the basic feed-forward neural network to widely adopted architectures such as the convolutional neural network (CNN), all the training algorithms backpropagate their error in the form of sensitivities from the output to the input layer.
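As a concrete illustration of Eqs. (1) and (2), the sketch below evaluates a single-hidden-layer feed-forward net with tan-sigmoid hidden units and a linear output, then computes the MSE objective. The shapes, the random initialization, and the sample input are assumptions for demonstration:

```python
import numpy as np

def forward(x, W_h, b_h, W_o, b_o):
    """Eq. (1): a hidden tan-sigmoid layer followed by a linear output,
    i.e. y = W_o * tanh(W_h x + b_h) + b_o."""
    hidden = np.tanh(W_h @ x + b_h)   # delta_H applied to hidden pre-activations
    return W_o @ hidden + b_o         # linear output activation

def mse(y, d):
    """Eq. (2): phi = 1/2 * sum_k (y_k - d_k)^2 = 1/2 * sum_k e_k^2."""
    e = y - d
    return 0.5 * np.sum(e ** 2)

rng = np.random.default_rng(1)
R, T = 2, 10                          # inputs (e.g., T and H) and hidden neurons
W_h, b_h = rng.normal(size=(T, R)), np.zeros(T)
W_o, b_o = rng.normal(size=(1, T)), np.zeros(1)
x = np.array([8.8, 0.0])              # e.g., temperature (K) and field (Oe)
print(mse(forward(x, W_h, b_h, W_o, b_o), np.array([0.0])))
```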
ANN architectures

Feedforward

The simplest architecture that an ANN model may follow is termed a feed-forward neural net, in which each layer is only connected to its immediate neighbors. The mapping from the input through the hidden to the output layers, therefore, is achieved in a serial and linear manner.

Cascaded

Unlike the feed-forward nets, the output layer in a cascaded network is not only connected to its immediate predecessor (hidden) layer, but also has a direct connection to the input layer. This allows the network to exploit the initial weights in addition to those provided by the predecessor to facilitate optimal convergence, at the cost of added complexity.

Layer-recurrent

Unlike the feed-forward nets, the layer-recurrent nets are nonlinear, i.e., they have a feedback loop in the hidden layers with additional tap delays. The latter prove especially helpful in analyzing time-series data, where the network is supposed to have a dynamic response. Parts (a), (b), and (c) of Fig. 4 depict feed-forward, cascaded, and layer-recurrent neural nets, respectively, where the circles in (c) depict the additional tap delays.

[Fig. 4 ANN architectures: a feedforward, b cascaded, and c layer-recurrent]

Benchmark backpropagation algorithms

The backpropagation technique is an iterative method, which works in conjunction with the gradient descent algorithm (Reed et al. 1993). While the network coefficients are updated in each iteration, the method continues to compute the gradient of the cost function accordingly. The objective of this iterative method is to drive the gradient of the cost function, expressed in terms of MSE, to zero:

\nabla \phi(w) = \frac{\partial \phi(w)}{\partial w_j} = 0 \quad \forall j    (3)

update rule: w(k+1) = w(k) + \nabla w(k)

where

\nabla w(k) = -\alpha \, \frac{\partial \phi(k)}{\partial w(k)}

and \alpha is the learning parameter. The network sensitivities are backpropagated through the network to update the learning rules (Demuth et al. 2014).

Because of its broad range of contributions, several variants of the backpropagation algorithm have been proposed. Although each variant has its own pros and cons, in what follows we discuss only the ones that have proven their efficiency for problems such as the one addressed in the proposed work (Haider et al. 2017).

Levenberg Marquardt framework

The Levenberg Marquardt (LM) algorithm is a pseudo-second-order training algorithm, which works in conjunction with the steepest descent method. It has been reported that this algorithm promises better stability and convergence speed (Levenberg et al. 1944). Let us consider the output response of the feedforward neural network, calculated using Eq. 1, where the initial output response is given as y_0 = r_k. The network error is calculated using Eq. 2. Derived from the Newton algorithm and the steepest descent method, the update rule for the LM algorithm is defined as:

\Delta W = -\left[ J_{we}^{T} J_{we} + \delta_r I \right]^{-1} J_{we}^{T} e    (4)

or, equivalently,

\Delta x_k = -\left[ J_{we}^{T}(x_k) \, J_{we}(x_k) + \delta_r I \right]^{-1} J_{we}^{T}(x_k) \, v(x_k)    (5)

where J_{we} has dimensions (P × Q × R) and the error vector is a matrix of dimensions (P × Q × 1). The Jacobian matrix is defined using the relation:

J_{we} =
\begin{bmatrix}
\partial e_{11}/\partial w_1 & \partial e_{11}/\partial w_2 & \cdots & \partial e_{11}/\partial w_R & \partial e_{11}/\partial b_1 \\
\partial e_{12}/\partial w_1 & \partial e_{12}/\partial w_2 & \cdots & \partial e_{12}/\partial w_R & \partial e_{12}/\partial b_1 \\
\vdots & \vdots & & \vdots & \vdots \\
\partial e_{1Q}/\partial w_1 & \partial e_{1Q}/\partial w_2 & \cdots & \partial e_{1Q}/\partial w_R & \partial e_{1Q}/\partial b_1 \\
\vdots & \vdots & & \vdots & \vdots \\
\partial e_{PQ}/\partial w_1 & \partial e_{PQ}/\partial w_2 & \cdots & \partial e_{PQ}/\partial w_R & \partial e_{PQ}/\partial b_1
\end{bmatrix}    (6)

where P is the number of training patterns with Q outputs each, R is the number of weights, and e is calculated using Eq. 2. Conventionally, the Jacobian matrix J_{we} is calculated first, and the subsequent computations for the weight and bias updates are performed on the stored values. With fewer patterns, this method works smoothly and efficiently, whereas with large-sized patterns the calculation of the Jacobian matrix faces memory limitations. Consequently, the performance of the LM algorithm degrades with larger training patterns.
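A minimal sketch of the damped update in Eq. (4) follows (Python/NumPy; the toy Jacobian, error vector, and damping value are assumptions, and a real trainer would also adapt the damping factor between iterations):

```python
import numpy as np

def lm_step(J, e, w, damping):
    """One Levenberg-Marquardt update, Eq. (4):
    delta_w = -(J^T J + damping * I)^(-1) J^T e, applied to w."""
    A = J.T @ J + damping * np.eye(J.shape[1])
    return w - np.linalg.solve(A, J.T @ e)

rng = np.random.default_rng(0)
J = rng.normal(size=(50, 3))   # Jacobian: 50 training patterns, 3 weights
e = rng.normal(size=50)        # current error vector (residuals of Eq. 2)
w = np.zeros(3)
w = lm_step(J, e, w, damping=1e-2)
```

Note that J^T J is only R × R, but J itself grows with the number of training patterns, which is exactly the memory limitation noted above.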
Conjugate gradient

The Conjugate Gradient (CG) algorithm is known for its fast convergence rate and has been employed for solving sparse linear equations on numerous occasions. A few of its variants include Scaled CG (SCG) and Fletcher–Powell CG (CGF) (Naqvi et al. 2016; Johansson et al. 1991; Powell 1977). Let us consider a set of search vectors r_k, which are mutually conjugate with respect to the positive definite Hessian matrix H_{wb}, according to the condition:

r_i^{T} H_{wb} \, r_j = 0, \quad i \neq j    (7)

The quadratic function is minimized by searching along the eigenvectors of the Hessian matrix H_{wb}. For the given iterative function,

\nabla F(w) = H_{wb} \, r_k + \varpi    (8)

\nabla^2 F(w) = H_{wb}    (9)

For the iteration k+1, the change in the gradient can be calculated from:

\Delta U_k = U_{k+1} - U_k = (H_{wb} r_{k+1} + \varpi) - (H_{wb} r_k + \varpi) = H_{wb} \, \Delta r_k    (10)

where

\Delta r_k = (r_{k+1} - r_k) = \delta_R^{k} \, r_k    (11)

and \delta_R is a learning rate, selected so as to minimize the function F(w) in the direction of r_k. The first search direction is arbitrary:

r_0 = -U_0    (12)

where

U_k \equiv \nabla F(w) \big|_{w=w_0}    (13)

The Gram–Schmidt orthogonalization (Messaoudi 1996) is used to construct r_k for each iteration, orthogonal to \{\Delta U_0, \Delta U_1, \ldots, \Delta U_{k-1}\}, as

r_k = -U_k + \beta_k \, r_{k-1}    (14)

where \beta_k is a scalar given as

\beta_k = \frac{\Delta U_{k-1}^{T} U_k}{U_{k-1}^{T} U_{k-1}}    (15)

and \delta_R can be calculated using the relation:

\delta_R^{k} = \frac{-U_k^{T} U_k}{r_k^{T} H_{wb} \, r_k}    (16)
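Read together, Eqs. (12)–(16) amount to only a few lines of code. The sketch below is illustrative: the 2 × 2 Hessian is an assumption, β_k follows the Polak–Ribière form of Eq. (15), and an exact line search on the quadratic plays the role of the learning rate δ_R of Eq. (16):

```python
import numpy as np

def conjugate_gradient(H, c, w0, iters=10):
    """Minimize F(w) = 1/2 w^T H w + c^T w using the CG recurrences:
    first direction from Eq. (12), beta from Eq. (15), and an exact
    line search along each conjugate direction (cf. Eq. (16))."""
    w = w0.copy()
    g = H @ w + c                  # gradient U_0, cf. Eqs. (8) and (13)
    r = -g                         # first search direction, Eq. (12)
    for _ in range(iters):
        if g @ g < 1e-12:          # gradient vanished: converged
            break
        step = -(g @ r) / (r @ H @ r)        # exact step along r
        w = w + step * r
        g_new = H @ w + c
        beta = ((g_new - g) @ g_new) / (g @ g)   # Eq. (15)
        r = -g_new + beta * r      # next conjugate direction, Eq. (14)
        g = g_new
    return w

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite Hessian H_wb
c = np.array([-1.0, -2.0])
print(conjugate_gradient(H, c, np.zeros(2)))   # matches np.linalg.solve(H, -c)
```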
Bayesian regularization

The Bayesian Regularization (BR) algorithm makes use of the LM algorithm (and is hence not so different from it) to search for the minima from the Hessian matrix of a given function. Refer to MacKay (1992) for a detailed description.

Objective statement

Let X ⊂ R^l, where l → {R × D} such that R > D, be a bounded experimental data set. Let φ = φ(i), i ∈ X, be the selected feature set; φ_1(i), …, φ_n(i) ⊂ φ are the n features associated with the training and testing of the ANN used to predict the output vector φ̃_pred. Formally, φ is mapped to φ̃_pred: φ → φ̃_pred. The output vector φ̃_pred is the predicted version of φ in terms of

\tilde{\phi}_{pred} = D \left\{ \left[ \delta_{WB}^{k}, \delta_{NN}^{2}, \delta_{R}, \delta_{S} \right] \in \{-1{:}1\}, \ \left[ \delta_{WB}^{L}, \delta_{WB}^{C}, \delta_{WB}^{B} \right] \in R^{l} \right\}

where, among the input parameters, δ_WB^k is a vector of randomly initialized weights and biases, δ_NN^2 represents two hidden layers with different combinations of neurons, and δ_R and δ_S are the learning rate and step size, respectively. The output parameters comprise the optimized weight and bias vectors from the three different training algorithms: δ_WB^L (LM algorithm), δ_WB^C (CG algorithm), and δ_WB^B (BR algorithm). The performance parameters are selected to be the MSE, the number of epochs, and the training time. The cost function is selected to be the MSE and is calculated using Eq. 2.

Research methodology and simulation results

Research methodology

Figure 5 concisely presents our data collection and analysis methodology. We have used the PPMS for obtaining a large set of IV curves. We divide the set into two: (1) training and testing samples and (2) a sample for cross-checking, and use the first subset to train our 90 ANN models (3 architectures × 10 configurations × 3 training algorithms), where each configuration refers to a different number of neurons in the hidden layers. Once trained, the system provides us the training time and the number of epochs taken by each ANN model to converge; we record these values. Following the successful training, validation, and testing phases (together called ANN learning), we predict the response for the second sample and record the MSE between the predicted and measured responses. The same process is repeated for all ten configurations.

[Fig. 5 Adopted research methodology: physical measurements are split into a training sample and a cross-validation sample; each model goes through ANN learning (training, validation, testing) and prediction, with MSE, training time, and epochs recorded, repeated for ten configurations]
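The 90-model sweep is, in essence, a nested loop over architectures, hidden-layer configurations, and training algorithms. A schematic sketch follows (the `train_model` stub and the particular configuration sizes are hypothetical placeholders; the actual training was done in MATLAB's ANN toolbox):

```python
import itertools
import random
import time

def train_model(arch, config, algo):
    """Hypothetical stand-in for one toolbox training run; returns a
    fake (mse, epochs) pair so that the sweep below is runnable."""
    rng = random.Random(f"{arch}{config}{algo}")
    return rng.uniform(1e-9, 1e-5), rng.randint(10, 500)

architectures = ["feedforward", "cascaded", "layer-recurrent"]
algorithms = ["LM", "CGF", "BR"]
# Ten [x, y] hidden-layer sizes; illustrative values, not the paper's.
configurations = [(5, 2), (8, 4), (11, 5), (14, 7), (18, 10),
                  (20, 10), (24, 12), (28, 14), (32, 16), (36, 18)]

results = {}
for arch, cfg, algo in itertools.product(architectures, configurations, algorithms):
    t0 = time.perf_counter()
    mse, epochs = train_model(arch, cfg, algo)
    results[(arch, cfg, algo)] = (mse, epochs, time.perf_counter() - t0)

best = min(results, key=lambda k: results[k][0])   # lowest cross-check MSE
print(len(results), "models; best:", best)          # 90 models in total
```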
Simulation results

The entire purpose of this research work is to highlight the ability of the proposed approach to predict the IV curves for values of temperature and magnetic flux that were not available while training the ANN model. As already stated in the "Physical measurement system and readings" section, we kept four IV curves (one for each geometry of antidots) isolated from the modeling data, to be used for cross-checking the proposed approach on unforeseen data. Table 1 lists those IV curves.

Table 1  Curves used for comparison

Geometry     Temperature (K)    Magnetic flux (Oe)
Rectangle    8.8                0
Square       8.65               41
Honeycomb    8.6                200
Kagome       8.4                2000

Figure 6 presents a plot of the predicted values against the physically measured ones. The thicker curves, for example for the square array (shown in the top right corner), represent a larger number of data points available in the transport measurements. The reason for choosing a different number of data points for each geometry was twofold. First, we deliberately wanted to evaluate the performance of each training algorithm in the absence of sufficiently large data sets, and thereby estimate the minimum number of data points needed for an acceptable MSE. Second, we wanted to showcase the applicability of our approach to various geometries with a varying number of data points in the transport measurements. It may be clearly observed that the prediction for each geometry results in a negligible error.

[Fig. 6 Predicted IV curves: rectangular (top left), square (top right), honeycomb (bottom left), and kagome (bottom right)]

Figures 7, 8, and 9, respectively, present the MSE, the number of iterations to converge, and the training time for each of the training algorithms. Note that the horizontal axis in each figure corresponds to different ANN models, each having a different number of neurons in the hidden layers; we call this the network configuration. In essence, each plot corresponds to a different geometry of antidots, trained by the three benchmark algorithms for thirty different configurations.

[Fig. 7 MSE: rectangular (top left), square (top right), honeycomb (bottom left), and kagome (bottom right)]

[Fig. 8 Epochs: rectangular (top left), square (top right), honeycomb (bottom left), and kagome (bottom right)]

Considering the fact that training an ANN model is a stochastic process, immensely reliant upon random numbers, it is difficult to ascertain the reason behind the diversity, especially the sharp peaks, in each result. Therefore, it is not possible to advocate the use of one algorithm for all the geometries and architectures. Instead, let us comment on the obtained results in a case-by-case manner. The BR algorithm outperforms the other two in terms of MSE for the square and honeycomb arrays, which had more data points than the remaining geometries (rectangular and kagome). However, this happens only at the cost of increased training time and number of iterations to converge. The increased MSE produced by BR in the case of the latter two geometries reflects its weakness in approximating curves with a dearth of data points. For these two geometries, LM appears to be more promising, except for a few random peaks; see Fig. 7.

LM and CGF prove to be better options if fast convergence, both in terms of the number of iterations and the training time, is decisive. This is evident in Figs. 8 and 9. It is interesting to note that CGF, in contrast to BR, takes a large number of iterations to converge for the square and honeycomb arrays (those having large data sets), whereas its training time is minimal for the geometries having smaller data sets. This advocates its usage in systems requiring real-time approximation, where accuracy could be slightly compromised. However, for the application presented in this work, CGF is not the best available option. On the other hand, LM stands between the other two alternative algorithms, both in terms of prediction accuracy and convergence rate. While it has better approximation accuracy for smaller data sets, it tends to converge faster for the geometries having a large number of data points.

Table 2 presents the best results, in terms of minimum MSE, epochs, and training time, obtained from the prediction process for each geometry. Note that the number (No.) of neurons, expressed as [x, y], represents x and y neurons in the first and second hidden layers, respectively. The table should be interpreted as follows: for the rectangular array, the layer-recurrent architecture having eleven and five neurons in the hidden layers, once trained with the LM algorithm, achieves an MSE of 3.3 × 10⁻⁷, which is better than any other pair of architecture and algorithm. Similarly, for the same geometry, BR converges in the least number of iterations, 11, with the layer-recurrent architecture having 18 and 10 neurons in the hidden layers, while CGF trains the cascaded network with [5, 2] neurons in just 0.093 s, which is faster than all other options. It is evident that BR, if provided with a sufficiently large data set, can outperform the rest of the algorithms in terms of MSE, whereas LM and CGF can be good options for minimum training time and epochs, even in the absence of large data sets. For the purpose of predicting IV curves in superconducting films, this work may be used as a benchmark, since it points out the best pairs of architecture and algorithm for the most widely adopted assessment parameters, namely MSE, epochs, and training time.

Conclusion

Motivated by the experience that transport measurements in superconducting films are notoriously cumbersome to obtain, a predictive model based on artificial neural networks has been proposed. The model takes a finite number of data points for each of the four geometries of antidots (rectangular, square, honeycomb, and kagome) and extrapolates the curves over a wide range of unforeseen values of temperature and magnetic flux. We have assessed three different architectures of artificial neural networks, trained by three renowned training algorithms, for the purpose of predicting these current–voltage curves.
