Quasi-Conscious Multivariate Systems

Jonathan W. D. Mason, Mathematical Institute, University of Oxford, UK (Submitted to Complexity 2015)

arXiv:1501.00682v3 [q-bio.NC] 9 Aug 2015

Abstract

Conscious experience is awash with underlying relationships. Moreover, for various brain regions such as the visual cortex, the system is biased toward some states. Representing this bias using a probability distribution shows that the system can define expected quantities. The mathematical theory in the present paper links these facts by using expected float entropy (efe), which is a measure of the expected amount of information needed to specify the state of the system beyond what is already known about the system from relationships that appear as parameters. Under the requirement that the relationship parameters minimise efe, the brain defines relationships. It is proposed that when a brain state is interpreted in the context of these relationships the brain state acquires meaning in the form of the relational content of the associated experience. For a given set, the theory represents relationships using weighted relations which assign continuous weights, from 0 to 1, to the elements of the Cartesian product of that set. The relationship parameters include weighted relations on the nodes of the system and on their set of states. Examples obtained using Monte-Carlo methods (where relationship parameters are chosen uniformly at random) suggest that efe distributions with long left tails are most important.

1 Introduction

In the present paper we further develop the theory introduced in the article 'Consciousness and the structuring property of typical data' (see [1]), and demonstrate and investigate the theory through applications in a number of examples using computational methods.

It is intended that the theory will provide a way into the mathematics that underpins how the brain defines the relational content of consciousness. Indeed, conscious experience clings to a substrate of underlying relationships: points in a person's field of view can be strongly related (if close together) or unrelated (if far apart), giving geometry; colours can appear similar (e.g. red and orange) or completely different (e.g. red and green). We can make a very long list of such examples of relations involving different sounds, smells, tastes and locations of touch. Furthermore, at a higher semantic level involving several brain regions, if we see someone we know and hear a person's name then we know whether the name relates to that person. It is hard to think of any conscious experience that does not involve relations. Whilst it is difficult to explain how the brain defines the colour blue, in the present paper we hope to provide the beginnings of a mathematical theory for how the brain defines all of the relations underlying consciousness and, therefore, explain why, for example, blue appears similar to turquoise but different to red. It is proposed that when a brain state is interpreted in the context of all these relations, defined by the brain, the brain state acquires meaning in the form of the relational content of the experience. If we consider the relations defined by the brain to be a type of statistic then we have the following analogy. A single observation of a one dimensional random variable is almost meaningless, but in the context of the statistics of the random variable, such as mean and variance, the observation has meaning. For arguments in support of this approach, the reader is referred to [1].

The issue of how a system such as the brain defines relations is crucial.
Importantly, for various brain regions such as the visual cortex (under temporally well spaced observations of the system), the probability distribution over the different possible states of the system is far from being uniform owing to learning rules of which the Bienenstock, Cooper and Munro (BCM) version of Hebbian theory is one candidate; see [2], [3] and [4]. Hence, the brain is not merely driven by the current sensory input, but is biased toward certain states as a result of a long history of sensory inputs. The probability distribution over the states of the system is therefore a property of the system itself, allowing the system to define expected quantities.

In the theory presented in the present paper, the brain defines relations under the requirement that the expected quantity of a particular type of entropy is minimised. We call this entropy float entropy. For a collection of relations on the system and any given state of the system, the float entropy of the state is a measure of the amount of information required, in addition to the information given by the relations, in order to specify that state. We make the definition of float entropy precise in Subsection 1.1. However, later in the present paper we will give a more general definition (multi-relational float entropy) which allows the involvement of more than two relations; see Subsection 4.1. We will also consider a time dependent version, and the theory of the present paper will be compared with Integrated Information Theory and Shannon entropy.

1.1 Definitions

In this subsection we provide the main definitions in the present paper. Systems such as the brain, and its various regions, are networks of interacting nodes. In the case of the brain we may take the nodes of the system to be the individual neurons or possibly larger structures such as cortical columns. The nodes of the system have a repertoire (range) of states that they can be in. For example, the states that neurons can be in could be associated with different firing frequencies. In the present paper we assume that the node repertoire is finite (as was assumed in [1]), and the state of the system is the aggregate of the states of the nodes.
The original theory in [1] used a mainly set theoretic approach, where a relation on a nonempty set S was usually taken to be a binary relation R ⊆ S². Weighted relations (see below) are slightly more general than binary relations, and the further development (presented in the present paper) of the original theory uses weighted relations because they allow a system to define a weighted relation on the repertoire of its nodes. This is desirable as we will see from examples later in the paper. In Definition 1.1 the elements of the set S are to be taken as the nodes of the system.

Definition 1.1. Let S be a nonempty finite set, n := #S. Then a data element for S is a set (having a unique arbitrary index label i)

    S_i := {(a, f_i(a)) : a ∈ S},  where f_i : S → V is a map

and V := {v_1, v_2, ..., v_m} is the node repertoire. The set of all data elements for S given V is W_{S,V}, so that #W_{S,V} = m^n. For temporally well spaced observations, it is assumed that a given finite system defines a random variable with probability distribution P : W_{S,V} → [0,1] for some finite set S and node repertoire V. If T is a finite set of numbered observations of the system then T is called the typical data for S. The elements of T (called typical data elements) are handled using a function

    t : {1, ..., #T} → {i : S_i ∈ W_{S,V}},

where S_{t(k)} is the value of observation number k for k ∈ {1, ..., #T}. In particular, the function t need not be injective since small systems may be in the same state for several observations.

Remark 1.1. Note that P in Definition 1.1 extends to a probability measure on the power set 2^{W_{S,V}} of W_{S,V} by defining

    P(A) := Σ_{S_i ∈ A} P(S_i),  for A ∈ 2^{W_{S,V}}.

Hence, we have a probability space (W_{S,V}, 2^{W_{S,V}}, P) with sample space W_{S,V}, sigma-algebra 2^{W_{S,V}}, and probability measure P.

We now need the definition of a weighted relation.

Definition 1.2 (Weighted relations). Let S be a nonempty set. A weighted relation on S is a function of the form

    R : S² → [0,1],

where [0,1] is the unit interval. We say that R is:

1. reflexive if R(a,a) = 1 for all a ∈ S;
2. symmetric if R(a,b) = R(b,a) for all a, b ∈ S.

The set of all reflexive, symmetric weighted relations on S is denoted Y_S.

Remark 1.2. Except where stated, the weighted relations used in the present paper are reflexive and symmetric.
Relative to such a weighted relation, the value R(a,b) quantifies the strength of the relationship between a and b, interpreted in accordance with the usual order structure on [0,1] so that R(a,b) = 1 is a maximum. For a small finite set, it is useful to display a weighted relation on that set as a weighted relation table (i.e. as a matrix).

Before giving the definition of float entropy we require Definitions 1.3 and 1.4.

Definition 1.3. Let S be as in Definition 1.1 and let U : V² → [0,1] be a reflexive, symmetric weighted relation on the node repertoire V; i.e. U ∈ Y_V. Then, for each data element S_i ∈ W_{S,V}, we define a function R{U, S_i} : S² → [0,1] by setting

    R{U, S_i}(a,b) := U(f_i(a), f_i(b))  for all a, b ∈ S.

It is easy to see that R{U, S_i} ∈ Y_S.

Definition 1.4. Let S be a nonempty finite set. Every weighted relation on S can be viewed as a #S²-dimensional real vector. Hence, the d_n metric is a metric on the set of all such weighted relations by setting

    d_n(R, R′) := ( Σ_{(a,b) ∈ S²} |R(a,b) − R′(a,b)|^n )^{1/n},

where R and R′ are any two weighted relations on S. Similarly we have the metric d_∞(R, R′) := max_{S²} |R(a,b) − R′(a,b)|.

Definition 1.5 (Float entropy). Let S be as in Definition 1.1, let U ∈ Y_V, and let R ∈ Y_S. The float entropy of a data element S_i ∈ W_{S,V}, relative to U and R, is defined as

    fe(R, U, S_i) := log_2(#{S_j ∈ W_{S,V} : d(R, R{U, S_j}) ≤ d(R, R{U, S_i})}),

where, in the present paper (unless otherwise stated), d is the d_1 metric. Furthermore, let P : W_{S,V} → [0,1] and T be as in Definition 1.1. The expected float entropy, relative to U and R, is defined as

    efe(R, U, P) := Σ_{S_i ∈ W_{S,V}} P(S_i) fe(R, U, S_i).

The efe(R, U, T) approximation of efe(R, U, P) is defined as

    efe(R, U, T) := (1/#T) Σ_{k=1}^{#T} fe(R, U, S_{t(k)}),

where t need not be injective by Definition 1.1. By construction, efe is measured in bits per data element (bpe).

It is proposed that a system (such as the brain and its subregions) will define U and R (up to a certain resolution) under the requirement that the efe is minimised. Hence, for a given system (i.e. for a fixed P), we attempt to find solutions in U and R to the equation

    efe(R, U, P) = min_{R′ ∈ Y_S, U′ ∈ Y_V} efe(R′, U′, P).    (1)

In practice we replace efe(·, ·, P) in (1) with efe(·, ·, T).
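To make Definitions 1.3 to 1.5 concrete, the following sketch computes fe and efe by brute-force enumeration of W_{S,V} for a toy two-node system. The language (Python), the node names, and the particular U, R and P below are assumptions for illustration; they are not taken from the paper, whose software is listed in Appendix A.

```python
from itertools import product
from math import log2

# Hypothetical toy system: two nodes and a two-state node repertoire.
S = ("a", "b")
V = (0, 1)

# Reflexive, symmetric weighted relations (Definition 1.2): U on the
# node repertoire V, and R on the set of nodes S (illustrative values).
U = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.25, (1, 0): 0.25}
R = {(x, y): 1.0 if x == y else 0.5 for x in S for y in S}

# Enumerate W_{S,V}: each data element is the graph of a map f : S -> V,
# so #W_{S,V} = m^n (Definition 1.1).
omega = [dict(zip(S, vals)) for vals in product(V, repeat=len(S))]
assert len(omega) == len(V) ** len(S)

def induced(U, f):
    """R{U, S_i}(a, b) := U(f_i(a), f_i(b))  (Definition 1.3)."""
    return {(a, b): U[(f[a], f[b])] for a in S for b in S}

def d1(r1, r2):
    """The d_1 metric of Definition 1.4."""
    return sum(abs(r1[k] - r2[k]) for k in r1)

def fe(R, U, f):
    """Float entropy (Definition 1.5): log_2 of the number of data
    elements whose induced relation lies at least as close to R."""
    radius = d1(R, induced(U, f))
    return log2(sum(1 for g in omega if d1(R, induced(U, g)) <= radius))

# Expected float entropy for an illustrative distribution P on W_{S,V}.
P = [0.7, 0.1, 0.1, 0.1]
efe = sum(p * fe(R, U, f) for p, f in zip(P, omega))
```

Minimising efe over U and R, as in Equation (1), then amounts to searching over the off-diagonal entries of U and R, which is what the computational methods of Section 2 do.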
Remark 1.3. In Definition 1.5 the d_1 metric is used. It turns out that, amongst many metrics, a change in metric has only a small effect on the solutions to (1). There are also plenty of pathological metrics which, when used, will significantly change the solutions to (1). In Remark 1.2 we mentioned that, for a weighted relation, the value of R(a,b) is interpreted in accordance with the usual order structure on [0,1]. We argue that the order structure to be used on [0,1] should be determined by the metric that is being used in Definition 1.5. Hence, for a pathological metric, whilst the solutions to (1) will have changed, their interpretation as weighted relations may be largely unchanged when the order structure used on [0,1] is determined by the metric being used (when this makes sense). In practice, we want to use the usual order structure on [0,1], and this requirement limits which metrics should be used in Definition 1.5. We will look at the issue of metrics in some detail in Subsection 3.3.

Remark 1.4. The theory presented in the present paper uses the definitions in Subsection 1.1. Suppose we restricted these definitions so that the only weighted relation we could use on the node repertoire V was the Kronecker delta, and the only elements of Y_S we could use were weighted relations taking values in the two point set {0,1}. Then, under these restrictions, Definition 1.5 would yield a definition of float entropy equivalent to that given in [1]. Indeed, note that a weighted relation R : S² → {0,1} is given by the indicator function for the relation {(a,b) ∈ S² : R(a,b) = 1} ⊆ S². Hence, the theory presented in the present paper is indeed a development of the theory presented in [1].

Remark 1.5. With reference to Remark 1.1 and Definition 1.5, for A ∈ 2^{W_{S,V}}, we have the weak conditional efe

    efe(R, U, P|A) := Σ_{S_i ∈ W_{S,V}} P(S_i | A) fe(R, U, S_i).

Weak conditional efe can be useful when considering a system that has entered a particular mode such that this mode restricts the system to a particular set of data elements. There may be other useful definitions of conditional efe.
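The restriction in Remark 1.4 can be made concrete: with the Kronecker delta on V, and with R confined to values in {0,1}, a weighted relation is exactly the indicator function of a binary relation on S. The small Python sketch below (the language and the particular relation are assumptions for illustration) shows the correspondence in both directions.

```python
S = ("a", "b", "c")

# A binary relation on S as a set of pairs (the setting of [1]).
binary = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("b", "a")}

# Its indicator function, viewed as a {0,1}-valued weighted relation.
R = {(x, y): 1.0 if (x, y) in binary else 0.0 for x in S for y in S}

# The Kronecker delta as a weighted relation on a node repertoire V.
V = (0, 1, 2)
delta = {(v, w): 1.0 if v == w else 0.0 for v in V for w in V}

# Recovering the binary relation from its indicator function.
recovered = {pair for pair, weight in R.items() if weight == 1.0}
assert recovered == binary
```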
1.2 Advantages of the theory and overview

The examples in the present paper are intended to have relevance to the visual cortex and our experience of monocular vision. In lieu of typical data for the visual cortex we apply the theory to typical data for digital photographs of the world around us. If the theory, as used in the examples, is relevant to the visual cortex then the examples show that the perceived relationships between different colours, the perceived relationships between different brightnesses, and the perceived relationships between different points in a person's field of view (giving geometry) are all defined by the brain in a mutually dependent way. Hence, in this case, there is a connection between the relationships that underlie colour perception and our perception of the underlying geometry of the world around us. Of course the states of the visual cortex are somewhat more complicated than digital photographs since some neurons have sophisticated receptive fields. However, the theory presented in the present paper does not assume that the nodes of the visual cortex have to be individual neurons. Instead, each node can consist of many neurons; effectively representing the data elements using a larger base (note that we can think of the node repertoire as being analogous to a choice of base in the representation of integers). Hence, the examples could well be relevant to the visual cortex. A preliminary discussion and investigation regarding base is presented in Subsection 3.1.

We also apply the theory to a system where the probability distribution P in Definition 1.1 is uniform over W_{S,V}. In this case the solutions to (1) vary greatly (instead of all being similar) and, hence, the system fails to define weighted relations that give a coherent interpretation of the states of the system. The variation in the solutions to (1) is partly due to symmetries, and this is discussed in Example 3.4.
It is argued in [1] that the theory presented there provides a solution to the binding problem and avoids the homunculus fallacy. Those arguments also apply to the theory presented in the present paper. In particular, consciousness is not the output of some algorithmic process but it may instead, largely, be the states of the system interpreted in the context of the weighted relations that minimise expected float entropy, where here we are talking about a definition of float entropy that involves more than the two weighted relations used in (1); see Subsection 4.1. This argument may become clearer for the reader after going through the examples in the present paper.

The rest of the paper is organised as follows. Section 2 looks at obtaining typical data from digital photographs, and specifies the computational methods used for finding solutions to (1). Section 3 provides six examples in which the theory is applied. We continue the development of the theory by looking at changing the base of a system, joining and partitioning systems, and metric independence. Section 4 provides generalisations of Definition 1.5, a comparison between the present theory and both Giulio Tononi's Integrated Information Theory (IIT) and Shannon entropy, followed by the conclusion. Appendix A lists the software used, and Appendix B provides a list of notation.

2 Typical data and computational methods

In this section we look at obtaining typical data from digital photographs, a binary search algorithm for finding solutions to (1), and using efe-histograms to assess guesses when guessing solutions to (1).

2.1 Typical data from digital photographs

When obtaining a typical data element from a digital photograph, in the present paper, only a small part of the photograph is used. This is because the computational methods used in the present paper are suitable for small systems (#W_{S,V} ≤ 10^6) although, at the expense of clarity and ease of implementation, other more efficient computational methods are possible for investigating larger systems; see Appendix A, which lists the software used during the research for the present paper and provides a discussion on more efficient computational methods.
Figure 1 shows the sampling of a digital photograph such that the typical data element obtained is for a system comprised of five nodes with a four state node repertoire (#W_{S,V} = 1024). Also, in the case of Figure 1, we are using pixel brightness to determine node state.

Figure 1: Digital photograph sampling using five nodes and a four shade grayscale.

From top-left to bottom-right, the first image is the original. This image is desaturated (the colours are turned into shades of gray) and then the contrast is enhanced. The contrast enhancement is not required, but it was thought that it might reduce the number of typical data elements needed in order to obtain meaningful results. Indeed, when similar, the solutions to (1) are rather like a type of statistic and, therefore, when using typical data we need to make sure that the sample size is large enough. The image is then posterised (in this case the number of shades is reduced to four, giving a four state node repertoire). Finally, five pixels are sampled, giving the state of each of the five nodes of the typical data element; see Table 1. To obtain the typical data for the system, this way of obtaining typical data elements is used for several hundred digital photographs. Importantly, whatever the geometric layout of the pixel sampling locations (in Figure 1 the layout is part of a grid that has adjacent locations every ten pixels), the same layout must be used for all of the digital photographs. Similarly, the same criteria must be used for determining the node states.

Table 1: Node states of the typical data element obtained from the sampling in Figure 1.

          node1    node2    node3    node4    node5
S_{t(1)}  0.000    147.224  441.673  441.673  147.224

Figure 2: Digital photograph sampling using five nodes and a nine colour red/green palette.

The sampling in Figure 2 obtains a typical data element for a system comprised of five nodes with a nine state node repertoire (#W_{S,V} = 59049). Here node state is determined by pixel colour over a red/green palette. From top-left to bottom-right, we first have the original image to which colour contrast enhancement is applied. The image is then restricted to colours made up of red and green by setting blue values to zero.
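The grayscale pipeline of Figure 1 (desaturate, posterise to four shades, sample five fixed pixel locations) can be sketched as follows. The pipeline steps are from the paper, but the language (Python), the synthetic brightness array standing in for a photograph, and the particular grid coordinates are assumptions made so that the sketch is self-contained.

```python
# A synthetic 40x40 "photograph" of brightness values in [0, 255]
# stands in for a real, already desaturated image (assumption).
W = H = 40
image = [[(x * 6 + y * 3) % 256 for x in range(W)] for y in range(H)]

def posterise(v, shades=4):
    """Reduce a 0-255 brightness value to one of `shades` levels."""
    return min(v * shades // 256, shades - 1)

# Five sampling locations on a grid with adjacent locations every ten
# pixels (the Figure 1 layout is part of such a grid; these exact
# coordinates are illustrative).
locations = [(5, 5), (15, 5), (25, 5), (5, 15), (15, 15)]

# The typical data element: the state of each of the five nodes.
data_element = {f"node{k + 1}": posterise(image[y][x])
                for k, (x, y) in enumerate(locations)}
```

The same layout and the same posterisation criteria would then be applied to every photograph in the sample, as the paper requires.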
The image is then posterised (three values for red and three values for green are used, giving a nine state node repertoire). Finally, five pixels are sampled; the result is given in Table 2. We now consider computational methods for finding solutions to (1).

Table 2: Node states of the typical data element obtained from the sampling in Figure 2.

          node1     node2     node3     node4     node5
S_{t(1)}  128,128   255,255   255,255   128,128   128,128

2.2 Binary search algorithm

For any given system, let n = #S and m = #V.

Step 1. The initial approximation of a solution to (1) is taken to be the pair U ∈ Y_V and R ∈ Y_S with U(v,v′) = 1/2 and R(a,b) = 1/2 for all v, v′ ∈ V, v ≠ v′, and a, b ∈ S, a ≠ b, respectively.

Step 2. For U and R (shown in Table 3) a given approximate solution to (1), let k = 2^{−(q+1)} where q = min{i ∈ N : 2^i u_{1,2} ∈ N}. We now calculate the efe value of the system for each combination of the entries in Table 4 that give symmetric weighted relations. This is a binary search in the sense that there are two options per entry.

Table 3: Approximate solution to Equation (1).

U    v1    v2    v3    ···     R      node1  node2  node3  ···
v1   1     u1,2  u1,3  ···     node1  1      r1,2   r1,3   ···
v2   u2,1  1     u2,3  ···     node2  r2,1   1      r2,3   ···
v3   u3,1  u3,2  1     ···     node3  r3,1   r3,2   1      ···
···  ···   ···   ···           ···    ···    ···    ···

Table 4: Binary entries over which to search for approximate solutions to (1).

U    v1      v2      v3      ···     R      node1   node2   node3   ···
v1   1       u1,2±k  u1,3±k  ···     node1  1       r1,2±k  r1,3±k  ···
v2   u2,1±k  1       u2,3±k  ···     node2  r2,1±k  1       r2,3±k  ···
v3   u3,1±k  u3,2±k  1       ···     node3  r3,1±k  r3,2±k  1       ···
···  ···     ···     ···             ···    ···     ···     ···

Step 3. If the minimum of the efe values, obtained in Step 2, was given by only one of the pairs of weighted relations tested in Step 2 then redefine U and R as this new pair of weighted relations and return to Step 2. Otherwise, output U, R and their associated efe value, and stop.

If the algorithm did not stop then the chronology of approximate solutions, given by the applications of Step 3, would be a convergent sequence with respect to d_1 and any of the metrics in Definition 1.4. However, for m ≥ 2 and n ≥ 2, both Y_V and Y_S are uncountably infinite sets, whereas the number of possible efe values is finite.
Hence, some efe values result from infinitely many weighted relations in Y_V and Y_S. It is not surprising then that, as the approximate solutions become closer with respect to d_1, ultimately the algorithm stops at Step 3. In short, the system defines U and R (up to a certain resolution) under the requirement that the efe is minimised.

This search algorithm works well; see its use in Section 3. However, the number of efe values calculated during each application of Step 2 is 2^{(n(n−1)+m(m−1))/2}. For example, a system with #S = #V = 5 can result in the algorithm calculating more than 10^7 efe values before stopping. Hence, the present paper also uses the following, computationally less expensive, method for approximating solutions to (1); also see Appendix A concerning more efficient computational methods.

2.3 Using efe-histograms obtained from Monte-Carlo methods

Here we choose U ∈ Y_V and R ∈ Y_S uniformly at random. With reference to Table 3, this is done by choosing each off-diagonal upper-triangular entry of U and R uniformly at random from the interval [0,1] (the off-diagonal lower-triangular entries are then those making U and R symmetric). The efe value is then calculated and stored, and the whole process is repeated, producing a list of many thousands of efe observations from which an efe-histogram can be obtained. With this setup, if we wish to treat efe as a random variable then standard methods can be used for approximating the probability distribution from the efe values (although this can be difficult for distributions with very thin tails). In any case, provided enough observations are made, the efe-histogram can be used to help assess guesses when guessing approximate solutions to (1). However, we need to be careful concerning what is meant by 'choose uniformly at random from the interval [0,1]'. Usually, this means that all subintervals of the same length are equally probable events. This is fine for us as long as the length of subintervals is determined by the metric used in Definition 1.5, which conveniently is d_1; see Subsection 3.3 for relevant details. We are now ready to apply the theory.
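The three steps of the Subsection 2.2 search can be sketched as follows. This is a hedged Python sketch rather than the paper's implementation: the free entries of U and R are flattened into one list, the objective passed in is a placeholder standing in for efe(·, ·, T), and k is simply halved each cycle, which reproduces the paper's k = 2^{−(q+1)} schedule when every entry starts at 1/2.

```python
from itertools import product

def binary_search(objective, n_entries, cycles=10):
    """Each free (upper-triangular) entry starts at 1/2; every cycle
    tests the 2^n_entries combinations entry +/- k (Step 2), keeps a
    unique minimiser (Step 3), and halves k for the next cycle."""
    entries = [0.5] * n_entries
    k = 0.25                      # 2^-(q+1) with q = 1 when entries = 1/2
    for _ in range(cycles):
        candidates = [[e + s * k for e, s in zip(entries, signs)]
                      for signs in product((-1, 1), repeat=n_entries)]
        values = [objective(c) for c in candidates]
        best = min(values)
        if values.count(best) != 1:   # Step 3: no unique minimum, stop
            break
        entries = candidates[values.index(best)]
        k /= 2
    return entries

# Placeholder objective standing in for efe (an assumption): it is
# minimised at a known target, so the search should approach the target.
target = [0.3, 0.8, 0.1]
approx = binary_search(lambda e: sum((a - b) ** 2
                                     for a, b in zip(e, target)), 3)
```

Note the exponential cost the paper points out: each cycle evaluates the objective 2^n_entries times, which is why this search is only used for small systems.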
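The Monte-Carlo procedure of Subsection 2.3 (draw the upper-triangular entries of U and R uniformly from [0,1], mirror them to the lower triangle, evaluate efe, and bin the observations) can be sketched as below. Python is an assumption, and `efe_stub` is an explicitly labelled placeholder for the real efe computation of Definition 1.5, used only so that the sketch runs end to end.

```python
import random

random.seed(0)

def random_weighted_relation(elements):
    """Choose a reflexive, symmetric weighted relation uniformly at
    random: each off-diagonal upper-triangular entry is drawn uniformly
    from [0,1]; the lower triangle mirrors it (Subsection 2.3)."""
    rel = {(a, a): 1.0 for a in elements}
    for i, a in enumerate(elements):
        for b in elements[i + 1:]:
            w = random.random()
            rel[(a, b)] = rel[(b, a)] = w
    return rel

S = ("n1", "n2", "n3")
V = (0, 1)

def efe_stub(R, U):
    # Placeholder for efe(R, U, T) (assumption): a cheap function of the
    # weights, standing in for the Definition 1.5 computation.
    return sum(R.values()) / len(R) + sum(U.values()) / len(U)

observations = [efe_stub(random_weighted_relation(S),
                         random_weighted_relation(V))
                for _ in range(10000)]

# Bin the observations into an efe-histogram with a fixed bin interval.
bin_width = 0.01
histogram = {}
for x in observations:
    b = int(x / bin_width)
    histogram[b] = histogram.get(b, 0) + 1
```

With the real efe in place of the stub, the left tail of such a histogram is then used to judge how good a guessed solution to (1) is, as in Figure 4.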
3 Examples and investigations

This section provides insight concerning how the theory performs in practice by way of several informative examples and investigations.

Example 3.1. In this example 200 digital photographs of the world around us are used. The typical data is obtained using exactly the method shown in Figure 1, where the photographs have a four shade grayscale. Hence, #T = 200 and the system is comprised of five nodes with a four state node repertoire (#W_{S,V} = 1024). The binary search algorithm of Subsection 2.2 was applied to T and, after ten cycles, returned the weighted relations in Table 5. Figure 3 provides a graph illustration of the weighted relations. For U, values above 0.2 are indicated with a solid line, whilst values from 0.02 to 0.2 are indicated with a dash line. For R, values above 0.9 are indicated with a solid line, whilst values from 0.75 to 0.9 are indicated with a dash line. Although #T = 200 is rather small, T has defined the correct relationships under the requirement that efe is minimised. As described in Subsection 2.3, Figure 4 provides an efe-histogram for T. For U and R in Table 5 we have efe(R, U, T) = 4.91623, to six significant figures, and this value is indicated in Figure 4 by the triangular marker furthest to the left. The efe-histogram is negatively skewed with a long left tail, and this shape is usual for systems where the probability distribution P, in Definition 1.1, is far from uniform over W_{S,V}.

Table 5: Approximate solution for Example 3.1.

U         0              147.224        294.449        441.673
0         1              0.30908203125  0.05224609375  0.00439453125
147.224   0.30908203125  1              0.41064453125  0.10400390625
294.449   0.05224609375  0.41064453125  1              0.34228515625
441.673   0.00439453125  0.10400390625  0.34228515625  1

R         node1          node2          node3          node4          node5
node1     1              0.99853515625  0.62353515625  0.92041015625  0.78369140625
node2     0.99853515625  1              0.94580078125  0.75244140625  0.93505859375
node3     0.62353515625  0.94580078125  1              0.73486328125  0.88330078125
node4     0.92041015625  0.75244140625  0.73486328125  1              0.98193359375
node5     0.78369140625  0.93505859375  0.88330078125  0.98193359375  1

Figure 3: Graph illustration of the weighted relations in Table 5, showing strongest relationships (solid lines) and intermediate relationships (dash lines).

Figure 4: An efe-histogram for Example 3.1 using 200,000 observations and a bin interval of 0.01. For each cycle of the binary search algorithm, the efe value of the approximate solution obtained is shown (triangular marker).

Example 3.2 involves a larger system than that of Example 3.1. Here enlarging the system results in an increase in the difference between the minimum efe and the location (mean or median) of the efe-histogram.

Example 3.2. In this example 400 digital photographs of the world around us are used. The typical data is obtained using the method shown in Figure 1, except the number of sampling locations is increased from five to nine to form a three by three grid. Since #T = 400 and #W_{S,V} = 4^9 = 262144, this system is too large to apply the binary search algorithm. Instead, Table 5 in Example 3.1 was used to guess an approximate solution. Figure 5 provides an efe-histogram for T. The efe value