Physica A 00 (2015) 1–23

arXiv:1501.07386v2 [math-ph] 29 May 2015

On q-non-extensive statistics with non-Tsallisian entropy

Petr Jizba^{a,b}, Jan Korbel^{a}

^{a} Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Břehová 7, 115 19 Prague, Czech Republic
^{b} Institute of Theoretical Physics, Freie Universität Berlin, Arnimallee 14, 14195 Berlin, Germany

Email addresses: [email protected] (Petr Jizba), [email protected] (Jan Korbel)

Abstract

We combine an axiomatics of Rényi with the q-deformed version of the Khinchin axioms to obtain a measure of information (i.e., entropy) which accounts both for systems with embedded self-similarity and non-extensivity. We show that the entropy thus obtained is uniquely solved in terms of a one-parameter family of information measures. The ensuing maximal-entropy distribution is phrased in terms of a special function known as the Lambert W-function. We analyze the corresponding "high-" and "low-temperature" asymptotics and reveal a non-trivial structure of the parameter space.

© 2014 Published by Elsevier Ltd.

Keywords: Multifractals, Rényi's information entropy, THC entropy, MaxEnt, Heavy-tailed distributions
PACS: 65.40.Gr, 47.53.+n, 05.90.+m

1. Introduction

In his 1948 paper [1] Shannon formulated the theory of data compression. The paper established a fundamental limit to lossless data compression and showed that this limit coincides with the information measure presently known as Shannon's entropy $\mathcal{H}$. In words, it is possible to compress the source, in a lossless manner, with compression rate close to $\mathcal{H}$, but it is mathematically impossible to do better than $\mathcal{H}$. However, many modern communication processes, including signals, images and coding/decoding systems, often operate in complex environments dominated by conditions that do not match the basic tenets of Shannon's communication theory. For instance, the buffer memory (or storage capacity) of a transmitting channel is often finite, coding can have a non-trivial cost function, codes might have variable lengths, sources and channels may exhibit memory or losses, etc. Information theory offers various generalized (non-Shannonian) measures of information to deal with such cases. Among the most frequently used one can mention, e.g., the Havrda–Charvát measure [2], the Sharma–Mittal measure [3], Rényi's measure [4] or Kapur's measures [5]. Information entropies get even more complex when one considers communication systems with quantum channels [6, 7]. There exist even attempts to generalize Shannon's measure of information in a direction where no use of the concept of probability is needed, hence demonstrating that information is a more primitive notion than probability [8].

In the mid 1950s Jaynes [9] proposed the Maximum Entropy Principle (MaxEnt) as a general inference procedure that, among others, bears a direct relevance to statistical mechanics and thermodynamics. The conceptual frame of Jaynes's MaxEnt is formed by Shannon's communication theory with Shannon's information measure as an inference functional. The central rôle of Shannon's entropy as a tool for inductive inference (i.e., inference where new information is given in terms of expected values) was further demonstrated in the works of Faddeyev [10], Shore and Johnson [11], Wallis [12], Topsøe [13] and others. In Jaynes's procedure the laws of statistical mechanics can be viewed as inferences based entirely on prior information that is given in the form of expected values of energy, energy and number of particles, energy and volume, energy and angular momentum, etc., thus re-deriving the familiar canonical ensemble, grand-canonical ensemble, pressure ensemble, rotational ensemble, etc., respectively [14].
A remarkable feature of this procedure is that it entirely dispenses with such traditional hypotheses as ergodicity or metric transitivity. Following Jaynes, one should view the MaxEnt distribution (or maximizer) as a distribution that is maximally noncommittal with regard to missing information and that agrees with all that is known about prior information, but expresses maximum uncertainty with respect to all other matters [9]. By identifying the statistical sample space with the set of all (coarse-grained) microstates, the corresponding maximizer yields the Shannon entropy that corresponds to the Gibbs entropy of statistical physics.

Surprisingly, despite the aforementioned connection between information theory and physics, and despite related advancements in non-Shannonian information theory, tendencies aiming at similar extensions of the Gibbs entropy paradigm started to penetrate into statistical physics only in the last two decades. This happened when evidence accumulated showing that there are indeed many situations of practical interest requiring more "exotic" statistics which do not conform with Gibbsian exponential maximizers. Percolation, protein folding, critical phenomena, cosmic rays, turbulence, granular matter or stock market returns provide examples.

In attacking the problem of the generalization of Gibbs's entropy, the information-theoretic route to equilibrium statistical physics provides a very useful conceptual guide. The natural strategy that fits this framework is to revisit the axiomatic rules governing Shannon's information measure and translate potential extensions into the language of statistical physics. In fact, the usual axiomatics of Khinchin [15] is prone to several "plausible" generalizations. Among those, the additivity of independent mean information is a natural axiom to attack. Along those lines, two fundamentally distinct generalization schemes have been pursued in the literature: one redefining the statistical mean and another generalizing the additivity rule.

The first mentioned generalization was realized by Rényi by employing the most general means still compatible with the Kolmogorov axioms of probability theory. These so-called quasi-linear means were independently studied by Kolmogorov [16] and Nagumo [17]. It was shown that the generalization based on quasi-linear means unambiguously leads to the information measures known as Rényi entropies [4, 18]. Although the status of Rényi entropies (RE's) in statistical physics is still debated, they nevertheless provide an immensely important analyzing tool in classical statistical systems with a non-standard scaling behavior (e.g., fractals, multifractals, etc.) [19, 20].

On the other hand, the second approach generalizes the additivity prescription but keeps the usual linear mean. A currently popular generalization is the q-additivity prescription and the related q-calculus [21, 22]. The corresponding axiomatics [23] provides the entropy known as the Tsallis–Havrda–Charvát (THC) entropy¹. As the classical additivity of independent information is destroyed in this case, new, more exotic physical mechanisms must be sought to comply with THC predictions. Recent theoretical advances in systems with long-range interactions [26], in generalized (and specifically q-generalised) central limit theorems [27], in the theory of asymptotic scaling [28], etc., indicate that the typical playground for THC entropy should be in cases where two statistically independent systems have non-vanishing long-range/time correlations or where the notion of statistical independence is an ill-defined concept. Examples include long-range Ising models, gravitational systems, statistical systems with quantum non-locality, etc.

¹ Other important approaches such as Kaniadakis's [24] and Naudts's [25] deformed Hartley's logarithmic information also utilize linear means and a generalized additivity rule (e.g., κ-additivity), but as yet they still lack the information-theoretic axiomatics that is crucial in our reasonings. For this reason we exclude these works from our consideration.
It is clear that an appropriate combination of the above generalizations could provide a new conceptual paradigm suitable for a statistical description of systems possessing both self-similarity and non-locality. Such systems are quite pertinent, with examples spanning from early-universe cosmological phase transitions to the currently much studied quantum phase transitions (frustrated spin systems, Fermi liquids, etc.). In passing we should mention that there exists a number of works trying to compare the Rényi and THC entropies from both the theoretical and observational point of view (see, e.g., Refs. [29, 30]). Nevertheless, the merger of both entropic paradigms has not been studied yet. It is the aim of this paper to pursue this line of reasoning and explore the resulting implications. In order to set a stage for our considerations, we review in the following section some axiomatic essentials for the Shannon, Rényi and THC entropies that will be needed in the main body of the paper. In Section 3 we then formulate a new axiomatics which aims at bridging the Rényi and THC entropies. It is found that such an axiomatics allows for only one one-parametric family of solutions. Basic properties of the new entropy, which we denote as $\mathcal{D}_q$, are discussed. A simplification that $\mathcal{D}_q$ undergoes in multifractal systems is particularly emphasized. The corresponding MaxEnt distributions are calculated in Section 4. We utilize both linear and non-linear moment constraints (applied to the energy) to achieve this goal. In both aforementioned cases the distributions are expressible through the Lambert W-function. Since the analytic structure of the MaxEnt distributions is too complex, we confine our analysis to the corresponding "high-" and "low-temperature" asymptotics and discuss the ensuing non-trivial structure of the parameter space. In Section 5 we discuss the concavity and Schur-concavity of $\mathcal{D}_q$. Section 6 is devoted to conclusions. The paper is supplemented with three appendices which clarify some finer mathematical points.

2. Brief review of entropy axiomatics

The information measure, or simply entropy, is supposed to represent the measure or degree of uncertainty or expectation in conveyed information which is going to be removed by the recipient. As a rule in information theory, the exact value of entropy depends only on the information source, more specifically on the statistical nature of the source. Generally speaking, the higher the information measure, the higher the ignorance about the system (source), and thus the more information will be uncovered after the message is received (or an actual measurement is performed). As often happens, this simple scenario is frequently not tenable, as various restrictive factors are present in realistic situations: finite buffer capacity, global patterns in messages, topologically non-trivial sample spaces, etc. One may even entertain various information-theoretic implications related to the quantum probability calculus or quantum communication channels. Thus, as we go to somewhat more elaborate and realistic models, the entropy prescriptions get more complicated and realistic.

To see why a new generalization of the entropy is desirable, let us briefly dwell on the three most common entropy protagonists, namely Shannon's, Rényi's and the THC entropy.

2.1. Shannon's entropy: Khinchin axioms

The best known and most widely used information measure is Shannon's entropy.
For completeness' sake we now briefly recapitulate the Khinchin axiomatics, as this will prove important in what follows. It consists of four axioms [15]:

1. For a given integer n and given $\mathcal{P} = \{p_1, p_2, \ldots, p_n\}$ ($p_k \geq 0$, $\sum_k^n p_k = 1$), $\mathcal{H}(\mathcal{P})$ is continuous with respect to all its arguments.
2. For a given integer n, $\mathcal{H}(p_1, p_2, \ldots, p_n)$ takes its largest value for $p_k = 1/n$ ($k = 1, 2, \ldots, n$).
3. $\mathcal{H}(A \cup B) = \mathcal{H}(A) + \mathcal{H}(B|A)$ with $\mathcal{H}(B|A) = \sum_k p_k\, \mathcal{H}(B|A = A_k)$, where the distribution $\mathcal{P}$ corresponds to the experiment A.
4. $\mathcal{H}(p_1, p_2, \ldots, p_n, 0) = \mathcal{H}(p_1, p_2, \ldots, p_n)$, i.e., by adding an event of probability zero (impossible event) we do not gain any new information.

The corresponding information measure, Shannon's entropy, then reads (up to the normalization constant²)

$$\mathcal{H}(\mathcal{P}) \,=\, -\sum_{k=1}^{n} p_k \ln p_k\,. \qquad (1)$$

In passing we should stress two important points. Firstly, the 3rd axiom (known as the separability or strong additivity axiom) indicates that Shannon's entropy of two independent experiments (sources) is additive. Secondly, there is an intimate connection between the Boltzmann–Gibbs entropy and Shannon's entropy. In fact, thermodynamics can be viewed as a specific application of Shannon's information theory: the thermodynamic entropy may be interpreted (when rescaled to "bit" units) as the amount of Shannon information needed to define the detailed microscopic state of the system, which remains "uncommunicated" by a description that is solely in terms of thermodynamic state variables.

² The normalization influences the base of the logarithm. In information theory it is common to choose the normalization $\mathcal{H}(\frac{1}{2}, \frac{1}{2}) = 1$, leading to binary logarithms. We adopt physical conventions and in the whole text use the normalization leading to natural logarithms.

2.2. Rényi's entropy: entropy of multifractal systems

As already mentioned, RE represents a step further towards more realistic situations encountered in information theory. Among a myriad of information measures, RE's distinguish themselves by firm operational characterizations. These were established by Arikan [31] for the theory of guessing, by Jelinek [32] for the buffer overflow problem in lossless source coding, by Campbell [33] for the lossless variable-length coding problem with an exponential cost constraint, etc. Recently, an interesting operational characterization of RE was provided by Csiszár [34] in terms of block coding and hypotheses testing. In the latter case the Rényi parameter q was directly related to so-called β-cutoff rates [34].

Apart from information theory, RE's have proved to be an indispensable tool also in numerous branches of physics. Typical examples are provided by chaotic dynamical systems and multifractal statistical systems (see, e.g., [35] and citations therein). Fully developed turbulence, earthquake analysis and generalized dimensions of strange attractors provide examples. The RE of order q (q > 0) of a discrete distribution $\mathcal{P} = \{p_1, \ldots, p_n\}$ is defined as

$$\mathcal{I}_q(\mathcal{P}) \,=\, \frac{1}{1-q}\,\ln\left(\sum_{k=1}^{n} (p_k)^q\right). \qquad (2)$$

In his original work, Rényi [4, 18] introduced the one-parameter family of information measures (2), which he based on axiomatic considerations. In the course of time these axioms have been sharpened by Daróczy [36] and others [37]. Most recently it was shown that RE can be uniquely derived from the following set of axioms [35]:

1. For a given integer n and given $\mathcal{P} = \{p_1, p_2, \ldots, p_n\}$ ($p_k \geq 0$, $\sum_k^n p_k = 1$), $\mathcal{I}(\mathcal{P})$ is continuous with respect to all its arguments.
2. For a given integer n, $\mathcal{I}(p_1, p_2, \ldots, p_n)$ takes its largest value for $p_k = 1/n$ ($k = 1, 2, \ldots, n$).
3. For a given $q \in \mathbb{R}$: $\mathcal{I}(A \cup B) = \mathcal{I}(A) + \mathcal{I}(B|A)$ with $\mathcal{I}(B|A) = g^{-1}\!\left(\sum_k \varrho_k(q)\, g(\mathcal{I}(B|A = A_k))\right)$ and $\varrho_k(q) = p_k^q / \sum_k p_k^q$ (the distribution $\mathcal{P}$ corresponds to the experiment A). Here g is invertible and positive in $[0, \infty)$.
4. $\mathcal{I}(p_1, p_2, \ldots, p_n, 0) = \mathcal{I}(p_1, p_2, \ldots, p_n)$.
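Before commenting on these axioms, a small numerical sketch (ours, not part of the original paper) may help fix conventions: it evaluates Eqs. (1) and (2) for an illustrative four-event distribution and probes the behavior around q = 1; all function names and numbers below are our own choices.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of Eq. (1), in nats; zero-probability events carry no weight."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi_entropy(p, q):
    """Renyi entropy of order q, Eq. (2); reduces to Shannon's entropy as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return shannon_entropy(p)
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p**q)) / (1.0 - q)

p = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p))       # 1.2130... = 1.75*ln(2) nats
print(renyi_entropy(p, 0.5))    # >= H, since I_q >= H for q <= 1
print(renyi_entropy(p, 2.0))    # <= H, since I_q <= H for q >= 1
```

The printed values illustrate the monotone decrease of $\mathcal{I}_q$ with q discussed below.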
The former axioms markedly differ from those utilized in [4, 18, 36, 37]. Particularly distinctive is the presence of the escort (or zooming) distribution $\varrho(q)$ in the 3rd axiom. The distribution $\varrho(q)$ was originally introduced by Rényi [4] to define the entropy associated with the joint distribution. Quite independently, $\varrho(q)$ was introduced by Beck and Schlögl [39] in the context of non-linear dynamics.

We briefly remind the reader of some elementary properties of $\mathcal{I}_q$: it is symmetric in all arguments; for $q \leq 1$, $\mathcal{I}_q$ is a concave function and $\mathcal{H}(\mathcal{P}) \leq \mathcal{I}_q(\mathcal{P})$, while for $q \geq 1$ it is neither concave nor convex and $\mathcal{I}_q(\mathcal{P}) \leq \mathcal{H}(\mathcal{P})$. On the other hand, RE's of any order are Schur-concave functions [38]. In fact, every function $f(\mathcal{P})$ which is Schur-concave can represent a reasonable measure of information, since it is maximized by the uniform probability distribution, while the minimum is provided by the concentrated distributions $\mathcal{P} = \{p_i = 1, p_{j \neq i} = 0\}$. Some further properties can be found, e.g., in Refs. [4, 18, 35].

Note particularly that the RE of two independent experiments (sources) is additive. In fact, it was proved in Ref. [4] that RE is the most general information measure compatible with the additivity of independent information and the Kolmogorov axioms of probability theory.

2.3. THC entropy: entropy of long-distance correlated systems

THC entropy was originally introduced in 1967 by Havrda and Charvát in the context of information theory of computerized systems [2], and together with the α-norm entropy measure [40] it belongs to the class of pseudo-additive entropies. In contrast with Rényi's or Shannon's entropy, THC entropy does not have (as yet) an operational characterization. The Havrda–Charvát structural entropy, though quite well known among information theorists, had remained largely unknown in the physics community. It took more than two decades till Tsallis in his pioneering work [41] on generalized (or non-extensive) statistics rediscovered this entropy. Since then THC entropy has been employed in many physical systems. In this connection one may particularly mention Hamiltonian systems with long-range interactions, granular systems, complex networks, stock market returns, etc. For a recent review see, e.g., Ref. [42].

In the case of a discrete distribution $\mathcal{P} = \{p_1, \ldots, p_n\}$ the THC entropy takes the form:

$$\mathcal{S}_q(\mathcal{P}) \,=\, \frac{1}{1-q}\left(\sum_{k=1}^{n} (p_k)^q - 1\right), \qquad q > 0\,. \qquad (3)$$

Various axiomatic treatments of THC entropy were proposed in the literature. For our purpose the most convenient set of axioms is the following [23]:

1. For a given integer n and given $\mathcal{P} = \{p_1, p_2, \ldots, p_n\}$ ($p_k \geq 0$, $\sum_k^n p_k = 1$), $\mathcal{S}(\mathcal{P})$ is continuous with respect to all its arguments.
2. For a given integer n, $\mathcal{S}(p_1, p_2, \ldots, p_n)$ takes its largest value for $p_k = 1/n$ ($k = 1, 2, \ldots, n$).
3. For a given $q \in \mathbb{R}$: $\mathcal{S}(A \cup B) = \mathcal{S}(A) + \mathcal{S}(B|A) + (1-q)\,\mathcal{S}(A)\,\mathcal{S}(B|A)$ with $\mathcal{S}(B|A) = \sum_k \varrho_k(q)\, \mathcal{S}(B|A = A_k)$, and $\varrho_k(q) = p_k^q / \sum_k p_k^q$ (the distribution $\mathcal{P}$ corresponds to the experiment A).
4. $\mathcal{S}(p_1, p_2, \ldots, p_n, 0) = \mathcal{S}(p_1, p_2, \ldots, p_n)$.

As we said before, one keeps here the linear mean but generalizes the additivity law. In fact, the additivity law in axiom 3 is nothing but the Jackson sum known from the q-calculus [43]; there one defines the Jackson basic number $[X]_{\{q\}}$ of a quantity X as

$$[X]_{\{q\}} \,=\, (q^X - 1)/(q - 1) \;\;\Rightarrow\;\; [X+Y]_{\{q\}} \,=\, [X]_{\{q\}} + [Y]_{\{q\}} + (q-1)\,[X]_{\{q\}}[Y]_{\{q\}}\,. \qquad (4)$$

The connection with axiom 3 is then established when $q \to (2-q)$. A nice feature of the q-calculus is that it formalizes many mathematical manipulations. For instance, using the q-logarithm

$$\ln_{\{q\}} x \,=\, -\ln_{\{2-q\}}\!\left(\frac{1}{x}\right) \,=\, \frac{1}{1-q}\left(x^{1-q} - 1\right), \qquad (5)$$

THC entropy can be concisely written as the q-deformed Shannon entropy, i.e.,

$$\mathcal{S}_q(\mathcal{P}) \,=\, -\sum_{k=1}^{n} p_k \ln_{\{2-q\}} p_k \,=\, -\sum_{k=1}^{n} p_k^q \ln_{\{q\}} p_k \,=\, \sum_{k=1}^{n} p_k \ln_{\{q\}}\!\left(\frac{1}{p_k}\right). \qquad (6)$$
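The equivalences in Eq. (6) are easy to verify numerically. The sketch below (ours, not part of the paper) implements the q-logarithm (5) and checks all three forms of (6) against the defining expression (3); the distribution and the value of q are arbitrary illustrative choices.

```python
import numpy as np

def q_log(x, q):
    """q-logarithm of Eq. (5): ln_{q}(x) = (x**(1-q) - 1)/(1-q)."""
    return np.log(x) if abs(q - 1.0) < 1e-12 else (x**(1.0 - q) - 1.0) / (1.0 - q)

def thc_entropy(p, q):
    """THC (Tsallis-Havrda-Charvat) entropy, Eq. (3)."""
    return (np.sum(p**q) - 1.0) / (1.0 - q)

p, q = np.array([0.5, 0.3, 0.2]), 1.7
print(thc_entropy(p, q))                 # defining form, Eq. (3)
print(-np.sum(p * q_log(p, 2.0 - q)))    # first form of Eq. (6)
print(-np.sum(p**q * q_log(p, q)))       # second form of Eq. (6)
print(np.sum(p * q_log(1.0 / p, q)))     # third form of Eq. (6)
# all four printed values coincide (~0.712 for these numbers)
```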
Some elementary properties of $\mathcal{S}_q$ are positivity, concavity (and Schur-concavity) for all values of q and, indeed, non-extensivity. There hold also inequalities between all three entropies, namely:

$$\mathcal{H}(\mathcal{P}) \,\leq\, \mathcal{I}_q(\mathcal{P}) \,\leq\, \mathcal{S}_q(\mathcal{P})\,, \qquad (7)$$

for $0 < q \leq 1$, and

$$\mathcal{S}_q(\mathcal{P}) \,\leq\, \mathcal{I}_q(\mathcal{P}) \,\leq\, \mathcal{H}(\mathcal{P})\,, \qquad (8)$$

for $q \geq 1$. For a monograph that covers this subject in more depth the reader is referred to Ref. [44].

3. J-A axioms and solutions

It would be conceptually desirable to have a unifying axiomatic framework in which the properties of both the Rényi and THC entropies are represented. In Ref. [56] one of us proposed the following natural synthesis of the previous two axiomatics:

1. For a given integer n and given $\mathcal{P} = \{p_1, p_2, \ldots, p_n\}$ ($p_k \geq 0$, $\sum_k^n p_k = 1$), $\mathcal{D}(\mathcal{P})$ is continuous with respect to all its arguments.
2. For a given integer n, $\mathcal{D}(p_1, p_2, \ldots, p_n)$ takes its largest value for $p_k = 1/n$ ($k = 1, 2, \ldots, n$).
3. For a given $q \in \mathbb{R}$: $\mathcal{D}(A \cup B) = \mathcal{D}(A) + \mathcal{D}(B|A) + (1-q)\,\mathcal{D}(A)\,\mathcal{D}(B|A)$ with $\mathcal{D}(B|A) = f^{-1}\!\left(\sum_k \varrho_k(q)\, f(\mathcal{D}(B|A = A_k))\right)$, and $\varrho_k(q) = p_k^q / \sum_k p_k^q$ (the distribution $\mathcal{P}$ corresponds to the experiment A). The function f is invertible and positive in $[0, \infty)$.
4. $\mathcal{D}(p_1, p_2, \ldots, p_n, 0) = \mathcal{D}(p_1, p_2, \ldots, p_n)$.

[Figure 1: three panels comparing the hybrid entropy $\mathcal{D}_q(p)$, the Tsallis entropy $\mathcal{S}_q(p)$ and the Rényi entropy $\mathcal{I}_q(p)$ as functions of p, for q = 0.4, 0.5, 0.75, 1, 1.5, 2 and 3. Caption: Comparison of entropies for several values of q for two-event systems ($\mathcal{P} = \{p, 1-p\}$). The dashed curve represents the hybrid entropy $\mathcal{D}_{0.4}$, which violates the maximality axiom.]

Note particularly that due to the non-linear nature of the non-additivity condition there is no need to select a normalization condition for $\mathcal{D}_q$. In Ref. [56] it was shown that the above axioms allow for only one class of solutions, leading to an entirely new family of physically conceivable entropy functions. For the reader's convenience the basic steps of the proof are sketched in Appendix A. In particular, the resulting hybrid entropy has the following form:

$$\mathcal{D}_q(A) \,=\, \frac{1}{1-q}\left(e^{-(1-q)^2\, \mathrm{d}\mathcal{I}_q/\mathrm{d}q}\, \sum_{k=1}^{n} (p_k)^q \,-\, 1\right) \,=\, \ln_{\{q\}} e^{-\langle \ln \mathcal{P} \rangle_q}\,. \qquad (9)$$

Let us further remark that axiom 4 restricts the possible values of q to $q \geq 0$. This is because $\mathcal{D}_q$ would otherwise tend to infinity if some of the $p_k$ tended to zero. The latter would be counter-intuitive, because without changing the probability distribution we would gain an infinite information. The value q = 0 must also be ruled out on the basis of axiom 2, because $\mathcal{D}_0$ would yield an expression not dependent on the probability distribution $\mathcal{P}$ but only on the number of outcomes (or events), i.e., $\mathcal{D}_0$ would be system (source) insensitive. In addition, by further analysis in Appendix A, supported by the concept of Schur-concavity in Section 5, we show that $\mathcal{D}_q$ is well defined only for $q \geq \frac{1}{2}$. In particular, for $q < \frac{1}{2}$ the entropy $\mathcal{D}_q$ has a local minimum at $\mathcal{P} = \{1/n, \ldots, 1/n\}$ (rather than a maximum) and therefore does not fulfill axiom 2. Some basic properties of the hybrid entropy $\mathcal{D}_q$ are presented in Appendix B.

Before studying further implications of the formula (9), there are two immediate consequences which warrant special mention. The first is that, from the condition $\mathrm{d}\mathcal{I}_q/\mathrm{d}q \leq 0$ (see Section 2.2), we have

$$\mathcal{D}_q(A) \;\begin{cases} \leq\; \mathcal{S}_q(A) & \text{if } q \geq 1\,, \\ \geq\; \mathcal{S}_q(A) & \text{if } q \leq 1\,, \end{cases} \qquad (10)$$

where equality holds if and only if q = 1 or $\mathrm{d}\mathcal{I}_q/\mathrm{d}q = 0$. This means that either $\mathcal{D}_q(A)$ and $\mathcal{S}_q(A)$ jointly coincide with Shannon's entropy or that $\mathcal{P}$ is uniform or $\{1, 0, \ldots, 0\}$. Hence, combining this with the inequalities between the THC, Rényi and Shannon entropies, we obtain

$$0 \,\leq\, \mathcal{H}(\mathcal{P}) \,\leq\, \mathcal{I}_q(\mathcal{P}) \,\leq\, \mathcal{S}_q(\mathcal{P}) \,\leq\, \mathcal{D}_q(\mathcal{P}) \,\leq\, \ln_{\{q\}} n \quad \text{for } \tfrac{1}{2} \leq q \leq 1\,,$$
$$0 \,\leq\, \mathcal{D}_q(\mathcal{P}) \,\leq\, \mathcal{S}_q(\mathcal{P}) \,\leq\, \mathcal{I}_q(\mathcal{P}) \,\leq\, \mathcal{H}(\mathcal{P}) \,\leq\, \ln n \quad \text{for } q \geq 1\,. \qquad (11)$$
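As a quick numerical cross-check of (9)–(11) (our sketch, not part of the paper), the snippet below restates Eqs. (1)–(3) as one-liners, implements the second form of Eq. (9), and prints all four entropies for one q below and one q above unity; the three-event distribution is an arbitrary choice.

```python
import numpy as np

p = np.array([0.6, 0.3, 0.1])   # arbitrary strictly positive test distribution

H = lambda p: -np.sum(p * np.log(p))                # Shannon, Eq. (1)
I = lambda p, q: np.log(np.sum(p**q)) / (1.0 - q)   # Renyi, Eq. (2)
S = lambda p, q: (np.sum(p**q) - 1.0) / (1.0 - q)   # THC, Eq. (3)

def D(p, q):
    """Hybrid entropy, second form of Eq. (9): D_q = ln_{q} exp(-<ln P>_q)."""
    rho = p**q / np.sum(p**q)            # escort distribution rho_k(q)
    mean_log = np.sum(rho * np.log(p))   # <ln P>_q
    return (np.exp(-(1.0 - q) * mean_log) - 1.0) / (1.0 - q)

for q in (0.7, 1.8):
    print(q, H(p), I(p, q), S(p, q), D(p, q))
# q = 0.7:  H <= I_q <= S_q <= D_q  (first chain in Eq. (11))
# q = 1.8:  D_q <= S_q <= I_q <= H  (second chain in Eq. (11))
```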
The result (11) implies that by investigating the information measure $\mathcal{D}_q$ with q < 1 we receive more information than by restricting our investigation just to the entropies $\mathcal{I}_q$ or $\mathcal{S}_q$. On the other hand, when q > 1 both $\mathcal{I}_q$ and $\mathcal{S}_q$ are more informative than $\mathcal{D}_q$. The first set of inequalities is also valid for $q < \frac{1}{2}$, but the last relation to $\ln_{\{q\}} n$ is not true for the hybrid entropy. A practical illustration of the above inequalities can be seen in Fig. 1 for the simple distribution $\mathcal{P} = \{p, 1-p\}$.

In practical cases one usually requires more than one q to gain more complete information about the system. In fact, when the entropies $\mathcal{I}_q$ or $\mathcal{S}_q$ are used, it is necessary to know them for all q in order to obtain full information on a given statistical system [35]. For ensuing applications in strange attractors the reader may consult Ref. [46]; for reconstruction theorems see, e.g., Refs. [4, 35].

The second comment to be made concerns the fact that when the statistical system in question is a multifractal³, the relations (C.2)–(C.6) assert that

$$(1-q)^2\, \frac{\mathrm{d}\mathcal{I}_q}{\mathrm{d}q} \,=\, (a - f(a))\ln\varepsilon \,=\, \ln\left(\sum_k^{N(a)} p_k(\varepsilon)\right), \qquad (12)$$

where the summation runs only over the support boxes of size ε with the scaling exponent a. Alternatively, we could have started with the first relation in Eq. (A.19) and used the multifractal canonical relations (see Ref. [35]), in which case the result would again have been (12). So for the coarse-grained multifractal with mesh size ε the corresponding entropy $\mathcal{D}_q$ reads

$$\mathcal{D}_q(A) \,=\, \frac{1}{1-q}\left(\frac{\sum_{k=1}^{n} (p_k(\varepsilon))^q}{\sum_k^{N(a)} p_k(\varepsilon)} \,-\, 1\right). \qquad (13)$$

³ The necessary essentials on multifractals are presented in Appendix C.

Now, the passage from multifractals to single-dimensional statistical systems is done by assuming that the a-interval is infinitesimally narrow and that the PDF is smooth [35, 47]. In such a case Cvitanović's condition [47] holds, namely both a and f(a) collapse to $a = f(a) \equiv D$ and $q = f'(a) = 1$. So, for example, for a statistical system with a smooth PDF and the support space $\mathbb{R}^d$ the relation (13) implies that the entropy $\mathcal{D}_q$ coincides with Shannon's $\mathcal{H}$. In this connection it is important to stress that the similarity of (13) with THC entropy is only apparent. In order to have THC entropy one needs to have N(a) = n, i.e., the entire probability measure must be accumulated around the unifractal with the scaling exponent a. According to the Billingsley (or curdling) theorem [48, 49] this is possible only when a = f(a) = D, i.e., only when $\mathcal{D}_q = \mathcal{H}$. As a byproduct of Eq. (11) we may notice that for single-dimensional systems with smooth PDF's both $\mathcal{S}_q$ and $\mathcal{I}_q$ must approach Shannon's entropy [35]. We remark that this may help to understand why Shannon's entropy plays such a predominant rôle in the physics of single-dimensional sets.

In what follows, we examine the class of distributions that represent maximizers for $\mathcal{D}_q(A)$ subject to the constraint imposed by the average value of energy.

4. MaxEnt distribution

According to information theory, the MaxEnt principle yields distributions which reflect least bias and maximum uncertainty about information not provided to a recipient (i.e., observer). An important feature of the usual Gibbsian MaxEnt formalism is that the maximal value of entropy is a concave function of the values of the prescribed constraints (moments), and the maximizing probabilities are all greater than zero [50]. The first is important for thermodynamical stability and the second for mathematical consistency. In this section we will see that both mentioned features hold true also in the case of the $\mathcal{D}_q$ entropy.
Let us first address the issue of maximizers for $\mathcal{D}_q$. To this end we shall seek the conditional extremum of $\mathcal{D}_q$ subject to the constraint imposed by the averaged value of energy E (or generally any random quantity representing a constant of the motion) in the form

$$\langle E \rangle_r \,=\, \sum_k \varrho_k(r)\, E_k\,. \qquad (14)$$

For future convenience we initially keep r not necessarily coincident with q. Taking into account the normalization condition for the $p_i$, we ought to extremize the functional

$$L_{q,r}(\mathcal{P}) \,=\, \mathcal{D}_q(\mathcal{P}) \,-\, \Omega\, \frac{\sum_k (p_k)^r E_k}{\sum_k (p_k)^r} \,-\, \Phi \sum_k p_k\,, \qquad (15)$$

with Ω and Φ being the Lagrange multipliers. Setting the derivatives of $L_{q,r}(\mathcal{P})$ with respect to $p_1, p_2, \ldots$, etc., to zero, we obtain

$$\frac{\partial L_{q,r}(\mathcal{P})}{\partial p_i} \,=\, e^{(q-1)\sum_k \varrho_k(q) \ln p_k}\left[q\left(\langle \ln \mathcal{P} \rangle_q - \ln p_i\right) - 1\right] \frac{(p_i)^{q-1}}{\sum_k (p_k)^q} \,-\, \frac{r\,\Omega\,(p_i)^{r-1}}{\sum_k (p_k)^r}\left(E_i - \langle E \rangle_r\right) \,-\, \Phi \,=\, 0\,, \qquad i = 1, 2, \ldots, n\,. \qquad (16)$$

Note that when both q and r approach 1, (16) reduces to the usual condition for Shannon's maximizer. This, in turn, ensures that in the $(q, r) \to (1, 1)$ limit the maximizer of (15) is Gibbs's canonical-ensemble distribution. Let us now concentrate on the two most relevant situations, namely when r = q and r = 1.

4.1. The r = q case

When we decide to use r = q (i.e., when the non-linear moment constraints are implemented via the escort distribution), it follows from (16) that

$$\Phi\,(p_i)^{1-q} \sum_k (p_k)^q \,=\, e^{(q-1)\langle \ln \mathcal{P} \rangle_q}\left[q\left(\langle \ln \mathcal{P} \rangle_q - \ln p_i\right) - 1\right] \,-\, q\,\Omega\left(E_i - \langle E \rangle_q\right). \qquad (17)$$

Multiplying both sides of (17) by $\varrho_i(q)$, summing over i and using the normalization condition $\sum_k p_k = 1$, we obtain

$$-\Phi \,=\, e^{(q-1)\langle \ln \mathcal{P} \rangle_q} \;\Rightarrow\; \langle \ln \mathcal{P} \rangle_q \,=\, \frac{\ln(-\Phi)}{q-1} \;\Rightarrow\; \mathcal{D}_q(\mathcal{P})|_{\max} \,=\, \frac{\Phi + 1}{q - 1}\,. \qquad (18)$$

Plugging the result (18) back into (17) we obtain, after some algebra,

$$\sum_k (p_k)^q \,=\, (p_i)^{q-1}\left[q \ln p_i \,+\, 1 \,-\, \frac{q \ln(-\Phi)}{q-1} \,-\, \frac{q\,\Omega}{\Phi}\left(E_i - \langle E \rangle_q\right)\right], \qquad (19)$$

which must be true for any index i. Upon the substitution

$$\mathcal{E}_i \,=\, 1 \,-\, \frac{q \ln(-\Phi)}{q-1} \,-\, \frac{q\,\Omega}{\Phi}\left(E_i - \langle E \rangle_q\right), \qquad (20)$$

this leads to the equation

$$\kappa\,(p_i)^{1-q} \,=\, q \ln p_i \,+\, \mathcal{E}_i\,. \qquad (21)$$

Here we have denoted $\sum_k (p_k)^q \equiv \kappa$. Equation (21) has the solution

$$p_i \,=\, \left[\frac{q}{\kappa(q-1)}\, W\!\left(\frac{\kappa(q-1)}{q}\, e^{(q-1)\mathcal{E}_i/q}\right)\right]^{1/(1-q)} \,=\, \exp\left[\frac{1}{q-1}\, W\!\left(\frac{\kappa(q-1)}{q}\, e^{(q-1)\mathcal{E}_i/q}\right) \,-\, \frac{\mathcal{E}_i}{q}\right], \qquad (22)$$

with W(x) being the Lambert W-function [51].

A couple of comments are now in order. First, the $p_i$'s as prescribed by (22) are positive for any value of q > 0. This is a straightforward consequence of the following two identities [51]:

$$W(x) \,=\, \sum_{n=1}^{\infty} \frac{(-1)^{n-1}\, n^{n-2}}{(n-1)!}\, x^n\,, \qquad (23)$$

$$W(x) \,=\, x\, e^{-W(x)}\,. \qquad (24)$$

Indeed, Eq. (23) ensures that for x < 0 also W(x) < 0 and hence W(x)/x > 0. Thus for 0 < q < 1 the positivity of the $p_i$'s is a simple consequence of the first part of (22). Positivity for $q \geq 1$ follows directly from the relation (24) and the second part of (22).

Second, as $q \to 1$ the entropy $\mathcal{D}_q \to \mathcal{H}$ and we expect that the $p_i$'s defined by (22) should approach the Gibbs canonical-ensemble distribution in this limit. To see that this is indeed the case, let us note that

$$\Phi|_{q=1} \,=\, -1\,, \qquad \mathcal{E}_i|_{q=1} \,=\, 1 + \mathcal{H} + \Omega\left(E_i - \langle E \rangle\right), \qquad \text{and} \qquad \kappa|_{q=1} \,=\, 1\,. \qquad (25)$$

Then

$$p_i|_{q=1} \,=\, e^{1 - [1 + \mathcal{H} + \Omega(E_i - \langle E \rangle)]} \,=\, e^{\Omega F - \Omega E_i} \,=\, e^{-\Omega E_i}/Z\,, \qquad (26)$$

(here F is the Helmholtz free energy), which after the identification $\Omega|_{q=1} = \beta$ leads to the desired result. Note also that (22) is invariant under a uniform translation of the energy spectrum, i.e., the corresponding $p_i$ is independent of the choice of the energy origin.

Third, there are situations when Eq. (21) has no solution, or when it gives a solution with $p_i \notin [0, 1]$. To see this, we may notice that when q > 1 the left-hand side of (21) is greater than κ, from which it follows that $\mathcal{E}_i \geq \kappa$ for all i's. For q < 1 the left-hand side of (21) acquires values from [0, κ], which (after using the fact that q < 1 < κ) leads again to the condition $\mathcal{E}_i \geq \kappa$. In both cases the $\mathcal{E}_i$ are therefore positive. Thus, for energies for which $\Delta_q E_i = E_i - \langle E \rangle_q$ is too negative, Eq. (21) has no solution, and the corresponding occupation probability is zero. Contrary to the MaxEnt distributions of other commonly used entropies, there exist here energy levels for which the MaxEnt distributions of $\mathcal{D}_q$ have zero occupation probabilities. This might provide a natural conceptual playground for statistical systems with energy gaps (e.g., disordered systems, carbon nanotubes) or for systems with various super-selection rules (e.g., first-quantized relativistic systems).
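Although κ, Φ and Ω must in general be determined self-consistently from the constraints (see below), it is straightforward to verify numerically that the Lambert-W form (22) solves the extremum condition (21). The following sketch is ours; the values of q, κ and the shifted energy $\mathcal{E}_i$ of Eq. (20) are purely illustrative rather than derived from any actual constraint.

```python
import numpy as np
from scipy.special import lambertw

def maxent_p(E_shift, q, kappa):
    """Second form of Eq. (22): p_i = exp[ W(c * exp((q-1)*E_i/q))/(q-1) - E_i/q ],
    with c = kappa*(q-1)/q and E_i the shifted energy of Eq. (20)."""
    c = kappa * (q - 1.0) / q
    w = lambertw(c * np.exp((q - 1.0) * E_shift / q)).real
    return np.exp(w / (q - 1.0) - E_shift / q)

q, kappa, E_shift = 1.5, 1.2, 3.0   # illustrative numbers only
p = maxent_p(E_shift, q, kappa)
print(p)                            # ~0.447, a valid probability here
# check the extremum condition (21): kappa * p**(1-q) == q*ln(p) + E_i
print(kappa * p**(1.0 - q), q * np.log(p) + E_shift)   # both sides ~1.794
```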
Finally, there does not seem to be any simple method for a unique determination of Φ and Ω from the constraint conditions⁴. In fact, only the asymptotic situations of large and vanishingly small Ω can be successfully resolved (this will be relegated to Sections 4.1.2 and 4.1.3). There exist, however, systems of practical interest, namely multifractal systems, where we can give the relations (22) a very satisfactory physical interpretation without resolving $\langle E \rangle_q$ in terms of Φ and Ω.

⁴ In conventional statistical physics one does not solve Ω in terms of the averaged energy (i.e., the internal energy U), since Ω can be identified with the inverse temperature, which is a much more fundamental quantity than U. In fact, it is U that is typically given as a function of Ω.

4.1.1. Multifractal case

In case the statistical system under investigation fits the multifractal paradigm⁵, we can cast Eq. (21) in the form

$$\varepsilon^{\tau(q) + a_i(1-q)} \,\sim\, 1 \,+\, q\left[a_i - \langle a \rangle_q(\varepsilon)\right]\left(1 + \frac{\Omega}{\Phi}\right)\ln\varepsilon\,, \qquad (27)$$

where τ and $a_i$ are the correlation exponent and the Lipschitz–Hölder exponent, respectively. Note that the q-mean $\langle a \rangle_q(\varepsilon)$ at the coarse-grained scale ε is proportional to the q-mean of the log-PDF, namely

$$\langle \ln \mathcal{P} \rangle_q \,=\, \sum_k \varrho_k(q) \ln p_k \,\sim\, \sum_k \varrho_k(q)\, a_k \ln\varepsilon \,=\, \langle a \rangle_q(\varepsilon) \ln\varepsilon\,. \qquad (28)$$

So, in particular, $\Phi = -\varepsilon^{(q-1)\langle a \rangle_q}$, as can be directly deduced from Eq. (18).

⁵ For a brief introduction to multifractals see Appendix C.

Equation (27) has several important implications. Firstly, we remind the reader that in the long-wave limit (i.e., when ε → 0) one can use the analogy with ordinary statistical thermodynamics and interpret $\langle a \rangle_q$ as the most likely value of the "energy" of a system immersed in a heat bath with the effective inverse temperature β = q (see, e.g., Ref. [35]). This is a version of the Billingsley (or curdling) theorem [48, 49, 68], which states that the Hausdorff dimension of the set on which the escort probability $\varrho_k(q)$ is concentrated is $f(\langle a \rangle_q) = q \langle a \rangle_q - \tau(q)$. In addition, the relative probability of the complement set approaches zero when ε → 0. This in turn means that for each q there exists one scaling exponent, namely $a_i = \langle a \rangle_q$, which dominates, e.g., the partition function κ, whereas the $p_i$'s with other Lipschitz–Hölder exponents contribute only marginally.

Note that the aforesaid indeed mimics the situation occurring in equilibrium statistical physics. There, in the canonical formalism, one works with a (usually infinite) ensemble of identical systems with all possible energy configurations. But only the configurations with $E_i \sim \langle E \rangle_\beta$ dominate the partition function in the thermodynamic limit. The choice of temperature T = 1/β then prescribes the contributing energy configurations.

Secondly, for small ε we have

$$\tau(q) + a_i(1-q) \,\sim\, \ln\left\{1 + q\left[a_i - \langle a \rangle_q(\varepsilon)\right]\left(1 + \frac{\Omega}{\Phi}\right)\ln\varepsilon\right\}\Big/\ln\varepsilon\,. \qquad (29)$$

The right-hand side is non-trivial only when

$$\left|q\left(1 + \frac{\Omega}{\Phi}\right)\right| \,<\, \frac{1}{\sqrt{-\ln\varepsilon}}\,. \qquad (30)$$

[Note that $|a_i - \langle a \rangle_q| \sim 1/\sqrt{-\ln\varepsilon}$, see Appendix C.] In such a case Eq. (29) can be recast in the form

$$\tau(q) + a_i(1-q) \,\sim\, q\left(1 + \frac{\Omega}{\Phi}\right)\left[a_i - \langle a \rangle_q(\varepsilon)\right], \qquad (31)$$

implying that $|\Omega/\Phi| = (2q-1)/q$. With the help of (30) this means that $q \in [1 - 1/\sqrt{-\ln\varepsilon},\; 1 + 1/\sqrt{-\ln\varepsilon}]$. Bearing this in mind, we can write the single-cell probability $p_i \sim \varepsilon^{a_i}$ as

$$p_i \,\sim\, \left[1 + (1-q)\left(a_i - \langle a \rangle_q\right)\ln\varepsilon\right]^{1/(1-q)}\,. \qquad (32)$$

In multifractals it is more customary to consider the total probability of a phenomenon with a scaling exponent $a_i$, i.e., $P_i(a) \sim \varepsilon^{-f(a_i) + a_i}$. To this end we can first utilize the simple quadratic expansion
$$f(a_i) - f(\langle a \rangle_q) \,=\, q\left(a_i - \langle a \rangle_q\right) \,+\, \frac{1}{2}\, f''(\langle a \rangle_q)\left(a_i - \langle a \rangle_q\right)^2 \,+\, \cdots \,=\, q\left(a_i - \langle a \rangle_q\right) \,+\, \frac{1}{2}\,\frac{\left(a_i - \langle a \rangle_q\right)^2}{(\Delta a)^2 \ln\varepsilon} \,+\, \cdots\,. \qquad (33)$$

In the last equality we have employed Eqs. (C.7)–(C.8). Note also that the higher-order terms in the expansion (33) are of the order $\mathcal{O}\!\left((-\ln\varepsilon)^{-3/2}\right)$. From (27) and (33) we then get

$$P_i^{(1-q)}(a) \,\propto\, 1 \,+\, q\left[a_i - \langle a \rangle_q\right]\left(1 + \frac{\Omega}{\Phi}\right)\ln\varepsilon \,-\, (1-q)\, q\left[a_i - \langle a \rangle_q\right]\ln\varepsilon \,-\, (1-q)\,\frac{\left(a_i - \langle a \rangle_q\right)^2}{2(\Delta a)^2}\,. \qquad (34)$$

Since for values $a_i$ close to $\langle a \rangle_q$ the distribution $P_i$ must acquire (due to the curdling theorem) a non-trivial value in the limit ε → 0, the logarithmic divergences in (34) must cancel each other, yielding the simple condition $\Omega = q|\Phi|$. With this we can finally write

$$P_i \,\propto\, \left[1 \,-\, (1-q)\,\frac{\left(a_i - \langle a \rangle_q\right)^2}{2(\Delta a)^2}\right]^{1/(1-q)}\,. \qquad (35)$$

This distribution is encountered in a number of multifractal systems. A paradigmatic example can be found in the statistical description of the intermittent evolution of fully developed turbulence. In such a case $P_i(a)$ describes the distribution of the singularity exponents of the velocity gradient [61]. In addition, the parameter q satisfies the scaling relation

$$1/(1-q) \,=\, 1/a_+ \,-\, 1/a_-\,, \qquad (36)$$

where $a_\pm$ are defined by $f(a_\pm) = 0$. Such a scaling is a manifestation of the mixing property. In Ref. [61] it was further shown that the q-variance $(\Delta a)^2$ can be related to the phenomenologically important intermittency exponent μ.
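As a closing numerical aside (ours, not from the paper), Eq. (35) is a Tsallis-type deformation of a Gaussian in the scaling exponent, and it is easy to check that it reduces to a Gaussian as q → 1; the grid and parameter values below are arbitrary.

```python
import numpy as np

def P_of_a(a, q, a_mean, var):
    """Eq. (35) up to normalization; the bracket is clipped at zero so that
    exponents with too-large deviations get zero weight (cf. Section 4.1)."""
    base = 1.0 - (1.0 - q) * (a - a_mean)**2 / (2.0 * var)
    return np.clip(base, 0.0, None)**(1.0 / (1.0 - q))

a = np.linspace(-2.0, 2.0, 5)
print(P_of_a(a, 0.999, 0.0, 1.0))   # ~exp(-a**2/2): the q -> 1 Gaussian limit
print(np.exp(-a**2 / 2.0))          # reference Gaussian values
```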
