1 Realization Theory for LPV State-Space Representations with Affine Dependence Miha´ly Petreczky Member, IEEE, Roland To´th, Member, IEEE and Guillaume Merce`re Member, IEEE 6 Abstract—In thispaperwepresent aKalman-style realization representations. More precisely, we will make a distinction 1 theory for linear parameter-varying state-space representations between LPV state-space representations, which are mathe- 0 whose matrices depend on the scheduling variables in an affine matical models in terms of difference/differential equations, 2 way (abbreviated as LPV-SSA representations). We show that and their input-output behavior (i.e. the set of input-output such a LPV-SSA representation is a minimal (in the sense p of having the least number of state-variables) representation trajectories which they generate). The question realization e of its input-output function, if and only if it is observable theorytries to answer is how to characterizethose LPV state- S and span-reachable. We show that any two minimal LPV-SSA space representations which describe the same set of input- 9 representationsof thesame input-outputfunctionarerelated by output trajectories, and how to construct such an LPV state- 1 a linear isomorphism, and the isomorphism does not depend space representation from the set of input-output trajectories. on the scheduling variable. We show that an input-output ] functioncanberepresentedbyaLPV-SSArepresentationifand Thereasonthatthisproblemisanimportantoneisasfollows. C only if the Hankel-matrix of the input-output function has a Notice that it in general, there is no justification to claim O finite rank. In fact, the rank of the Hankel-matrix gives the that a designated LPV state-space representation is the ‘true” dimension of a minimal LPV-SSA representation. Moreover, we . modelofthephysicalphenomenonofinterest.AnyotherLPV h can formulate a counterpart of partial realization theory for state-space representation which generates the same input- t LPV-SSA representation and prove correctness of the Kalman- a Hoalgorithm formulated in[1]. Theseresultsthusrepresent the outputtrajectoriesasthisdesignatedstate-spacerepresentation m basis of systems theory for LPV-SSA representation. can also be viewed as a model of the physical phenomenon. [ For example, two different system identification techniques 2 or modelling approachescould yield two differentstate-space v I. INTRODUCTION representations which are equivalent, in the sense that they 7 Linear parameter-varying (LPV) systems represent an in- describe the same set of input-output trajectories. In this 7 termediate system class between the class of linear time- case, there is no reason to prefer one model over the other 7 invariant(LTI)systems andsystems with nonlinearand time- one. Hence, any controller developed using one such LPV 2 0 varying behavior.The underlyingidea behind the use of LPV state-space representation should be shown to work for all . systemsistoapproximatelymodelnonlinearandtime-varying the other LPV state-space representation generating the same 1 systemsbylineartime-varyingdifferenceordifferentialequa- input-outputbehavior. In order to address this issue, we need 0 6 tions, where the time varying coefficients are functions of a realization theory. As for system identification, the best we 1 time-varying signal, the so-called scheduling variable. Such can hope for a system identification algorithm is that it will : equations are called LPV systems [2], [3]. 
That is, LPV find one LPV state-space representation which generates (at v i systems are a class of mathematical models having a certain least approximately, with some error) the observed input- X structure(linearandtime-varying).TheuseofLPVsystemsis output trajectories. Hence, we need realization theory for r motivated by the fact that control design for these systems is analyzing system identification algorithms, as it tells us the a well developed [4]–[10]. More recently, system identification set of possible correct outcomes of any system identification of LPV models has gained attention [11]–[21]. algorithmunderidealcircumstances(nonoise,etc.).Thesame Despite these advances and the popularity of LPV models, goes for analyzing model reduction algorithms. Moreover, therearesignificantgapsintheirsystemstheory,inparticular, from a practical point of view, what we are interested in is their realization theory. By realization theory we mean a the interplay between system identification, model reduction systematic characterization of the relationship between the andcontroldesign.Roughlyspeaking,we wouldliketo know input-output behavior of LPV systems and their state-space when we can hope that a controller which was calculated basedonaplantmodelobtainedfromsystemidentificationand Miha´ly Petreczky (Corresponding author) s with Centre de modelreductionalgorithmswill work for the originalsystem. Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) Inordertounderstandthisinterplay,weneedtounderstandthe [email protected] Guillaume Merce`re is with the University of Poitiers, Labora- relationship between various LPV state-space representations toire d’Informatique et d’Automatique pour les Syste`mes, 2 rue P. which are consistent with the same input-output trajectories, Brousse, batiment B25, B.P. 633, 86022 Poitiers Cedex, France. Email: i.e. we need realization theory. [email protected] R. To´th is with the Control Systems Group, Department of Electrical The only systematic effort to address this gap was made Engineering, Eindhoven University of Technology, P.O.Box 513, 5600 MB in [2], [3], where behavioral theory was used to clarify Eindhoven, TheNetherlands. Email:[email protected]. realization theory, concepts of minimality and equivalence ThisworkwaspartiallysupportedbyESTIREZprojectofRegionNord-Pas deCalais, France classes of various LPV representation forms. However, the 2 LPV modelsconsideredin [2], [3] assumed non-linear(mero- ability,topologyofminimalLPV-SSAs,identifiablecanonical morphic)and dynamicaldependenceof the modelparameters forms(followingtheideaof[24]–[26]),forfindingconditions on the scheduling variable. More precisely, the LPV model forpersistenceofexcitationofLPV-SSAs(followingtheideas parameters were assumed to be meromorphic functions of of [27]) or for model reduction using moment matching (see the scheduling variable and its derivatives (in continuous- [28] for preliminary results). time), or of the current and future values of the scheduling Many of concepts related to those used in this paper have variable (discrete-time). As a result, the system theoretic already been published in various works, but without the transformations (passing from input-output behavior to state- existence of a coherent connection and underlying formal spacerepresentation,transformingastate-spacerepresentation proofs. In particular, the idea of Hankel-matrix has appeared to a minimal one, etc.) described in [2], [3] introduce LPV in [16], [19], [29]. 
The Markov-parameters and the realiza- models with a dynamic and nonlinear dependence on the tion algorithm were already described in [29]. In contrast parameters.However,forpracticalapplicationsitispreferable to [16], [19], [29], in this paper, the Markov-parameters to use LPV models with a static and affine dependence and the related Hankel-matrix are defined directly for input- on the scheduling variable, i.e., LPV models whose system output functions, without assuming the existence of a finite parameters are affine functions of the instantenous value of dimensional LPV-SSA realization. In fact, the finite rank the scheduling variable. That is, from a practical point of of the Hankel-matrix represents the necessary and sufficient view it make sense to concentrate on systems theory of LPV condition for the existence of an LPV-SSA realization. In models with static and affine dependence. In particular, the addition, we discuss the conditions for the correctness of the following fundamental question which directly pops up in realization algorithm in more details. Extended observability the engineering context has remained unanswered: when is it and reachability matrices were also presented in [2], [30]. possible to give a simple state-space model with affine static However, their system-theoretic interpretation as well as the dependenceforanidentifiedormodeledLPVsystembehavior relationship with minimality were not explored. Realization and howto accomplishthis realizationstep with ease. To find theory of more general linear parameter-varying systems was ananswertothisquestionisthemainmotivationofthispaper. already developed in [2], the system matrices were allowed InthispaperwepresentaKalman-likerealizationtheoryfor to depend on the scheduling parameters in a non-linear way, LPV state-space representationswith affine static dependence however, the results published in [2] do not always imply of coefficients, abbreviated as LPV-SSA representations. We the ones, for the restricted LPV-SSA case, presented in this will consider both the discrete-time (DT) and the continuous- paper. Furthermore, the results presented in this paper can time (CT) cases. In particular, we show existence, uniqueness be also seen as generalization of system theoretical results of a special form of equivalent infinite impulse response available for linear switched systems [31]–[34]. The current representations(IIR)ofsystemswithsuchrepresentationsboth paper is partially based on [35]. With respect to [35], the in DT and CT cases. We show that all input-outputfunctions maindifferencesareasfollows.First,[35]presentstheresults which can be described by an LPV-SSA representation admit without proofs. Second, [35] deals only with the DT case, an IIR, and conversely, if an input-output function admits an while the extension of the results to CT case is challenging IIRanditsHankelmatrixhasfiniterank,thenthisinput-output and technically more involved than the DT case. Finally, the function can be represented by an LPV-SSA representation. expositionhasbeenimprovedandsimplifiedin comparisonto In this case, the finite rank of the Hankel matrix equals the [35]. The technical report [36] differs from [35] only in the dimension of a minimal LPV-SSA realization of the input- presenceofsomesketchesoftheproofsfortheresultsof[35]. 
outputfunction.Furthermore,theconceptof(state)minimality The paper is organized as follows: In Section II, basic and Kalman decomposition in terms of observability and notions and concepts are introduced, which is followed, in reachability is clarified. It is proven that for the LPV-SSA Section III, by the definition of SSA representations, input- case, state-minimality is equivalent with joint observability outputfunctions,equivalenceandminimalityintheconsidered and span-reachability. It is shown that the construction of LPV context. In Section IV, the main results of the paper a minimal LPV-SSA form of an arbitrary LPV input-output in terms of existence, uniqueness and convergence of SSA function can be always (if it exists) carried out with the inducingimpulseresponserepresentationsandthecorrespond- application of the Ho-Kalman realization algorithm. We also ing conceptsof Hankelmatrix and SSA realizationtheory are discusspartialLPV-SSArealizationforinput-outputfunctions. explained. For the sake of readability, all proofs are collected Moreover, it is formally proven that all minimal LPV-SSA in Appendix A. representations of the same input-outputfunction are isomor- phic, and this isomorphism is linear and it does not depend II. NOTATION on the scheduling parameter. Finally, we show that under someverymildconditions,minimalLPV-SSArepresentations The following notation is used: for a (possibly infinite) set are also minimal among the meromorphic LPV state-space X,denotebyS(X)thesetoffinitesequencesgeneratedfrom representations from [22]. The results of this paper could X, i.e., each s ∈ S(X) is of the form s = ζ ζ ···ζ with 1 2 k be useful for model reduction and system identification of ζ ,ζ ,...,ζ ∈ X, k ∈ N. |s| denotes the length of the 1 2 k LPV-SSAs. For example, they could be useful for improved sequence s, while for s,r ∈ S(X), sr ∈ S(X) corresponds subspace identification algorithm (see [23] for preliminary the concatenation operation. The symbol ε is used for the results),foridentifiabilityanalysis,characterizationofidentifi- empty sequence and |ε| = 0 with sε = εs = s. Furthermore, 3 XN denotes the set of all functions of the form f : N → X. conditions, nonlinear/time-varying dynamical aspects and /or For each j = 1,...,m, e is the jth standard basis in Rm. externaleffectsinfluencingtheplantbehavioranditisallowed j FuLrtehterTm=oreR,+let=Iss[210,=+∞{s)∈beZt|hse1ti≤mesa≤xiss2i}nbtheeacnoinntdineuxosuest-. tooftvenaryusientthheessheotrPth,asneden[2o]taftoiorndetails. In the sequel, we will 0 time (CT) case and T = N in the discrete-time (DT) case. Σ=(P,{A ,B ,C ,D }np ) Denote by ξ the differentiation operator d (in CT) and the i i i i i=0 dt forward time-shift operator q (in DT), i.e., if z : T → Rn, to denote an LPV-SSA representation of the form (1) and use then (ξz)(t) = dz(t), if T = R+, and (ξz)(t) = z(t+1), dim(Σ)=nx to denote its state dimension. dt 0 if T = N. As usual, denote by ξk the k-fold application of By a solution of Σ we mean a tuple of trajectories ξ, i.e. for any z : T → Rn, ξ0z = z, and ξk+1z = ξ(ξkz) (x,y,u,p)∈ (X,Y,U,P) satisfying (1) for almost all t ∈ T for all k ∈ N. Both for CT and DT, for any τ ∈ T, define in CT case, and for all t ∈ T in DT, where in CT, X = the time shift operator qτ as follows: for any f : T → Rn, Ca(R+0,X),Y = Cp(R+0,Y),U = Cp(R+0,U),P =Cp(R+0,P), qτf :T→Rn is defined by (qτf)(t)=f(t+τ), t∈T. and in DT X =XN,Y =YN,U =UN,P =PN. A function f :=R+0 →Rn is called piecewise-continuous, Remark 1 (Zero initial time). 
Notice that without loss of if f has finitely many points of discontinuity on any com- generality, the solution trajectories in CT can be considered pact subinterval of R+0 and, at any point of discontinuity, on the half line R+0 with to =0. Indeed, if (x,y,u,p) satisfy the left-hand and right-hand side limits of f exist and are (1), then (qτx,qτy,qτu,qτp) satisfies (1) for any τ ∈R (see finite. We denote by Cp(R+0,Rn) the set of all n-dimensional [3]). Here qτ is the shift operator defined in Section II. piecewise-continuous functions of the above form. The no- tation C (R+,Rn) designates the set of all n-dimensional Notethatforanyinputandschedulingsignal(u,p)∈U×P absolutealy co0ntinuous functions [37]. andanyinitialstatexo ∈X,thereexistsauniquepair(y,x)∈ Recall from [38] the following notions on affine hulls and Y×X suchthat(x,y,u,p)isasolutionof(1)andx(0)=xo, affine bases. Recall that b ∈ Rn is an affine combination of see[2].Thatis,thedynamicsofΣarethusdrivenbytheinputs a ,...,a ∈ Rn, if b = N λ a for some λ ,...,λ ∈ u∈U as well as the scheduling variables p∈P. This allows 1 N i=1 i i 1 N R, N λ = 1. The affine hull Aff A of a set A is the to define input-to-state and input-output functions as follows. i=1 i P bse1t,P.o.f.,abfmfin∈e cRonmbairneatsiaoindstoofbeeleamffiennetslyoifndAep.eTndheentveifctoforsr sDtaetfienoitfioΣn.1D(eIfiSnaentdheIOfufnucntciotinosns). Letxo ∈Rnx beaninitial every j = 1,...,m, b cannot be expressed as an affine j combination of {bi}mi=1,i6=j. The vectors b1,...,bm are an XΣ,xo :U ×P →X, (3a) affine basis of Rn if m = n+ 1, b1,...,bn+1 are affinely YΣ,xo :U ×P →Y, (3b) independent and Aff {b ,...,b }=Rn. 1 n+1 suchthatforany(x,y,u,p)∈X×Y×U×P,x=X (u,p) Σ,xo and y = Y (u,p) holds if and only if (x,y,u,p) is a III. PRELIMINARIES solution of (Σ1),xaond x(0) = x . The function X is called o Σ,xo In this paper, we consider the class of LPV systems theinput-to-statefunctionofΣinducedbytheinitialstatex , o that have LPV state-space (SS) representations with affine and the function Y is called the input-to-output function Σ,xo linear dependence on the scheduling variable. We use the Σ induced by x . o abbreviation LPV-SSA to denote this subclass of state-space Prompted by the definition above, we formalize potential representations, defined as input-output behavior of LPV-SSA representations as func- ξx(t) = A(p(t))x(t)+B(p(t))u(t), tions of the form Σ (1) y(t) = C(p(t))x(t)+D(p(t))u(t), F:U ×P →Y. (4) (cid:26) where x(t)∈X =Rnx is the state variable, y(t)∈Y=Rny Notethataninput-outputmapofanyLPV-SSArepresentation isthe(measured)output,u(t)∈U=Rnu representstheinput is of the above form. However, not all maps of the form (4) signalandp(t)∈P⊆Rnp isthesocalledschedulingvariable arise as input-outputmaps of some LPV-SSA representation. of the system represented by Σ, and Definition 2 (Realization). The LPV-SSA representation Σ is np np a realization of an input-output function F of the form (4) A(p)=A0+ Aipi, B(p)=B0+ Bipi, from the initial state xo ∈ X, if F = YΣ,xo; Σ is said to be Xi=1 Xi=1 (2) a realization of F, if there exist an initial state xo ∈ X of Σ, np np such that Σ is a realization of F from x . o C(p)=C + C p , D(p)=D + D p , 0 i i 0 i i Similarly to [31], [33], [34], the results of this paper could i=1 i=1 X X beextendedtofamiliesofinput-outputfunctionswithmultiple foreveryp=[ p ... p ]⊤ ∈P,withconstantmatrices 1 np initial states. 
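As an illustration of the representation (1) with the affine, static dependence (2), the following Python sketch simulates a discrete-time LPV-SSA by evaluating A(p(t)), B(p(t)), C(p(t)), D(p(t)) pointwise at each time step. All matrices, signals and the helper name `simulate_dt_lpv_ssa` are hypothetical placeholders chosen for the example, not objects defined in the paper.

```python
import numpy as np

def simulate_dt_lpv_ssa(A, B, C, D, x0, u, p):
    """Simulate a DT LPV-SSA representation with affine static dependence.

    A, B, C, D are lists of length n_p + 1; at scheduling value pt the
    matrix is M(pt) = M[0] + sum_i M[i] * pt[i-1], cf. (2).
    u: (T, n_u) input trajectory, p: (T, n_p) scheduling trajectory.
    Returns the output trajectory y of shape (T, n_y).
    """
    def affine(M, pt):
        return M[0] + sum(M[i] * pt[i - 1] for i in range(1, len(M)))

    T = u.shape[0]
    x = np.array(x0, dtype=float)
    y = np.zeros((T, C[0].shape[0]))
    for t in range(T):
        y[t] = affine(C, p[t]) @ x + affine(D, p[t]) @ u[t]
        x = affine(A, p[t]) @ x + affine(B, p[t]) @ u[t]
    return y

# Hypothetical example: n_x = 2, n_u = n_y = n_p = 1.
A = [np.array([[0.5, 0.1], [0.0, 0.3]]), np.array([[0.1, 0.0], [0.0, 0.2]])]
B = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
C = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
D = [np.zeros((1, 1)), np.zeros((1, 1))]
u = np.ones((10, 1))
p = 0.5 * np.ones((10, 1))
y = simulate_dt_lpv_ssa(A, B, C, D, x0=[0.0, 0.0], u=u, p=p)
```

Evaluating the coefficient matrices pointwise in this way is exactly the static, affine dependence on the instantaneous scheduling value assumed throughout the paper.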
However, in order to keep the notations simple, fAoir∈alRl nix×∈nxI,nBp.i ∈ItRisnxa×snsuum,Cedi ∈thRatnyA×ffnxPand=DRin∈p,Rin.ey.×,nPu we only deal with systems having one initial state. 0 contains an affine basis of Rnp, see Section II or [38] for Definition 3 (Input-output equivalence). Two LPV-SSA rep- the definitionofaffinespan andaffine basis. Accordingto the resentations Σ and Σ′ are said to be weakly input-output LPV modeling concept, p corresponds to varying-operating equivalent w.r.t. the initial states x ∈ Rnx and x′ ∈ Rnx′, 4 if YΣ,x = YΣ′,x′. They are called strongly input-output • ∃xo ∈X such that YΣ,xo =F. equivalent, if for all x ∈ Rnx there is a x′ ∈ Rnx′ such • foreverLPV-SSArepresentationΣ′whichisarealization that YΣ,x = YΣ′,x′, and vice versa, for any x′ ∈ Rnx there of F, dim(Σ)≤dim(Σ′). is a x∈Rnx′ such that YΣ,x =YΣ′,x′. We say that Σ is minimal w.r.t. the initial state xo ∈ X, if Σ is a minimal realization of the input-output function Y . Remark 1 (IO functionsvs behaviors). So far, we formalized Σ,xo the input-output behavior of the system represented by Σ as Note that due to the linearity of the system class, we can aninput-outputfunctioninducedbysomeinitialstate.Another assumethatD(·)≡0withoutanylossofgeneralityregarding optionistouseabehavioralapproach,wheretheinput-output the concepts of reachability, observability and minimality. (manifest) behavior of a given LPV-SSA Σ is defined as Therefore, in the sequel, unless stated otherwise, we will assume that D = 0 for all i ∈ Inp. Rewriting the results B(Σ)= (y,u,p)∈Y ×U ×P |∃x∈X i 0 of the paper for D(·)6≡0 is an easy exercise and it is left to (cid:8) s.t. (x,y,u,p) satisfies (1) . (5) the reader. Then,aΣrealizesaB⊆Y×U×P,ifandonlyifB=(cid:9)B(Σ). Notice that B(Σ)={(y,u,p)|∃x∈Rnx :YΣ,x(u,p)=y}, IV. MAIN RESULTS i.e., B(Σ) is just the union of graphs of the input-output In this section, we present the main results of the paper. functions Y . This prompts us to consider the following First, we formally define the notion of an impulse response Σ,x definition.LetΦ be aset ofinput-outputfunctionsoftheform representation (IIR) of an input-outputfunction F:U×P → F:U×P →Y. Similarly to [34], [39], we say that an LPV- Y both in CT and DT. We then show that all input-output SSA Σ is a realization of Φ, if for every F ∈ Φ there exists functions which are realizable as a LPV-SSA representation a state x of Σ such that F = Y . Definition 2 represents admit such an IIR. This is followed by the establishment of Σ,x a particular case of the definition above with Φ = {F}. The a Kalman-like realization theory (relationship between input- results of the paper can be extended to include the definition output functions and LPV-SSA representations, rank condi- above, similarly to [34], [39]. tions for the Hankel matrix, minimality of LPV-SSA repre- sentations, uniqueness (up to isomorphism) of minimal LPV- Next,wedefinereachabilityandobservabilityofLPV-SSAs. SSA representations). Finally, we present a minimization and Definition 4 (Reachability & observability). Let Σ be an Kalman-decompositionalgorithms,we discussthe correctness LPV-SSA representation of the form (1). We say that Σ ofKalman-Hoalgorithmof[1],andweconcludebyclarifying is (span) reachable from an initial state xo ∈ Rnx, if therelationshipbetweentheminimalityconceptsofthecurrent Span{XΣ,xo(u,p)(t) | (u,p) ∈ U × P,t ∈ T} = X. We paper and that of [22]. say that Σ is observable if, for any two states x1 ∈Rnx and x2 ∈Rnx, YΣ,x1 =YΣ,x2 implies x1 =x2. A. 
Impulse response representation Notice that, in this definition, observability means that for First,weintroduceaconvolutionbasedrepresentationofan anytwodistinctstatesofthesystem,theresultingoutputswill input-outputfunction.Letp denotetheqth entryofthevector q differ from each other for some input and scheduling signals. p ∈ Rnp if q ∈ In1p and let p0 = 1. Consider the following Notice that while span-reachability depends on the choice of notation to handle the resulting p-dependence of the Markov the initial state x , observability does not. Furthermore, these coefficients: o concepts of reachability and observability are strongly related Definition 7. For a given index sequence s ∈ S(Inp), time totheextendedcontrollabilityandobservabilitymatricesused 0 moments t,τ ∈T, τ ≤t, and any scheduling trajectories p∈ in subspace-based identification of LPV-SSA models [40]. P, define the so-called sub-Markovdependence(w ⋄p)(t,τ) As explained previously, the relation between two realiza- s as follows: tions of the same I-O function is of interest in this paper. Thus,itis essentialto recallthe notionofisomorphismforan • Continuous-time: For the empty sequence, s=ǫ, (wǫ⋄ LPV-SSA model. p)(t,τ)=1. If s=s1s2···sn for some s1,s2,...,sn ∈ Inp and n>0, then Definition 5 (State-space isomorphism). Consider two LPV- 0 t (SPS,A rAep′i,rBesi′e,nCtai′t,iDoni′s Σnp=) w(Pit,h{Adiim,B(Σi,)C=i,Ddii}mni=(pΣ0)′)a=ndnΣx′.=A (wst⋄p)(t,τ)=τnZτ psn(δ)·(ws1·τ··ns−n−11 ⋄p)(δ,τ) dδ = nonnsingularmatrixoTi=∈0Rnx×nx issaidtobeanisomorphism psn(τn)( psn−1(τn−1)( ···)dτn−1)dτn Zτ Zτ Zτ from Σ to Σ′, if • Discrete-time: If the sequence s is of the form s = A′iT =TAi Bi′ =TBi Ci′T =Ci Di′ =Di, (6) s1s2···sn, for some s1,s2,...,sn ∈ In0p and n = t−τ +1, then for all i∈Inp. 0 (w ⋄p)(t,τ)=p (τ)p (τ +1)···p (t), Next, we define minimality of an LPV-SSA representation: s s1 s2 sn else (w ⋄p)(t,τ)=0. Definition 6 (State-minimal realization). Let F be an input- s outputfunction.AnLPV-SSAΣisa(state)minimalrealization Example 1. In order to illustrate the notation above, of F, if consider the case when n = 1 and take s = 0101, p 5 |s| = n = 4. Then, for DT (w ⋄ p)(5,2) = g⋄pandh⋄p,respectively. Thevaluesofthe functionθ will s F p (2)p (3)p (4)p (5) = p(3)p(5). For CT, (w ⋄p)(5,2) = be called the sub-Markov parameters of F. 0 1 0 1 s 5p (s )( s1p (s )( s2p (s )( s3p (s )ds )ds )ds )ds , a2nd1by1usin2g p 0=21, (w2 ⋄1p)(35,22)=0 54p(s 4)( s31 s22(s −1 Note that in DT, the sums appearingon the right-handside R R 0 R s R 2 1 2 2 3 of (12) are actually finite sums, as for |s| > t, w ⋄p = 0. 2)p(s )ds ds )ds . s 3 2 3 1 R R R In the case of CT, however, the right-hand side of (11) is an The IIR of an input-outputfunction is defined as follows. infinitesum,whichraisesthequestionofitsconvergence.This question is addressed below. Definition 8 (Impulse response representation). Let F be a function of the form (4). Then F is said to have a impulse Lemma 1. Assume θ satisfies the growth condition (9) for F response representation (IIR) if there exists a function some K,R>0. Then the infinite sum on the right-hand side θF :S(In0p)7→R(np+1)ny×(nu(np+1)+1), (8) of (11) is absolutely convergent. TheproofofLemma1 is presentedin Appendix.Existence such that, of an IIR of F implies that F is linear in u and can be repre- 1) it satisfies an exponential growth condition, i.e., there sentedasa convergentinfinite sumofiteratedintegralsinCT, exist constants K,R>0 such that while, in DT, F is a homogeneous polynomial in {p (t)}np . 
i i=1 ∀s∈S(Inp):||θ (s)|| ≤KR|s| (9) It is important to notice that, in CT, using the terminology 0 F F of [41], the entries of g ⋄ p and h ⋄ p correspond to the where k.k denotes the Frobenius norm; F generating series defined by the coordinates of the functions 2) for every p ∈ P, there exist functions gF⋄p : T →Rny s 7→ θ (s)p (t)p (τ) and s 7→ η (s)p (t). Furthermore, the and hF ⋄p : {(τ,t) ∈ T×T | τ ≤ t} → Rny×nu, such abovedi,ejfinitiionofjIIRs,inprinciplie,coirrespondstoaspecific that for each (u,p)∈U ×P, t∈T, caseofIIRforgeneralLPVsystemsdefinedfortheDTcasein t [2].Notethatin[2]theuseofaninitialconditionwasavoided F(u,p)(t)=(g ⋄p)(t)+ (h ⋄p)(δ,t)·u(δ)dδ, (10a) F F Z0 by assuming that the input-output function is asymptotically in CT and stable. The contribution in the definition proposed in the currentpaper is twofold:(i) it providesthe conceptof IIR for t−1 F(u,p)(t)=(g ⋄p)(t)+ (h ⋄p)(δ,t)·u(δ), (10b) the CT case, (ii) it restricts the dependencies of the Markov F F parameters to the subclass that can results from the series δ=0 X expansion of LPV-SSA representations. As we will see, this forDT.Moreover,foranyi,j ∈In0p,letηi,F(s)∈Rny×1 will be crucial to decide when it is possible to derive a LPV- and θi,j,F(s)∈Rny×nu be such that SSArealizationofaninput-outputfunction.Inturn,thatresult η (s) θ (s) ··· θ (s) provides the basis for system identification with state-space 0,F 0,0,F 0,np,F η (s) θ (s) ··· θ (s) model structures using static dependence only. θF(s)= 1,F.. 1,0,..F 1,np..,F . . . ··· . Example 2. To better explain the meaning of this definition, η (s) θ (s) ··· θ (s) we demonstrate the underlying constructive mechanism by np,F np,0,F np,np,F writing out the formulas explicitly for a single example. Then g ⋄p and h ⋄p can be expressed via θ as F F F Assume that P = R, n = n = 1 and let F be an input- u y (g ⋄p)(t)= output function of the form (4) and assume it has an IIR. F Then in DT, using that p (t)=1 for all t∈T, p (t)η (s)·(w ⋄p)(t,0), 0 i i,F s (h ⋄p)(2,5)= i∈XIn0ps∈XS(In0p) (11) F θ (00)+p(4)θ (01)+···+ (h ⋄p)(δ,t)= 0,0,F 0,0,F F p(2)p(5)p(3)θ (10)+p(2)p(5)p(3)p(4)θ (11) θ (s)p (t)p (δ)·(w ⋄p)(t,δ), 1,1,F 1,1,F i,j,F i j s (g ⋄p)(2)=η (00)+p (1)η (01)+p (0)η (10)+ i,jX∈In0ps∈XS(I0np) F 0,F 1 0,F 1 0,F p (0)p (1)η (11)+···+p (2)η (00)+ in CT and, in DT, 1 1 0,F 1 1F p (2)p (1)η (01)+···p (2)p (0)p (1)η (11). 1 1 1,F 1 1 1 1,F (g ⋄p)(t)= F η (s)p (t)·(w ⋄p)(t−1,0), For CT, i,F i s (h ⋄p)(2,5)=[θ (ǫ)+3θ (0)+ i∈XIn0p F 0,0,F 0,0,F s∈S(Inp) 5 s1 s2 0 (12) +···+θ (101) p(s ) p(s )ds ds ds +···] (h ⋄p)(δ,t)= 0,0,F 1 3 3 2 1 F Z2 Z2 Z2 θ (s)p (t)p (δ)·(w ⋄p)(t−1,δ+1). +···+ i,j,F i j s si∈,jSX∈(IIn0npp) p(2)p(5)[θ1,1,F(ǫ)+3θ1,1,F(0)+θ1,1,F(1) 5p(s)ds+ 0 Z2 IfFisclearfromthecontext,thenwedropthesubscriptFand 5 s1 s2 we denoteθ ,θ ,η ,i,j ∈Inp,g ⋄p,h ⋄p by θ,θ ,η +···+θ1,1,F(101) p(s1) p(s3)ds3ds2ds1+···] F i,j,F i,F 0 F F i,j i Z2 Z2 Z2 6 2 (g ⋄p)(2)=[η (ǫ)+2η (0)+η (1) p(s)ds+ The proof of this result is given in the Appendix. F 0,F 0,F 0,F Z0 2 s1 s2 Remark2(FurtherintuitionbehindIIRrepresentation). From +···+η0,F(101) p(s1) p(s3)ds3ds2ds1+···] the proof Lemma 3 it also follows that, if F is realized by an Z0 Z0 Z0 LPV-SSArepresentationΣoftheform(1)fromtheinitialstate 2 +p(2)[η1,F(ǫ)+2η1,F(0)+η1,F(1) p(s)ds xo, for all τ ≤t∈T, p∈P, Z0 ]C(p(t))Φ (t−1,0)x DT 2 s1 s2 (g ⋄p)(t)= p o +···+η1,F(101) p(s1) p(s3)ds3ds2ds1+···]. 
F (cid:26) C(p(t))Φp(t,0)xo CT Z0 Z0 Z0 C(p(t))Φ (t,τ +1)B(p(τ)) DT (h ⋄p)(τ,t)= p That is, in DT, (hF ⋄ p)(2,5) is a polynomial of F C(p(t))Φp(t,τ)B(p(τ)) CT (cid:26) p(2),p(3),p(4),p(5), and the degree of p(2),p(3),p(4),p(5) Here Φ (t,τ) is the fundamental matrix of the time-varying in each monomial is at most one. Moreover, θ (s s ), p i,j,F 1 2 linear system ξx(t) = A(p(t))x(t), i.e. Φ (τ,τ) = I and for each i,j,s1,s2 ∈ {0,1}, are the coefficients of this d p nx polynomial. In particular, only the components of the sub- for all τ ≤ t ∈ T, Φ (t,τ) = A(p(t))Φ (t,τ) in CT and p p dt Markov parameters the form θF(s), with s being of length 2, Φp(t+1,τ)=A(p(t))Φp(t,τ) in DT. occur in (h ⋄p)(2,5). In contrast, in CT, (h ⋄p)(2,5) is F F an infinite sum of iterated integrals of p, all the components B. State-space realization theory for affine dependence of the form θ (s), i,j = 0,1, with s being a sequence of i,j,F Below, we present a novel Kalman-style realization theory arbitrary length, occur in the expression for (h ⋄p)(2,5). F forLPV-SSArepresentations,which,inouropinion,opensthe The picture for (g ⋄p)(2) is analogous. F door for the development of a new generation of state-space Recall that for LTI systems, there is a one-to-one corre- identification, model reduction and control methodologies. spondencebetween input-outputfunctionsand IIRs (see, e.g., Theorem 1 (Minimality,weak sense). An LPV-SSA represen- [42]). A similar result holds for those functions of the form tation Σ is minimal w.r.t. a given initial state x ∈ X, if and (4) which admit an IIR. o only if, Σ is span-reachable from x and Σ is observable. If o Lemma2(UniquenessoftheIIR). Ifaninput-outputfunction Σ is an LPV-SSA representation which is minimal w.r.t. some F of the form (4) has an IIR, then the function θF is uniquely initialstate x0, andΣ′ is anLPV-SSArepresentationwhich is determinedbyF,i.e.,ifFˆ :U×P →Y isanotherinput-output minimal w.r.t. some initial state x′, and Σ and Σ′ are weakly 0 function, which admits an IIR, then input-outputequivalent w.r.t the initial states x and x′, then 0 0 Σ and Σ′ are isomorphic. 1 F=Fˆ ⇐⇒ θ =θ . F Fˆ The proof of this result is given in the Appendix. Another, Moreover, there exists a unique extension Fe of F to U ×Pe, equivalent way to state Theorem 1 is as follows: where Pe =Cp(R+0,Rnp) in CT or Pe =(Rnp)N in DT. The extension F also admits an IIR and θ =θ . Theorem 2 (Minimal realizations, alternative statement). As- e F Fe sumeF is aninput-outputmapoftheform (4). IfanLPV-SSA TheproofofthisresultisgivenintheAppendix.Thisresult Σ is a realization of F from the initial state x , then Σ is o not only yields a one-to-one correspondence between input- a minimal realization of F if and only if Σ is observable output maps and sub-Markov parameters, but it also tells us and span-reachable from x . Any two minimal LPV-SSA o that the choice of scheduling space does not matter, since realizations of F are isomorphic. we can always extend an input-output function to a larger If we restrict our attention to the case of zero initial state, scheduling space or restrict it to a smaller one in a unique then Theorem 1 can be restated as follows: an LPV-SSA fashion. In particular, it will allow us to reduce realization representationisminimalw.r.t.zeroinitialstate, ifandonlyif theory of LPV-SSA representations to that of linear switched it is observable and span-reachable from zero. Any two LPV- system, and use the results of [33], [34]. 
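Formulas (13a)-(13b) of Lemma 3 can be evaluated directly by multiplying out the word A_s = A_{s_n} ··· A_{s_1}. The following sketch (illustrative data and helper names, assuming the DT setting with D ≡ 0) computes the sub-Markov parameters η_{i,F}(s) = C_i A_s x_o and θ_{i,j,F}(s) = C_i A_s B_j of a given LPV-SSA representation.

```python
import numpy as np

def A_word(A, s):
    """A_s = A_{s_n} A_{s_{n-1}} ... A_{s_1}; A_eps is the identity."""
    M = np.eye(A[0].shape[0])
    for idx in s:              # apply letters in the order s_1, s_2, ..., s_n
        M = A[idx] @ M
    return M

def sub_markov(A, B, C, x0, s):
    """eta_i(s) = C_i A_s x0 and theta_{i,j}(s) = C_i A_s B_j, cf. (13a)-(13b)."""
    As = A_word(A, s)
    eta = [C[i] @ As @ x0 for i in range(len(C))]
    theta = [[C[i] @ As @ B[j] for j in range(len(B))] for i in range(len(C))]
    return eta, theta

# Hypothetical data: n_p = 1, n_x = 2, n_u = n_y = 1.
A = [np.array([[0.5, 0.1], [0.0, 0.3]]), np.array([[0.1, 0.0], [0.0, 0.2]])]
B = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
C = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
x0 = np.array([1.0, 0.0])
eta, theta = sub_markov(A, B, C, x0, s=[0, 1])   # the word s = 01
```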
In the sequel, we SSA representations which are minimal and weakly input- will restrict our attention to input-output functions which outputequivalentw.r.t.thezeroinitialstate(i.e.,whichinduce admit an IIR in the previously defined form. This is not a serious restriction since any input-output function of a LPV- the same input-output function from the zero initial state and which are both minimal realizations of this input-output SSA representation always admits an IIR: function from zero), are isomorphic. Another consequence Lemma3(ExistenceoftheIIR). TheLPV-SSArepresentation of Theorem 1 is that weak input-output equivalence of two Σ of the form (1) is a realization of an input-output function LPV-SSA representations with respect to some initial states F of the form (4), if and only if, F has an IIR and, for all implies strong input-output equivalence of these representa- i,j ∈In0p, s∈S(In0p), this IIR is such that tions,providedthatbothrepresentationsareminimalw.r.t.the η (s)=C A x , (13a) designated initial states. This follows by noticing that these i,F i s o LPV-SSA representations are isomorphic, and hence they are θ (s)=C A B (13b) i,j,F i s j 1Infact,withthenotationofDefinition5,wecanshowthatthereexistsa sw1h·e·re·sfnoransd=s1ǫ,,.A..ssnde∈noItn0eps,tnhe>id0e,nAtisty=mAatsrnixA,sann−d2·fo·r·Ass=1. maftaetrrixtheTpsruocohfothfaTthineoaredmditi1onintoAp(p6e),ndTixx.0 = x′0 holds. See the discussion 7 stronglyequivalent.This opensup the possibility to deal with The proof is given in the Appendix. This Theorem directly strongminimality.LetuscallanLPV-SSAΣstronglyminimal, leads to algorithms for reachability, observability and mini- if Σ is minimal w.r.t. all x ∈X. mality reduction of LPV-SSA models. These algorithms are o similar as those for linear switched systems (see, e.g., [33], Theorem 3 (Minimality, strong sense). An LPV-SSA repre- [34]). sentation Σ is strongly minimal, ⇐⇒ it is minimal w.r.t. 0 ⇐⇒ it is observable and span-reachable from the zero Procedure 1 (Reachability reduction). Let rank(R ) = nx−1 initial state. Any two strongly minimal and strongly input- r and choose a basis {bi}ni=x1 ⊂ Rnx such that outputequivalentLPV-SSArepresentationsareisomorphic.In Span{b ,...,b } = Im{R }. In the new basis, the addition, two strongly minimal LPV-SSA representations are matrices1{A ,Br ,C }np beconmx−e1 i i i i=0 weakly input-output equivalent w.r.t. to some initial states if and only if they are strongly input-outputequivalent. AR A′ BR Aˆ = i i , Bˆ = i , (17a) The proof of Theorem 3 is presented in the Appendix. i (cid:20) 0 A′i′(cid:21) i (cid:20) 0 (cid:21) Theorem 3 implies that LPV-SSA representations which are xR Cˆ = CR C′ , xˆ = o , (17b) minimal w.r.t. the zero initial state have particularly nice i i i o 0 (cid:20) (cid:21) properties. Note that it is perfectly possible for an LPV-SSA (cid:2) (cid:3) representation to be minimal w.r.t. some initial state, and not where AR ∈ Rr×r,BR ∈ Rr×nu, and CR ∈ Rny×r. Define i i i to be minimal w.r.t. the zero initial state. ΣR = (P,{AR,BR,CR}np ). Then ΣR is span-reachable i i i i=0 A remarkable observation is that, similarly to the linear from xR and Σ and ΣR are weakly input-output equivalent 0 time-invariant case, rank conditions for observability and w.r.t. x and xR, i.e. Y =Y . reachability can be obtained to verify state minimality for o o Σ,xo ΣR,xRo LPV-SSA, which is not the case for general LPV-SS repre- Intuitively,ΣR isobtainedfromΣbyrestrictingthedynam- sentations (see [3]). 
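A practical consequence of Theorem 1 is that the isomorphism between two minimal realizations can be computed explicitly: the relations C'_i T = C_i and A'_i T = T A_i imply O'_n T = O_n for the extended observability matrices recalled below in (15), so T follows from a least-squares solve. The sketch below only illustrates this argument; the helper names are not from the paper.

```python
import numpy as np

def obs_matrix(A, C, n):
    """n-step extended observability matrix, cf. (15): O_0 stacks the C_i,
    and O_{k+1} stacks O_k together with O_k A_i for every i."""
    O = np.vstack(C)
    for _ in range(n):
        O = np.vstack([O] + [O @ Ai for Ai in A])
    return O

def isomorphism(A1, C1, A2, C2, n):
    """For observable minimal realizations of the same IO function,
    C'_i T = C_i and A'_i T = T A_i give O'_n T = O_n; solve for T."""
    O1 = obs_matrix(A1, C1, n)
    O2 = obs_matrix(A2, C2, n)
    T, *_ = np.linalg.lstsq(O2, O1, rcond=None)
    return T
```

With n ≥ n_x − 1 and the second representation observable, O'_n has full column rank and T is unique; one can then verify numerically that A'_i T ≈ T A_i, B'_i ≈ T B_i and C'_i T ≈ C_i.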
To this end, let us recall the definition of ics andthe outputfunctionof Σ to the subspace Im{Rnx−1}. the extended reachability and observability matrices for LPV- Procedure 2 (Observability reduction). Let rank(O ) = SSA representations (see, e.g., [1]). Let Σ be an LPV-SSA nx−1 representation of the form (1)-(2) with D(·)≡0. o and choose a basis {bi}ni=x1 ⊂ Rnx such that Span{b ,...,b } = Ker{O }. In the new basis, the Definition 9 (Ext. reachability & observabilitymatrices). For matriceso+{1A ,B ,nCx }np becomnex−1 i i i i=0 an initial state x , the n-step extended reachability matrices o Rn of Σ from xo, n∈N, are defined recursively as follows Aˆ = AOi 0 , Bˆ = BiO , (18a) i A′ A′′ i B′ R0 = xo B0 ··· Bnp , (14a) (cid:20) i i(cid:21) (cid:20)xOi(cid:21) Rn+1 =(cid:2) Rn A0Rn ··· A(cid:3)npRn , (14b) Cˆi = CiO 0 , xˆo = xo′ , (18b) The extended n-(cid:2)step observability matrices On(cid:3)of Σ, n ∈ N, (cid:2) (cid:3) (cid:20) o(cid:21) are given as where AO ∈ Ro×o,BO ∈ Ro×nu and CO ∈ Rny×o. i i i O = C⊤ ··· C⊤ ⊤, (15a) Define ΣO = (P,{AOi ,BiO,CiO}ni=p0). Then, any xOo ∈ Ro is 0 0 np observable,andΣandΣO areweaklyinput-outputequivalent On+1 =(cid:2) On⊤ A⊤0On⊤ ··(cid:3)· A⊤npOn⊤ ⊤. (15b) w.r.t. xo and xOo, i.e. YΣ,xo =YΣO,xOo. It is not diffi(cid:2)cult to show that (cid:3) Intuitively, ΣO is obtained from Σ by merging any two ∞ states x1, x2 of Σ, for which Onx−1x1 =Onx−1x2. Im{R }= Im{R }, (16a) nx−1 i Procedure 3 (Minimal representation). Given an LPV-SSA i=0 tahnadtRx∗∈:=RIm,{ImRBnx−⊆1}Ris,thie∈sImXnapllaensdt siunvbaspriaacnetionftRhenxsesnusceh: Prerporceesdeunrteati1o,ntraΣnsafonrdmaΣnwi.nri.tt.iaxlosttoatea sxpoan∈reaRcnhxa.bleUsΣinRg. A R o⊆R ∗, ∀i∈Iinp. Si∗milarly0, Subsequently, transform ΣR w.r.t. xRo to an observable ΣM i ∗ ∗ 0 with xM using Procedure 2. Then, ΣM is a minimalLPV-SSA o ∞ w.r.t.xM andΣM is weaklyinput-outputequivalenttoΣ w.r.t. Ker{Onx−1}= Ker{Oi}, (16b) initial sotates xMo and xo. i=0 \ asuncdhhtehnacteOO∗⊆:=KKeerr{{COn}x−an1d} iAs tOhe l⊆argOest,su∀bis∈pacInepo.fNRontxe decPoromcpeodsuirtieosn1as–fo2llocwans. be combined to yield a Kalman- ∗ i i ∗ ∗ 0 thatwhile the extendedreachabilitymatricesare definedfrom Procedure4 (Kalmandecomposition). ConsideranLPV-SSA a particular initial state, the extended observability matrices Σ of the form (1) and an initial state x0 ∈ Rnx. Choose a do not depend on the choice of the initial state. basis{bi}ni=x1 ⊂Rnx suchthatSpan{b1,...,br}=Im{Rnx−1} Theorem4(Rankconditions). TheLPV-SSArepresentationΣ and Span{brm+1,...,br}=(Im{Rnx−1}∩ker{Onx−1}) for −1 iΣs sispaonb-sreeravcahbalbe,leiffraonmdxoon,lyififanrdanokn(lOy if ran)k=(Rnnx.−1)=nx. sAˆom=e rT,ArmT≥−10,.BˆDe=finTeBT,=Cˆ =b1CbT2−1.,.i.∈bInnxp, xˆ,=anTdxlet. nx−1 x i i i i i(cid:2) i 0 (cid:3) o o 8 Then in [2]. Notice that these two definitions of minimality are not Am 0 A′′ Bm (Cm)⊤ ⊤ a-priori the same. Recall from [2, Definition 3.37, 3.34] the i i i i definition of structural reachability and structural observabil- Aˆ= A′ Aˆ′ A′′′ , Bˆ = B′ , Cˆ= 0 , i i i i i i (19) ity. Recall from [2] that minimal state-space realizations are 0 0 A′′′′ 0 (C′)⊤ i i structurally observable and structurally reachable. xˆ =(xm)⊤ x¯⊤ 0 ⊤, o o o Theorem6(Implicationofstructuralproperties). IfΣsatisfies where A(cid:2)m ∈ Rrm×rm,B(cid:3)m ∈ Rrm×nu, and Cm ∈ Rny×rm. the regularity certificate, then i i i Clearly, Σˆ = (P,{Aˆi,Bˆi,Cˆi,0}ni=p0) is isomorphic to Σ and • ifitisobservable,thenitisstructurallystate-observable. can be viewed as its Kalman-decomposition of Σ. 
• if is span-reachable from xo = 0, then it is structurally Corollary 1. The LPV-SSA Σm = (P,{Am,Bm,Cm,0}np ) state reachable. i i i i=0 is a minimal realization of F = YΣ,x0 from the intial state Corollary 4 (Joint minimality). If Σ satisfies the regularity xm. certificate and it is weakly minimal w.r.t. x = 0, then Σ is o o also jointly state minimal in the sense of [2]. The proof of Corollary 1 is presented in Appendix. In order to demonstrate what the corresponding span- Finally, we can supply the necessary and sufficient con- reachable and observable representations really describe let ditions for the existence of an LPV-SSA realization for a fix the scheduling trajectory p ∈ P. Then, the LPV-SSA giveninput-outputfunction.Theseconditionsandtheresulting representation Σ is equivalent with a a linear time-varying realization algorithm will utilize the previously introduced (LTV) representation concept of IIR and the corresponding Markov parameters. More precisely, this characterization will be achieved by ξx(t)=A(t)x(t)+B(t)u(t), (20a) constructingaHankelmatrixfromtheMarkovparametersand y(t)=C(t)x(t)+D(t)u(t), (20b) by proving that F has an LPV-SSA realization if and only if the rank of the aforementioned Hankel-matrix is finite. Note where A(t) := A(p(t)),...,D(t) := D(p(t)). Let us intro- that in general, the existence of an IIR and the corresponding duce the following definitions: Markov parameters for a given input-output function F, are Definition 10 (Regularity certificate). Let Σ be an LPV- only necessary for the existence of a finite order LPV-SSA SSA representation of the form (1). It satisfies the regularity representation. certificate if In order to define the Hankel-matrix of F, a lexicographic 1) P is convex with non-empty interior; ordering on the set S(Inp) (all possible sequences of the 0 2) in DT, the matrix A(p¯) is invertible for all p¯∈P. scheduling dependence) must be introduced. Theorem 5 (Implication of observability). Let Σ be an ob- Definition 11 (Ordering of sequences). Recall that Inp = 0 servable LPV-SSA representation of the form (1) such that {0,··· ,np}. Then, the lexicographic ordering ≺ on S(In0p) Σ satisfies the regularity certificate. There is at least one can be defined as follows. For any s,r ∈S(Inp), r≺s holds 0 scheduling trajectory p ∈ P and t > 0 such that for any if either o o two states x ,x of Σ, Y =Y if and only if (i) |r|<|s| (smaller length), or 1 2 Σ,x1 Σ,x2 (ii) 0<|r|=|s|=n, and the following holds Y (0,p )(τ)=Y (0,p )(τ), ∀τ ∈[0,t ]. Σ,x1 o Σ,x2 o o r =r ···r , s=s ···s , r ,s ∈Inp (21) In CT, p can be chosen to be analytic. 1 n 1 n i j 0 o and for some l ∈ {1,··· ,n}, r < s with the usual The proof is given in the Appendix. We will call such a p l l o ordering of integers and r =s for i=1,...,l−1. to be a revealing scheduling trajectory on [0,t ]. i i o Note that ≺ is a complete ordering on S(Inp), i.e., all Corollary2 (Observabilityrevealing). IfΣ is observableand sequences s(i) ∈ S(Inp) are ordered as ǫ = s(00) ≺ s(1) ≺ it satisfies the regularity certificate, then there exists a reveal- s(2) .... Furthermore,0for all s,r ∈ S(Inp), s ≺ sr if r 6= ǫ. ing p ∈ P and a t > 0, such that the LTV representation 0 o o Then, the so called Hankel-matrix of F both in CT and DT associated with Σ and p is completely observable on [0,t ]. o o can be defined as follows. By duality, the following holds true: Definition 12 (Hankel matrix). Consider the input-output Corollary 3 (Reachability revealing). 
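Before turning to the Hankel matrix, the rank tests of Theorem 4 and the reduction step of Procedure 1 are easy to prototype. The following sketch is illustrative only: the helper `reach_matrix` mirrors (14), and the reduction restricts the dynamics to Im R_{n_x−1} using an orthonormal basis obtained from an SVD rather than the arbitrary basis choice of Procedure 1.

```python
import numpy as np

def reach_matrix(A, B, x0, n):
    """n-step extended reachability matrix, cf. (14):
    R_0 = [x0 B_0 ... B_np], R_{k+1} = [R_k  A_0 R_k ... A_np R_k]."""
    R = np.hstack([np.asarray(x0, dtype=float).reshape(-1, 1)] + list(B))
    for _ in range(n):
        R = np.hstack([R] + [Ai @ R for Ai in A])
    return R

def is_span_reachable(A, B, x0):
    nx = A[0].shape[0]
    return np.linalg.matrix_rank(reach_matrix(A, B, x0, nx - 1)) == nx

def reachability_reduction(A, B, C, x0):
    """Procedure 1, sketched: V spans Im R_{nx-1}; the reduced matrices are
    V^T A_i V, V^T B_i, C_i V and the reduced initial state is V^T x0."""
    nx = A[0].shape[0]
    R = reach_matrix(A, B, x0, nx - 1)
    U, s, _ = np.linalg.svd(R)
    r = int(np.sum(s > 1e-10 * s[0]))
    V = U[:, :r]
    Ar = [V.T @ Ai @ V for Ai in A]
    Br = [V.T @ Bi for Bi in B]
    Cr = [Ci @ V for Ci in C]
    return Ar, Br, Cr, V.T @ np.asarray(x0, dtype=float)
```

An analogous routine built on the extended observability matrices (15) implements Procedure 2, and composing the two gives the minimization of Procedure 3.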
If Σ is span-reachable functionFwhichhasanIIR.TheHankel-matrixHFassociated from x = 0 and Σ satisfies the regularity certificate, then with F is defined as the infinite matrix o there exists exists a revealing p ∈P and a t >0, such that r r θ (s(0)s(0)) θ (s(1)s(0)) ··· θ (s(τ)s(0)) ··· F F F theLTVrepresentationassociatedwithΣandp iscompletely r θ (s(0)s(1)) θ (s(1)s(1)) ··· θ (s(τ)s(1)) ··· controllable on [0,tr]. HF =θF(s(0)s(2)) θF(s(1)s(2)) ··· θF(s(τ)s(2)) ··· F F F Notice that LPV-SSA representations can be viewed as a .. .. .. . . ··· . ··· subclass of LPV state-space representations according to [2]. Theorem6 presentedbelowallowsusto relate the minimality where a n (n +1)×(n (n +1)+1) block of H in the y p u p F conceptofDefinition6withtheconceptofminimalitydefined blockrow i andblock columnj equalsthe Markov-parameter 9 θ(s), where s=s(j)s(i) ∈S(Inp) is the concatenation of the Algorithm 1 Ho-Kalman realization 0 sequences s(i) and s(j). Require: size parameters n,m ∈ N with m > n, a Hankel matrixH (n,m)foraninput-outputfunctionF. F Theorem 7 (Existence of realization). An input-output func- 1: Singularvaluedecomposition(SVD)ofHF(n,m): tion F of the form (4) has a LPV-SSA realization, if and only H (n,m)=USV⊤ if F has an IIR and F whereS isblockdiagonalwithstrictlypositiveelements. rank(HF)=nF <∞. (22) 2: LetOˆ =US1/2andRˆ =S1/2V⊤ withHF(n,m)=OˆRˆ. Any minimal LPV-SSA realization of F has a state dimension 3: LetR¯ bethefirstCarn(S(In0p))nu(np+1)columnsofRˆ. which equal to nF. 4: Let R˜i = [ R(s0)i) ··· R(s(N)i) ], where N = The proof is given in the Appendix. Note that this is an Carn(S(In0p)) and Rˆ = R(s(0)) ··· R(s(M)) is a important point to clarify two things: partitioning of Rˆ such that Mh = Car (S(Inp)) andi each m 0 • Not all input-output functions of the form (4) will have nx × nu(np + 1) block R(s(i)) is associated with s(i) in an IIR and hence an LPV-SSA realization. In that case, S(Inp).NotethatR˜ canbeviewedasthematrixcomposed 0 i state-space realization can be only available with a more ofsomeleft-shiftedblocksofRˆ. general form of coefficient dependence, e.g., rational, 5: return :Σ={Ai,Bi,Ci,0}ni=p0andxosuchthat dynamic, etc. • [xo B0 ··· Bnp]: the first nu(np +1)+1 columns • FThceadnimbeenslaiorgnenrFthoafnathmeindimimaelnLsiPoVn-SoSfAanreaLlPizVatisotnatoe-f • oCfR0⊤ˆ C1⊤ ··· Cn⊤p ⊤:thefirstny(np+1)rowsofOˆ, space realization which allows dynamic dependence of • A(cid:2) i = R˜iR¯† where(cid:3)R¯† is the Moore-Penrose pseudo- the state-matrices on the scheduling signal. inverse. An important application of Theorem 7 is the proof of correctness of the Ho-Kalman-like realization algorithm for Thatis,anLPV-SSAΣisan-momentpartialrealizationof LPV-SSAforms,e.g.,in[1] andthevalidityoftheunderlying F from x if Σ recreates the first N = Car (S(Inp)) values assumptionsofLPVsubspaceschemes[16],[19],[43].Notice 0 n 0 ofthesub-MarkovparametersofF.Here,we orderthevalues that similar results have been shown for linear switched according to the lexicographic ordering of the arguments. systems in [32]–[34]. Recall that in DT, the response F(u,p)(t) is a polynomial Let us complete our results by briefly reviewing the Ho- function of {p(s),u(s)}t whose coefficients are the sub- Kalman-likerealizationalgorithmforLPV-SSAforms.Forthe s=0 sequence set S(Inp) and a given n∈N, let Car (S(Inp)) be Markov parameters. 
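Algorithm 1 can be prototyped directly from a finite block Hankel matrix. The sketch below is a simplified, assumption-laden rendition: it assumes m > n, builds H_F(n, m) from a user-supplied callback returning θ_F(s), takes the block-column width to be n_u(n_p+1)+1 (the η column followed by the θ_{i,j} blocks), and recovers the A_i by shifting block columns as in step 4. The names are illustrative, not the paper's notation.

```python
import numpy as np
from itertools import product

def words_up_to(n_p, n):
    """All words over {0, ..., n_p} of length <= n, shorter words first,
    each length block in lexicographic order (Definition 11)."""
    ws = [()]
    for length in range(1, n + 1):
        ws += list(product(range(n_p + 1), repeat=length))
    return ws

def ho_kalman(theta, n, m, n_p, n_u, tol=1e-9):
    """Sketch of Algorithm 1 (assumes m > n).  `theta` returns theta_F(s)
    as an n_y(n_p+1) x (n_u(n_p+1)+1) array for a word s given as a tuple."""
    rows, cols = words_up_to(n_p, n), words_up_to(n_p, m)
    # H_F(n, m): block (i, j) is theta_F(s^(j) s^(i)), cf. Definition 12.
    H = np.block([[theta(c + r) for c in cols] for r in rows])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    O = U[:, :r] * np.sqrt(s[:r])              # H = O @ R
    R = np.sqrt(s[:r])[:, None] * Vt[:r, :]
    bw = n_u * (n_p + 1) + 1                   # width of one block column
    col_of = {w: k for k, w in enumerate(cols)}
    block = lambda w: R[:, col_of[w] * bw:(col_of[w] + 1) * bw]
    Rbar = np.hstack([block(w) for w in rows])
    A = [np.hstack([block(w + (i,)) for w in rows]) @ np.linalg.pinv(Rbar)
         for i in range(n_p + 1)]
    first = block(())                          # = [x0  B_0 ... B_np] in the new basis
    x0 = first[:, 0]
    B = [first[:, 1 + j * n_u:1 + (j + 1) * n_u] for j in range(n_p + 1)]
    n_y = O.shape[0] // (len(rows) * (n_p + 1))
    C = [O[i * n_y:(i + 1) * n_y, :] for i in range(n_p + 1)]
    return A, B, C, x0
```

Under the rank conditions of Theorem 8, the returned tuple is a (partial) realization of F; feeding the routine the sub-Markov parameters of a known LPV-SSA computed via Lemma 3 and comparing the recovered parameters is a convenient consistency check.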
Similarly, in CT, F(u,p)(t) is an infinite the number of al0l sequences s ∈ S(Inp) with lenngth 0at most sum of iterated integrals of p,u on [0,t], such that the sub- 0 Markov parameters are the coefficients of these iterated inte- n, i.e., |s| ≤ n. Due to the properties of the lexicographic ordering, it follows that if N =Car (S(Inp)), then grals. Hence, if some of the sub-Markovparameters of F and n 0 Y coincide, the intuitively, the values of F and of Y Σ,xo Σ,xo {s(0),...,s(N)}={s∈S(Inp)||s|≤n}. (23) shouldbeclose.Infact,ifΣisann-momentpartialrealization 0 of F from x , then in DT, F(u,p)(t)=Y (u,p)(t) for all For a given n,m ∈ N, now we can denote by HF(n,m) the t = 0,...,no−1, p ∈ P, u ∈ U. The LPΣV,-xSoSA returned by Nny(np + 1) × M(nu(np + 1) + 1) upper-left sub-matrix Algorithm 1 can then be characterized as follows. of H with N = Car (S(Inp)) and M = Car (S(Inp)). F n 0 m 0 Consider a LPV-SSA Σ and pick an initial state xo ∈ Rnx Theorem 8. Let F be an input-output function and assume of Σ. Let On be the n-step extended observability matrix of thatF admits a IIR. Let Σ andxo be the LPV-SSA and initial Σ and let R be the m-step extended reachability matrix of state respectivelyreturnedbyAlgorithm1. Thenthefollowing m Σ w.r.t. x . Then, the Hankel matrix H of Σ satisfies holds. o YΣ,xo thatHYΣ,xo(n,m)=OnRm.Thisobservationcanbeusedto • Σ is a n-moment partial realization of F from xo. deriveaKalman-Ho-likerealizationalgorithm.Thisalgorithm • If rank HF(n,n) = rank HF(n + 1,n) = is presented in Algorithm 1. rank H (n,n+1), then Σ is a 2n+1-moment partial F In order to explain the properties of the LSS-SSA returned realization of F from x . o by Algorithm 1, we introduce the notion of a partial realiza- • If rank HF(n,n) = rankHF, then Σ is a minimal tion. realization of F from x . o Definition 13 (Partial realization). Let F be an input-output • The condition rank HF(n,n) = rank HF holds if there exists an LPV-SSA realization of F of dimension at most function admitting an IIR, and let H be its Hankel ma- F n−1. trix as defined in Definition 12. The LPV-SSA Σ is an n- moment partial realization of F from the initial state x , if Thatis,Algorithm1returnsaminimalLPV-SSArealization o ∀s ∈ S(Inp),|s| ≤ n : θ (s) = θ (s). We say that Σ is of F, if n is large enough. Otherwise, it returns a partial 0 F YΣ,xo a n-moment partial realization of F, if there exists an initial realization.Note that Algorithm1 mayreturn a 2n+1partial state x ∈ X such that Σ is an n-moment partial realization realization, even if F is not a realizable by an LPV-SSA o of F from x . representation. o 10 V. CONCLUSIONS tradition of [41], [44] we will denote Fc(p) by Fc[p]. Notice that for the DT case, we do not have to require the growth We have presented a fairly complete realization theory for LPV-SSA representations. We have also compared the condition||c(v)||F ≤KR|v|, v ∈S(In0p) to hold, in order for obtained results with those of [22]. Note that unlike [22], we Fc[p] to be well-defined. did notuse the languageof the behavioralapproach,focusing Note that the functionFc is defined on Cp(R+0,Rnp) in CT instead on input-output functions. A behavioral theory in the and (Rnp)N in DT. Recall that P denotes Cp(R+0,P) in CT, style of [22] remains a topic of further research. Important and it denotes (P)N in DT. Hence, in general, P is a proper directions for future research include application of the ob- subset of the domain of definition Fc. 
However, if P contains tainedresultstosystemsidentificationandmodelreductionof an affine basis, then the restriction of Fc to P determines c LPV-SSA representations. uniquely. Lemma 4. In CT and DT the following holds. Assume that APPENDIX P ⊆ Rnp contains an affine basis of Rnp. Then for any two A. Proof of the results on IIR generating series c1,c2, In this section, we will prove Lemma 2 and Lemma 3. (∀p∈P :F [p]=F [p]) =⇒ c =c . c1 c2 1 2 However, in order to present the proofs of these results for the CT case, we will have to recall from [41], [44] some Note that for P=Rnp and CT, the statement of Lemma 4 technical facts on generating series (Fliess series) and their is a well-known, see [45], [46]. input-output functions. These facts will be used later on in Proof: For i = 1,2 and integer k > 0 define the map several proofs. To begin with, a generating series over Q is a Gi,k on Rnpk by function c : S(Inp) → R such that there exists K,R > 0 0 G (p ,...,p )= c (q ···q )p ···p , which satisfies ∀s ∈ S(Inp) : |c(s)| ≤ KR|s|. Let us i,k 1 k i 1 k 1,q1 k,qk apply Definition 7 for all s0∈ S(In0p) and p ∈ Cp(R+0,Rnp) q1···Xqk∈In0p Ftoc d:eCfipn(eR+0(w,Rsn⋄p)p)→(t,τC)p(Rin+0,CRT). gTehneenratdeedfinbey athegenfuenracttiinogn lwh=ere1,.p.l,.0,k.=We1wainlldshpolw=that(pifl,1F,c.1.[.p,]pl=,npF)Tc2[p∈] foRrnapll, series c as Fc(p)(t) = v∈S(Inp)c(v)(wv ⋄ p)(t,0) In the p ∈ P, then G1,k(p1,...,pk) = G2,k(p1,...,pk) for all sequel,byabuseofnotation,follo0wingtheestablishedtradition p ,...,p ∈P, and for all k >0. P 1 k of [41], [44] we will denote Fc(p) by Fc[p]. From [41] it For DT, notice that Fci[p](k) = Gi,k(p(0),...,p(k −1)) followsthatFc iswelldefined.Notethatthegrowthcondition for all p ∈ P, k > 0, hence in this case, clearly ∀p ∈ P : ∀s∈S(In0p):|c(s)|≤KR|s|isnecessaryforFc[u]tobewell Fc1[p] = Fc2[p] implies G1,k(p1,...,pk) = G2,k(p1,...,pk) defined. for all p ,...,p ∈P, and for all k >0. 1 k Inthesequel,wewillextendthedefinitionofgeneratingse- For CT, consider a piecewise-constant p ∈ P, i.e. assume riestoincludematrixandvectorvaluedseries.Tothisend,we that there exists 0<t ,··· ,t ∈R, such that p(s)=p ∈P, 1 k i define a generating series as a function c:S(In0p)→Rnr×nl s∈[ ij−=11ti, ij=1ti), i=1,...,k. From [45, Lemma 2.1] f∀ovr s∈omSe(Iin0npte)ge:rs||cn(lv,)n||rF>≤0,KsuRch|vt|h.aHtethree,re||e.|x|FistsdeKn,oRtes>th0e: aanredaLPneamlymticaf5uPnitcttihoennsfoofllto1w,.s.t.h,attk,Facin[dt1+···+tk], i=1,2 Frobenius norm for matrices. It is clear that using any other ∂k standard matrix norm would yield an equivalent definition. If F [p](t +···+t )| = nl = 1, then c is just a vector valued generating series. It is ∂t1,...,∂tk ci 1 k t1=···=tk=0 (24) easytoseethatc isa generatingseriesaccordingtotheabove =Gi,k(p1,...,pk) definition, if and only if each entry of c is a generatingseries If ∀p ∈ P : F [p] = F [p], then ∂k F [p](t + in the sense of [41]. c1 c2 ∂t1,...,∂tk c1 1 Hence,wecandefineFc :Cp(R+0,Rnp)→Cp(R+0,Rnr×nl) ··· + tk)|t1=···=tk=tk+1=0 = ∂t1,∂..k.,∂tkFc2[p](t1 + ··· + as Fc[u](t)= v∈S(Inp)c(v)(wv⋄p)(t,0), where the infinite tk)|t1=···=tk=0, for any piecewise-constant p ∈ P, and summation is understo0od in the usually topology of matrices. hence by (24), G (p ,...,p ) = G (p ,...,p ) for all 1,k 1 k 2,k 1 k Clearly, if ci,jPdenotes the (i,j)th component of c, ci,j is a p1,...,pk ∈P, gtheene(ria,tjin)tghseenriterys ionftthheecmlaastsriicxalFsce[pn]s(et)a,nid=Fc1i,,j.[p.].(,tn)re,qjua=ls G2T,ko(pc1o,n.c.l.u,dpek)thefoprroalolf,pw1,e..s.h,opwk t∈hatPG, 1a,nkd(p1f,o.r..a,llpkk) >= 1,...,nl. 
0 implies that c_1 = c_2. Notice that c_i(q_1 ··· q_k) = G_{i,k}(e_{q_1}, ..., e_{q_k}) for all q_1, ..., q_k ∈ I^{n_p}_0, where e_0 = 0 and e_i is the i-th standard basis vector of R^{n_p}. Consider an affine basis B = {b_0, ..., b_{n_p}} ⊆ P of R^{n_p}. Then e_i = Σ_{j=0}^{n_p} λ_{i,j} b_j for some λ_{i,j} ∈ R, j ∈ I^{n_p}_0, such that Σ_{j=0}^{n_p} λ_{i,j} = 1 for all i ∈ I^{n_p}_0. Since G_{i,k} is affine in each of its arguments, it follows that G_{i,k}(e_{q_1}, ..., e_{q_k}) = Σ_{l_1, ..., l_k = 0}^{n_p} λ_{q_1,l_1} ··· λ_{q_k,l_k} G_{i,k}(b_{l_1}, ..., b_{l_k}), i = 1, 2. Since G_{1,k}(b_{q_1}, ..., b_{q_k}) = G_{2,k}(b_{q_1}, ..., b_{q_k}) for all b_{q_1}, ..., b_{q_k} ∈ B ⊆ P and all k > 0, it follows that G_{1,k}(e_{q_1}, ..., e_{q_k}) = G_{2,k}(e_{q_1}, ..., e_{q_k}) for all q_1, ..., q_k ∈ I^{n_p}_0, and hence c_1 = c_2.
Although generating series were originally defined for CT, by a slight abuse of terminology we will use them for the DT case as well; this will allow us to keep the terminology uniform. That is, a function c : S(I^{n_p}_0) → R^{n_r × n_l} will be called a generating series, and the input-output function generated by c is the map F_c : (R^{n_p})^N → Y^N = (R^{n_r × n_l})^N defined by F_c(p)(t) = Σ_{v ∈ S(I^{n_p}_0)} c(v) (w_v ⋄ p)(t−1, 0). Following the established tradition of [41], [44], we again denote F_c(p) by F_c[p].