
Modern Control Engineering

282 Pages·1972·11.19 MB·English


Pergamon Unified Engineering Series

GENERAL EDITORS
Thomas F. Irvine, Jr., State University of New York at Stony Brook
James P. Hartnett, University of Illinois at Chicago Circle

EDITORS
William F. Hughes, Carnegie-Mellon University
Arthur T. Murphy, Widener College
William H. Davenport, Harvey Mudd College
Daniel Rosenthal, University of California, Los Angeles

SECTIONS
Continuous Media Section
Engineering Design Section
Engineering Systems Section
Humanities and Social Sciences Section
Information Dynamics Section
Materials Engineering Section
Engineering Laboratory Section

Modern Control Engineering
Maxwell Noton, University of Waterloo, Ontario, Canada

Pergamon Press Inc.
New York · Toronto · Oxford · Sydney · Braunschweig

PERGAMON PRESS INC., Maxwell House, Fairview Park, Elmsford, N.Y. 10523
PERGAMON OF CANADA LTD., 207 Queen's Quay West, Toronto 117, Ontario
PERGAMON PRESS LTD., Headington Hill Hall, Oxford
PERGAMON PRESS (AUST.) PTY. LTD., Rushcutters Bay, Sydney, N.S.W.
VIEWEG & SOHN GmbH, Burgplatz 1, Braunschweig

Copyright © 1972, Pergamon Press Inc.
Library of Congress Catalog Card No. 77-181056

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form, or by any means, electronic, mechanical, photocopying, recording or otherwise, without prior permission of Pergamon Press Inc.

Printed in the United States of America
08 016820 5

Preface

At the time of writing, control theory as taught at the undergraduate level of most North American universities is dominated by the study of linear feedback systems, usually single-loop. Such theory was well developed by the 1950's, including the extension to sampled-data systems; it is appropriate and usually adequate for the analysis and synthesis of servomechanisms and elementary regulators. Emphasis is placed on stability, either by frequency response methods or by the root locus for the time domain.
The continued inclusion of such material in undergraduate courses is assured, if for no other reason than the importance of the Principle of Negative Feedback in applied science. However, the limitations of classical control theory are now recognized, for example in the synthesis of nonlinear control systems, multivariable systems (multi-input and multi-output), and the recognition of other performance criteria such as profit.

Since 1960 there have been two developments which together have changed dramatically the situation with respect to the control of complex industrial processes, aerospace and socio-economic systems. On the one hand digital computers have come into widespread use, both for general purpose scientific computing and as on-line computers. Digital computers have acted as a catalyst for the second development, namely modern control theory, which is usually assumed now to include the related areas of system identification and parameter estimation. It started with the application of Bellman's dynamic programming, a revival of the classical calculus of variations and some of Kalman's early work on linear systems using the state space formulation. The developments since 1960 have been almost explosive and their full impact on applications is far from realized. Modern control theory is therefore likely to be introduced at progressively earlier stages in applied science curricula.

This book has been written primarily for use as a first level graduate course, although it seems possible that some of the material may appear in the future in senior elective courses. Given that the student has adequate preparation in matrix algebra, numerical methods and an introduction to state variable characterization, the book can be used for a one-semester course by excluding, say, Chapter 5 on stochastic control and estimation.
On the other hand, for those students or readers who need revision or reinforcement in the above prerequisites, an optional supplement has been provided. The inclusion of this preparatory material plus Chapter 5 would then correspond more to a two-semester course. By providing readers with this option it is hoped that the book will also be used by engineers for self-study, as its predecessor (written in 1964) was used, largely in the U.K. and Western Europe. The present text was incidentally initially conceived as a revision of "Introduction to Variational Methods in Control Engineering", but in fact little more than five per cent of the original material remains.

Except for some results quoted from the earlier text, most of the illustrative computing results were obtained by Susan Oates using the University of Waterloo IBM System 360 Model 75 computer. The "Powell-Zangwill" computer program was however provided by the author's colleague Professor J. Vlach. The original manuscript (written in 1970) was corrected, modified and amplified after being used in a tutorial manner with several graduate students. However, the author is especially indebted to Dr. W. A. Brown of Monash University (Australia) for his careful and helpful review. Finally the author acknowledges the painstaking efforts of Margaret Adlington in typing most of the manuscript.

Waterloo, Ontario
M. NOTON

1 State Representation of Dynamical Systems

1.1 STATE EQUATIONS

A cornerstone of control theory, as developed since about 1960, has been the characterization of dynamical systems by means of state equations instead of transfer functions or frequency response functions. Whereas the latter are of limited applicability to nonlinear systems, state equations and state variables are equally appropriate to linear or nonlinear systems.
They will be immediately recognizable to the student as no more than the dependent variables of differential equations which define the behaviour of a dynamical system, at least if those equations are written in a certain form. As an example consider the motion of a rocket near the earth but free of atmospheric forces. Assume that it has moved eastbound through an angle of longitude θ and is at radius r with reference to the centre of the earth. Then the motion is governed by the following differential equations:

    d²r/dt² = S sin β + r(dθ/dt + Ω)² − g(r₀/r)²    (1.1)

    r d²θ/dt² = S cos β − 2(dr/dt)(dθ/dt + Ω)    (1.2)

S is the dimensionless thrust of a rocket motor at angle β to the horizontal, r₀ the radius of the earth, g the acceleration due to gravity at the surface and Ω the angular velocity of the earth. The differential equations are nonlinear and present some difficulty for analytic solution. If they were to be integrated numerically, say by means of a digital computer, then the student is doubtless aware that some rearrangement is necessary, to express the two second order equations as four first order equations. Let us make certain substitutions in anticipation of future needs. Put x₁ = dr/dt, x₂ = dθ/dt, x₃ = r and x₄ = θ, yielding

    dx₁/dt = S sin β + x₃(x₂ + Ω)² − g(r₀/x₃)²
    dx₂/dt = (S/x₃) cos β − 2x₁(x₂ + Ω)/x₃
    dx₃/dt = x₁                                    (1.3)
    dx₄/dt = x₂

The equations could now be integrated numerically by a standard computer subroutine given the initial conditions on (x₁, x₂, x₃, x₄) and the thrust angle β as a function of time. Equations (1.3) constitute the state equations of the system; x₁, x₂, x₃ and x₄ are the state variables or the components of the state vector† x and, assuming that in this example the rocket thrust angle β can be adjusted to vary the trajectory, β is the control variable.
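As a sketch of such a "standard computer subroutine", the fragment below integrates Eqs. (1.3) with a classical fourth-order Runge-Kutta step. The numerical constants and the coasting-flight check are illustrative assumptions of this sketch, not values from the text; β(t) is simply held constant here, whereas a real trajectory computation would supply it as a scheduled function of time.

```python
import numpy as np

# Illustrative (assumed) constants in km-second units; not from the text.
g = 9.81e-3        # acceleration due to gravity at the surface, km/s^2
r0 = 6371.0        # radius of the earth, km
OMEGA = 7.2921e-5  # angular velocity of the earth, rad/s

def f(x, beta, S):
    """Right-hand side of the state equations (1.3); x = (x1, x2, x3, x4)."""
    x1, x2, x3, x4 = x
    return np.array([
        S*np.sin(beta) + x3*(x2 + OMEGA)**2 - g*(r0/x3)**2,  # dx1/dt
        (S*np.cos(beta) - 2*x1*(x2 + OMEGA))/x3,             # dx2/dt
        x1,                                                  # dx3/dt
        x2,                                                  # dx4/dt
    ])

def rk4_step(x, beta, S, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(x, beta, S)
    k2 = f(x + h/2*k1, beta, S)
    k3 = f(x + h/2*k2, beta, S)
    k4 = f(x + h*k3, beta, S)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Coasting flight (S = 0) started on a circular orbit at radius r0:
# choosing x2 + OMEGA = sqrt(g/r0) makes dx1/dt vanish in (1.3), so the
# radius should remain constant under integration.
x = np.array([0.0, np.sqrt(g/r0) - OMEGA, r0, 0.0])
for _ in range(600):
    x = rk4_step(x, beta=0.0, S=0.0, h=1.0)
```

The circular-orbit initial condition gives an equilibrium of the first three state equations, which makes a convenient self-check that the right-hand side has been transcribed correctly.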
The importance of the state vector is that, in the case of a deterministic system free of all unpredictable random effects, all future states are completely determined by an initial state and inputs to the system such as known disturbances or control variables.

By means of an example let us examine the relationship between the transfer function of a linear system and the state equations. An inductor L, resistor R and capacitor C are connected in series and excited by a voltage source u(t) with all initial conditions equal to zero. If the charge on the capacitor is q, it is elementary that summation of voltages around the circuit leads to

    L d²q/dt² + R dq/dt + q/C = u    (1.4)

Substitute q = Cv for the voltage v across the capacitor and, by taking Laplace Transforms, we obtain the transfer function, i.e. the ratio of the Laplace Transform of v to the Laplace Transform of u. Thus

    v(s)/u(s) = 1/(1 + RCs + LCs²)    (1.5)

Alternatively, in Eq. (1.4) with the substitution q = Cv,

    LC d²v/dt² + RC dv/dt + v = u    (1.6)

Put x₁ = v and x₂ = dv/dt and Eq. (1.6) can be expressed as two first order differential equations, i.e. two state equations

    dx₁/dt = x₂
    dx₂/dt = (u − x₁ − RCx₂)/(LC)    (1.7)

Unfortunately the definition of the state variables was not unique. Thus, if we put x₁ = v but x₂ = RC dv/dt + v then, after a little manipulation, we obtain different state equations, namely

    dx₁/dt = (x₂ − x₁)/(RC)
    dx₂/dt = −(1/RC)x₁ + (1/RC − R/L)x₂ + (R/L)u    (1.8)

The above example with a linear system has illustrated that, whereas the transfer function between two variables is unique, the choice of state variables and consequent form of the state equations is not.

†Consistent with modern practice, vectors are not underscored or indicated by bold type. The reader should be ready to interpret any unsubscripted lower case letter as a vector and an upper case letter as a matrix, depending on the text.
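The non-uniqueness can be checked numerically. The sketch below (with assumed unit component values) integrates both sets of state equations, (1.7) and (1.8), for a unit step input and also evaluates the transfer function implied by (1.7) as C(sI − A)⁻¹B with output row C = [1 0] picking out x₁ = v; the output matrix is an assumption of this illustration, since output equations have not yet been introduced in the text.

```python
import numpy as np

R, L, C = 1.0, 1.0, 1.0   # assumed unit component values for illustration

def f7(x, u):
    """State equations (1.7): x = (v, dv/dt)."""
    return np.array([x[1], (u - x[0] - R*C*x[1])/(L*C)])

def f8(x, u):
    """State equations (1.8): x = (v, RC dv/dt + v)."""
    return np.array([(x[1] - x[0])/(R*C),
                     -(1/(R*C))*x[0] + (1/(R*C) - R/L)*x[1] + (R/L)*u])

def rk4(f, x, u, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(x, u); k2 = f(x + h/2*k1, u)
    k3 = f(x + h/2*k2, u); k4 = f(x + h*k3, u)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Unit step input, zero initial conditions, integrate to t = 10
xa = np.zeros(2)   # representation (1.7)
xb = np.zeros(2)   # representation (1.8)
for _ in range(2000):
    xa = rk4(f7, xa, 1.0, 0.005)
    xb = rk4(f8, xb, 1.0, 0.005)
# xa[0] and xb[0] are the same capacitor voltage v; the second components
# differ because the two representations use different state variables.

# Transfer function implied by (1.7): dx/dt = A x + B u with v = [1 0] x.
A7 = np.array([[0.0, 1.0], [-1.0/(L*C), -R/L]])
B7 = np.array([[0.0], [1.0/(L*C)]])
Cout = np.array([[1.0, 0.0]])
s = 1j*2.0
H_state = (Cout @ np.linalg.inv(s*np.eye(2) - A7) @ B7)[0, 0]
H_direct = 1.0/(1.0 + R*C*s + L*C*s**2)   # Eq. (1.5)
```

Both realizations reproduce the same voltage response, and C(sI − A)⁻¹B recovers Eq. (1.5) exactly, illustrating that the transfer function is unique even though the state equations are not.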
At this stage some authors may take the reader through several examples of deriving state equations from transfer functions, but such an exercise is deliberately avoided here, apart from noting that it is not a unique procedure. The differential equations of a system are more fundamental than transfer functions and are of course equally applicable to linear and nonlinear cases. Given the differential equations we proceed along one route (transfer functions) for the application of classical control theory or via another route (state equations) anticipating modern control theory.

1.2 LINEAR STATE EQUATIONS

Referring again to the above example of the elementary electric circuit, put L, C and R all equal to one unit. Then Eq. (1.7) in matrix form becomes

    dx/dt = [  0   1 ] x + [ 0 ] u    (1.9)
            [ −1  −1 ]     [ 1 ]

It is a special case of the general form for linear systems, viz.

    dx/dt = A(t)x + B(t)u    (1.10)

where x is an n-component vector, u is an m-component vector, A is an n × n matrix and B is an n × m matrix. The (t) after A and B in Eq. (1.10) is a reminder that the coefficients of the state variables may be functions of time.

Many important and powerful results can be deduced for linear systems, in contrast to the difficulties encountered with nonlinear systems, and the remainder of this chapter is concerned with certain relevant results of linear systems theory. Due to the tractability of linear systems, control algorithms are often derived to apply to linearized systems, for small perturbations about some nominal state. In fact, further consideration of the first example in Section 1.1 serves to illustrate how study of an equation of the type Eq. (1.10) may arise even though the state Eqs. (1.3) are nonlinear. Suppose it can be assumed that the rocket has been launched on approximately the desired trajectory and it is planned to vary β only slightly about a previously computed schedule in time β̄(t). Given the initial conditions, Eqs.
(1.3) are integrated with β = β̄(t) to generate the "standard trajectory" x̄(t) where x̄ = (x̄₁, x̄₂, x̄₃, x̄₄). Introduce

    Δxᵢ = xᵢ − x̄ᵢ,   i = 1, 2, 3, 4    (1.11)

where Δxᵢ(t) is considered as a small perturbation, and take a first order expansion of both sides of Eqs. (1.3) about x̄(t) and β̄(t), remembering that those equations are satisfied exactly by the latter. We obtain

    d(Δx₁)/dt = S cos β̄ · Δβ + (x̄₂ + Ω)²Δx₃ + 2x̄₃(x̄₂ + Ω)Δx₂ + 2g(r₀²/x̄₃³)Δx₃

    d(Δx₂)/dt = −(S/x̄₃) sin β̄ · Δβ − (S/x̄₃²) cos β̄ · Δx₃ − 2(x̄₂ + Ω)Δx₁/x̄₃
                − 2x̄₁Δx₂/x̄₃ + 2x̄₁(x̄₂ + Ω)Δx₃/x̄₃²    (1.12)

    d(Δx₃)/dt = Δx₁

    d(Δx₄)/dt = Δx₂

which in matrix notation becomes

               [ 0   a₁  a₂  0 ]      [ b₁ ]
    d(Δx)/dt = [ a₃  a₄  a₅  0 ] Δx + [ b₂ ] Δβ    (1.13)
               [ 1   0   0   0 ]      [ 0  ]
               [ 0   1   0   0 ]      [ 0  ]

where

    a₁ = 2x̄₃(x̄₂ + Ω)                        b₁ = S cos β̄
    a₂ = (x̄₂ + Ω)² + 2g r₀²/x̄₃³             b₂ = −(S/x̄₃) sin β̄
    a₃ = −2(x̄₂ + Ω)/x̄₃                                          (1.14)
    a₄ = −2x̄₁/x̄₃
    a₅ = 2x̄₁(x̄₂ + Ω)/x̄₃² − (S/x̄₃²) cos β̄

Thus perturbations about the standard trajectory satisfy an equation of the form Eq. (1.10), where A and B are functions of time evaluated on the standard trajectory.

1.3 FUNDAMENTAL MATRICES

We consider first the homogeneous part of Eq. (1.10)

    dx/dt = A(t)x    (1.15)

where x is an n-component vector. It can be shown(1.1) that if the real elements of A(t) are bounded in the interval [t₀, t_f] then a unique solution always exists, dependent on the initial state vector x(t₀). Furthermore, by using different initial vectors x(t₀), we can generate n and no more than n linearly independent solutions.
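The coefficients (1.14) can be spot-checked numerically: evaluate a central-difference Jacobian of the right-hand side of Eqs. (1.3) at a nominal point and compare it with the matrix of Eq. (1.13). The constants and the nominal state below are illustrative assumptions of this sketch, not a trajectory from the text.

```python
import numpy as np

# Illustrative (assumed) constants and nominal point; not from the text.
g, r0, OMEGA, S = 9.81e-3, 6371.0, 7.2921e-5, 2.0e-3
xbar = np.array([0.05, 1.1e-3, 6500.0, 0.3])   # nominal (x1, x2, x3, x4)
beta_bar = 0.1                                 # nominal thrust angle

def f(x, beta):
    """Right-hand side of the state equations (1.3)."""
    x1, x2, x3, x4 = x
    return np.array([
        S*np.sin(beta) + x3*(x2 + OMEGA)**2 - g*(r0/x3)**2,
        (S*np.cos(beta) - 2*x1*(x2 + OMEGA))/x3,
        x1,
        x2,
    ])

# Analytic coefficients from Eq. (1.14), assembled as in Eq. (1.13)
x1, x2, x3, _ = xbar
a1 = 2*x3*(x2 + OMEGA)
a2 = (x2 + OMEGA)**2 + 2*g*r0**2/x3**3
a3 = -2*(x2 + OMEGA)/x3
a4 = -2*x1/x3
a5 = 2*x1*(x2 + OMEGA)/x3**2 - (S/x3**2)*np.cos(beta_bar)
A_lin = np.array([[0.0, a1, a2, 0.0],
                  [a3,  a4, a5, 0.0],
                  [1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])

# Central-difference Jacobian of f with respect to x at (xbar, beta_bar)
eps = 1e-6
A_num = np.zeros((4, 4))
for j in range(4):
    dx = np.zeros(4); dx[j] = eps
    A_num[:, j] = (f(xbar + dx, beta_bar) - f(xbar - dx, beta_bar))/(2*eps)
```

Agreement of the two matrices confirms that (1.14) is the Jacobian of (1.3) evaluated on the standard trajectory.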
In other words, if xᵢ(t) (i = 1, 2, ..., n) are n such solutions and xₙ₊₁(t) is any other solution for some different initial vector, then real scalars aᵢ (i = 1, 2, ..., n + 1) can be found such that

    a₁x₁(t) + a₂x₂(t) + ⋯ + aₙ₊₁xₙ₊₁(t) = 0    (1.16)

for all times t in the closed interval [t₀, t_f].

Let us take any set of n linearly independent solution column vectors and arrange them side by side to form a time dependent n × n matrix, thus

    Φ(t) = [x₁(t)  x₂(t)  ⋯  xₙ(t)],   Φ(t₀) = Z    (1.17)

Φ(t) is called a fundamental matrix, corresponding to the initial condition matrix Z which is obtained by putting the initial vectors xᵢ(t₀) (i = 1, 2, ..., n) side by side. Now, because the xᵢ(t) are solution vectors,

    dxᵢ/dt = Axᵢ,   i = 1, 2, ..., n    (1.18)

or

    [dx₁/dt  dx₂/dt  ⋯  dxₙ/dt] = A[x₁  x₂  ⋯  xₙ]    (1.19)

Combination of Eqs. (1.17) and (1.19) leads to the important result

    dΦ(t)/dt = A(t)Φ(t)    (1.20)
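Equation (1.20) can be exercised numerically. Taking the constant A of Eq. (1.9) for illustration, the sketch below integrates the matrix equation from Z = I alongside a particular solution of Eq. (1.15) from an arbitrary initial state; with Z = I, uniqueness of solutions implies the standard property x(t) = Φ(t)x(t₀), which the test checks.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # constant A from Eq. (1.9)

def rk4(deriv, y, h):
    """One classical fourth-order Runge-Kutta step; y may be a vector or matrix."""
    k1 = deriv(y); k2 = deriv(y + h/2*k1)
    k3 = deriv(y + h/2*k2); k4 = deriv(y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

h, steps = 0.01, 500
Phi = np.eye(2)                  # Eq. (1.20) integrated from Z = I
x0 = np.array([1.0, -0.5])       # arbitrary (assumed) initial state
x = x0.copy()                    # Eq. (1.15) integrated from x(t0) = x0
for _ in range(steps):
    Phi = rk4(lambda M: A @ M, Phi, h)
    x = rk4(lambda v: A @ v, x, h)
# With Z = I, every solution of dx/dt = Ax satisfies x(t) = Phi(t) x(t0),
# and the columns of Phi remain linearly independent (det Phi != 0).
```

The nonvanishing determinant of Φ(t) reflects the fact that the n chosen solutions stay linearly independent for all t.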
