Artificial Neural Networks for Modelling and Control of Non-Linear Systems PDF

241 Pages·1996·5.53 MB·English

Preview Artificial Neural Networks for Modelling and Control of Non-Linear Systems

ARTIFICIAL NEURAL NETWORKS FOR MODELLING AND CONTROL OF NON-LINEAR SYSTEMS

by Johan A. K. Suykens, Joos P. L. Vandewalle, Bart L. R. De Moor

SPRINGER-SCIENCE+BUSINESS MEDIA, B.V.

A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN 978-1-4419-5158-8
ISBN 978-1-4757-2493-6 (eBook)
DOI 10.1007/978-1-4757-2493-6
Printed on acid-free paper.

All Rights Reserved
© 1996 Springer Science+Business Media Dordrecht
Originally published by Kluwer Academic Publishers in 1996
Softcover reprint of the hardcover 1st edition 1996
No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.

Contents

Preface  ix
Notation  xi

1 Introduction  1
  1.1 Neural information processing systems  1
  1.2 ANNs for modelling and control  5
  1.3 Chapter by chapter overview  8
  1.4 Contributions  15

2 Artificial neural networks: architectures and learning rules  19
  2.1 Basic neural network architectures  19
  2.2 Universal approximation theorems  23
    2.2.1 Multilayer perceptrons  23
    2.2.2 Radial basis function networks  27
  2.3 Classical paradigms of learning  28
    2.3.1 Backpropagation  29
    2.3.2 RBF networks  33
  2.4 Conclusion  35

3 Nonlinear system identification using neural networks  37
  3.1 From linear to nonlinear dynamical models  38
  3.2 Parametrization by ANNs  39
    3.2.1 Input/output models  39
    3.2.2 Neural state space models  41
    3.2.3 Identifiability  43
  3.3 Learning algorithms  45
    3.3.1 Feedforward network related models  46
      3.3.1.1 Backpropagation algorithm  46
      3.3.1.2 Prediction error algorithms  46
      3.3.1.3 Extended Kalman filtering  48
    3.3.2 Recurrent network related models  50
      3.3.2.1 Dynamic backpropagation  50
      3.3.2.2 Extended Kalman filtering  54
  3.4 Elements from nonlinear optimization theory  55
  3.5 Aspects of model validation, pruning and regularization  58
  3.6 Neural network models as uncertain linear systems  61
    3.6.1 Convex polytope  62
    3.6.2 LFT representation  65
  3.7 Examples  68
    3.7.1 Some challenging examples from the literature  68
    3.7.2 Simulated nonlinear system with hysteresis  69
    3.7.3 Identification of a glass furnace  75
    3.7.4 Identifying n-double scrolls  77
  3.8 Conclusion  82

4 Neural networks for control  83
  4.1 Neural control strategies  83
    4.1.1 Direct versus indirect adaptive methods  83
    4.1.2 Reinforcement learning  85
    4.1.3 Neural optimal control  87
    4.1.4 Internal model control and model predictive control  88
  4.2 Neural optimal control  90
    4.2.1 The N-stage optimal control problem  90
    4.2.2 Neural optimal control: full state information case  92
    4.2.3 Stabilization problem: full static state feedback  92
    4.2.4 Tracking problem: the LISP principle  94
    4.2.5 Dynamic backpropagation  95
    4.2.6 Imposing constraints from linear control theory  96
      4.2.6.1 Static feedback using feedforward nets  97
      4.2.6.2 Dynamic feedback using recurrent nets  99
      4.2.6.3 Transition between equilibrium points  101
      4.2.6.4 Example: swinging up an inverted pendulum  104
      4.2.6.5 Example: swinging up a double inverted pendulum  111
  4.3 Conclusion  115

5 NLq Theory  117
  5.1 A neural state space model framework for neural control design  118
  5.2 NLq systems  122
  5.3 Global asymptotic stability criteria for NLqs  127
    5.3.1 Stability criteria  127
    5.3.2 Discrete time Lur'e problem  132
  5.4 Input/Output properties - l2 theory  134
    5.4.1 Equivalent representations for NLqs  134
    5.4.2 Main Theorems  136
  5.5 Robust performance problem  140
    5.5.1 Perturbed NLqs  140
    5.5.2 Connections with mu theory  145
  5.6 Stability analysis: formulation as LMI problems  147
  5.7 Neural control design  150
    5.7.1 Synthesis problem  151
    5.7.2 Non-convex nondifferentiable optimization  152
    5.7.3 A modified dynamic backpropagation algorithm  153
  5.8 Control design: some case studies  154
    5.8.1 A tracking example on diagonal scaling  154
    5.8.2 A collection of stabilization problems  157
    5.8.3 Mastering chaos  162
    5.8.4 Controlling nonlinear distortion in loudspeakers  164
  5.9 NLqs beyond control  168
    5.9.1 Generalized CNNs as NLqs  168
    5.9.2 LRGF networks as NLqs  172
  5.10 Conclusion  175

6 General conclusions and future work  177

A Generation of n-double scrolls  181
  A.1 A generalization of Chua's circuit  182
  A.2 n-double scrolls  185

B Fokker-Planck Learning Machine for Global Optimization  195
  B.1 Fokker-Planck equation for recursive stochastic algorithms  196
  B.2 Parametrization of the pdf by RBF networks  198
  B.3 FP machine: conceptual algorithm  200
  B.4 Examples  203
  B.5 Conclusions  205

C Proof of NLq Theorems  207

Bibliography  215
Index  233

Preface

The topic of this book is the use of artificial neural networks for modelling and control purposes. The relatively young field of neural control, which started approximately ten years ago with Barto's broomstick balancing experiments, has undergone quite a revolution in recent years. Many methods emerged, including optimal control, direct and indirect adaptive control, reinforcement learning, predictive control etc. Also for nonlinear system identification, many neural network models and learning algorithms appeared. Neural network architectures like multilayer perceptrons and radial basis function networks have been used in different model structures, and many on- or off-line learning algorithms exist, such as static and dynamic backpropagation, prediction error algorithms and extended Kalman filtering, to name a few. The abundance of methods is basically due to the fact that neural network models and control systems form just another class of nonlinear systems, and can of course be approached from many theoretical points of view. Hence, for newcomers interested in this area, and even for experienced researchers, it might be hard to get a fast and good overview of the field. The aim of this book is precisely to present both classical and new methods for nonlinear system identification and neural control in a straightforward way, with emphasis on the fundamental concepts. The book results from the first author's PhD thesis.

One major contribution is the so-called 'NLq theory', described in Chapter 5, which serves as a unifying framework for stability analysis and synthesis of nonlinear systems that contain linear operators and static nonlinear operators satisfying a sector condition. NLq systems are described by nonlinear state space equations with q layers and hence encompass most of the currently used feedforward and recurrent neural networks.
Using neural state space models, the theory makes it possible to design controllers based upon models identified from measured input/output data. It turns out that many problems arising in neural networks, systems and control can be considered as special cases of NLq systems. It is also shown by examples how different types of behaviour, ranging from globally asymptotically stable systems and systems with multiple equilibria to periodic and chaotic behaviour, can be mastered within NLq theory.

Acknowledgements

We thank L. Chua, P. Dewilde, A. Barbe and D. Bolle for participation in the jury of the thesis from which this book originates. The leading research work of L. Chua on chaos and cellular neural networks formed a continuous source of inspiration for the present work, as one can observe from the n-double scroll, generalized CNNs within NLq theory and the work on identifying and mastering chaotic behaviour. The summer course of S. Boyd at our university on 'convex optimization in control design' in 1992 was the start for studying neural control systems from the viewpoint of (post)modern control theory, finally leading to the development of NLq theory in this book. At this point we also want to thank L. El Ghaoui, P. Gahinet, P. Moylan, A. Tesi, P. Kennedy, P. Curran, M. Hasler, T. Roska and our colleagues J. David, L. Vandenberghe and C. Yi for stimulating discussions. Furthermore we thank all our SISTA colleagues for the pleasant atmosphere. In particular we want to mention here the people that have been working on neural networks: D. Thierens, J. Dehaene, Y. Moreau, J. Hao and S. Tan. We are also grateful to all our colleagues from the Interdisciplinary Center for Neural Networks of the K.U. Leuven, which is a forum for mathematicians, physicists, engineers and medical researchers to meet each other on a regular basis. From Philips Belgium we thank J. Van Ginderdeuren, C. Verbeke and L. Auwaerts for our common research work on reducing nonlinear distortion in loudspeakers. The structure of this book partially originated from a series of lectures for the 'Belgian Graduate School on Systems and Control' on 'Artificial neural networks with application to systems and control' in 1993.

The work reported in this book was supported by the Flemish Community through the Concerted Action Project 'Applicable Neural Networks' and by the framework of the Belgian Programme on Interuniversity Poles of Attraction, initiated by the Belgian State, Prime Minister's Office for Science, Technology and Culture (IUAP-17 and IUAP-50). We thank all the people that were involved in these frameworks for the many stimulating interactions.

Johan Suykens, Joos Vandewalle, Bart De Moor
K.U. Leuven, Belgium
Notation

Symbols

R^n (C^n)                  set of real (complex) column vectors
R^{n x m} (C^{n x m})      set of real (complex) n x m matrices
a_{ij}                     ij-th entry of matrix A (unless locally overruled)
A^T                        transpose of matrix A
A^*                        complex conjugate transpose of A
|.|                        absolute value of a scalar
||x||_2, x in R^n          2-norm of vector x, ||x||_2 = (sum_{i=1}^{n} |x_i|^2)^{1/2}
||x||_inf                  infinity-norm of vector x, ||x||_inf = max_i |x_i|
||x||_1                    1-norm of vector x, ||x||_1 = sum_{i=1}^{n} |x_i|
||x||_p                    p-norm of vector x, ||x||_p = (sum_{i=1}^{n} |x_i|^p)^{1/p}
||A||_2, A in R^{n x m}    induced 2-norm of matrix A, ||A||_2 = sigma_max(A)
||A||_inf                  induced infinity-norm of matrix A, ||A||_inf = max_i sum_{j=1}^{m} |a_{ij}|
||A||_1                    induced 1-norm of matrix A, ||A||_1 = max_j sum_{i=1}^{n} |a_{ij}|
sigma_i(A)                 i-th singular value of A
sigma_max(A)               maximal singular value of A, sigma_max(A) = (lambda_max(A^* A))^{1/2}
sigma_min(A)               minimal singular value of A
kappa(A)                   condition number of A, kappa(A) = ||A||_2 ||A^{-1}||_2
lambda_i(A)                i-th eigenvalue of A
rho(A)                     spectral radius of A, rho(A) = max_i |lambda_i(A)|
A > 0                      A positive definite
A > B                      A - B positive definite
[x; y]                     concatenation of vectors x and y (Matlab notation)
[A B; C D]                 matrix consisting of block rows [A B] and [C D]
A(i, :)                    i-th row of matrix A (Matlab notation)
A(:, j)                    j-th column of matrix A (Matlab notation)
I_n                        identity matrix of size n x n
0_{m x n}                  zero matrix of size m x n
l_2                        set of square summable sequences in C^n
||e||_2, e in l_2          l_2 norm of a sequence e, ||e||_2^2 = sum_{k=1}^{inf} ||e_k||_2^2
f(.; theta)                function f, parametrized by theta
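Most of the quantities in this notation list correspond directly to standard numerical linear algebra routines. The following is a small illustrative sketch, not taken from the book; the example data and the use of NumPy are assumptions made purely to check a few of the definitions numerically:

```python
import numpy as np

# Example data (arbitrary, for illustration only).
x = np.array([3.0, -4.0, 12.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Vector norms as defined in the notation list.
norm2 = np.sqrt(np.sum(np.abs(x) ** 2))      # ||x||_2
norm_inf = np.max(np.abs(x))                 # ||x||_inf
norm1 = np.sum(np.abs(x))                    # ||x||_1
assert np.isclose(norm2, np.linalg.norm(x, 2))
assert np.isclose(norm_inf, np.linalg.norm(x, np.inf))
assert np.isclose(norm1, np.linalg.norm(x, 1))

# Induced matrix 2-norm equals the maximal singular value sigma_max(A),
# i.e. the square root of the largest eigenvalue of A^* A.
sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending
assert np.isclose(sigma[0], np.linalg.norm(A, 2))
assert np.isclose(sigma[0], np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A))))

# Condition number kappa(A) = ||A||_2 * ||A^{-1}||_2 and spectral radius rho(A).
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
rho = np.max(np.abs(np.linalg.eigvals(A)))
print(norm2, norm_inf, norm1, sigma, kappa, rho)
```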

Description:
Artificial neural networks possess several properties that make them particularly attractive for applications to modelling and control of complex non-linear systems. Among these properties are their universal approximation ability, their parallel network structure and the availability of on- and off-line learning algorithms.
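The universal approximation ability mentioned above (treated formally in Chapter 2 of the book) can be made concrete with a minimal sketch: a single-hidden-layer perceptron trained by plain gradient descent to fit a simple nonlinear function. This example is not from the book; the target function, network size and learning rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth nonlinear map to approximate (arbitrary example).
u = np.linspace(-3, 3, 200).reshape(-1, 1)   # inputs, shape (N, 1)
y = np.sin(u) + 0.5 * u                      # targets, shape (N, 1)

# One hidden layer with tanh activations: yhat = W2 * tanh(W1 u + b1) + b2.
n_hidden = 20
W1 = rng.normal(0.0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.01
for epoch in range(5000):
    # Forward pass.
    h = np.tanh(u @ W1 + b1)                 # (N, n_hidden)
    yhat = h @ W2 + b2                       # (N, 1)
    err = yhat - y

    # Backpropagation of the mean squared error.
    N = u.shape[0]
    dyhat = 2.0 * err / N                    # d(MSE)/d(yhat)
    dW2 = h.T @ dyhat
    db2 = dyhat.sum(axis=0)
    dh = dyhat @ W2.T
    dpre = dh * (1.0 - h ** 2)               # tanh'(z) = 1 - tanh(z)^2
    dW1 = u.T @ dpre
    db1 = dpre.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {float(np.mean((yhat - y) ** 2)):.4f}")
```

Increasing the number of hidden units allows the approximation error on a compact interval to be made arbitrarily small, which is roughly what the universal approximation theorems discussed in the book guarantee about the representational capacity of such networks; they say nothing about whether gradient descent actually finds those weights.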