L. Bull (Ed.)
Applications of Learning Classifier Systems
Springer
Berlin Heidelberg New York Hong Kong London Milano Paris Tokyo

Studies in Fuzziness and Soft Computing, Volume 150

Editor-in-chief: Prof. Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, ul. Newelska 6, 01-447 Warsaw, Poland. E-mail: [email protected]

Further volumes of this series can be found on our homepage: springeronline.com

Vol. 130. P.S. Nair, Uncertainty in Multi-Source Databases, 2003. ISBN 3-540-03242-8
Vol. 131. J.N. Mordeson, D.S. Malik, N. Kuroki, Fuzzy Semigroups, 2003. ISBN 3-540-03243-6
Vol. 132. Y. Xu, D. Ruan, K. Qin, J. Liu, Lattice-Valued Logic, 2003. ISBN 3-540-40175-X
Vol. 133. Z.-Q. Liu, J. Cai, R. Buse, Handwriting Recognition, 2003. ISBN 3-540-40177-6
Vol. 134. V.A. Niskanen, Soft Computing Methods in Human Sciences, 2004. ISBN 3-540-00466-1
Vol. 135. J.J. Buckley, Fuzzy Probabilities and Fuzzy Sets for Web Planning, 2004. ISBN 3-540-00473-4
Vol. 136. L. Wang (Ed.), Soft Computing in Communications, 2004. ISBN 3-540-40575-5
Vol. 137. V. Loia, M. Nikravesh, L.A. Zadeh (Eds.), Fuzzy Logic and the Internet, 2004. ISBN 3-540-20180-7
Vol. 138. S. Sirmakessis (Ed.), Text Mining and its Applications, 2004. ISBN 3-540-20238-2
Vol. 139. M. Nikravesh, B. Azvine, R. Yager, L.A. Zadeh (Eds.), Enhancing the Power of the Internet, 2004. ISBN 3-540-20237-4
Vol. 140. A. Abraham, L.C. Jain, B.J. van der Zwaag (Eds.), Innovations in Intelligent Systems, 2004. ISBN 3-540-20265-X
Vol. 141. G.C. Onwubolu, B.V. Babu, New Optimization Techniques in Engineering, 2004. ISBN 3-540-20167-X
Vol. 142. M. Nikravesh, L.A. Zadeh, V. Korotkikh (Eds.), Fuzzy Partial Differential Equations and Relational Equations, 2004. ISBN 3-540-20322-2
Vol. 143. L. Rutkowski, New Soft Computing Techniques for System Modelling, Pattern Classification and Image Processing, 2004. ISBN 3-540-20584-5
Vol. 144. Z. Sun, G.R. Finnie, Intelligent Techniques in E-Commerce, 2004. ISBN 3-540-20518-7
Vol. 145. J. Gil-Aluja, Fuzzy Sets in the Management of Uncertainty, 2004. ISBN 3-540-20341-9
Vol. 146. J.A. Gamez, S. Moral, A. Salmeron (Eds.), Advances in Bayesian Networks, 2004. ISBN 3-540-20876-3
Vol. 147. K. Watanabe, M.M.A. Hashem, New Algorithms and their Applications to Evolutionary Robots, 2004. ISBN 3-540-20901-8
Vol. 148. C. Martin-Vide, V. Mitrana, G. Păun (Eds.), Formal Languages and Applications, 2004. ISBN 3-540-20907-7
Vol. 149. J.J. Buckley, Fuzzy Statistics, 2004. ISBN 3-540-21084-9

Larry Bull (Ed.)
Applications of Learning Classifier Systems
Springer

Larry Bull, University of the West of England, Faculty of Computing, Engineering & Mathematical Sciences, Bristol BS16 1QY, United Kingdom. E-mail: [email protected]

ISSN 1434-9922
ISBN 978-3-642-53559-8
ISBN 978-3-540-39925-4 (eBook)
DOI 10.1007/978-3-540-39925-4

Library of Congress Cataloging-in-Publication Data: A catalog record for this book is available from the Library of Congress.

Bibliographic information published by Die Deutsche Bibliothek. Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data is available on the Internet at http://dnb.ddb.de

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitations, broadcasting, reproduction on microfilm or in any other way, and storage in data banks.
Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2004
Softcover reprint of the hardcover 1st edition 2004

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Camera-ready by editor
Cover design: E. Kirchner, Springer-Verlag, Heidelberg
Printed on acid-free paper

Foreword

The field called Learning Classifier Systems is populated with romantics. Why shouldn't it be possible for computer programs to adapt, learn, and develop while interacting with their environments? In particular, why not systems that, like organic populations, contain competing, perhaps cooperating, entities evolving together? John Holland was one of the earliest scientists with this vision, at a time when so-called artificial intelligence was in its infancy and mainly concerned with preprogrammed systems that didn't learn. Instead, Holland envisaged systems that, like organisms, had sensors, took actions, and had rich self-generated internal structure and processing. In so doing he foresaw, and his work prefigured, such present-day domains as reinforcement learning and embedded agents that are now displacing the older "standard AI".

One focus was what Holland called "classifier systems": sets of competing rule-like "classifiers", each a hypothesis as to how best to react to some aspect of the environment - or to another rule. The system embracing such a rule "population" would explore its available actions and responses, rewarding and rating the active rules accordingly. Then "good" classifiers would be selected and reproduced, mutated and even crossed, à la Darwin and genetics, steadily and reliably increasing the system's ability to cope. This breathtaking vision - certainly as romantic as any in science, since it dares to relinquish control and leave the machine to its own devices, inspirations, and fate - was noticed by some, and it inspired them to try to work the vision out and test it.

The way was not easy, because Holland's vision was - and still is - vast. Always alert to progress in biology, economics, and everything to do with what he calls "complex adaptive systems", Holland saw phenomena and mechanisms he knew must be included in a realistic adaptive system. Still, the others bit, and over the past twenty years they gradually brought concreteness - via distillation, some revamping, and much experiment - to Holland's vision. The ongoing results are now called Learning Classifier Systems (just a name change - the learning was always there). There are several LCS versions, there is solid theory, and there is an increasingly challenging range of environments that can be adapted to. In addition, applications of LCS have begun to appear, where the systems' ability to respond rapidly to ongoing environment (problem, process) changes, and, through the rules, to capture environmental structure and "show the knowledge", are leading advantages.
Larry Bull, one of the field's best-known and most innovative researchers, has contributed significantly to our current understanding of LCS. In this book Dr. Bull puts together a fascinating selection of applications of LCS theory and know-how in such domains as data mining, modeling and optimization, and control. Now, as has occurred in other fields of science, Learning Classifier Systems - long in gestation - are beginning to benefit from the challenge of increasing contact with the real world.

Stewart W. Wilson

Contents

Learning Classifier Systems: A Brief Introduction (Bull)

Section 1 - Data Mining
Data Mining using Learning Classifier Systems (Barry, Holmes & Llorà)
NXCS Experts for Financial Time Series Forecasting (Armano)
Encouraging Compact Rulesets from XCS for Enhanced Data Mining (Dixon, Corne & Oates)

Section 2 - Modelling and Optimization
The Fighter Aircraft LCS: A Real-World, Machine Innovation Application (Smith, El-Fallah, Ravichandran, Mehra & Dike)
Traffic Balance using Learning Classifier Systems in an Agent-based Simulation (Hercog)
A Multi-Agent Model of the UK Market in Electricity Generation (Bagnall)
Exploring Organizational-Learning Oriented Classifier Systems in Real-World Problems (Takadama)

Section 3 - Control
Distributed Routing in Communication Networks using the Temporal Fuzzy Classifier System - a Study on Evolutionary Multi-Agent Control (Carse, Fogarty & Munro)
The Development of an Industrial Learning Classifier System for Data-Mining in a Steel Hot Strip Mill (Browne)
Application of Learning Classifier Systems to the On-Line Reconfiguration of Electric Power Distribution Networks (Vargas, Filho & Von Zuben)
Towards Distributed Adaptive Control for Road Traffic Junction Signals using Learning Classifier Systems (Bull, Sha'Aban, Tomlinson, Addison & Heydecker)

Bibliography of Real-World Classifier Systems Applications (Kovacs)

Learning Classifier Systems: A Brief Introduction

Larry Bull
Faculty of Computing, Engineering & Mathematical Sciences
University of the West of England
Bristol BS16 1QY, U.K.
[email protected]

[Learning] Classifier systems are a kind of rule-based system with general mechanisms for processing rules in parallel, for adaptive generation of new rules, and for testing the effectiveness of existing rules. These mechanisms make possible performance and learning without the "brittleness" characteristic of most expert systems in AI.
Holland et al., Induction, 1986

1. Introduction

Machine learning is synonymous with advanced computing and a growing body of work exists on the use of such techniques to solve real-world problems [e.g., Tsoukalas & Uhrig, 1997]. The complex and/or ill-understood nature of many problem domains, such as data mining or process control, has led to the need for technologies which can adapt to the task they face.
Learning Classifier Systems (LCS) [Holland, 1976] are a machine learning technique which combines reinforcement learning, evolutionary computing and other heuristics to produce adaptive systems. The subject of this book is the use of LCS for real-world applications.

Evolutionary computing techniques are search algorithms based on the mechanisms of natural selection and genetics. That is, they apply Darwin's principle of the survival of the fittest among computational structures with the stochastic processes of gene mutation, recombination, etc. Central to all evolutionary computing techniques is the idea of searching a problem space by evolving an initially random population of solutions such that better - or fitter - solutions are generated over time; the population of candidate solutions is seen to adapt to the problem. These techniques have been applied to a wide variety of domains such as optimization, design, classification, control and many others. A review of evolutionary computation is beyond the scope of this chapter, but a recent introduction can be found in [Eiben & Smith, 2003].

In LCS, the evolutionary computing technique usually works in conjunction with a reinforcement learning technique. Reinforcement learning is learning through trial and error via the reception of a numerical reward. The learner attempts to map state and action combinations to their utility, with the aim of being able to maximize future reward. Reward is usually received after a number of actions have been taken by the learner; reward is typically delayed. The approach is loosely analogous to what are known as secondary reinforcers in animal learning theory. These are stimuli which have become associated with something such as food or pain. Reinforcement learning has been applied to a wide variety of domains such as game playing, control, scheduling and many others. Again, a review of reinforcement learning is beyond the scope of this chapter and the reader is referred to [Sutton & Barto, 1998].

Learning Classifier Systems are rule-based systems, where the rules are usually in the traditional production system form of "IF state THEN action". Evolutionary computing techniques and heuristics are used to search the space of possible rules, whilst reinforcement learning techniques are used to assign utility to existing rules, thereby guiding the search for better rules.

The LCS formalism was introduced by John Holland [1976] and based around his better-known invention - the Genetic Algorithm (GA) [Holland, 1975]. A few years later, in collaboration with Judith Reitman, he presented the first implementation of an LCS [Holland & Reitman, 1978]. Holland then revised the framework to define what would become the standard system [Holland, 1980; 1986]. However, Holland's full system was somewhat complex and practical experience found it difficult to realize the envisaged behaviour/performance [e.g., Wilson & Goldberg, 1989]. As a consequence, Wilson presented the "zeroth-level" classifier system, ZCS [Wilson, 1994], which "keeps much of Holland's original framework but simplifies it to increase understandability and performance" [ibid.]. Wilson then introduced a form of LCS which altered the way in which rule fitness is calculated - XCS [Wilson, 1995]. In the following sections, each of these LCS is described in more detail as they form the basis of the contributions to this volume.
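As an aside, the evolutionary-computing idea described earlier in this introduction - an initially random population of candidate solutions from which fitter solutions are bred over time - can be illustrated with a minimal genetic-algorithm sketch in Python. This is not a system from this book; the toy fitness function, string length and parameter values are assumptions made purely for the example.

```python
# Minimal generational GA sketch: selection, crossover and mutation over a
# population of bit strings. The objective ("one-max") and parameters are
# illustrative assumptions; a real task supplies its own fitness measure.
import random

LENGTH, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 50, 100, 0.02

def fitness(individual):
    # Toy objective: number of 1s in the string.
    return sum(individual)

def tournament(population):
    # The fitter of two randomly chosen individuals is selected to reproduce.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # One-point recombination of two parents.
    point = random.randrange(1, LENGTH)
    return p1[:point] + p2[point:]

def mutate(individual):
    # Each bit flips with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in individual]

population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]
print(max(fitness(ind) for ind in population))  # best fitness found
```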
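On the reinforcement-learning side, the mapping from state-action pairs to utility estimates mentioned above can be sketched with a generic tabular value update. This is a sketch of the general idea only, not the credit-assignment scheme of any particular LCS in this volume; the environment interface, names and parameter values (alpha, gamma, epsilon) are assumptions.

```python
# Minimal tabular value-update sketch: estimates of future reward for
# (state, action) pairs are nudged toward observed reward plus the discounted
# value of the best next action, so that delayed reward propagates backwards.
from collections import defaultdict
import random

alpha = 0.2   # learning rate
gamma = 0.9   # discount factor for delayed reward
q = defaultdict(float)  # maps (state, action) -> estimated utility

def choose_action(state, actions, epsilon=0.1):
    """Trial-and-error selection: mostly exploit, occasionally explore."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state, actions):
    """One-step update toward reward plus discounted best next value."""
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])

# Toy usage on hypothetical states/actions, with reward only at the second step:
a = choose_action("s0", ["left", "right"])
update("s0", a, 0.0, "s1", ["left", "right"])
update("s1", "right", 1.0, "s2", ["left", "right"])
```

In an LCS the analogous utility estimate is attached to the rules themselves rather than to a state-action table, as the next section describes for Holland's system.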
A brief overview of the rest of the volume then follows.

2. Holland's LCS

Holland's Learning Classifier System receives a binary encoded input from its environment, placed on an internal working memory space - the blackboard-like message list (Figure 1). The system determines an appropriate response based on this input and performs the indicated action, usually altering the state of the environment. Desired behaviour is rewarded by providing a scalar reinforcement. Internally the system cycles through a sequence of performance, reinforcement and discovery on each discrete time-step.

The rule-base consists of a population of N condition-action rules or "classifiers". The rule condition and action are strings of characters from the ternary alphabet {0,1,#}. The # acts as a wildcard allowing generalisation, such that the rule condition 1#1 matches both the input 111 and the input 101. The # symbol also allows feature pass-through in the action, such that, in responding to the input 101, the rule IF 1#1 THEN 0#0 would produce the action 000. Both components are initialised randomly. Also associated with each classifier is a fitness scalar to indicate the "usefulness" of a rule in receiving external reward. This differs from Holland's original implementation [Holland & Reitman, 1978], where rule fitness was essentially based on the accuracy of its ability to predict external reward (after [Samuel, 1959]).
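The ternary representation just described is straightforward to express in code. The sketch below (function names are illustrative, not taken from the book) shows wildcard matching and feature pass-through, reproducing the examples from the text.

```python
# Minimal sketch of the ternary {0,1,#} rule representation: '#' in a condition
# is a wildcard, and '#' in an action passes the corresponding input bit through.

def matches(condition: str, state: str) -> bool:
    """A condition matches a state if every non-# character agrees."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

def effective_action(action: str, state: str) -> str:
    """Replace each '#' in the action with the corresponding input bit."""
    return ''.join(s if a == '#' else a for a, s in zip(action, state))

# The examples given in the text:
assert matches("1#1", "111") and matches("1#1", "101")
assert effective_action("0#0", "101") == "000"
```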
