City University of New York (CUNY)
CUNY Academic Works
Dissertations and Theses
City College of New York
2014

NEUROEVOLUTION AND AN APPLICATION OF AN AGENT BASED MODEL FOR FINANCIAL MARKET

Anil Yaman
CUNY City College

More information about this work at: https://academicworks.cuny.edu/cc_etds_theses/648
Discover additional works at: https://academicworks.cuny.edu
This work is made publicly available by the City University of New York (CUNY). Contact: [email protected]

NEUROEVOLUTION AND AN APPLICATION OF AN AGENT BASED MODEL FOR FINANCIAL MARKET

Submitted in partial fulfillment of the requirement for the degree Master of Science (Computer) at The City College of New York of the City University of New York

by Anil Yaman
May 2014

Approved:
Associate Professor Stephen Lucci, Thesis Advisor, Department of Computer Science
Professor Akira Kawaguchi, Chairman, Department of Computer Science

Abstract

Market prediction is one of the most difficult problems for the machine learning community. Even though successful trading strategies can be found for the training data using various optimization methods, these strategies usually do not perform as well as expected on the test data. The selection of the correct strategy therefore becomes problematic. In this study, we propose an evolutionary algorithm that produces a variety of trader agents, ensuring that the trading strategies they use are different. We argue that, because selecting the correct strategy is difficult, a variety of agents can be used simultaneously in order to reduce risk.
We simulate trader agents on real market data and attempt to optimize their actions. Agent decisions are based on Echo State Networks. The agents take various market indicators as inputs and produce an action such as buy or sell. We optimize the parameters of the Echo State Networks using evolutionary algorithms.

Acknowledgement

I would like to express my gratitude to my thesis advisor, Assoc. Prof. Stephen Lucci, for his support and guidance. He has been a great inspiration to me in his class and during this thesis. Without his supervision, this thesis would not have been possible. Special thanks to Prof. Izidor Gertner for encouraging me to achieve my academic goals. I am deeply grateful for his valuable advice. I would also like to thank all faculty and staff members in the Computer Science Department. It has been a privilege studying at the City College of New York. Finally, I thank my family for constantly supporting me throughout my studies.

Table of Contents

LIST OF FIGURES
LIST OF TABLES
1. INTRODUCTION
   1.1. MOTIVATION
   1.2. OBJECTIVES
   1.3. ORGANIZATION OF THIS THESIS
2. EVOLUTIONARY COMPUTATION
   2.1. GENETIC ALGORITHMS
   2.2. EVOLUTION STRATEGIES
   2.3. EVOLUTIONARY PROGRAMMING
   2.4. GENETIC PROGRAMMING
3. ARTIFICIAL NEURAL NETWORKS
   3.1. FEEDFORWARD NEURAL NETWORKS
   3.2. ECHO STATE NETWORKS
4. NEUROEVOLUTION
   4.1. DIRECT ENCODING
   4.2. INDIRECT ENCODING
5. FOREIGN EXCHANGE MARKET
   5.1. PREDICTABILITY OF AN EXCHANGE MARKET
      5.1.1. EFFICIENT MARKET HYPOTHESIS
      5.1.2. RANDOM WALK THEORY
   5.2. PREDICTING THE MARKET
      5.2.1. FUNDAMENTAL ANALYSIS
      5.2.2. TECHNICAL ANALYSIS
6. AGENT BASED MODEL OF THE EXCHANGE MARKET
   6.1. ECHO STATE NETWORK MODEL
   6.2. THE ALGORITHM
   6.3. RESULTS
7. CONCLUSIONS
8. REFERENCES

List of Figures

Figure 2.1: Outline of an evolutionary algorithm (Bäck & Schwefel, 1993).
Figure 2.2: Crossover in Genetic Algorithms.
Figure 2.3: Mutation in Genetic Algorithms.
Figure 2.4: Crossover in GP. F = {+, -, *, /}, T = {u, v, x, y, z}.
Figure 2.5: Mutation in GP.
Figure 3.1: The McCulloch and Pitts neuron (a); x₁ and x₂ are inputs, y is the output, and f(g) is the activation function. The activation function (step function) f and threshold θ (b).
Figure 3.2: Generalized model of a neuron (a). Ramp activation function (b). Step activation function (c). Hyperbolic tangent activation function (d). Sigmoid activation function (e).
Figure 3.3: A single-layer feedforward network. Network architecture (a). Connection weight matrix (b).
Figure 3.4: The architecture of a multi-layer feedforward neural network with one hidden layer.
Figure 3.5: An Echo State Network with all possible connections (from Jaeger, 2001).
Figure 4.1: An outline of an evolutionary algorithm for evolving artificial neural networks (Yao, 1999).
Figure 4.2: An example of a binary representation of an artificial neural network using direct encoding. An artificial neural network is on the left (a) and its genetic representation is on the right (b). Each connection is represented using 3 bits.
Figure 4.3: An example of a real-valued representation of an ANN using direct encoding. An ANN (a) and its real-valued genetic representation (b).
Figure 4.4: An example of a binary representation of the architecture of an ANN using direct encoding. An ANN is illustrated on the left (a), the connection matrix of this ANN is shown in the middle (b), and the genetic codes for feedforward (c) and recurrent (d) connections are given on the right (Yao, 1999).
Figure 4.5: An example of a genetic representation of an ANN using NEAT. The genetic code consists of 5 genes; however, gene number 2 is not expressed in the phenotype (Redrawn from Stanley & Miikkulainen, 2002).
Figure 4.6: Mutation operators in NEAT. Connection mutation is illustrated on top.
The offspring (b) is generated from the parent (a) by adding a connection between neurons 3 and 4. The offspring (d) is generated from the parent (c) by adding a new node, disabling the old connection, and generating two new connections between the old nodes and the new node (Redrawn from Stanley & Miikkulainen, 2002).
Figure 4.7: An example of developmental rules that construct the connectivity matrix (Redrawn from Kitano, 1990).
Figure 4.8: Construction of the ANN using the rewriting rules given in Figure 4.7.
Figure 4.9: The results of the developmental process of the evolved neural network. The network and branches on the left, the network after non-connecting branches are eliminated in the center, and the functional network on the right (from Nolfi, Miglino, & Parisi, 1994).
Figure 6.1: The function for initialization of the connection weights.
Figure 6.2: The architecture of the echo state network used in the design of agents. The connections that are fixed are drawn as solid lines, and the connections that are optimized are displayed as dotted lines.
Figure 6.3: EUR/USD 1-hour chart starting from 01/01/2013 11:00 pm. The agents are first trained and tested, then the window is shifted by 100 samples.
Figure 6.4: An evolutionary algorithm that optimizes the connection weights of the agents.
Figure 6.5: Direct mapping between the genotype and the phenotype.
The first 65 values of the attribute vectors are directly mapped to the W_out vector. The parameters a₆₆ and a₆₇ are optimized for maximum loss and profit values.
Figure 6.6: The evaluation of the agents. The function takes the attribute vector of an agent and maps it into the connection matrix W_out. It then evaluates the agents and sends back the performance values.
Figure 6.7: The action functions of the algorithm. The Buy function is shown in section (a), the Sell function in section (b), and the CloseTrade function in section (c).
Figure 6.8: The steps of the algorithm. For a given time window, the successful agents found during the training step are added to the agent pool. The best 10 agents are selected from the agent pool and tested on the test data.
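The approach summarized in the abstract and in Figures 6.2–6.5 — a fixed random reservoir whose readout weights W_out are evolved, with the readout thresholded into a trading action — can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the network sizes, spectral-radius scaling, action thresholds, and the evolutionary loop's parameters are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES = 5, 50  # hypothetical sizes: indicator inputs, reservoir units

# Fixed random input and reservoir weights; per the abstract, only the
# readout W_out is optimized, so these stay frozen.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W_res = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def esn_action(w_out, indicator_seq):
    """Drive the reservoir with a sequence of indicator vectors and
    threshold the final readout into a discrete trading action."""
    x = np.zeros(N_RES)
    for u in indicator_seq:
        x = np.tanh(W_in @ u + W_res @ x)  # reservoir state update
    y = w_out @ x
    return "buy" if y > 0.5 else "sell" if y < -0.5 else "hold"

def evolve(fitness, pop_size=20, gens=30, sigma=0.1):
    """Toy elitist evolutionary loop over readout weight vectors:
    keep the best half, refill with Gaussian-mutated copies."""
    pop = [rng.normal(0.0, 1.0, N_RES) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [p + rng.normal(0.0, sigma, N_RES) for p in parents]
    return max(pop, key=fitness)
```

In the thesis, fitness would be an agent's simulated trading performance over a training window; here any scalar objective over a candidate W_out vector can be plugged in.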