
Artificial Neural Networks for Optimization on Large-Scale Structural Acoustics Models PDF

119 Pages·2017·8.18 MB·English

Preview Artificial Neural Networks for Optimization on Large-Scale Structural Acoustics Models

IT 17 073
Examensarbete 30 hp (degree project, 30 credits), October 2017

Artificial Neural Networks for Optimization on Large-Scale Structural Acoustics Models

Desislava Stoyanova

Department of Information Technology, Uppsala University
Faculty of Science and Technology, UTH unit (http://www.teknat.uu.se/student)

Abstract

Optimization of transformer design is a challenging task which defines the dimensions of all the transformer parts, based on a given specification, in order to achieve better operating performance. The mechanical force distributions resulting from transformer operation make the structure vibrate at twice the network frequency and ultimately lead to noise emission from the outer surface of the tank. In this paper, an artificial intelligence technique is proposed for transformer noise data prediction as an optimized alternative to the finite-element method with multi-physics capabilities. The technique uses a feedforward artificial neural network and the backpropagation of error learning rule for predicting the noise data, along with a finite-element model for computing a training data set. The method considers two well-known backpropagation algorithms, Levenberg–Marquardt and Bayesian Regularization; while both appear to be extremely efficient in terms of execution time, Bayesian Regularization delivers considerably higher accuracy. The level of accuracy, together with the fast execution time, makes the application of artificial neural networks for finite-element model optimization a viable and efficient approach for industrial use.

Supervisor (Handledare): Anders Daneryd
Reviewer (Ämnesgranskare): Maya Neytcheva
Examiner (Examinator): Mats Daniels
Printed by (Tryckt av): Reprocentralen ITC
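As an illustration of the approach summarized in the abstract, the following is a minimal MATLAB sketch of training one feedforward network with Levenberg–Marquardt (trainlm) and one with Bayesian Regularization (trainbr); the appendix lists results obtained with trainlm, so MATLAB's Neural Network Toolbox is assumed here. The input data, target values, layer size and variable names are hypothetical placeholders, not the thesis setup; in the thesis the training set is computed with a finite-element model of the transformer.

% Minimal sketch (assumed setup, not the thesis code): compare the two
% backpropagation training algorithms named in the abstract on a small
% feedforward network. Requires MATLAB's Neural Network Toolbox.

rng(1);                                   % reproducible synthetic data

% Hypothetical training set: each column is one sample.
% inputs  : 3 design/operating parameters, 200 samples
% targets : one noise-related output quantity per sample
inputs  = rand(3, 200);
targets = sum(inputs.^2, 1) + 0.01*randn(1, 200);

% Feedforward network with one hidden layer of 10 neurons,
% trained with Levenberg-Marquardt backpropagation (trainlm).
netLM = feedforwardnet(10, 'trainlm');
netLM.trainParam.showWindow = false;      % suppress the training GUI
netLM = train(netLM, inputs, targets);

% Same architecture trained with Bayesian Regularization (trainbr),
% reported in the thesis as the more accurate of the two.
netBR = feedforwardnet(10, 'trainbr');
netBR.trainParam.showWindow = false;
netBR = train(netBR, inputs, targets);

% Compare mean squared error of the two trained networks.
mseLM = perform(netLM, targets, netLM(inputs));
mseBR = perform(netBR, targets, netBR(inputs));
fprintf('Levenberg-Marquardt MSE: %.3e\n', mseLM);
fprintf('Bayesian Regularization MSE: %.3e\n', mseBR);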
Acknowledgement

I would first like to thank my thesis supervisor Anders Daneryd of the Power Devices Department at ABB Corporate Research Center. The door of his office was always open whenever I ran into a trouble spot or had a question about my research or writing. He consistently allowed this paper to be my own work and inspired me to do what I am extremely passionate about, but steered me in the right direction whenever he thought I needed it.

I would also like to acknowledge Prof. Maya Neytcheva of the Department of Information Technology, Division of Scientific Computing, at Uppsala University as the second reader of this thesis, and I am gratefully indebted to her for her very valuable comments on this thesis.

I must also express my very profound gratitude to Camila Medina and Monika Miljanović for providing me with unfailing support and continuous encouragement throughout the process of researching and writing this thesis. Thank you for the breakfasts, coffee breaks and advice – you were always there with a word of encouragement or a listening ear. You should know that your support and encouragement were worth more than I can express on paper. This accomplishment would not have been possible without you.

Finally, I would like to thank my family for supporting me throughout the last couple of years, financially, practically and with moral support, especially my grandparents. Mum, you knew it would be a long and sometimes bumpy road, but you encouraged and supported me along the way. To dad, who was often in my thoughts on this journey – you are missed. Thank you!

Desislava Stoyanova
Västerås, June 2017

"By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."
– Eliezer Yudkowsky (Writer)

Contents

1 Introduction 1
  1.1 Motivation and Problem Statement 2
2 Artificial Neural Networks 5
  2.1 Components of the Neural Network 5
    2.1.1 Neuron Functions 6
  2.2 Neural Network Topologies 7
    2.2.1 Feedforward Neural Networks 7
    2.2.2 Recurrent Neural Networks 8
    2.2.3 Completely Linked Networks 9
    2.2.4 The Bias Neuron 9
    2.2.5 Number of Layers and Neurons 9
    2.2.6 Setting Weights 10
    2.2.7 Pruning 10
  2.3 Order in which the neuron activations are calculated 10
    2.3.1 Synchronous Activation 10
    2.3.2 Asynchronous Activation 11
  2.4 Communication with the world outside the network 11
  2.5 Learning and training samples 12
    2.5.1 Paradigms of Learning 12
    2.5.2 Training patterns and teaching input 13
    2.5.3 Learning curve and error measurement 14
    2.5.4 Gradient Optimization Procedures 15
    2.5.5 The Hebbian Learning Rule 17
  2.6 The perceptron, backpropagation and its variants 17
    2.6.1 Single Layer Perceptron (SLP) 18
    2.6.2 Multi Layer Perceptron (MLP) 20
    2.6.3 Backpropagation (BP) of error learning rule 21
    2.6.4 Resilient BP of error 24
    2.6.5 Extensions of BP besides the Resilient BP 26
  2.7 Radial Basis Functions 28
    2.7.1 Components and structure of an RBF network 29
    2.7.2 Processing of information in the RBF neurons 29
    2.7.3 Training an RBF network 30
    2.7.4 Comparison between MLPs and RBF networks 30
  2.8 Recurrent perceptron-like networks 31
    2.8.1 Jordan networks 31
    2.8.2 Elman networks 32
    2.8.3 Training recurrent perceptron-like networks 32
  2.9 Self-organizing feature maps 34
    2.9.1 Structure of a SOM 34
    2.9.2 Training of a self-organizing map 35
3 Related Work 39
  3.1 Top-oil temperature prediction with neural networks 39
    3.1.1 Feedforward neural network 41
    3.1.2 Elman recurrent neural network 43
    3.1.3 Performance, results and discussion 43
  3.2 Neural network usage in structural crack detection 45
4 3DoF Model of Vibrating System 47
  4.1 Description of the system 47
  4.2 Equations of motion 48
    4.2.1 Matrix form of equations of motion 48
  4.3 Train an artificial neural network 49
    4.3.1 Description of the training procedures 50
    4.3.2 Performance of the training procedures 50
  4.4 Further notes 55
5 Multimillion DoF Model 57
  5.1 Description of the data set 58
  5.2 ANN Training Results 58
  5.3 Reducing the size of the data set 64
  5.4 Stressing the data set 70
  5.5 Further notes 71
6 Conclusion 73
  6.1 Future Work 74
Bibliography i
A Tables iii
B Figures vi
  B.1 Related work vi
  B.2 ANN training results using trainlm x
  B.3 ANN training performance after stressing the data set values by 40% xv
  B.4 ANN training performance after stressing the data set values by 50% xvii

