
The Anchored Separated Representation for High Dimensional Problems by Matthew Tin Chun Li (PDF)

115 pages · 2015 · 1.4 MB · English

The Anchored Separated Representation for High Dimensional Problems

by

Matthew Tin Chun Li

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science
Graduate Department of Aerospace Engineering
University of Toronto

© Copyright 2015 by Matthew Tin Chun Li

Abstract

The Anchored Separated Representation for High Dimensional Problems
Matthew Tin Chun Li
Master of Applied Science
Graduate Department of Aerospace Engineering
University of Toronto
2015

Although topics in science and engineering that involve dimensions beyond x-y-z appear obscure, in truth numerous examples abound. For instance, uncertainty quantification requires approximating and integrating functions with many inputs, while the study of non-linear random vibrations involves PDEs whose dimensionality scales with the system's degrees of freedom. Such problems are numerically difficult since the application of classical mesh based techniques incurs computational demands that scale exponentially with the dimension; this is the curse of dimensionality. In this thesis we formulate two tractable numerical methods that circumvent this curse by assuming sparsity in the model representation. The first method, the anchored separated representation, is a provably minimal decomposition for functions that are sparse in a multiplicative sense. The second method is an iterative extension for approximating functions that are sparse in rank. We illustrate with numerical examples the application of both methods to a broad class of high dimensional problems.
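To make the scaling claim in the abstract concrete: a classical mesh with n points per coordinate requires n^d unknowns in d dimensions, whereas a rank-r separated representation, which approximates f(x1, ..., xd) by a sum of r products of univariate functions, stores only r · d · n univariate values. The following is a minimal Python sketch of this counting argument; the helper names and the sample values n = 101 and r = 3 are illustrative choices, not taken from the thesis.

```python
# Illustrative sketch (not from the thesis): storage cost of a full
# tensor-product grid versus a rank-r separated representation with
# n points per coordinate direction.

def full_grid_dof(n: int, d: int) -> int:
    """Degrees of freedom for a full tensor-product grid: n^d."""
    return n ** d

def separated_dof(n: int, d: int, r: int) -> int:
    """Degrees of freedom for a rank-r separated representation: r * d * n."""
    return r * d * n

if __name__ == "__main__":
    n, r = 101, 3  # hypothetical resolution and separation rank
    for d in (2, 4, 10, 100):
        print(f"d={d:>3}: full grid = {full_grid_dof(n, d):.3e} dof, "
              f"rank-{r} separated = {separated_dof(n, d, r)} dof")
```

Even at this modest resolution the full grid is intractable beyond a handful of dimensions, while the separated count grows only linearly in d.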
Acknowledgements

First and foremost I would like to thank Prof. Prasanth Nair for his mentorship and support over the past couple of years. He was incredibly patient in allowing me leeway to explore different ideas, and for that I am extremely grateful. His encyclopaedic understanding of computational science and engineering, as well as his infectious enthusiasm for the subject, certainly shapes the academic I aspire to become. There are many things I have still to learn from him and I hope to continue this fruitful collaboration going forward.

I would also like to express my gratitude to the other members of my research assessment committee, Prof. Craig Steeves and Prof. David Zingg, for their invaluable comments and advice in the formative stages of this research. I should also thank Prof. Zingg again for entertaining my many naïve questions in his CFD courses, as well as for serving as the second reader of this thesis. Many thanks as well to all the professors and staff at UTIAS.

I would be remiss if I didn't acknowledge my colleagues who shared the (often unbearably hot) office with me. They are, in no specific order: Mina Mitry, Valentin Stolbunov, Tomas Mawyin, Lin Gao, Ricardo Santos-Baptista, Dr. Christophe Audouze, Dr. Rhea Liem, and Dr. Andrew Lambe. Thank you for all the fruitful and entertaining discussions, and the mutual commiseration over substandard indoor climate control. In particular, merci beaucoup to Christophe again for the many thoughtful discussions and for patiently explaining mathematical ideas that would otherwise be completely inaccessible to me.

To my dear friends from UTS, EngSci, UTIAS, and beyond – there are too many of you to acknowledge individually here (or so I hope to fool future readers of this thesis into thinking), but know that you all play an important role in helping me maintain my sanity. Thanks for putting up with all my idiosyncrasies; I can't wait for whatever future adventures come our way.

I am grateful for the financial support from the Kenneth M. Molson Fellowship, the Natural Sciences and Engineering Research Council CGS-M Scholarship, the SGS Conference Grant, and UTIAS.

Last, but certainly not least, I would like to acknowledge my parents, to whom this thesis is dedicated. I love you both.

Contents

1 Introduction
  1.1 Motivation
  1.2 Thesis Contributions
  1.3 Thesis Outline

2 The Classical Problem of Function Approximation and Integration
  2.1 Function Approximation and Integration
  2.2 Sparsity in Regularity
    2.2.1 Monte Carlo and Quasi Monte Carlo
    2.2.2 Sparse Grids
  2.3 Sparsity in the Model Representation
    2.3.1 A Brief Literature Review
  2.4 Notation and Preliminaries

3 The ANOVA Decomposition
  3.1 Overview
  3.2 The ANOVA Decomposition
    3.2.1 The Effective Superposition Dimension
    3.2.2 The Classical ANOVA Decomposition
    3.2.3 The Anchored ANOVA Decomposition

4 The Anchored Separated Representation
  4.1 Mathematical Formulation
    4.1.1 The General Separated Representation
    4.1.2 The Factorized High Dimensional Model Representation
    4.1.3 Connections to the ANOVA Decomposition
  4.2 The Anchored Separated Representation
    4.2.1 The Effective Separation Dimension
    4.2.2 Connections to Anchored ANOVA
    4.2.3 Connections to fHDMR/FDD
    4.2.4 Numerical Comparison to Anchored ANOVA
    4.2.5 Numerical Study of Anchor Point Selection

5 The Rank-r Anchored Separated Representation
  5.1 A Rank-r Extension of ASR
  5.2 Numerical Results

6 Applications to Uncertainty Quantification
  6.1 A Brief Overview of Uncertainty Quantification
    6.1.1 The Curse of Dimensionality
  6.2 A Petrov-Galerkin Projection Scheme with the Anchored Separated Representation
    6.2.1 The Vector Valued Anchored Separated Representation
    6.2.2 The Weighted Residual Form and Petrov-Galerkin Projections
    6.2.3 Post Processing for the Mean and Variance
    6.2.4 A Generalization to Rank-r Tensors
  6.3 Numerical Results

7 Applications to High Dimensional Partial Differential Equations
  7.1 Overview of High Dimensional PDEs
    7.1.1 The Curse of Dimensionality
    7.1.2 A Brief Literature Review of Existing Methods
  7.2 The Rank-r ASR for Elliptic PDEs
    7.2.1 Weighted Residual Form
    7.2.2 Governing Equations for the ASR Component Functions
    7.2.3 Numerical Results

8 Conclusions and Future Work
  8.1 Summary
  8.2 Future Work

Appendices

A Proof of the Minimal Separation Property of ASR

References

List of Tables

4.1 Common anchor point choices from the literature.
6.1 Comparison of the relative mean ε(µ) and relative variance ε(σ²) to the exact solution for the model scalar problem #01. The anchor point was chosen to be a = [0,...,0]⊤.
6.2 Comparison of the relative mean ε(µ) and relative variance ε(σ²) to the exact solution for the model scalar problem #02. The anchor point was chosen to be a = [0,...,0]⊤.
6.3 Comparison of the relative errors in the mean and variance.

List of Figures

2.1 (Left) Distribution of 1024 uniformly distributed samples – note the clusters of points. (Right) Distribution of 1024 quasi-random Sobol points. (A minimal sampling sketch follows this list.)
2.2 (Left) The nodes from a full tensor product extension of the level 5 Clenshaw-Curtis quadrature. (Right) The level 5 Smolyak sparse grid nodes constructed using Clenshaw-Curtis quadratures.
4.1 A figure illustrating the connections between the different decompositions. Note from Section 4.2.2 that an additional connection between ASR and anchored ANOVA exists for purely position functions.
4.2 Comparison of relative errors with λ = 0. As expected from Theorem 3, a first order (L = 1) anchored ANOVA captures the function exactly. Surprisingly, the integration error converges after L = 3 for ASR.
4.3 Comparison of relative errors with λ = 0.25. Although the weighting favours the additive term over the multiplicative term, ASR outperforms anchored ANOVA in both integration and approximation.
4.4 Comparison of relative errors with λ = 0.50. Although both the additive and multiplicative terms are weighted equally, the ASR errors are still less than the anchored ANOVA errors for the most part.
4.5 Comparison of relative errors with λ = 0.75. In this example the weighting favours the multiplicative term. Notice the difference in relative integration error is more significant than when λ = 0.50 or λ = 0.25.
4.6 Comparison of relative errors with λ = 1. As expected from Theorem 9, a first order (L = 1) ASR approximation captures the function exactly.
4.7 Comparison of the approximation properties between ASR and anchored ANOVA. The integration and L₂ errors of ASR are less than those of anchored ANOVA, and these differences become more pronounced with increasing truncation order. The dip at L = 6 for the integration error of anchored ANOVA is likely a numerical artifact caused by cancellation errors.
4.8 Comparison of the relative integration and approximation errors between ASR and anchored ANOVA for the rational second order polynomial. Whereas both methods perform similarly in integration, the approximation error of ASR is less than that of anchored ANOVA for all truncation orders.
4.9 Comparison of relative errors for the second order polynomial function. As expected from Theorem 3, anchored ANOVA converges after L = 2. However, note that the relative integration error of ASR saturates at the same level after only one additional order (L = 3). Furthermore, the approximation error of ASR decays monotonically with truncation order.
4.10 Comparison in relative error. Anchored ANOVA achieves a lower integration and approximation error for all truncation orders. This example illustrates that ASR does not always outperform anchored ANOVA for functions that are nonlinear in an additive and multiplicative sense.
4.11 Comparison of the integration and L₂ errors for various anchor points. For this function all candidates perform comparably except for the zero anchor (+). Unsurprisingly, the integration error at L = 0 is lowest with the mean anchor (□).
4.12 Comparison of the integration and L₂ errors for various anchor points. For this function the ones anchor (×) and the zero anchor (+) do not perform much poorer than the other candidates. The half anchor (▽) performs best amongst all choices with the exception of integration at L = 0.
4.13 Comparison of the integration and L₂ errors for the anchored separated representation versus truncation order. For this test function the half anchor (▽) outperforms all other candidates.
4.14 Comparison of the integration and L₂ errors for the anchored separated representation versus truncation order. For this function the half anchor point (▽) is the optimal candidate with the exception of integration at L = 0, for which the mean anchor (□) performs best.
5.1 A visualization of the rank-k ASR approximation for the two dimensional sine function. The magnitude of the rank-one solution at the third iteration suggests the procedure has converged.
5.2 A comparison of absolute integration errors of the rank-r ASR approximation for increasing dimension d. Note that the integration error converges with the L₂ error.
5.3 A comparison of the absolute integration error of the rank-r ASR approximation for increasing rank of the target function. Note that the rank of the ASR approximation increases with the rank of the target function, and that both errors converge after the same number of iterations.
5.4 A comparison of the absolute integration error of the rank-r ASR approximation for increasing dimension of the rank-two function. Note that unlike the previous examples the integration error does not converge with the L₂ error, but instead much earlier.
7.1 A comparison of the hypothesized solution to the component functions to the numerically obtained solutions. The number of nodes per dimension is n = 101 and the anchor point was randomly selected as a = [−0.921569, 0.156863, 0.784314, 0.352941]. Note that the figures are in agreement for both u₁ and u₂.
7.2 A comparison of the hypothesized solution to the component functions to the numerically obtained solutions. The number of nodes per dimension is n = 101 and the anchor point was randomly selected as a = [−0.921569, 0.156863, 0.784314, 0.352941]. Note that the figures are in agreement for both u₃ and u₄.
7.3 Convergence of the residual of the rank-one ASR approximation to the d = 4 sine function. As m increases the error also increases since the points per wavelength of the function increases. Note however that the convergence rate is identical for all parameters.
7.4 Convergence of the residual of the rank-one ASR approximation for increasing number of dimensions. The model parameter is m = 2. Note that the error is nearly independent of the dimension and converges at the same rate.
7.5 Convergence of the residual of the rank-one ASR approximation to the d = 4 manufactured solution. Unlike the analytic function above, here the convergence rate increases as m (a measure of the smoothness) increases.
7.6 Convergence of the residual of the rank-one ASR approximation for increasing number of dimensions. The model parameter is m = 2. Note that the error converges at the same rate, but differs in magnitude depending on the dimensionality.
7.7 A visualization of the rank-k ASR approximation for the two-dimensional function. The leftmost plot corresponds to the rank-one solution at iteration k, while the plot in the middle corresponds to the rank-k approximation.
7.8 A visualization of the rank-k ASR approximation. The anchor point was selected to be the point where the residual was highest in the previous iteration. Note that the rank-one solution is able to identify a solution to correct this residual.
7.9 A visualization of the rank-k ASR approximation. Again the anchor point was selected to be the point where the residual was highest in the previous iteration. In this iteration the algorithm corrects the 'bump' in the middle of the function.
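As a companion to the caption of Figure 2.1, here is a minimal Python sketch contrasting pseudo-random sampling with a quasi-random Sobol sequence, using NumPy and SciPy's scipy.stats.qmc module; the seed and the coarse 8 × 8 occupancy check are our illustrative choices, not part of the thesis.

```python
# Minimal sketch (not from the thesis): draw 1024 pseudo-random points and
# 1024 quasi-random Sobol points in the unit square, as in Figure 2.1.
import numpy as np
from scipy.stats import qmc

n = 1024  # sample count matching the figure caption; a power of two,
          # which Sobol' sampling prefers

# Pseudo-random uniform samples: these tend to show clusters and gaps.
rng = np.random.default_rng(seed=0)
uniform_pts = rng.random((n, 2))

# Quasi-random Sobol samples: a low-discrepancy sequence that fills the
# square more evenly.
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
sobol_pts = sobol.random(n)

# Compare how evenly each set covers the square via a coarse 8x8 histogram.
for name, pts in [("uniform", uniform_pts), ("Sobol", sobol_pts)]:
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=8)
    print(f"{name:>7}: min/max points per cell = "
          f"{counts.min():.0f}/{counts.max():.0f}")
```

The Sobol points spread far more evenly across the histogram cells; this low-discrepancy property is what quasi Monte Carlo integration exploits.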

