16                                X. ZHU, D. XIU AND Y. CHEN

6.1. 2D diffusion equation with input in layers. Consider the two-dimensional stochastic elliptic problem

    −∇ · (a(x, ω) ∇u(x, ω)) = f(x),  in D,
    u = g(x),                        on ∂D,                                (6.2)

with the physical domain D = [−1, 1]², the right-hand side f = 1, and zero Dirichlet boundary condition. The random diffusivity field a(x, ω) has the covariance function

    C(x, y) = exp(−‖x − y‖² / ℓ²),  x, y ∈ [−1, 1]²,                       (6.3)

where ℓ is the correlation length. Here we choose a short correlation length, ℓ = 0.2, for which the global KL expansion results in d = 200 random dimensions. We divide the physical domain D = [−1, 1]² into four rectangular layers with distinct mean values:

    μ_a = 200,  D₁ = [−1, 1] × [1/2, 1],
           40,  D₂ = [−1, 1] × [0, 1/2],
            8,  D₃ = [−1, 1] × [−1/2, 0],
            4,  D₄ = [−1, 1] × [−1, −1/2].                                 (6.4)

To compute the gPC expansion, we employ high-level (level 7) sparse-grid stochastic collocation and then approximate the gPC expansion coefficients using the sparse-grid quadrature rule (with level 7). The reference FEM solutions are computed on a 64 × 64 mesh of linear elements, which results in negligible spatial discretization error for this case. For domain partitioning, we utilize three sets of subdomains: 4 × 4, 8 × 8, and 16 × 16 (the number of FEM elements in each subdomain is adjusted in each case to keep the overall number of elements at 64 × 64).

Figure 6.1 shows one realization of the input random field a(x, Z), with the distinct mean values in each layer indicated above, and the corresponding full finite element solution (reference solution) of the diffusion problem in the domain D = [−1, 1]².

Multi-fidelity models
Xueyu Zhu
Scientific Computing and Imaging Institute, University of Utah
[email protected]
with (Utah): Dongbin Xiu, Akil Narayan, Erin M. Linebarger
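As a numerical aside to Section 6.1 above: a truncated KL expansion of a field with the squared-exponential covariance C(x, y) = exp(−‖x − y‖²/ℓ²) can be sketched by eigendecomposing the covariance matrix on a spatial grid. This is a minimal illustration only; the function name `kl_modes` and the coarse 12 × 12 grid are assumptions for the sketch, not the paper's discretization (which uses d = 200 modes).

```python
import numpy as np

def kl_modes(points, corr_len, n_modes):
    """Leading KL modes of a field with squared-exponential covariance
    C(x, y) = exp(-||x - y||^2 / l^2), discretized on `points`."""
    diff = points[:, None, :] - points[None, :, :]
    C = np.exp(-np.sum(diff**2, axis=-1) / corr_len**2)
    # Symmetric eigendecomposition; eigh returns ascending eigenvalues,
    # so reorder to take the n_modes largest.
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_modes]
    return vals[order], vecs[:, order]

# Coarse grid on D = [-1, 1]^2 with the paper's correlation length l = 0.2.
x = np.linspace(-1.0, 1.0, 12)
X, Y = np.meshgrid(x, x)
pts = np.column_stack([X.ravel(), Y.ravel()])
vals, vecs = kl_modes(pts, corr_len=0.2, n_modes=10)
```

The eigenvalue decay of `vals` indicates how many random dimensions are needed; a short correlation length decays slowly, which is why the paper's ℓ = 0.2 requires d = 200 dimensions.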
Fig. 6.1. Left: One realization of the input a(x, Z) with the distinct mean values in each layer indicated above. Right: Its corresponding full finite element solution (reference solution).
1

Motivation
Collocation methods are widely used in uncertainty quantification of practical problems:
‣ Non-intrusive: run different parameter realizations of the deterministic solver.
‣ Fast convergence can be achieved for smooth problems.
However,
‣ High cost: repetitive runs of deterministic solvers.
‣ Extreme case: "I can only afford 10 simulations."
What can we do in this scenario?
2

Motivation
For many problems, there usually exist low-fidelity models:
‣ Affordable cost: coarser meshes, simplified physics, coarse-grained models, …
‣ Low accuracy, but they resolve some important features.
Wishlist:
‣ Efficiency from the low-fidelity models.
‣ Accuracy from the high-fidelity models.
How can we glue the models together?
3

Setup

    u_t(x, t, Z) = L(u),  in D × (0, T] × I_Z,
    B(u) = 0,             on ∂D × [0, T] × I_Z,
    u = u_0,              in D × {t = 0} × I_Z.

u^L: low-fidelity solution (cheap).
u^H: high-fidelity solution (expensive).
The goal: build a surrogate v of u^H in a non-intrusive way,

    v(x, t, z) = Σ_{n=1}^{m} c_n(z) u^H(x, t, z_n),  z ∈ I_Z.

Q1: How to choose the z_n intelligently?
Q2: How to compute the c_n without extensively sampling the high-fidelity solver?
4

Point Selection
Key idea: explore the parameter space with the cheap low-fi model.

(Background, from the reduced basis approach: we seek a solution u(x, μ) constrained through a linear parametrized PDE,

    L(x, μ) u(x, μ) = f(x, μ),  x ∈ Ω,  μ ∈ D ⊂ R^M,
    u(x, μ) = g(x, μ),          x ∈ ∂Ω.

We might expect a good approximation from a Galerkin approach using solutions at "well chosen" samples of the parameter as basis functions. Assumption: the solution varies smoothly on a low-dimensional manifold under parameter variation. Choosing the samples well, we should be able to derive good approximations for all parameters.)

‣ Search the space with the low-fi model via a greedy algorithm: given the candidate set Γ = {z_1, …, z_M} and the low-fi approximation space U^L(Γ_{m−1}) = span{u^L(z_1), …, u^L(z_{m−1})}, pick

    z_m = argmax_{z ∈ Γ} d(u^L(z), U^L(Γ_{m−1})).

✓ Enrich the space by finding the furthest point away from the space spanned by the existing set.
✓ The greedy choice is (almost) asymptotically optimal (DeVore, 2013).
5
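The greedy search described above can be sketched as a pivoted Gram–Schmidt sweep over the low-fi snapshots. This is an illustrative sketch, not the authors' code; `snapshots` is assumed to be a matrix whose columns are the low-fi solutions u^L(z) at the candidate points, and the distance d(·, ·) is taken as the norm of the orthogonal-projection residual.

```python
import numpy as np

def greedy_select(snapshots, m):
    """Pick m column indices by repeatedly choosing the column
    furthest (in the 2-norm of its projection residual) from the
    span of the columns already chosen."""
    U = snapshots.astype(float)
    # Start from the candidate with the largest norm.
    chosen = [int(np.argmax(np.linalg.norm(U, axis=0)))]
    Q = U[:, chosen] / np.linalg.norm(U[:, chosen[0]])
    for _ in range(m - 1):
        resid = U - Q @ (Q.T @ U)              # residual w.r.t. current span
        k = int(np.argmax(np.linalg.norm(resid, axis=0)))
        chosen.append(k)
        q = resid[:, k] / np.linalg.norm(resid[:, k])
        Q = np.column_stack([Q, q])            # Gram-Schmidt update
    return chosen

# Toy usage: 12 candidate low-fi solutions of length 40, select m = 4.
rng = np.random.default_rng(0)
snaps = rng.standard_normal((40, 12))
idx = greedy_select(snaps, 4)
```

Already-chosen columns have near-zero residual, so they are never re-selected; the selected indices are the points at which the expensive high-fi model is then run.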
Lifting procedure
The coefficients can be inferred from the low-fi model.
‣ Construct {c_n} as the projection coefficients from the low-fidelity model:

    v^L(z) = P_{U^L(Γ)} u^L(z) = Σ_{n=1}^{m} c_n(z) u^L(z_n).

‣ Using the same coefficients, construct the approximation rule for the high-fidelity model:

    v(x, t, z) = Σ_{n=1}^{m} c_n(z) u^H(z_n).

• This is in fact a lifting operator from the low-fi space to the high-fi space.
• Justification: e.g., linear scaling, coarse/fine mesh.
6

Overview of the bi-fidelity algorithm
‣ Run the low-fi model at each point of Γ to obtain u^L(Γ), and select the most "important" m points via U^L(Γ).
‣ Run the high-fi model at those m points to get u^H(Γ) (the most expensive part!).
‣ Bi-fidelity approximation:
  ‣ For any given z ∈ I_Z, compute the low-fi projection coefficients:

      v^L(z) = P_{U^L(Γ)} u^L(z) = Σ_{n=1}^{m} c_n(z) u^L(z_n).

  ‣ Apply the same coefficients to u^H(Γ) to get the bi-fi approximation:

      v(z) = Σ_{n=1}^{m} c_n(z) u^H(z_n).

The approximation quality depends on how well the low-fi model approximates the functional variation of the high-fi model in the parameter space.
7

Tri-fidelity scenario
‣ Run the low-fi model to select the most "important" m points.
‣ Run the medium- and high-fidelity models at those m points to get u^{M1}(Γ) and u^H(Γ).
‣ Tri-fidelity approximation:
  ‣ For any given z ∈ I_Z, compute the med-fi projection coefficients (more accurate coefficients):

      v^{M1}(z) = P_{U^{M1}(Γ)} u^{M1}(z) = Σ_{n=1}^{m} c_n^{M1}(z) u^{M1}(z_n).

  ‣ Apply the same coefficients to u^H(Γ) to get the tri-fi approximation:

      v(z) = Σ_{n=1}^{m} c_n^{M1}(z) u^H(z_n).
8

1D Burgers' equation

    u_t + u u_x = ν u_xx,  x ∈ (−1, 1),  δ ~ U(0, 0.1),
    u(−1) = 1 + δ,  u(1) = −1,  ν = 0.5.

low-fi: 100 points (in physical space), finite differences.
high-fi: 1000 points, finite differences.
reference solution: the high-fi solution.
[Figure: mean l2 error versus the number of high-fidelity samples (0 to 14), on a log scale from 10^0 down to 10^−6.]
9

1D stochastic elliptic equation

    −(a(Z, x) u_x(Z, x))_x = 1,  (Z, x) ∈ I_Z × (0, 1),
    u(Z, 0) = 0,  u(Z, 1) = 0,

with

    a(Z, x) = 1 + σ Σ_{k=1}^{d} (1/(kπ)) cos(2πkx) Z^{(k)},  d > 1.

low-fi: 16 points, Chebyshev collocation.
high-fi: 128 points, Chebyshev collocation.
Low fidelity: too coarse to resolve the features in the high-dimensional random space.
[Figure: errors for d = 10, 20, and 50, on a log scale from 10^−2 to 10^−4.]
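The lifting step on slides 6–7 can be sketched in a few lines. This is an assumed minimal implementation: `uL_nodes` and `uH_nodes` stand for the low- and high-fidelity snapshots at the selected points z_n (stored as matrix columns), and the projection coefficients are computed by least squares, which coincides with orthogonal projection onto the low-fi snapshot space.

```python
import numpy as np

def bifi_approx(uL_nodes, uH_nodes, uL_z):
    """Bi-fidelity lifting: project the low-fi solution u^L(z) onto the
    low-fi snapshots to get coefficients c_n(z), then apply the same
    coefficients to the high-fi snapshots."""
    c, *_ = np.linalg.lstsq(uL_nodes, uL_z, rcond=None)
    return uH_nodes @ c

# Toy check of the "linear scaling" justification: if the high-fi model
# is exactly 2x the low-fi model, the lifted approximation reproduces
# the scaled solution for any u^L(z) in the snapshot span.
rng = np.random.default_rng(1)
UL = rng.standard_normal((50, 5))       # low-fi snapshots u^L(z_n)
UH = 2.0 * UL                           # high-fi snapshots u^H(z_n)
uL_new = UL @ rng.standard_normal(5)    # a new low-fi solution in the span
v = bifi_approx(UL, UH, uL_new)         # bi-fi approximation of u^H(z)
```

Only the coefficient computation touches the low-fi model; the high-fi model is evaluated solely at the m selected nodes, which is what makes the method affordable.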
