PREFACE

The Parallel CFD 2006 conference was held in Busan, Korea, from May 15 to 18, 2006. It was the eighteenth conference in an international series of meetings featuring computational fluid dynamics research on parallel computing. There were over 150 participants from 14 countries. During the conference, 7 invited papers and 92 contributed papers were presented in two parallel sessions.

Korea today has considerable strength in IT-based technology, and large-scale computing has become common in universities, research institutes, and companies. Although CFD is one of the basic tools in Korean design procedures for producing machinery such as automobiles, ships, and aircraft, large-scale parallel computing has been adopted only very recently, especially by manufacturers. Applications in many areas were presented, including acoustics, weather prediction and ocean modeling, flow control, turbine flow, fluid-structure interaction, optimization, heat transfer, and hydrodynamics. We believe that this conference gave everyone strong motivation for, and demonstrated the need for, parallel computing.

These proceedings include about 40 percent of the oral lectures presented at the conference. All published papers have been refereed.

The Editors

ACKNOWLEDGEMENTS

Parallel CFD 2006 was hosted by
Korean Society of Computational Fluids Engineering
Korea Institute of Science and Technology Information

We would like to thank the following for their generous financial contributions:
Korea Institute of Science and Technology Information
Dynamic Busan City, Busan Convention and Visitors Bureau
Air Force Office of Scientific Research, Asian Office of Aerospace Research and Development
Korea Tourism Organization
Department of Aerospace Engineering, Korea Advanced Institute of Science and Technology
Korea Aerospace Industries, LTD.
KOREAN AIR
Korea Research Foundation
HP & INTEL
CRAY Inc.
IBM
ANST Inc.
FLOW-NOISE LTD.
ATES LTD.
KIMIDATA Corp.
CD-ADAPCO Inc.
KYUNGWON TECH LTD.

The conference could not have been successful without the contributions and effort of the many people who helped from the beginning up to the conference date. We are very grateful for the help and guidance of all the international committee members. The local organizing committee members did great work, and we thank them for their effort. Special thanks are due to Kum Won Cho, the secretary general, and Pil Choon Lee of KSCFE for their devotion to the conference.

Jang Hyuk Kwon, Jysoo Lee
Chairmen, Parallel CFD 2006

Parallel Computational Fluid Dynamics - Parallel Computing and Its Applications
J.H. Kwon, A. Ecer, J. Periaux, N. Satofuka and P. Fox (Editors)
© 2007 Elsevier B.V. All rights reserved.

Parallel hybrid particle-continuum (DSMC-NS) flow simulations using 3-D unstructured mesh

J.-S. Wu (a), Y.Y. Lian (a), G. Cheng (b), Y.-S. Chen (c)

(a) Mechanical Engineering Department, National Chiao Tung University, Hsinchu 30050, TAIWAN
(b) Mechanical Engineering Department, University of Alabama, Birmingham, AL 35294, USA
(c) National Space Organization, Hsinchu 30048, TAIWAN

1. Introduction

Several important gas flows involve flow fields having both continuum and rarefied regions, e.g., hypersonic flows [2], vacuum-pump flows with high compression ratios [3], expanding RCS (reaction control system) nozzle plumes in aerospace and space applications [4], physical vapor deposition processes with heated sources [5], and pulsed-pressure chemical vapor deposition processes [6], among others.
Understanding the underlying physics of the above flows through simulation and modeling is important for the success of these disciplines, in addition to the usual experimental studies. In general, these flow problems are governed by the Boltzmann equation, which is very difficult to solve numerically or analytically due to the collision integral and the high number of phase-space dimensions. It is well known that the direct simulation Monte Carlo (DSMC) method [7] can provide more physically accurate results in flows having rarefied and non-equilibrium regions than continuum flow models such as the NS equations. However, the DSMC method is extremely computationally expensive, especially in the near-continuum and continuum regimes, which prohibits its application to practical problems with large domains. In contrast, the computational fluid dynamics (CFD) method, employed to solve the Navier-Stokes (NS) or Euler equations for continuum flows, is computationally efficient in simulating a wide variety of flow problems. But the use of continuum theories for flow problems involving rarefied gas or very small length scales (equivalently, large Knudsen numbers) can produce inaccurate results due to the breakdown of the continuum or thermal-equilibrium assumptions.

A practical approach for solving flow fields ranging from near-continuum to rarefied gas is to develop a numerical model combining the CFD method for the continuum regime with the DSMC method for the rarefied and thermally non-equilibrium regime. A well-designed hybrid scheme is expected to combine the computational efficiency and accuracy of the NS solver in the continuum regime with the physical accuracy of the DSMC method in the rarefied or thermally non-equilibrium regime.

In the past, there have been several efforts to develop hybrid particle-continuum schemes. Most studies employed structured grids for both the particle and continuum solvers [8-12], in which the location of the breakdown interfaces between continuum and rarefied regions was either specified in advance [8,9,11,12] or identified after a "one-shot" CFD simulation [10]. One immediate disadvantage of employing structured grids is that the pre-specified breakdown interface does not faithfully follow the interface determined by breakdown parameters [7,13], which may in turn either increase the runtime or induce inaccuracies in the solution. In addition, techniques such as particle cloning [12], overlapped regions [8,12] and iteration [8] between particle and continuum regions have been used to reduce the statistical uncertainties in coupling the two solvers. Among these, the hybrid schemes developed by Wang et al. [11] and Roveda et al. [12] are potentially suitable for simulating unsteady flows, while the others were designed only for steady flows. In the above studies, only one- and two-dimensional flows were demonstrated, and extension to parallel or three-dimensional simulation has not been reported, to the best knowledge of the authors. Recently, Wu et al. [1] developed a parallel hybrid DSMC-NS scheme using a 3-D unstructured mesh. Parallel implementation is realized on distributed-memory parallel machines, e.g., PC-cluster systems. In this method, a domain-overlapping strategy, taking advantage of the unstructured data format, with Dirichlet-Dirichlet type boundary conditions based on two breakdown parameters, is used iteratively to determine the choice of solver in the spatial domain.
The breakdown parameters selected for this study are: 1) a local maximum Knudsen number, proposed by Wang and Boyd [13], defined as the ratio of the local mean free path to a local characteristic length based on property gradients, and 2) a thermal non-equilibrium indicator, defined as the ratio of the difference between translational and rotational temperatures to the translational temperature. A supersonic nitrogen flow (M∞ = 4) over a quasi-2-D 25° wedge and a nitrogen flow in which two near-continuum parallel orifice jets underexpand into a near-vacuum environment were simulated to verify the scheme's validity for gas flows involving rarefied and continuum regions.

In the present paper, this recently developed hybrid DSMC-NS scheme is described briefly for completeness. Improvement of the NS solver by using a pressure-based solver is described next. Then, several practical lessons learned from the development and verification of the implementation are presented in detail. Finally, simulation results for an RCS (reaction control system) nozzle plume are presented to demonstrate the scheme's capability for realistic flow problems.

2. Hybrid DSMC-NS Scheme Using 3D Unstructured Mesh

In the proposed coupled DSMC-NS method [1], steady-state flow calculation is assumed. Two numerical flow solvers are included: one is a 3-D DSMC code for rarefied, continuum-breakdown, and thermally non-equilibrium regions, named PDSC (Parallel Direct Simulation Monte Carlo Code), developed by Wu's group [14]; the other is HYB3D, a density-based 3-D Euler and Navier-Stokes solver for continuum regions, developed by Koomullil [15]. In the present study, for simulating the RCS (reaction control system) nozzle plume issuing from a spacecraft, a pressure-based NS solver named UNIC-UNS, using a 3D unstructured mesh, developed by Chen and his coworkers [16,17] and applicable at all speeds, is used for the continuum regions. It is rather straightforward to exchange information between the PDSC and UNIC solvers in the proposed DSMC-NS scheme because both methods use an unstructured grid topology with parallel computing. However, the proposed coupling procedures between DSMC and NS solvers are not limited to any specific codes, and the selection of these two solvers is only for the purpose of demonstration. Both codes are introduced briefly in the following for completeness.

2.1. DSMC Solver (PDSC)

Details of the features of the PDSC code can be found in reference [14] and are only briefly described here. PDSC features a 3-D unstructured grid, parallel computing with dynamic domain decomposition using MPI, a variable time-step scheme with adaptive mesh refinement, and treatment of high-temperature chemically reacting flows. In addition, an iterative pressure boundary treatment is available for internal flows. It can be implemented efficiently on general distributed-memory parallel machines, such as PC-cluster systems.

2.2. Navier-Stokes Solvers

Details of HYB3D, a density-based NS code, have been described elsewhere [15] and are skipped here for brevity.
The present Navier-Stokes solver (UNIC-UNS), developed by Chen and his coworkers [16,17], employs an unstructured-grid topology and has the following important features: 1) a cell-centered finite-volume method for the numerical integration of the governing equations, 2) an upwind method with a linear reconstruction scheme for convective flux evaluation, 3) a modified pressure-velocity-density coupling algorithm of the SIMPLE type with an added pressure damping term, 4) parallel computing based on domain decomposition with the message passing interface (MPI), 5) turbulent flow simulation capability with the standard and extended k-ε turbulence models, and 6) general chemically reacting flow treatment. One of the most important features of this NS solver is the use of a pressure-based method, which allows accurate simulation of flows at all speeds. Either an implicit first-order Euler time-marching scheme or a second-order time-centered scheme can be used for time integration. Second-order spatial accuracy is achieved using Taylor series expansion, and the gradients of the flow properties are computed using a least-squares method. The creation of local extrema during the higher-order linear reconstruction is eliminated by applying the limiter proposed by Barth and Jespersen [18]. Parallel computing in the NS solver also incorporates the graph-partitioning tool METIS, the same as that used in the PDSC.

[Figure 1. Sketch of the numerical domain distribution of the present coupled DSMC-NS method with overlapping regions and boundaries.]

2.3. Breakdown Parameters

A continuum breakdown parameter, proposed by Wang and Boyd [13] for hypersonic flows, is employed in the present hybrid DSMC-NS method as one of the criteria for selecting the proper solver. The continuum breakdown parameter Kn_max is defined as

    Kn_max = max(Kn_D, Kn_V, Kn_T)   (1)

where Kn_D, Kn_V and Kn_T are the local Knudsen numbers based on density, velocity and temperature, respectively. They can be calculated from the following general formula

    Kn_Q = (λ/Q) |∇Q|   (2)

where Q is the specific flow property (density, velocity or temperature) and λ is the local mean free path. If the calculated value of the continuum breakdown parameter in a region is larger than a preset threshold value Kn_Thr, then that region cannot be modeled using the NS equations; instead, a particle solver such as DSMC has to be used there.

In addition, another breakdown parameter is used to identify regions that exhibit thermal non-equilibrium among the various degrees of freedom. In the current study, this thermal non-equilibrium indicator is defined as

    P_Tne = (T_TR − T_R) / T_TR   (3)

where T_TR and T_R are the translational and rotational temperatures, respectively. If the value of the computed thermal non-equilibrium indicator in a region is larger than a preset threshold value P_Tne^Thr, then this flow region cannot be modeled correctly by the present thermal-equilibrium NS solver, which assumes thermal equilibrium among the various degrees of freedom. Hence, the DSMC method has to be used for that region instead.
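To make the per-cell selection test concrete, the following is a minimal Python sketch of Eqs. (1)-(3). It assumes the cell-averaged properties, their gradients, and the local mean free path are already available from the flow solution; the function names and interface are ours for illustration, not those of PDSC or UNIC-UNS, and the default thresholds simply echo the values of 0.03 used for the plume case in Section 3.2.

```python
import numpy as np

def local_knudsen(q, grad_q, mfp):
    """Gradient-length local Knudsen number, Eq. (2): Kn_Q = (lambda/Q)|grad Q|."""
    return mfp * np.linalg.norm(grad_q) / abs(q)

def select_solver(rho, grad_rho, speed, grad_speed, temp, grad_temp,
                  t_tr, t_rot, mfp, kn_thr=0.03, p_tne_thr=0.03):
    """Flag a cell for DSMC if either breakdown criterion is exceeded."""
    # Eq. (1): continuum breakdown parameter of Wang and Boyd [13]
    kn_max = max(local_knudsen(rho, grad_rho, mfp),
                 local_knudsen(speed, grad_speed, mfp),
                 local_knudsen(temp, grad_temp, mfp))
    # Eq. (3): thermal non-equilibrium indicator
    p_tne = (t_tr - t_rot) / t_tr
    return "DSMC" if (kn_max > kn_thr or p_tne > p_tne_thr) else "NS"
```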
2.4. Overlapping Regions Between the DSMC and NS Domains

Figure 1 shows a sketch of the overlapping regions and boundaries near the interface of the DSMC and NS solvers at an intermediate step (i.e., other than the first NS simulation step over the whole domain). The general iterative procedure of the present coupling framework is to run the DSMC solver first, after the breakdown regions have been identified, and then to run the NS solver with boundary values calculated from the DSMC simulation. Note that all domains mentioned in the following include the boundaries surrounding them. Domain Ω_A ∪ Ω_B ∪ Ω_C represents the DSMC simulation region, while domain Ω_B ∪ Ω_C ∪ Ω_D represents the NS simulation region; thus, domain Ω_B ∪ Ω_C is the designated overlapping region. Dirichlet-type boundary conditions on Boundary-I (= Ω_A ∩ Ω_B) for the NS simulation come from part of the previous iterative DSMC simulation, while Dirichlet-type boundary conditions on Boundary-III (= Ω_C ∩ Ω_D) for the DSMC simulation come from part of the previous iterative NS solution. The location of Boundary-I is determined by strict comparison of the breakdown parameters (Kn_max and P_Tne), computed from the previous iterative solution over the domain Ω_A ∪ Ω_B ∪ Ω_C ∪ Ω_D, with the preset criteria. The locations of Boundary-II and Boundary-III are then determined by extending from Boundary-I towards the neighboring continuum region. In addition, the thickness (number of cell layers) of domains Ω_B and Ω_C can be adjusted to achieve better convergence of the coupling procedure. Furthermore, in the current coupled DSMC-NS method the choice of solution update for each cell is based on its domain type: domain Ω_A ∪ Ω_B is the region where the updated solution comes from the DSMC simulation, while domain Ω_C ∪ Ω_D is the region where the updated solution comes from the NS simulation.

2.5. Coupling Procedures

In brief summary, the major procedures of the present hybrid DSMC-NS method are listed as follows, referring to the overlapped region shown in Figure 1 (a schematic code sketch follows Section 2.6):
(a) Apply the NS code to simulate the whole flow field as a continuum.
(b) Determine the locations of Boundary-I and Boundary-III and, thus, the DSMC simulation domain (Ω_A ∪ Ω_B ∪ Ω_C).
(c) Impose Dirichlet-type boundary conditions (velocities, temperature and number density) on Boundary-III, obtained from the latest NS simulation, for the next DSMC simulation domain (Ω_A ∪ Ω_B ∪ Ω_C).
(d) Simulate and sample the flow field in the DSMC domain (Ω_A ∪ Ω_B ∪ Ω_C), using the PDSC code, until acceptable statistical uncertainties are reached.
(e) Impose Dirichlet-type boundary conditions (velocities, temperature and density) on Boundary-I, obtained from the latest DSMC simulation, for the next NS simulation domain (Ω_B ∪ Ω_C ∪ Ω_D).
(f) Conduct flow simulation in the NS domain (Ω_B ∪ Ω_C ∪ Ω_D), using the NS code, to obtain a converged steady-state solution.
(g) Update the solution over the whole computational domain.
(h) Repeat Steps (b) to (g) until the maximum number of coupling iterations is exceeded or the preset convergence criterion is reached.

2.6. Practical Implementation

Numerical simulations with the hybrid DSMC-NS code are conducted on a distributed-memory PC-cluster system (64 nodes, dual processors, 2 GB RAM per node, Gbit switching hub) running under the Linux operating system. 32 processors are used throughout this study, unless otherwise specified. The PDSC and NS codes are coupled through a simple shell script on the Linux system. Thus, the hybrid DSMC-NS code is expected to be highly portable among parallel machines with distributed memory. Most importantly, our experience shows that the I/O time related to switching solvers and reading/writing files is negligible compared to the simulation time used by each solver.
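As an illustration only, the coupling logic of steps (a)-(h) can be expressed as a small driver; in the actual implementation this role is played by the authors' shell script invoking the two solvers. All arguments below are hypothetical placeholders to be bound to the real solvers and domain utilities.

```python
def run_hybrid(run_ns, run_dsmc, split_domains, impose_dirichlet, merge,
               deviation, whole_domain, max_iters=10, tol=1e-3):
    """Schematic driver for coupling steps (a)-(h); every argument is a
    user-supplied callable/object standing in for the real codes."""
    solution = run_ns(whole_domain)                         # (a) one-shot NS
    for _ in range(max_iters):                              # (h) iterate
        dsmc_dom, ns_dom, b1, b3 = split_domains(solution)  # (b) breakdown test
        impose_dirichlet(b3, solution)                      # (c) NS -> DSMC BCs
        dsmc_sol = run_dsmc(dsmc_dom)                       # (d) sample DSMC
        impose_dirichlet(b1, dsmc_sol)                      # (e) DSMC -> NS BCs
        ns_sol = run_ns(ns_dom)                             # (f) converge NS
        updated = merge(dsmc_sol, ns_sol)                   # (g) update domain
        if deviation(updated, solution) < tol:              # (h) convergence
            return updated
        solution = updated
    return solution
```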
[Figure 2. Mach number distribution for the quasi-2-D supersonic wedge flow (left) and the two-orifice jet flow (right). (Gray area: DSMC domain; others: NS domain.)]

[Figure 3. Convergence history of the maximum density deviation for (left) the quasi-2-D 25° wedge flow and (right) the two parallel near-continuum orifice free jets.]

3. Results and Discussions

3.1. Validations

The completed parallel hybrid DSMC-NS code was tested using a supersonic nitrogen flow (M∞ = 4) over a quasi-2-D 25° wedge and a nitrogen flow in which two near-continuum parallel orifice jets underexpand into a near-vacuum environment [1]. The corresponding flow conditions are shown in Table 1 (wedge flow) and Table 2 (orifice jets) for completeness.

Table 1. Free-stream conditions for supersonic flow over the quasi-2-D 25° wedge.
Gas   ρ∞                  U∞         T∞        M∞
N2    6.545×10⁻⁴ kg/m³    1111 m/s   185.6 K   4

Table 2. Sonic conditions at the orifice exit plane for the two parallel near-continuum orifice free jets.
Gas   ρ_throat            U_throat   T_throat   Re_throat
N2    6.52×10⁻³ kg/m³     314 m/s    237.5 K    401

The simulation data compare excellently with benchmark DSMC simulation results for the quasi-2-D wedge flow, and reasonably well with experimental data for the 3-D orifice jet flow. Details can be found in [1] and are skipped here for brevity. From these two simulations, we have found that the number of couplings between the DSMC and NS solvers depends strongly on the flow conditions near the breakdown interfaces, as described in the following.

In the case of supersonic flow past a quasi-2-D wedge (Figure 2a), subsonic flow dominates in the regions normal to the breakdown interface above the boundary layer along the wedge wall, which necessitates a larger number of couplings to exchange information between the two solvers (Figure 3a), although supersonic flow dominates in the regions normal to the breakdown interface around the oblique shock. Note that the four sets of data in Figure 3a represent different simulation conditions in the number of cell layers of the overlapped region (1-4 for sets 1-4, respectively) and in the threshold values of the breakdown parameters (Kn_max^Thr = 0.02-0.04, P_Tne^Thr = 0.03-0.06), all of which show a similar trend. However, in Figure 2b, supersonic flow dominates normal to the breakdown interface around the entrance region of the orifice jets, which greatly reduces the number of couplings required for convergence (Figure 3b; number of overlapped cell layers = 2), as seen from the simulation.

The above observation is very important from the viewpoint of practical implementation. For example, in the early stage of a simulation we can estimate the number of couplings required for convergence by simply monitoring which flow regime (supersonic or subsonic) the flow normal to the breakdown interface belongs to. If most of the flow normal to the breakdown interface is supersonic, then two coupling iterations should be enough for a fairly accurate solution. If not, more coupling iterations (though typically fewer than ten) are required to obtain a converged solution.
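This monitoring rule is easy to automate. The sketch below uses an interface representation we have invented for illustration (a list of faces, each carrying a velocity vector, unit normal, and local sound speed) and classifies the breakdown-interface flow in the spirit of the observation above; the 0.5 majority threshold is our assumption, not a value from the paper.

```python
import numpy as np

def estimate_coupling_budget(interface_faces, few=2, many=10):
    """Count breakdown-interface faces with supersonic normal velocity;
    if they dominate, a couple of coupled iterations should suffice."""
    if not interface_faces:
        return many
    supersonic = 0
    for face in interface_faces:
        # Normal Mach number: |u . n| / a at the interface face.
        mach_n = abs(np.dot(face["velocity"], face["normal"])) / face["sound_speed"]
        if mach_n > 1.0:
            supersonic += 1
    return few if supersonic / len(interface_faces) > 0.5 else many
```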
Further investigation into determining the optimum number of coupling iterations is required for practical applications of the current hybrid method.

3.2. RCS Nozzle Plume Simulation

The proposed coupled DSMC-NS method has been verified by the previous two test cases. To demonstrate its applicability, we apply it to simulate a plume issuing from an RCS thruster, which is a very challenging and important problem in designing the ADCS (attitude determination and control system) of a spacecraft. The important flow conditions, summarized in Table 3, include: nitrogen gas, stagnation pressure of 1.0 bar, stagnation temperature of 300 K, throat diameter of 4.36 mm, length of the convergent part of 8 mm, length of the divergent part of 50 mm, and area expansion ratio of 60. Six processors are used in this case. The Reynolds number at the throat, estimated using inviscid theory, is about 62,000, which necessitates the use of a turbulence model in the NS solver. In this study we have used the standard k-ε model unless otherwise specified.

Table 3. Flow conditions of the plume simulation issuing from the RCS nozzle.
Gas   P₀        T₀       T_w      D_throat   Area ratio
N2    1.0 bar   300 K    300 K    4.36 mm    60

The PDSC simulation conditions for the coupled method for the RCS nozzle plume are as follows: approximately 210,000-250,000 cells, 2.2-2.5 million particles, a reference timestep of 6.8E-08 seconds, and 8000 sampling timesteps. Three cell layers are used for the overlapping region, and Kn_max^Thr and P_Tne^Thr are both set to 0.03. In the current simulation, the downstream boundary condition of the near-field simulation is not predictable in advance; thus, a supersonic boundary condition in the NS solver is chosen and applied on those surface boundaries, and, furthermore, the NS solver breaks down in the backflow region due to the rarefaction of the flow.

Figure 4 illustrates the mesh distribution with an exploded view near the C-D nozzle. A three-dimensional (hexahedral) rather than an axisymmetric mesh is used since we are interested in simulating the far-field plume interaction with the spacecraft in the future; with the present mesh, we can simply add extra grid, including the spacecraft body, to the original 3D mesh. The total number of hexahedral cells is 380,100. Note that the length of the reservoir in the axial direction is about 3.5 times the inlet diameter of the nozzle, which is necessary to accurately simulate the very low-speed gas flow inside the reservoir.

[Figure 4. Mesh distribution of the near-field plume simulation.]

Figure 5 illustrates the initial and final (6th-iteration) distributions of the domain decomposition in the DSMC solver, obtained by dynamic domain decomposition using a multi-level graph-partitioning technique [14]. It is clear that the spatially uniform decomposition at the initial stage has evolved into a spatially non-uniform decomposition, which reflects the correct density distribution at the end.

[Figure 5. Domain decomposition of the DSMC computation: initial (left) and final (6th) iteration (right).]

Figure 6 shows the distribution of the DSMC and NS domains after the 6th coupled iteration. Most of the regions inside the nozzle are NS domains, while DSMC domains dominate outside the nozzle exit, with intrusion along the nozzle wall from the lip to the middle of the divergent part of the nozzle.

[Figure 6. Distribution of the DSMC and NS domains after the 6th coupled iteration.]
[Figure 7. Distribution of the continuum breakdown parameter (left) and the thermal non-equilibrium ratio (right) at the 6th coupled iteration.]

Figure 7 shows the distributions of the continuum breakdown parameter and of the thermal non-equilibrium ratio between rotational and translational degrees of freedom at the 6th coupled iteration. The continuum breakdown parameter generally increases along the streamlines from the nozzle and becomes very large due to the rapid expansion outside the nozzle. The thermal non-equilibrium ratio shows a similar trend. It is obvious that the regions in the reservoir and in the core of the nozzle flow are in continuum and thermal equilibrium, as expected. In the boundary-layer region of the nozzle wall near the exit, the value of the continuum breakdown parameter is rather large (>0.1) due to the large velocity gradient. However, the thermal non-equilibrium ratio is close to unity (between 1 and 1.05), which is surprising. This implies that the flow in the boundary-layer region may still be in thermal equilibrium even though the continuum breakdown parameter is large, which requires further investigation. In addition, the scattered data of the thermal non-equilibrium ratio in the backflow region originate from statistical uncertainties due to the very few simulated particles entering this area. Based on the data in Figure 7, we can conclude that simulation of the plume flow using either the NS or the DSMC solver alone is practically impossible, which necessitates the use of the hybrid DSMC-NS scheme developed in the present study.

Figure 8 shows the density contours of the pure NS simulation and of the coupled DSMC-NS method after the 6th coupled iteration. The density distribution of the coupled method has a trend similar to that of the pure NS simulation, although the values differ considerably. In addition, only the hybrid DSMC-NS scheme can predict the backflow near the lip of the nozzle. For the other flow properties (temperature, streamlines and Mach number, as shown in Figures 9, 10 and 11, respectively), the simulation results obtained by the hybrid DSMC-NS scheme are very distinct from those of the pure NS solver. As with the density distribution, the temperature, streamline, and Mach number distributions predicted by the coupled DSMC-NS method are more reasonable than those of the pure NS solver, because the coupled method is capable of handling the continuum breakdown in the rarefied regions and the thermal non-equilibrium in the backflow region. Note that the temperature scatter in the backflow region can be reduced if more simulated particles are used in the DSMC solver.

[Figure 8. Density distribution of the near-field nozzle plume using the pure NS solver (left) and the hybrid DSMC-NS solver at the 6th coupled iteration (right).]

[Figure 9. Temperature distribution of the near-field nozzle plume using the pure NS solver (left) and the hybrid DSMC-NS solver at the 6th coupled iteration (right).]

[Figure 10. Streamline distribution of the near-field nozzle plume using the pure NS solver (left) and the hybrid DSMC-NS solver at the 6th coupled iteration (right).]

Figure 12 shows the L2-norm of the number density and temperature as a function of the number of coupled iterations.
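A convergence monitor of the kind plotted in Figure 12 can be computed as below; since the text does not spell out the normalization used, the relative form here is our assumption.

```python
import numpy as np

def l2_change(field_new, field_old):
    """Relative L2-norm change of a cell field (e.g., number density or
    temperature) between two successive coupled iterations."""
    new = np.asarray(field_new, dtype=float)
    old = np.asarray(field_old, dtype=float)
    return np.linalg.norm(new - old) / np.linalg.norm(old)
```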