Journal of The Korean Astronomical Society, 44: 217–234, 2011 December
doi:10.5303/JKAS.2011.44.6.217, ISSN: 1225-4614
© 2011 The Korean Astronomical Society. All Rights Reserved. http://jkas.kas.org
arXiv:1112.1754v3 [astro-ph.CO] 11 Jan 2012

The New Horizon Run Cosmological N-Body Simulations

Juhan Kim (1), Changbom Park (2), Graziano Rossi (2), Sang Min Lee (3) and J. Richard Gott III (4)
(1) Center for Advanced Computation, Korea Institute for Advanced Study, 85 Hoegiro, Dongdaemun-gu, Seoul 130-722, Korea; E-mail: [email protected]
(2) School of Physics, Korea Institute for Advanced Study, 85 Hoegiro, Dongdaemun-gu, Seoul 130-722, Korea
(3) Supercomputing Center, KISTI, 335 Gwahangno, Yuseong-gu, Daejon 305-806, Korea
(4) Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08550, USA
(November 9, 2011; Revised November 22, 2011; Accepted December 2, 2011)

ABSTRACT

We present two large cosmological N-body simulations, called Horizon Run 2 (HR2) and Horizon Run 3 (HR3), made using 6000^3 = 216 billion and 7210^3 = 374 billion particles, and spanning volumes of (7.200 h^-1 Gpc)^3 and (10.815 h^-1 Gpc)^3, respectively. These simulations improve on our previous Horizon Run 1 (HR1) by up to a factor of 4.4 in volume, and range from 2600 to over 8800 times the volume of the Millennium Run. In addition, they achieve a considerably finer mass resolution, down to 1.25×10^11 h^-1 M⊙, allowing us to resolve galaxy-size halos with mean particle separations of 1.2 h^-1 Mpc and 1.5 h^-1 Mpc, respectively. We have measured the power spectrum, correlation function, mass function and basic halo properties with percent-level accuracy, and verified that they correctly reproduce the ΛCDM theoretical expectations, in excellent agreement with linear perturbation theory.
Our unprecedentedly large-volume N-body simulations can be used for a variety of studies in cosmology and astrophysics, ranging from large-scale structure topology, baryon acoustic oscillations, dark energy and the characterization of the expansion history of the Universe, to galaxy formation science – in connection with the new SDSS-III. To this end, we made a total of 35 all-sky mock surveys along the past light cone out to z = 0.7 (8 from the HR2 and 27 from the HR3), to simulate the BOSS geometry. The simulations and mock surveys are already publicly available at http://astro.kias.re.kr/Horizon-Run23/.

Key words: cosmological parameters – cosmology: theory – large-scale structure of the Universe – galaxies: formation – methods: N-body simulations

Corresponding Author: Graziano Rossi ([email protected])

1. INTRODUCTION

State-of-the-art observations of the Cosmic Microwave Background (CMB), such as the data provided by the Wilkinson Microwave Anisotropy Probe (WMAP; Spergel et al. 2003, Komatsu et al. 2011), and of the Large Scale Structure (LSS), as in the Sloan Digital Sky Survey (SDSS; York et al. 2000), in the 2dF galaxy redshift survey (Colless et al. 2001) and in the WiggleZ survey (Blake et al. 2008), support a model of the Universe dominated by Cold Dark Matter (CDM) and Dark Energy (DE), with baryons constituting only a percent of the total matter-energy content. Structures form and grow hierarchically, from the smallest objects to the largest ones. In this framework, dark matter collapses first into small haloes, which accrete matter and merge to form progressively larger halos over time. Galaxies form subsequently, out of gas which cools and collapses to the centers of dark matter halos (White & Rees 1978; Peebles 1982; Davis et al. 1985); therefore, halo and galaxy properties are strongly related. In synergy with independent data from Type Ia supernovae (SNIa; Riess et al. 1998, Perlmutter et al. 1999, Kowalski et al. 2008), we also have increasing confirmation that the Universe is geometrically flat and currently undergoing a phase of accelerated expansion.

What is driving this expansion, and in particular the nature of dark energy, still remains to be explained. To date, it is perhaps one of the most important open questions in cosmology, and one which would deeply impact our understanding of fundamental physics (see for example Albrecht et al. 2006). In addition, investigating the nature of the (nearly) Gaussian primordial fluctuations – which eventually led to the formation of halos and galaxies – is also of utmost importance for shedding light on the structure formation process. In fact, constraining possible deviations from Gaussianity will impact for example the LSS topology (Gott et al. 1986; Park et al. 1998, 2005; Gott et al. 2009; Choi et al. 2010), as any primordial non-Gaussianity might modify the clustering properties of massive cosmic structures forming out of rare density fluctuations (LoVerde et al. 2008; Desjacques et al. 2009; Jeong & Komatsu 2009), and generate a scale-dependent large-scale bias in the clustering properties of massive dark matter halos (Dalal et al. 2008; Verde & Matarrese 2009; Desjacques & Seljak 2010). Moreover, studying global halo properties such as halo formation, density profiles, concentrations, shapes, kinematics, assembly times, spin, velocity distributions, substructures and the effects of baryons is particularly important in order to gain insights into the formation and evolution of galaxies and their large-scale environment in the cosmic web (Bett et al. 2007; Gao & White 2007; Jing, Suto & Mo 2007; Li et al. 2008; Gott et al. 2009; Shandarin et al. 2010), and to test the validity of cosmological models.

Therefore, at the moment there is considerable interest in constraining cosmological models (i.e. the main cosmological parameters) from real data sets with percent-level accuracy, and especially the dark energy equation of state (DE EoS). In particular, most of the efforts towards characterizing DE and its properties are focused on measurements of the baryon acoustic oscillation (BAO) scale. The distinct BAO signature, imprinted in the large-scale galaxy distribution by acoustic fluctuations in the baryon-photon fluid prior to recombination, appears as a quasi-harmonic series of oscillations of decreasing amplitude in the galaxy power spectrum at wavenumbers 0.01 h Mpc^-1 ≤ k ≤ 0.4 h Mpc^-1 (Sugiyama 1995; Eisenstein & Hu 1998, 1999), or as a broad, quasi-Gaussian peak in the corresponding two-point correlation function (Matsubara 2004). The measurement of its scale is often achieved by using a well-controlled sample of luminous red galaxies (LRGs), as observed for example in the SDSS or in the 2dFGRS surveys (Colless et al. 2001; Eisenstein et al. 2005; Cole et al. 2005; Sanchez et al. 2006, 2009; Percival et al. 2007, 2010; Gaztanaga et al. 2009; Kazin et al. 2010; Reid et al. 2010; Carnero et al. 2011). Since the BAO scale depends solely on the plasma physics after the Big Bang and can be calibrated using CMB data, it can be used as a standard ruler to measure the redshift dependence of the Hubble parameter H(z) and the angular diameter distance (Montesano et al. 2010), and thus to constrain the DE EoS parameter w(z) as a function of redshift – along with the other main cosmological parameters. Complementary to the BAO technique, the LSS topology can also be used as a standard ruler, by measuring and comparing the amplitude of the genus curve at different redshifts and smoothing lengths (see Park & Kim 2010). Cosmological parameters can also be obtained through the abundance of high-mass halos, identified as galaxy clusters, and via several other methods that require and utilize precise knowledge of galaxy clustering (i.e. Zheng & Weinberg 2007; Yoo et al. 2009).

To this end, planned next-generation photometric redshift galaxy surveys will span volumes considerably larger than the current data sets, providing a dramatic improvement in the accuracy of the constraints on cosmological parameters. Examples are the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS; Kaiser et al. 2002), the Dark Energy Survey (DES; Abbott et al. 2005), the Baryon Oscillation Spectroscopic Survey (BOSS; Schlegel et al. 2009) as well as BigBOSS (Schlegel et al. 2011), the Large Synoptic Survey Telescope (LSST; Tyson et al. 2004), the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX; Hill et al. 2004), and the space-based Wide-Field Infrared Survey Telescope (WFIRST; Green et al. 2011) and Euclid (Cimatti et al. 2008). The lack of precision in the redshift determination is compensated by the larger volume of the survey, the larger density of galaxies, and the possibility of analyzing different galaxy populations.

With large-volume surveys becoming the norm, it is timely to devise large-volume numerical simulations able to mimic, reproduce and control the various observational surveys. This paper aims at presenting two large cosmological N-body simulations, called Horizon Run 2 (HR2) and Horizon Run 3 (HR3), made using innovative computational facilities and forefront capabilities. Cosmological N-body simulations provide in fact a powerful tool to test the validity of cosmological models and galaxy formation mechanisms (Bertschinger 1998), and they represent a complementary and necessary benchmark to design and support the observational surveys. In particular, the advent of realistic cosmological N-body simulations, capable of successfully reproducing the global properties of the observed structure at large scales, has enabled significant progress in understanding the structure of dark matter halos, the DM clustering over a huge range of scales, the distribution of galaxies in space and time, the effects of the environment, the mass assembly history, and the nature of DM itself. For example, numerical studies have indicated that the hierarchical assembly of CDM halos yields approximately universal mass profiles (Navarro et al. 1996, 1997), strongly triaxial shapes with a slight preference for nearly prolate systems (i.e. Jing & Suto 2002), the presence of abundant but non-dominant substructure within the virialized region of a halo, and cuspy inner mass profiles (for instance see Springel et al. 2005). Numerical simulations are also crucial to DE studies, especially for characterizing BAO systematics (Crotts et al. 2005; Crocce & Scoccimarro 2008; Kim et al. 2009). These are well controlled only with a detailed knowledge of all the associated non-linear effects, and can only be addressed by studying mock catalogs constructed from large N-body simulations. In this respect, comparison with N-body simulations is critical, as it allows one to calibrate the DE experiments with high accuracy.

While the current and future surveys of large-scale structure in the Universe demand larger and larger simulations, simulating large cosmological volumes with good mass resolution is clearly a major computational challenge. However, it is crucial to be able to reach volumes big enough to assess the statistical significance of extremely massive halos, which are almost always underrepresented in simulations that survey a small fraction of the Hubble volume. For example, until recently our understanding of the mass profile of massive haloes was rather limited, derived largely from small numbers of individual realizations or from extrapolation of models calibrated on different mass scales (Navarro et al. 1996, 1997; Moore et al. 1998; Klypin et al. 2001; Diemand et al. 2004; Reed et al. 2005). This is because, at high masses, enormous simulation volumes are required in order to collect statistically significant samples of these rare DM halos. Instead, individual halo simulations may result in biased concentration estimates in a variety of ways (Gao et al. 2008). At high masses we also know only approximately the redshift and mass dependence of halo concentration, even for the current concordance cosmology. Another example of how important it is to simulate big volumes is related to DE science: larger volumes allow one to model more accurately the true power at large scales and the corresponding power spectrum, particularly in the case of the LRG distribution, which is critical to the BAO test (Kim et al. 2009). A large box size will guarantee small statistical errors in the power spectrum estimates: we need in fact to measure the acoustic peak scale down to an accuracy better than 1%.

As explained in Neto et al. (2007) and in Gao et al. (2008), trying to combine different simulations of varying mass resolution and box size is instead potentially very risky, as are extrapolation techniques. To this end, Neto et al. (2007) bring up the example of Macciò et al. (2007), who combined several simulations of varying mass resolution and box sizes in order to reach scales with M ≪ M∗. Unfortunately, this approach comes with pitfalls, because in order to resolve a statistically significant sample of haloes with masses of the order of 10^10 h^-1 M⊙ one must use a considerably smaller simulation box (i.e. 14.2 h^-1 Mpc on a side); this implies a substantial lack of large-scale power, due to such a small periodic realization, which may influence the result. The obvious way out is to increase the dynamic range of the simulation, so as to encompass a volume large enough to be representative, while at the same time having enough mass resolution to extend the analysis well above (or below) M∗.

In this view, the enormous volume of our HR2 and HR3, together with the large number of particles used, makes our simulations ideal to characterize with minimal statistical uncertainty the dependence of the structural parameters of CDM halos on mass, formation time, departures from equilibrium, etc., and they will be of great use for DE science and for all the upcoming photometric redshift surveys.

The paper is organized as follows. In Section 2 we briefly discuss the standard numerical techniques in LSS analyses, and compare the number of particles and volumes of our simulations with other recent numerical studies. In Section 3, after a synthetic description of our previous Horizon Run 1 (HR1), we present our new simulations HR2 and HR3. In particular, we provide some technical details on the code used, the mass and force resolution, and the halo selection procedure. In Section 4 we introduce the 35 all-sky mock surveys along the past light cone (8 from the HR2 and 27 from the HR3), made in order to simulate the BOSS geometry. These mock catalogs are particularly suitable for DE studies (i.e. LRGs), and are already publicly available. We then present in Section 5 the first results from testing our new simulations; namely, the computation of the starting redshifts, mass functions, power spectra and two-point correlation functions. We conclude in Section 6 with a brief description of the importance and use of our simulations and catalogs. We are making the simulation data and mocks available to the community at http://astro.kias.re.kr/Horizon-Run23/.

2. NUMERICAL METHODS

Progressively sophisticated numerical techniques are used to carry out N-body simulations, with the goal of optimizing resources and performance by exploiting forefront technological developments.

Broadly speaking, there are two basic algorithms by which a simulation can be optimized: particle-mesh (PM) methods and tree methods – along with combinations of the two, such as the PM-Tree scheme. In the PM method, space is discretised on a mesh and particles are divided between the nearby vertices of the mesh, in order to compute the gravitational potential via Poisson's equation and fast Fourier transform (FFT) techniques. In the tree method, the volume is instead usually divided up into cubic cells and grouped so that only particles from nearby cells need to be treated individually, whereas particles in distant cells can be treated as a single large particle centered at their center of mass. Obviously, this latter scheme considerably increases the computational speed of particle-pair interactions.

Thanks to rapid advancements in memory and computing power, state-of-the-art simulations can be performed with billions of particles, since algorithmic and hardware developments have increased the mass and spatial resolution by orders of magnitude.
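The PM step described above can be sketched in a few lines. The following is a minimal illustrative Python example (our own sketch, not the code used for the Horizon Runs; the grid size, box size, unit particle masses and G = 1 convention are arbitrary assumptions): it assigns particles to the mesh with cloud-in-cell weights and then solves Poisson's equation in Fourier space.

```python
import numpy as np

def pm_potential(positions, ngrid, boxsize):
    """Toy particle-mesh step: cloud-in-cell (CIC) mass assignment on an
    ngrid^3 mesh, then FFT solution of nabla^2 phi = 4 pi rho (G = 1,
    unit particle masses, periodic box). Illustrative sketch only."""
    rho = np.zeros((ngrid,) * 3)
    cell = positions / boxsize * ngrid       # positions in grid units
    i0 = np.floor(cell).astype(int)          # lower-left vertex of each cell
    f = cell - i0                            # fractional offset inside cell
    # Spread each particle over the 8 nearby vertices with CIC weights.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - f[:, 0]) *
                     np.abs(1 - dy - f[:, 1]) *
                     np.abs(1 - dz - f[:, 2]))
                idx = tuple(((i0 + [dx, dy, dz]) % ngrid).T)
                np.add.at(rho, idx, w)       # unbuffered accumulation
    # Poisson's equation in Fourier space: phi_k = -4 pi rho_k / k^2.
    k = 2 * np.pi * np.fft.fftfreq(ngrid, d=boxsize / ngrid)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                        # avoid dividing the mean mode by zero
    phi_k = -4 * np.pi * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0                     # set the mean of the potential to zero
    return np.real(np.fft.ifftn(phi_k))
```

A production code would additionally difference the potential to obtain mesh forces and interpolate them back to the particles with the same CIC kernel, so that the assignment and interpolation errors cancel.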
Hence, it is perhaps impressive to recall that one of the first N-body simulations, run by Peebles (1970), utilized only 300 particles, while today we can simulate individual collapsed structures in a full cosmological context, and substructure halos can be resolved – with excellent approximations to CDM halos over a large mass and spatial range. To this end, Figure 1 shows how our new HR2 and HR3 simulations compare with the previous HR1 and with other recent large-volume simulations, in terms of number of particles; for specific details on our simulations refer to the next section. Pale green filled hexagons in the figure represent the PM simulations made by our group, stars (pink) are the PM-Tree simulations also carried out by our group (see the various labels), while blue open circles are those performed by other authors. Note in particular the Millennium Run (Springel et al. 2005), with 10 billion particles and a box size of 500 h^-1 Mpc, and the recent French collaboration N-body simulations (Teyssier et al. 2009), made using 68.7 billion particles with a box size of 2000 h^-1 Mpc. The solid line is the mean evolution of the simulation size taken from Springel et al. (2005), which is basically linear in logarithmic space, increasing by a factor of 10 every 4.55 years.

Fig. 1.— Evolution of the number of particles in N-body simulations versus time (years). Filled hexagons (pale green) are PM simulations made by our group; stars (pink) are PM-Tree simulations carried out by our group (the HR2 and HR3 are located in the upper-right corner); open circles (blue) are those performed by other authors – see the detailed legend in the figure. Note in particular the position of the Millennium Run (Springel et al. 2005; 10 billion particles, box size 500 h^-1 Mpc) and of the French collaboration (Teyssier et al. 2009; 68.7 billion particles, box size 2000 h^-1 Mpc). The solid line is the mean evolution of the simulation size.

The application of numerical methods in cosmology is in fact an exponentially growing field, advancing in parallel with major technological developments; for a brief history of numerical studies see Diemand & Moore (2009) and Kim et al. (2009). So far, efforts have been essentially focused both on investigating the formation of single halos with ultra-high resolution (Springel et al. 2008; Stadel et al. 2009), and on simulating structure formation in large boxes, to mimic the LSS in the Universe. Obviously, one needs to push numerical simulations to their limits, as the dynamical range of scales to be resolved is extremely large for addressing different types of cosmological problems: this study focuses on the latter part. What is interesting to notice here is that N-body simulations of the gravitational collapse of a collisionless system of particles actually pre-date the CDM model (see again Diemand & Moore 2009). In effect, early simulations in the 1960's studied the formation of elliptical galaxies from the collapse of a cold top-hat perturbation of stars (van Albada 1961; Henon & Heiles 1964; Peebles 1970). In the 70's, attempts were made to follow the expansion and collapse of a spherical overdensity, to relate to the observed properties of virialized structures such as galaxy clusters. In 1975, Groth & Peebles made actual "cosmological" simulations, using 1550 particles with Ω_M = 1 and Poisson initial conditions (Groth & Peebles 1975). Afterwards, Aarseth et al. (1979) used 4000 particles. Only in the 80's, however, was it proposed that cosmic structure formation follows a dominant, non-baryonic CDM component, and later on the inflationary scenario in conjunction with CDM brought realistic initial conditions for N-body models. However, it was not until the simulations by Dubinski & Carlberg that individual objects were simulated at sufficiently high resolution to resolve their inner structure on scales that could be compared with observations. Interestingly, Park (1990) was the first to use 4 million particles, a peak biasing scheme and a CDM, Ω_M h = 0.2 model to simulate a volume large enough to properly encompass the CfA Great Wall. After that, only in 2008 was the first billion-particle halo simulation, Via Lactea II, published (Diemand et al. 2008), followed by Aquarius (Springel et al. 2008) and GHALO (Stadel et al. 2009), in a progressive increment of box size and number of particles till our HR1 (Kim et al. 2009) – which spanned the largest volume at the time.

To this end, Figure 2 allows for a visual comparison of the massive increment in box size between our Horizon Runs and other recent cosmological N-body simulations. A 4.5 h^-1 Mpc slice through the HR3 is shown at the present epoch, in comoving coordinates. The box sizes are 500, 2000, 6595, 7200 and 10815 h^-1 Mpc for the Millennium Run, French collaboration, HR1, HR2 and HR3, respectively. The horizon and Hubble distance (cH_0^-1) are also indicated in the figure, to emphasize the various simulation volumes. Note the impressive improvement with respect to the Millennium Run box size.

Fig. 2.— Visual comparison between the box sizes of different numerical simulations, via a 4.5 h^-1 Mpc thick slice through our HR3 N-body simulation at the present epoch. The box sizes are 500, 2000, 6595, 7200 and 10815 h^-1 Mpc for the Millennium Run, French collaboration, HR1, HR2 and HR3, respectively. The horizon and Hubble depth (defined as cH_0^-1) are also indicated in the figure, in comoving coordinates, to emphasize the various simulation volumes.

3. THE HORIZON RUNS

After a brief description of our previous HR1 (Kim et al. 2009), in this section we present the main characteristics of the new HR2 and HR3, two of the largest cosmological N-body simulations to date. In particular, we provide detailed technical information regarding the code developed and used to perform the runs, the mass and force resolution, and the halo selection procedure. The main simulation parameters are summarized in Table 1 for convenience, and we will refer to it in the following parts whenever necessary.

3.1 The Horizon Run 1: overview

The HR1 (Kim et al. 2009) was the largest-volume simulation ever run in 2009, with a box size of 6592 h^-1 Mpc on a side and 11418 h^-1 Mpc diagonally. The cosmological model adopted was a CDM concordance scenario with a cosmological constant (i.e. the ΛCDM model), having the basic parameters fixed by the WMAP 5-year data (Komatsu et al. 2009), listed in the second column of Table 1. The initial linear power spectrum was calculated using the fitting function provided by Eisenstein & Hu (1998). The initial conditions were generated on a 4120^3 mesh with a pixel size of 1.6 h^-1 Mpc. The simulation used a total of 4120^3 = 69.9 billion CDM particles, representing the initial density field at z_i = 23. Those particles, initially perturbed from their uniform distribution, were gravitationally evolved by a TreePM code (Dubinski et al. 2004; Park et al. 2005) with a force resolution of 160 h^-1 kpc, for a total of 400 global time steps. The entire simulation-making process lasted 25 days on a TACHYON SUN Blade system (i.e. a Beowulf system with 188 nodes, each with 16 CPU cores and 32 Gigabytes of memory) at the Korean KISTI Supercomputing Center. The simulation used 2.4 TBytes of memory, 20 TBytes of hard disk, and 1648 CPU cores. The entire cube data were stored at various redshifts: z = 0, 0.1, 0.3, 0.5, 0.7, 1. In addition, the positions of the simulation particles were saved in a slice of constant thickness (equal to 64 h^-1 Mpc), as they appeared in the past light cone. During the simulation, 8 equally-spaced (maximally-separated) observers were located in the cube, in order to construct the mock SDSS-III LRG surveys; the positions and velocities of the particles were saved at z < 0.6, as they crossed the past light cone. Subhalos were then found in the past light cone data, and used to simulate the SDSS-III LRG survey. For more details on how the corresponding mock catalogs were constructed, see the next section and Kim et al. (2009).

Fig. 3.— Example of the matter density in the past light cone space, obtained from a slice with 30 h^-1 Mpc thickness passing through the center of the HR3 simulation box. A mock observer is located at the center of the figure; the radius of the map corresponds to about z = 4. The ticks along the horizontal axis indicate the lookback time, in units of billions of years, while the redshift is indicated along the vertical axis. The color scheme varies from blue to red as one goes to larger redshifts. The radial distance is linear in lookback time. The starting epoch of HR3 corresponds to a redshift of z = 27.

3.2 The Horizon Runs 2 and 3: improvements

The major improvements of our new HR2 and HR3 with respect to the HR1 concern the number of particles used, the bigger box sizes (up to a factor of 4.4 in volume), and a considerably finer mass resolution.
Precisely, HR2 and HR3 were made using 6000^3 = 216 billion and 7210^3 = 374 billion particles, spanning volumes of (7.200 h^-1 Gpc)^3 and (10.815 h^-1 Gpc)^3, respectively – which range from 2600 to over 8800 times the volume of the Millennium Run. The mass resolution reaches down to 1.25×10^11 h^-1 M⊙, allowing us to resolve galaxy-size halos with mean particle separations of 1.2 h^-1 Mpc and 1.5 h^-1 Mpc, respectively. The results span nearly six decades in mass. As with the HR1, these two new simulations are based on the ΛCDM cosmology, with parameters fixed by the WMAP 5-year data (Komatsu et al. 2009), listed in the third and fourth columns of Table 1. The initial redshifts of the simulations are z_i = 32 and z_i = 27, respectively, with 800 time steps for the HR2 and 600 for the HR3. For more details on the initial conditions, see Section 5.1. We also improved on the 'old fashion' linear power spectrum by adopting the CAMB source (http://camb.info/sources), which provides a better measurement of the BAO scale: this is essential for a more accurate determination of the cosmological parameters using the BAO constraint.

The positions of the simulation particles were saved in a slice of constant thickness (equal to 30 h^-1 Mpc), as they appeared in the past light cone.
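As a quick consistency check (our own arithmetic, not part of the paper), the particle masses listed in Table 1 follow directly from the mean particle separations, since each particle carries the mean matter mass of one Lagrangian cell; here we assume the standard critical density ρ_crit ≈ 2.775×10^11 h^2 M⊙ Mpc^-3 and Ω_M = 0.26.

```python
# Each N-body particle carries the mean matter mass of one Lagrangian cell
# of side d, the mean particle separation: m_p = rho_crit * Omega_M * d^3.
RHO_CRIT = 2.775e11   # critical density in h^2 M_sun / Mpc^3 (standard value)
OMEGA_M = 0.26        # matter density parameter from Table 1

def particle_mass(d):
    """Particle mass in h^-1 M_sun for a mean separation d in h^-1 Mpc."""
    return RHO_CRIT * OMEGA_M * d**3

for name, d in [("HR1", 1.6), ("HR2", 1.2), ("HR3", 1.5)]:
    # Reproduces the Table 1 values 2.96, 1.25 and 2.44 (x10^11 h^-1 M_sun).
    print(name, f"{particle_mass(d):.3g}")
```

The minimum halo masses in Table 1 are then simply 30 times these particle masses, matching the 30-particle detection threshold quoted there.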
To construct the various mock catalogs, eight observers were evenly located in the simulation box of the HR2, and twenty-seven in the box of the HR3. Each observer volume covers a space spanning from z = 0 to z = 0.7, without overlaps: this means that the survey volumes are totally independent. In addition, there is one big mock survey region for each simulation. This volume reaches z = 1.85 (HR2) and z = 4 (HR3), respectively, and its center is located at the origin of the simulation box. Note in particular that the latter mock survey covers the range out to z = 2.5, where BOSS data allow one to measure the 3D BAO using Lyman-α clouds seen in absorption in front of many quasar lines of sight.

Figure 3 is an example of the matter density in the past light cone space from the HR3 (the width is 30 h^-1 Mpc). A mock observer is located at the center of the figure; the radius of the map corresponds to about z = 4. The ticks along the horizontal axis indicate the lookback time, in units of billions of years, while the redshift is indicated along the vertical axis. The color scheme varies from blue to red as one goes to larger redshifts. Note that the starting epoch of the HR3 corresponds to the maximum redshift of z = 27. The distribution of the CDM particles is converted to a density field using the variable-size spline kernel containing 5 CDM particles – a technique well known in smoothed-particle hydrodynamics (SPH).

Fig. 4.— In false colors, a quarter-wedge slice density map in the past light cone is shown, from the HR3. Color scheme as in the previous figure. The width is 30 h^-1 Mpc, and the wedge reaches a redshift of z = 27. The density is measured using the spline kernel (a technique widely used in SPH), with 5 nearest particles. The ticks along the horizontal axis indicate the lookback time, in units of billions of years, while the redshift is indicated along the vertical axis. Note that the starting epoch of the HR3 corresponds to the maximum redshift of z = 27.

Figure 4 is another example obtained from our HR3. In false colors, a quarter-wedge slice density map in the past light cone is shown. The width is 30 h^-1 Mpc, and the wedge reaches z = 27. The color scheme varies as in the previous figure. The ticks along the horizontal axis indicate the lookback time, in units of billions of years, while the redshift is indicated along the vertical axis; the starting epoch of the HR3 is z = 27.

Fig. 5.— Particle distribution from a small region (i.e. out to z ≃ 0.1) of the entire mock survey volume shown in Figure 4, which reveals the typical cosmic web-like pattern: individual halos, filaments and sheets can be easily seen, connected in a network, with voids encompassing the volume in between. Particles in the survey volume are shown if they are inside the slice volume with width ∆z corresponding to 7.5 h^-1 Mpc. The plot demonstrates the large dynamical range of our simulation, capable of resolving the clustering at small scales.
Figure 5 shows the particle distribution in a small region (i.e. out to z ≃ 0.1) of the entire mock survey volume, which clearly reveals the typical cosmic web-like pattern: one can easily distinguish individual halos, filaments and sheets, connected in a network, with voids encompassing the volume in between. The plot demonstrates the large dynamical range of our simulations, which are able to correctly resolve individual structures at small scales.

Table 1. Detailed specifics of our Horizon Run N-body simulations.

                                                          HR1       HR2       HR3
  Model                                                   WMAP5     WMAP5     WMAP5
  Ω_M                                                     0.26      0.26      0.26
  Ω_b                                                     0.044     0.044     0.044
  Ω_Λ                                                     0.74      0.74      0.74
  Spectral index                                          0.96      0.96      0.96
  H_0 [km s^-1 Mpc^-1]                                    72        72        72
  σ_8                                                     0.794     0.794     0.794
  Box size [h^-1 Mpc]                                     6592      7200      10815
  No. of grids for initial conditions                     4120^3    6000^3    7210^3
  No. of CDM particles                                    4120^3    6000^3    7210^3
  Starting redshift                                       23        32        27
  No. of global time steps                                400       800       600
  Mean particle separation [h^-1 Mpc]                     1.6       1.2       1.5
  Particle mass [10^11 h^-1 M⊙]                           2.96      1.25      2.44
  Minimum halo mass (30 particles) [10^11 h^-1 M⊙]        88.8      37.5      73.2
  Mean separation of minimum-mass PSB halos [h^-1 Mpc]    13.08     9.01      11.97

3.3 Improving the GOTPM code

The Grid-of-Oct-Trees-Particle-Mesh code, called GOTPM, is a parallel cosmological N-body code based on a hybrid scheme using the PM and Barnes-Hut oct-tree algorithms, originally devised by Dubinski et al. (2004). We used an improved version of this code for our HR2 and HR3 simulations. Specifically, we implemented a new procedure which allows us to describe the particle positions more accurately. In fact, when dealing with billions of particles, single precision (adopted in the original version of GOTPM) is inaccurate for representing their positions. This is because there are only 24 significant bits in single precision, which allow for about 7 significant digits. The resulting position error in single precision is about N/2^24, with N the number of particles in one dimension. For example, the particles in the 7210^3 simulation have a maximum intrinsic positioning error of 0.04%, which becomes significant when their clustering is high and when those particles are packed in small regions. In these situations, using single precision provides highly inaccurate results. A possible (and obvious) solution is to use double precision instead; however, this would require too much memory space (i.e. an additional 30% increment). To solve the problem, we devised a simpler method which does not require any additional space, but only some more calculation time. In essence, instead of determining the positions of the particles using the position vector in the GOTPM code, we used the corresponding offset vector (or displacement vector) from the pre-initial Lagrangian position. We then calculated the position of the particle directly from its index and particle offset vector. By adopting this procedure, the positioning error is about d_(x,y,z)/2^24, where d_(x,y,z) is the displacement from the Lagrangian position. This value is significantly smaller than N.

In addition, in the GOTPM each particle requires a total of 40 bytes: four bytes each are used to save the components of the position and velocity, eight are for the particle index, and another eight are allocated for a pointer which is used to build a linked list in the Tree mode. Since the particle pointer is not used in the PM mode, this memory space is recycled for the density mesh and for the FFT workspace.
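The index-plus-offset positioning just described can be illustrated as follows (a minimal sketch of the idea only, not the actual GOTPM internals; the grid constants are HR3-like values chosen for illustration). The Lagrangian grid point is recovered exactly from the integer index, so the float32 rounding error applies only to the displacement, not to the full box-scale coordinate.

```python
import numpy as np

NGRID = 7210    # particles per dimension (HR3-like, illustrative)
BOX = 10815.0   # box size in h^-1 Mpc (illustrative)

def lagrangian_position(index):
    """Exact pre-initial (Lagrangian) grid position from the particle index;
    no floating-point storage is needed for this part."""
    iz = index % NGRID
    iy = (index // NGRID) % NGRID
    ix = index // (NGRID * NGRID)
    return np.stack([ix, iy, iz], axis=-1) * (BOX / NGRID)

def position(index, offset32):
    """Absolute position = exact Lagrangian point + float32 displacement.
    The rounding error is ~ |offset| / 2^24 instead of ~ BOX / 2^24."""
    return lagrangian_position(index) + offset32.astype(np.float64)
```

Storing the absolute coordinate of a particle near the far side of a 10815 h^-1 Mpc box in float32 loses roughly 10815/2^24 ≈ 6×10^-4 h^-1 Mpc per coordinate, while a typical displacement of order 10 h^-1 Mpc stored as an offset loses only about 10/2^24 ≈ 6×10^-7 h^-1 Mpc.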
3.4 Halo selection

Identifying and deciding which material belongs to a halo and what lies beyond it is clearly a non-trivial question. This is because DM halos are dynamical structures, constantly accreting material and often substantially out of virial equilibrium. In these circumstances, halos evolve quickly, so the parameters used to specify their properties change rapidly and thus are ill-defined. Furthermore, in the case of an ongoing major merger even the definition of the halo center becomes ambiguous.

In the Horizon Runs, halos are first identified via a standard Friends-of-Friends (FoF) procedure. Subhalos are then found within the FoF halos with the subhalo finding technique developed by Kim & Park (2006) and Kim et al. (2008). This method allows one to identify physically self-bound (PSB) dark matter subhalos not tidally disrupted by larger structures at the desired epoch. This procedure does not discard any information on the subhalos which is actually in the N-body simulations, and would be thrown away in a Halo Occupation Distribution (HOD) analysis using just simple FoF halos. Note in fact that the FoF algorithm is a percolation scheme that makes no assumptions about halo geometry, but may spuriously group distinct halos together into the same object. In particular, we have applied the parallel version of the FoF halo finding method to identify virialized structures from the simulation particles. Linked particles with mutual distance less than 0.2 × d_mean, with d_mean the mean particle separation, are grouped. In addition, subhalos in a FoF halo are found by a new version of the PSB method, more advanced than the previous one (Kim & Park 2006) in that it adopts adaptive density fields rather than rectangular grid density fields. Densities at the particle positions are measured using the SPH density allocation scheme, and each particle is linked to its 30 nearest neighbors. The neighbor links are used for building hierarchical isodensity contours around local density peaks. This coordinate-free density allocation and neighbor search are much more effective for identifying subhalos in crowded regions.

4. MOCK SURVEYS

In this section, we first briefly describe our previous mock catalogs made from the HR1, in support of the SDSS-III. We then introduce our new 35 all-sky mock surveys along the past light cones, made from the HR2 and HR3 to simulate the BOSS geometry, and already publicly available at http://astro.kias.re.kr/Horizon-Run23/.

Regardless of the specific simulation considered, the following general procedure is used to construct a mock catalog. For each simulation time step, we track particles which are located in the light cone shell bounded by two spherical surfaces of radii (d − ∆_1, d + ∆_2) (i.e. inner and outer shells), with d being the comoving distance from the observer at the given time step, and ∆ the difference between the comoving distance at the next step (subscript 1) or at the previous step (subscript 2) with respect to that at the current step. We then calculate the particle distances and flag them if they are in the shell. The radius and width of the shell (and so its volume) are calculated according to the cosmological model and the simulation time and step size. Obviously, the particles can cross the shell boundary between different time steps, since they are moving in space. If that happens, they can be detected twice in adjacent time steps, or missed entirely. In the former case, if a particle is detected in two consecutive time steps, we average its position and velocity. For the latter case, in order not to miss a particle, we create a buffer zone of constant width (∆_R = 0.1 of the simulation pixel) and define two regions around the inner and outer shell boundaries, respectively; we then identify the particles inside the buffer zone. If a particle is detected in both regions of the buffer zone, we average its position and velocity and save the information in the light cone particle list. This method has been devised to account for double detections, and in order not to miss particles, which would otherwise happen frequently due to the finite size of the simulation time step.

4.1 Mocks from the HR1: overview

To make mock SDSS-III LRG surveys from the HR1, we placed observers at 8 different locations in the simulation cube, and carried out all-sky surveys up to z = 0.6. These are all past light cone data. The effects of our choices of the starting redshift, time step and force resolution were discussed in detail in Kim et al. (2009). In particular, we used the Zel'dovich redshifts and the FoF halo multiplicity function to estimate their effects (see also the next section).

During the simulation we located 8 equally-spaced (maximally-separated) observers in the simulation cube, and saved the positions and velocities of the particles at z < 0.6 as they crossed the past light cone. We assumed that the SDSS-III survey would produce a volume-limited LRG sample with a constant number density of 3×10⁻⁴ (h⁻¹Mpc)⁻³. In our simulation we varied the minimum mass limit of subhalos to match the number density of the selected subhalos (the mock LRGs) with this number at each redshift. For example, the mass limit yielding the LRG number density of 3×10⁻⁴ (h⁻¹Mpc)⁻³ was found to be 1.33×10¹³ h⁻¹M⊙ and 9.75×10¹² h⁻¹M⊙ at z = 0 and z = 0.6, respectively. We then checked how well the mock LRG sample reproduced the physical properties of the existing LRG sample; see Kim et al. (2008), Gott et al. (2008), Gott et al. (2009), and Choi et al. (2010). We found a good match from 1 to 140 h⁻¹Mpc, particularly in the shape of the correlation function. In addition, the BAO scale and the LSS topology could be very accurately calibrated with these mocks.
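The FoF linking rule quoted in §3.4 (grouping particles closer than 0.2 × d_mean, with periodic wrapping) can be sketched with a naive O(N²) friends-of-friends pass using union-find; the production halo finder is parallel and grid-accelerated, so this is only an illustration of the criterion itself:

```python
import numpy as np

def fof_groups(pos, boxsize, b=0.2):
    """Naive friends-of-friends: link particle pairs closer than
    b * (mean interparticle separation) in a periodic box, then
    return a group label per particle via union-find."""
    n = len(pos)
    d_mean = boxsize / n ** (1.0 / 3.0)   # mean particle separation
    link = b * d_mean
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            dx = pos[i] - pos[j]
            dx -= boxsize * np.round(dx / boxsize)  # minimum-image wrap
            if np.dot(dx, dx) < link * link:
                parent[find(i)] = find(j)          # merge the two groups

    return np.array([find(i) for i in range(n)])
```

Because FoF is a pure percolation scheme, a single bridging particle is enough to merge two clumps into one group, which is exactly the failure mode the PSB subhalo step corrects.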
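The per-step shell membership and buffer-zone test of the mock-catalog procedure can be sketched as follows (a simplified serial version with made-up radii; in the real pipeline d, ∆_1 and ∆_2 follow from the cosmological model and the step size):

```python
import numpy as np

def lightcone_flags(pos, obs, d, d1, d2, buf):
    """Classify particles for one light-cone time step.

    pos : (N, 3) comoving particle positions
    obs : (3,) observer position
    d   : comoving distance to the light cone at the current step
    d1  : shell half-width toward the next step (inner side)
    d2  : shell half-width toward the previous step (outer side)
    buf : width of the buffer zones around the two shell boundaries

    Returns boolean masks: in_shell, in_inner_buffer, in_outer_buffer.
    """
    r = np.linalg.norm(pos - obs, axis=1)
    in_shell = (r > d - d1) & (r < d + d2)
    # Buffer zones straddling the boundaries catch particles that cross
    # a shell edge between steps; particles found twice are averaged.
    inner_buf = np.abs(r - (d - d1)) < buf
    outer_buf = np.abs(r - (d + d2)) < buf
    return in_shell, inner_buf, outer_buf
```

A particle flagged in the buffer zone at two consecutive steps would then have its position and velocity averaged before being written to the light cone particle list, as described above.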
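The abundance-matching step used to select the mock LRGs (varying the minimum subhalo mass until the sample reaches the target number density) can be sketched as below; the masses and volume are illustrative values, not HR1 data:

```python
import numpy as np

def mass_limit_for_density(masses, volume, n_target):
    """Return the minimum subhalo mass such that the number density of
    subhalos above it matches n_target (the most massive are kept)."""
    n_keep = int(round(n_target * volume))
    n_keep = max(1, min(n_keep, len(masses)))
    ranked = np.sort(masses)[::-1]     # descending mass order
    return ranked[n_keep - 1]          # mass of the least massive kept halo

# Toy catalog: subhalo masses in units of 10^13 h^-1 Msun,
# in a (10 h^-1 Mpc)^3 sub-box, targeting 3e-3 (h^-1 Mpc)^-3.
masses = np.array([5.0, 1.0, 9.0, 3.0, 7.0, 2.0])
limit = mass_limit_for_density(masses, volume=1000.0, n_target=3e-3)
```

Repeating this per redshift shell reproduces the behavior quoted in the text, where the mass limit drifts with redshift while the number density stays fixed.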
