Detection of Potential Transit Signals in 17 Quarters of Kepler Mission Data

Shawn Seader[1], Jon M. Jenkins[2], Peter Tenenbaum[1], Joseph D. Twicken[1], Jeffrey C. Smith[1], Rob Morris[1], Joseph Catanzarite[1], Bruce D. Clarke[1], Jie Li[1], Miles T. Cote[2], Christopher J. Burke[1], Sean McCauliff[3], Forrest R. Girouard[3], Jennifer R. Campbell[3], Akm Kamal Uddin[3], Khadeejah A. Zamudio[3], Anima Sabale[3], Christopher E. Henze[2], Susan E. Thompson[1], Todd C. Klaus[4]

[email protected]

arXiv:1501.03586v2 [astro-ph.EP]

[1] SETI Institute/NASA Ames Research Center, Moffett Field, CA 94035, USA
[2] NASA Ames Research Center, Moffett Field, CA 94035, USA
[3] Wyle Labs/NASA Ames Research Center, Moffett Field, CA 94035, USA
[4] Moon Express, Inc., P.O. Box 309, Moffett Field, CA 94035, USA

ABSTRACT

We present the results of a search for potential transit signals in the full 17-quarter dataset collected during Kepler's primary mission, which ended on May 11, 2013, due to the on-board failure of a second reaction wheel needed to maintain high-precision, fixed pointing. The search includes a total of 198,646 targets, of which 112,001 were observed in every quarter and 86,645 were observed in a subset of the 17 quarters. For the first time, this multi-quarter search is performed on data that have been fully and uniformly reprocessed through the newly released version of the Data Processing Pipeline. We find a total of 12,669 targets that contain at least one signal that meets our detection criteria: periodicity of the signal, a minimum of three transit events, an acceptable signal-to-noise ratio, and four consistency tests that suppress many false positives. Each target containing at least one transit-like pulse sequence is searched repeatedly for other signals that meet the detection criteria, indicating a multiple planet system. This multiple planet search adds an additional 7,698 transit-like signatures for a total of 20,367. Comparison of this set of detected signals with a set of known and vetted transiting planet signatures in the Kepler field of view shows that the recovery rate of the search is 90.3%. We review ensemble properties of the detected signals and present various metrics useful in validating these potential planetary signals. We highlight previously undetected transit-like signatures, including several that may represent small objects in the habitable zone of their host stars.

Subject headings: planetary systems – planets and satellites: detection

1. Introduction

We have reported on the results of past searches of the Kepler data for transiting planet signals in Tenenbaum et al. (2012), which searched 3 quarters of data, Tenenbaum et al. (2013), which searched 12 quarters of data, and Tenenbaum et al. (2014), which searched 16 quarters of data. We now update and extend those results to incorporate the full data set collected by Kepler and an additional year of Kepler Science Processing Pipeline (Jenkins et al. 2010) development. We further extend the results to include some metrics used to validate the astrophysical nature of the detections.
1.1. Kepler Science Data

The details of Kepler operation and data acquisition have been reported elsewhere (Haas et al. 2010). In brief: the Kepler spacecraft is in an Earth-trailing heliocentric orbit and maintained a boresight pointing centered on α = 19h 22m 40s, δ = +44.5° during the primary mission. The Kepler photometer acquired data on a 115 square degree region of the sky. The data were acquired in 29.4 minute integrations, colloquially known as "long cadence" data. The spacecraft was rotated about its boresight axis by 90 degrees every 93 days in order to keep its solar panels and thermal radiator correctly oriented; the interval which corresponds to a particular rotation state is known colloquially as a "quarter." Because of the quarterly rotation, target stars were observed throughout the year in 4 different locations on the focal plane. Science acquisition was interrupted monthly for data downlink, quarterly for the maneuver to a new roll orientation (typically combined with a monthly downlink to limit the loss of observation time), once every 3 days for reaction wheel desaturation (one long cadence sample is sacrificed at each desaturation), and at irregular intervals due to spacecraft anomalies. In addition to these interruptions, which were required for normal operation, data acquisition was suspended for 11.3 days, from 2013-01-17 19:39Z through 2013-01-29 03:50Z (555 long cadence samples)[Footnote 1]. During this time, the spacecraft reaction wheels were commanded to halt motion in an effort to mitigate damage which was being observed on reaction wheel 4; spacecraft operation without use of reaction wheels is not compatible with high-precision photometric data acquisition.

Footnote 1: Time and date are presented here in ISO-8601 format, YYYY-MM-DD HH:MM, or optionally YYYY-MM-DD HH:MM:SS, with a trailing 'Z' to denote UTC.

In July 2012, one of the four reaction wheels used to maintain spacecraft pointing during science acquisition experienced a catastrophic failure. The mission was able to continue using the remaining three wheels to permit 3-axis control of the spacecraft, until May of 2013. At that time a second reaction wheel failed, forcing an end to Kepler data acquisition in the nominal Kepler field of view. As a result, the analysis reported here is the first which incorporates the full volume of data acquired from that field of view.[Footnote 2]

Footnote 2: We exclude 10 days of data acquired at the end of commissioning on ~53,000 stars, dubbed Q0, as this segment of data is too short to avoid undesirable edge effects in the transit search.

Kepler science data acquisition of Quarter 1 began at 2009-05-13 00:01:07Z, and acquisition of Quarter 17 data concluded at 2013-05-11 12:16:22Z. This time interval contains 71,427 long cadence intervals. Of these, 5,077 were consumed by the interruptions listed above. An additional 1,100 long cadence intervals were excluded from use in searches for transiting planets. These samples were excluded due to data anomalies which came to light during processing and inspection of the flight data. This includes a contiguous set of 255 long cadence samples acquired over the 5.2 days which immediately preceded the 11 day downtime described above: the shortness of this dataset combined with the duration of the subsequent gap led to a judgement that the data would not be useful for transiting planet searches. A total of 65,250 long cadence intervals for each target were dedicated to science data acquisition. This is only 1,242, or ~2%, more searchable cadences than were available for the analysis of 16 quarters of Kepler data in Tenenbaum et al. (2014). This highlights the fact that Quarter 17 was only about a month long in duration before the failure of the second reaction wheel.
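As a quick consistency check, the cadence accounting above can be reproduced directly from the quoted numbers. The short sketch below is illustrative arithmetic only, not pipeline code; all values are taken from the text.

```python
# Cadence budget for the Q1-Q17 transiting planet search (values quoted in the text above).
total_long_cadences = 71_427       # Quarter 1 start through Quarter 17 end
interruption_cadences = 5_077      # downlinks, rolls, desaturations, anomalies, 11.3-day suspension
anomaly_excluded_cadences = 1_100  # excluded after inspection of flight data (includes the 255-cadence segment)

searchable_cadences = total_long_cadences - interruption_cadences - anomaly_excluded_cadences
assert searchable_cadences == 65_250   # matches the value quoted in the text

# Gain over the Q1-Q16 search of Tenenbaum et al. (2014): 1,242 cadences, or about 2%.
q1_q16_searchable = searchable_cadences - 1_242
increase = 100 * 1_242 / q1_q16_searchable
print(f"searchable cadences: {searchable_cadences}, increase over Q1-Q16: {increase:.1f}%")
```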
A total of 198,675 targets observed by Kepler were searched for evidence of transiting planets. This set of targets includes all stellar targets observed by Kepler at any point during the mission, and specifically includes target stars which were not originally observed for purposes of transiting planet searches (asteroseismology targets, guest observer targets, etc.). The exception to this is a subset of known eclipsing binaries, as described below. Figure 1 shows the distribution of targets according to the number of quarters of observation. A total of 112,014 targets were observed for all 17 quarters. An additional 35,653 targets were observed for 14 quarters: the vast majority of these targets were in regions of the sky which are observed in some quarters by CCD Module 3, which experienced a hardware failure in its readout electronics during Quarter 4 in January 2010, resulting in a "blind spot" which rotates along with the Kepler spacecraft, gapping 25% of the quarterly observations for affected targets. The balance of 51,008 targets observed for some other number of quarters is largely due to gradual changes in the target selection process over the duration of the mission.

As described in Tenenbaum et al. (2014), some known eclipsing binaries were excluded from planet searches in the pipeline. For this search over Q1-Q17, a total of 1,033 known eclipsing binaries were excluded. This is smaller than the number excluded in the Q1-Q16 analysis due to a change in exclusion criteria. Specifically, we excluded eclipsing binaries from the most recent Kepler catalog of eclipsing binaries (Prša et al. 2011; Slawson et al. 2011) that did not have a morphology < 0.6 (Matijevič et al. 2012), i.e., the primary and secondary eclipses must be well separated from one another, and were not in conflict with established planet candidates archived at NExScI. This requirement on the morphology is identical to that of the Q1-Q16 run, but all the other criteria have been dropped to exclude fewer targets. Note, however, that the Data Validation (DV) (Wu et al. 2010; Twicken et al. 2015) step of the Data Processing Pipeline may still exclude some events based on its own analysis of primary and secondary depths. Thus, the excluded eclipsing binaries are largely contact binaries, which produce the most severe misbehavior in the Transiting Planet Search (TPS) pipeline module (Tenenbaum et al. 2012, 2013), while well-detached, transit-like eclipsing binaries are now processed in TPS. This was done in order to ensure that no possible transit-like signature was excluded, and also to produce examples of the outcome of processing such targets through both TPS and DV, such that quantitative differences between planet and eclipsing binary detections could be determined and exploited for rejecting other, as-yet-unknown, eclipsing binaries detected by TPS.
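The exclusion rule just described can be written out as a simple filter. The sketch below is ours, not pipeline code, and the catalog field names are hypothetical; it only encodes the stated criteria (drop catalog eclipsing binaries unless they are well detached, morphology < 0.6, or they conflict with an established planet candidate).

```python
def select_ebs_to_exclude(eb_catalog):
    """Return KIC ids of known eclipsing binaries to exclude from the transit search.

    eb_catalog: iterable of dicts with hypothetical keys
        'kic_id'             -- Kepler Input Catalog identifier
        'morphology'         -- Matijevic et al. (2012) morphology parameter in [0, 1]
        'conflicts_with_koi' -- True if the ephemeris matches an established planet candidate
    """
    excluded = []
    for eb in eb_catalog:
        well_detached = eb["morphology"] < 0.6   # eclipses well separated; keep in the search
        if not well_detached and not eb["conflicts_with_koi"]:
            excluded.append(eb["kic_id"])
    return excluded

# Example: only the contact-like system without a conflicting planet candidate is excluded.
catalog = [
    {"kic_id": 1, "morphology": 0.2, "conflicts_with_koi": False},
    {"kic_id": 2, "morphology": 0.8, "conflicts_with_koi": False},
    {"kic_id": 3, "morphology": 0.9, "conflicts_with_koi": True},
]
print(select_ebs_to_exclude(catalog))  # -> [2]
```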
1.2. Processing Sequence: Pixels to TCEs to KOIs

The steps in processing Kepler science data have not changed since Tenenbaum et al. (2012), and are briefly summarized below. The pixel data from the spacecraft were first calibrated, in the CAL pipeline module, to remove pixel-level effects such as gain variations, linearity, and bias (Quintana et al. 2010). The calibrated pixel values were then combined, within each target, in the Photometric Analysis (PA) pipeline module, to produce a flux time series for that target (Twicken et al. 2010b). In the Pre-search Data Conditioning (PDC) pipeline module, the ensemble of target flux time series were then corrected for systematic variations driven by effects such as differential velocity aberration, temperature-driven focus changes, and small instrument pointing excursions (Stumpe et al. 2014). These corrected flux time series became the inputs for the Transiting Planet Search.

The Transiting Planet Search software module analyzed each corrected flux time series individually for evidence of periodic reductions in flux which would indicate a possible transiting planet signature. The search process incorporated a significance threshold against a multiple event statistic[Footnote 3] and a series of vetoes; the latter were necessary because, while the significance threshold was sufficient for rejection of the null hypothesis, it was incapable of discriminating between multiple competing alternate hypotheses which can potentially explain the flux excursions. An ephemeris on a given target which satisfied the significance threshold and passed all vetoes is known as a Threshold Crossing Event (TCE). Each target with a TCE was then searched for additional TCEs, which potentially indicated multiple planets orbiting a single target star.

Footnote 3: The multiple event statistic is a measure of the degree to which the data are correlated with the reference waveform (in this case a sequence of evenly spaced transit pulses), normalized by the strength of the observation noise. It is approximately the same as the result of dividing the fitted transit depth by the uncertainty in the fitted transit depth, and can be interpreted in terms of the likelihood that a value would be seen at that level by chance.

After the search for TCEs concluded, additional automated tests were performed to assist members of the Science Team in their efforts to reject false positives. A TCE which has been accepted as a valid astrophysical signal (either planetary or an eclipsing binary), based on analysis of these additional tests, is designated as a Kepler Object of Interest (KOI). The collection of KOIs that pass additional scrutiny are promoted to planet candidate status, while those that don't are dispositioned as false positives. Note that, while the TCEs were determined in a purely algorithmic fashion by the TPS software module, KOIs were selected on the basis of examination and analysis by scientists.
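To make the description of the multiple event statistic in Footnote 3 concrete, the following sketch computes a matched-filter statistic of that general form: the correlation of whitened data with a whitened transit pulse train, normalized by the template's whitened norm. This is a simplified stand-in for the pipeline's wavelet-domain implementation, and the function and variable names are ours.

```python
import numpy as np

def multiple_event_statistic(flux_whitened, template_whitened):
    """Simplified matched-filter statistic of the kind described in Footnote 3.

    flux_whitened:     whitened flux time series (zero mean, unit-variance noise)
    template_whitened: whitened pulse train of evenly spaced transits at the trial
                       period, epoch, and duration (same length as the data)

    Returns a statistic that behaves like fitted depth / depth uncertainty, so a
    value near 7.1 corresponds to the search threshold discussed in Section 2.3.
    """
    numerator = np.dot(flux_whitened, template_whitened)                 # correlation with the data
    denominator = np.sqrt(np.dot(template_whitened, template_whitened))  # noise normalization
    return numerator / denominator

# Toy example: white noise plus three equally spaced box transits.
rng = np.random.default_rng(0)
n, depth = 3000, 1.5
template = np.zeros(n)
for start in (500, 1500, 2500):            # three transit events, 10 cadences each
    template[start:start + 10] = -1.0
data = rng.standard_normal(n) + depth * template
print(f"MES ~ {multiple_event_statistic(data, template):.1f}")
```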
ules of the pipeline have been ported over (2014), there have been considerable im- to the NASA Ames supercomputer to al- provements to the Kepler Science Process- low for reprocessing of multiple quarters ing Pipeline. We describe here only the in parallel. In all however, processing all recent improvements and refer the inter- the data uniformly from the start to the ested reader to Tenenbaum et al. (2012, end of the pipeline takes several months. 2013, 2014) for a description of past im- Each module of the pipeline is necessar- provements. The pixel-level calibration ily, andpainstakingly, configuredandman- performed by the CAL module of the aged separately as they are run in se- Science Processing Pipeline has been im- quence. proved to include a cadence-by-cadence 2- D black correction. In an effort to mitigate Pixel-level data as well as light curves the effects of the image artifacts described from these pre-search processing steps are in Caldwell et al. (2010) on the transiting publicly available at the Mikulski Archive planet search, CAL also generates rolling forSpaceTelescopes4 (MAST) asDataRe- band flags (Clarke et al. 2014) that can be lease 24. Each data release is also accom- used to identify the cadences on a given panied by a Data Release Notes document target that are affected. Improvements to that describes the data in general. the undershoot correction have also been made in CAL. 2. Transiting Planet Search ThePre-searchDataConditioning(PDC) Thissectiondescribesthechangeswhich module corrects for both attitude tweaks have been made to the TPS algorithm and Sudden Pixel Sensitivity Dropouts since Tenenbaum et al. (2014). Forfurther (SPSDs). Previously it was noticed that those corrections can occasionally be very 4 https://archive.stsci.edu/index.html 5 information on the algorithm, see Jenkins of TPS to strong short period planetary (2002),Jenkins et al.(2010),Tenenbaum et al. signatures(Christiansen et al. 2013). TPS (2012, 2013, 2014). normalizes each quarterly segment by its median and fills the monthly and quar- 2.1. Conditioning and Quarter-Stitching terly gaps using methods that attempt to the Data preserve the correlation structure of the observation noise and permit FFT-based Owing to the complicated and varied approaches to be used in the subsequent noise sources across the many stellar tar- search. gets in the Kepler field, the core detec- After TPS has found a signal with a tion algorithm in TPS is a wavelet-based, MultipleEventStatistic(MES)(Jenkins et al. adaptive matched filter. The data are 2010) above threshold, however, it is pos- transformed to the wavelet domain using sible to detrend and re-whiten the data Daubechies 12-tap wavelets (Debauchies and avoid any loss in Signal-to-Noise Ratio 1988) in a joint time-frequency overcom- (SNR) that would otherwise occur because pletewaveletdecomposition. Inthewavelet of the effect of the signal on the estimated domain, the noise at each time-frequency trend and the whitening coefficient esti- scale is estimated and used to whiten both mates. Although this will not make the the data and the templates which span problem of detecting the signal any eas- the space of physically allowable orbital ier, since it can only be done after the period, orbital phase (epoch), and tran- initial detection, it does improve the dis- sit duration. 
After TPS has found a signal with a Multiple Event Statistic (MES) (Jenkins et al. 2010) above threshold, however, it is possible to detrend and re-whiten the data and avoid any loss in Signal-to-Noise Ratio (SNR) that would otherwise occur because of the effect of the signal on the estimated trend and the whitening coefficient estimates. Although this will not make the problem of detecting the signal any easier, since it can only be done after the initial detection, it does improve the discriminating power of the other statistical tests, or vetoes (discussed subsequently), to which TPS subjects each candidate event. Prior to calculation of the veto statistics, the ephemeris of the candidate event is used to identify in-transit cadences (with some small amount of padding). These in-transit cadences are then filled using an adaptive auto-regressive gap prediction algorithm. A trend line is then estimated using a piecewise polynomial fitting algorithm that employs Akaike's Information Criterion to prevent over-fitting. After removing this trend from the data, the whitening coefficients are re-computed. After removing the trend from the in-transit cadences, they are restored to the trend-removed data, which are then whitened using the new whitening coefficients. The potential candidate is then subjected to a suite of four statistical tests, described below in Section 2.3.
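A compact sketch of the post-detection sequence described above: mask the in-transit cadences, fill them, fit a trend with a model order chosen by Akaike's Information Criterion, remove the trend, restore the trend-removed in-transit points, and re-whiten. The adaptive auto-regressive gap fill and the wavelet whitener are replaced here by simple stand-ins (linear interpolation and division by a robust scatter estimate); names are ours.

```python
import numpy as np

def detrend_and_rewhiten(flux, in_transit, max_order=6):
    """Simplified version of the post-detection conditioning described in Section 2.1.

    flux:       1-D flux time series
    in_transit: boolean mask of in-transit cadences (with padding) from the candidate ephemeris
    """
    t = np.linspace(-1.0, 1.0, flux.size)

    # 1. Fill in-transit cadences so the transit does not bias the trend estimate.
    #    (The pipeline uses an adaptive auto-regressive gap prediction algorithm.)
    filled = flux.copy()
    filled[in_transit] = np.interp(t[in_transit], t[~in_transit], flux[~in_transit])

    # 2. Fit a polynomial trend, choosing the order with Akaike's Information Criterion.
    best_aic, trend = np.inf, None
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(t, filled, order)
        resid = filled - np.polyval(coeffs, t)
        n, k = flux.size, order + 1
        aic = n * np.log(np.mean(resid**2)) + 2 * k
        if aic < best_aic:
            best_aic, trend = aic, np.polyval(coeffs, t)

    # 3. Remove the trend, then restore the original (trend-removed) in-transit cadences.
    detrended = filled - trend
    detrended[in_transit] = flux[in_transit] - trend[in_transit]

    # 4. Re-whiten: here, scale by a robust out-of-transit scatter estimate
    #    (the pipeline instead recomputes its wavelet-domain whitening coefficients).
    oot = detrended[~in_transit]
    sigma = 1.4826 * np.median(np.abs(oot - np.median(oot)))
    return detrended / sigma
```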
2.2. Search Templates and Template Spacing

Mismatch between any true signal and the template used to filter the data degrades SNR. This degradation due to signal-template mismatch can be decomposed into two separate types: shape mismatch and timing mismatch. The transit duration and assumed transit model affect the shape mismatch, while the template search grid spacing in orbital period and epoch affects the timing mismatch.

TPS searches a total of 14 transit durations: 1.5, 2.0, 2.5, 3.0, 3.5, 4.5, 5.0, 6.0, 7.5, 9.0, 10.5, 12.0, 12.5, and 15 hours. These are logarithmically spaced, rounded to the nearest half hour (roughly the time per cadence), and augmented by a requirement to always search for 3, 6, and 12 hour pulses. Prior to this run of the pipeline, TPS had simply used a square-wave transit model. In Seader et al. (2013), it was shown through a Monte Carlo study that, with perfect duration and timing match, the square wave on average mismatches a true signal by 3.91%. This translates directly into SNR loss. This same Monte Carlo study was used to compute the integral average of all astrophysical models based on the Mandel and Agol geometric transit model (Mandel & Agol 2002) with limb darkening of Claret (Claret 2000), over the parameter space of interest (Seader et al. 2013). TPS now uses this averaged model to construct templates, which lowers the shape mismatch to only 1.49%. Lowering this shape mismatch also improves the sensitivity of the χ² vetoes (discussed in the next section), which currently assume a perfect match between the signal and template. In the next pipeline code release, the calculation of the χ² vetoes will take into account the signal-template mismatch as described first in Allen (2004) and later in Seader et al. (2013). Using the new templates, the total shape mismatch (transit duration included) is only 1.66%, compared to 4.32% for the square wave model.

The mismatch in timing is a function of the template spacing in the period-epoch space. This is controlled by a mismatch parameter which is essentially the Pearson correlation coefficient between two mismatched pulse trains (Jenkins et al. 1996, 2010). Previously, TPS required a match of 90%, which gave a timing mismatch alone of 2.65% with the new astrophysically motivated templates, for a total mismatch of 4.31% including both shape and timing mismatches. Since we are now running TPS on the NAS, however, we can afford to search a finer grid of templates and have therefore tightened up the period-epoch match to 95%. This parameter may be increased further in the future, but by increasing the number of search templates the false alarm rate also increases, so there are tradeoffs to consider beyond just run time. The false alarm vetoes keep the total number of false alarms at a manageable level even with the increase to the period-epoch match.
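The mismatch parameter that drives the period grid can be illustrated numerically: generate a pulse train at a reference period, perturb the period, and measure the Pearson correlation between the two trains; the period step is chosen so this correlation stays above the required match (now 95%). The code below is a toy illustration with square pulses and assumed numbers, not the pipeline's analytic spacing.

```python
import numpy as np

def pulse_train(n_cadences, period, epoch, duration):
    """Unit-depth square pulse train: 1 in transit, 0 elsewhere (toy model)."""
    t = np.arange(n_cadences)
    phase = (t - epoch) % period
    return (phase < duration).astype(float)

def period_match(n_cadences, period, dperiod, duration):
    """Pearson correlation between a pulse train and one with a slightly different period."""
    a = pulse_train(n_cadences, period, 0.0, duration)
    b = pulse_train(n_cadences, period + dperiod, 0.0, duration)
    return np.corrcoef(a, b)[0, 1]

# Largest period step (in cadences) that keeps the correlation above the required match.
n, period, duration = 20_000, 500.0, 10.0   # ~10-cadence (about 5 hr) transits; toy numbers
for required_match in (0.90, 0.95):
    step = 0.0
    while period_match(n, period, step + 0.01, duration) >= required_match:
        step += 0.01
    print(f"match >= {required_match:.2f}: max period step ~ {step:.2f} cadences")
```

Tightening the required match from 90% to 95% roughly halves the allowed period step, which is why the finer grid is only affordable now that TPS runs on the NAS.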
2.3. False Alarm Vetoes

The most substantial changes, in terms of both impact on final results and extent in the codebase, are the changes to the false alarm vetoes since the last pipeline codebase release cycle. During the transiting planet search, TPS steps through potential candidates across period-epoch space with MES values exceeding the search threshold of 7.1σ, in order of decreasing MES for each pulse duration. The search continues until either TPS runs out of time on that pulse duration, it hits the maximum allowable number of candidates to loop over (set to 1000), it exhausts the list without finding anything that passes all the vetoes, or it settles on something that passes all the vetoes. During the course of performing the search, TPS has the ability to remove up to two features in the data that contribute to detections that do not pass all the vetoes (Tenenbaum et al. 2013). After removing features, the period-epoch folding is redone to generate a new list of candidates.

Candidate events are subjected to a suite of four statistical tests, two of which are new to this pipeline code release. First, the distribution of the MES under the null hypothesis at the detected period is estimated by a Bootstrap test outlined in detail in Appendix A and in Jenkins et al. (2015). From the estimated MES distribution, the threshold needed to achieve the false alarm probability equivalent to a 7.1σ threshold on a standard normal distribution (6.28 × 10^-13) is calculated by either interpolation or extrapolation. If the whitener is doing its job perfectly, the MES distribution should be standard normal. When extrapolation is needed, a linear extrapolation in log probability space is done, which yields values that are typically conservative. If any threshold values are suspect, or if the distribution cannot be constructed for some reason, then this veto is not applied. Otherwise, we require that the MES exceed the calculated threshold value to, in effect, ensure that we are making a detection that meets the false alarm criteria we have placed on the search. The threshold used in this run for the bootstrap veto is too strict given that many real transiting planets are found in multiple planet systems. The presence of transits from other candidate events on a given target will artificially elevate the threshold, since those transits add counts to MES bins above background in the null statistics. In future runs, after appropriate tuning, the threshold will be relaxed to reduce the likelihood of rejecting real planetary signatures.

This Bootstrap veto does an excellent job of removing long-period false alarms that are related to rolling band image artifacts. The image artifacts render the MES distribution non-white and potentially non-Gaussian after whitening. This means that 7.1σ is not as significant in comparison to the background at a given period for a given target as it would be in a purely standard normal distribution, so we must therefore require a higher threshold to achieve the desired false alarm rate. Under the Neyman-Pearson criterion, which is employed in our detection strategy, the search threshold is determined from a required false alarm rate. The Bootstrap veto allows us to map out a correction factor for each target and period so that we can ensure the required false alarm rate is met. This is sure to play an important role in upcoming planet occurrence rate studies, where one must assess the detectability of potential signals spanning the whole parameter space on every target. The Bootstrap shows us that the MES distribution, and therefore detectability, is highly variable from target to target and across period space as well.
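The bootstrap veto's threshold calculation can be sketched as follows: given an estimate of the null MES tail probability at a grid of MES values, find the MES whose tail probability equals the false alarm rate of a 7.1σ Gaussian threshold, interpolating, or linearly extrapolating in log-probability, as needed. The construction of the null distribution itself (Appendix A) is not reproduced here; function names are ours.

```python
import numpy as np
from scipy import stats

# Target false alarm probability: the one-sided Gaussian tail beyond 7.1 sigma
# (about 6.3e-13; quoted in the text as 6.28e-13).
TARGET_FAP = stats.norm.sf(7.1)

def bootstrap_threshold(mes_grid, tail_prob):
    """MES value whose estimated null tail probability equals TARGET_FAP.

    mes_grid:  increasing MES values at which the bootstrap evaluated the null distribution
    tail_prob: estimated P(MES > mes_grid[i]) under the null hypothesis (decreasing)
    Interpolates in log-probability; if TARGET_FAP lies beyond the estimated tail,
    extrapolates linearly in log-probability from the last two points (conservative).
    """
    logp = np.log10(tail_prob)
    target = np.log10(TARGET_FAP)
    if target >= logp[-1]:                       # target reachable: interpolate
        return np.interp(target, logp[::-1], mes_grid[::-1])
    slope = (logp[-1] - logp[-2]) / (mes_grid[-1] - mes_grid[-2])
    return mes_grid[-1] + (target - logp[-1]) / slope   # linear extrapolation in log10(p)

# Example: a heavier-than-Gaussian null tail (as rolling-band artifacts can produce)
# pushes the required MES threshold above 7.1.
mes = np.linspace(4.0, 9.0, 26)
heavy_tail = stats.norm.sf(mes / 1.15)           # toy null: MES scatter inflated by 15%
print(f"required MES threshold: {bootstrap_threshold(mes, heavy_tail):.2f}")
# A candidate is vetoed if its MES falls below this target- and period-specific threshold.
```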
Candidates that pass the Bootstrap veto are then subjected to the Robust Statistic (RS) veto after the detrending and re-whitening described in Section 2.1 above. This veto is described in detail in Appendix A of Tenenbaum et al. (2013). The fit SNR is derived from robustly fitting the whitened data to a whitened model pulse train constructed using the ephemeris of the candidate event. Previously, the threshold for this fit SNR was set to 6.4σ based on examination of the slope of the best fit line on a plot of MES versus RS for KOIs detected with the correct ephemeris. We found previously that RS = 0.9 MES, hence the 6.4σ threshold since we have a 7.1σ threshold on MES. Now, however, we find RS = 1.15 MES, which means we could in principle set the RS threshold at 8.1σ. This makes sense considering that the whitening process lowers SNR and the MES has not been re-computed with the new whitener as the RS has. For this run, we conservatively raised the threshold on the RS to 6.8σ.

There is an additional criterion applied during the RS test that affects only candidates with the minimum allowed number of transits (three transits). We require that each transit have no more than 50% of its cadences with data quality weights less than unity (Tenenbaum et al. 2014).

If the RS threshold is exceeded, the candidate event is subjected to two separate statistical χ² tests. The first of these has been previously used and is described in Tenenbaum et al. (2013) as χ²_(2), with modifications as discussed in Tenenbaum et al. (2014) and Seader et al. (2013). This test breaks up the MES into different components, one for each transit event, and compares what is expected from each transit to what is actually obtained in the data, assuming that there is indeed a transiting planet. The current formulation assumes there is no mismatch between the signal and template. In the next pipeline release, however, the mismatch will be explicitly accounted for by modifying the way the number of degrees of freedom is calculated, as discussed in Allen (2004) and Seader et al. (2013). Since Tenenbaum et al. (2014), the use of χ²_(1) has been dropped due to the deficiencies discussed in Seader et al. (2013). We have also dropped the use of χ²_(3) as defined in Seader et al. (2013). We have, however, implemented a new χ² veto, which tests the goodness of fit, that was presented in Baggio et al. (2000) and later discussed in Allen (2004). The new veto is dubbed χ²_(GOF), and details of its construction are presented in Appendix B. The thresholds used for both χ² vetoes are 7.0σ.

The number of targets with a MES above threshold was 126,153, or 63.5% of all the targets. The vetoes were then applied to this set of targets. The bootstrap veto rejected 107,846 targets, the RS veto rejected an additional 3,553 targets, and the χ² vetoes rejected an additional 2,056 targets. Note that there is a lot of overlap across the different vetoes for targets that were rejected; for example, most of the targets vetoed by the bootstrap would also be vetoed by the χ² vetoes, and targets vetoed by one version of the χ² veto would also be vetoed by the other. Together, this is a powerful set of false alarm vetoes, each with a very firm theoretical basis, that complement each other well to discriminate against the myriad of potential types of noise that can masquerade as transiting planet signals. The vetoes are not perfect, however, and do prevent the generation of TCEs for some number of legitimate transiting planets. We are in the process of fully characterizing their performance through transit injection studies.
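The Robust Statistic described above can be pictured as an iteratively reweighted least-squares fit of the whitened data to the whitened model pulse train, with the resulting fit SNR compared against the veto threshold (6.8σ in this run). The sketch below uses simple Huber weights as a stand-in for the pipeline's robust fitter of Tenenbaum et al. (2013), Appendix A; all names are ours.

```python
import numpy as np

def robust_statistic(flux_whitened, template_whitened, n_iter=10, huber_k=1.345):
    """Robust fit SNR of the whitened data against the whitened model pulse train.

    A single depth parameter is fit by iteratively reweighted least squares with
    Huber weights, and the statistic returned is fitted depth / depth uncertainty.
    This is a simplified stand-in for the RS veto, not the pipeline implementation.
    """
    w = np.ones_like(flux_whitened)
    depth = 0.0
    for _ in range(n_iter):
        depth = np.sum(w * template_whitened * flux_whitened) / np.sum(w * template_whitened**2)
        resid = flux_whitened - depth * template_whitened
        scale = 1.4826 * np.median(np.abs(resid)) + 1e-12   # robust residual scale
        u = np.abs(resid) / scale
        w = np.where(u <= huber_k, 1.0, huber_k / u)        # Huber down-weighting of outliers
    depth_uncertainty = 1.0 / np.sqrt(np.sum(w * template_whitened**2))
    return depth / depth_uncertainty

# Usage: a candidate passes this veto when robust_statistic(...) exceeds the RS
# threshold (6.8 sigma in this run); outliers that can inflate the plain MES are
# down-weighted here, which is what makes the statistic "robust."
```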
2.4. Detection of Multiple Planet Systems

For the 12,669 target stars which were found to contain a threshold crossing event, additional TPS searches were used to identify target stars which host multiple planet systems. The process is described in Wu et al. (2010) and in Tenenbaum et al. (2013). The multiple planet search incorporates a configurable upper limit on the number of TCEs per target, which is currently set to 10. This limit is incorporated for two reasons. First, limitations on available computing resources translate to limits on the number of searches which can be accommodated, and also on the number of post-TPS tests which can be accommodated. Second, applying a limit to the number of TCEs per target prevents a failure mode in which a flux time series is so pathological that the search process becomes "stuck," returning an effectively infinite number of nominally independent detections. The selected limit of 10 TCEs is based on experience: to date, the maximum number of KOIs on a single target star is 7, which indicates that at this time, limiting the process to 10 TCEs per target is not sacrificing any potential KOIs.

The additional searches performed for detection of multiple planet systems yielded 7,698 additional TCEs across 5,238 target stars, for a grand total of 20,370 TCEs. Figure 2 shows a histogram of the number of targets with each of the allowed numbers of TCEs. In this run, 16 targets produced 6 TCEs, 3 targets produced 7 TCEs, and 1 target had 8 TCEs. Note that all of these TCEs are subjected to the full TPS process of detection and vetoing described above.

In the analyses below, 3 targets that produced TCEs are not included. This is due to the desire to limit the analysis presented here to TCEs for which there is a full analysis available from the DV pipeline module (Wu et al. 2010). These 3 targets failed to complete their DV analyses due to timing out and are thus excluded. The Kepler Input Catalog (KIC) numbers for these 3 targets are 5513861, 8019043, and 10095469. The TCEs found around KIC targets 5513861 and 10095469 were both short period (0.676 and 0.755 days, respectively), were found previously, and are contained in the TCE catalogs at the NASA Exoplanet Archive. They were not made into KOIs by the TCE Review Team (TCERT). The other TCE, found on KIC target 8019043, has been found previously, is contained in the KOI catalog at the NASA Exoplanet Archive, and is labeled as being a false positive (KOI 6048.01). The 20,367 TCEs included in this analysis have been exported to the tables maintained by the NASA Exoplanet Archive[Footnote 5].

Footnote 5: http://exoplanetarchive.ipac.caltech.edu
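The multiple planet search described above can be summarized as a loop: search the light curve, and each time a TCE is found, remove or mask its in-transit cadences and search again, stopping when nothing passes the threshold and vetoes or when the configurable per-target cap (currently 10 TCEs) is reached. The sketch below shows only that control flow; search_once and mask_transits are hypothetical placeholders for the full TPS detection and veto machinery.

```python
MAX_TCES_PER_TARGET = 10   # configurable per-target cap discussed above

def find_all_tces(flux, search_once, mask_transits):
    """Iteratively search a flux time series for TCEs, up to the per-target limit.

    search_once(flux)        -> a TCE (ephemeris, MES, passed-vetoes flag) or None
    mask_transits(flux, tce) -> flux with the TCE's in-transit cadences removed/gapped
    Both callables stand in for the TPS detection-and-veto machinery.
    """
    tces = []
    while len(tces) < MAX_TCES_PER_TARGET:
        tce = search_once(flux)
        if tce is None:                       # nothing above 7.1 sigma passes all vetoes
            break
        tces.append(tce)
        flux = mask_transits(flux, tce)       # subsequent searches see only residual signals
    return tces
```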
