NASA Technical Reports Server (NTRS) 20130009114: Assessing the Benefits of NASA Category 3, Low Cost Class C/D Missions PDF

Assessing the Benefits of NASA Category 3, Low Cost Class C/D Missions

Robert Bitten, The Aerospace Corporation, 2310 E. El Segundo Blvd., El Segundo, California 90009, (310) 336-1917, [email protected]
Steve Shinn, Goddard Space Flight Center, 8800 Greenbelt Rd., Code 400, Greenbelt, Maryland 20771, (301) 286-5894, [email protected]
Eric Mahr, The Aerospace Corporation, 400 E Street NW, Washington, DC 20024, (202) 358-5118, [email protected]

978-1-4673-1813-6/13/$31.00 ©2013 IEEE

Abstract: Category 3, Class C/D missions have the benefit of delivering worthwhile science at minimal cost, which is increasingly important in NASA's constrained budget environment. Although higher cost Category 1 and 2 missions are necessary to achieve NASA's science objectives, Category 3 missions are shown to be an effective way to provide significant science return at a low cost. Category 3 missions, however, are often reviewed the same as the more risk averse Category 1 and 2 missions. Acknowledging that reviews are not the only aspect of a total engineering effort, reviews are still a significant concern for NASA programs. This can unnecessarily increase the cost and schedule of Category 3 missions. This paper quantifies the benefit and performance of Category 3 missions by looking at the cost vs. capability relative to Category 1 and 2 missions. Lessons learned from successful organizations that develop low cost Category 3, Class C/D missions are also investigated to help provide the basis for suggestions to streamline the review of NASA Category 3 missions.

TABLE OF CONTENTS

1. INTRODUCTION .......... 1
2. BENEFITS OF CATEGORY 3 MISSIONS .......... 2
3. FAILURE RESULTS SUMMARY .......... 5
4. NASA REVIEW PROCESS & RECOMMENDATIONS FOR STREAMLINING .......... 6
5. SUMMARY .......... 10
ACKNOWLEDGEMENTS .......... 11
REFERENCES .......... 11
BIOGRAPHY .......... 12

1. INTRODUCTION

Category 3 missions are the lowest cost and highest risk missions within NASA's science portfolio. Category 3 missions do, however, provide significant benefit to NASA by providing important science contributions at a low cost. [1][2][3] The success of Category 3 missions becomes even more important at a time when NASA budgets are becoming more and more restrictive. The continued success of Category 3 missions will be important to NASA's future.

NASA has developed a set of guiding documents to provide different requirements and governing principles for missions of differing levels of criticality. NASA Procedural Requirement (NPR) 7120.5E defines different categories of missions based on their priority to NASA's strategic goals and their relative total mission life cycle cost (LCC), as shown in Table 1. [4] Similarly, NPR 8705.4 defines different classes of missions based upon a variety of factors, as shown in Table 2. [5] The definitions of mission categories and classes allow a distinction between missions in terms of guidelines for development. Although there are clear guidelines for elements such as parts selection and testing requirements, the review requirements for these missions are much more ambiguous. This ambiguity often leads review teams to default to the common practices and extensive requirements of larger missions. As a result, the lower priority, lower cost Category 3, Class C/D missions are being reviewed similarly to the high priority, higher cost Category 1/2, Class A/B missions. A common statement in NASA is "Every mission is Class A by the time it launches". The primary benefit of Category 3 missions is their ability to collect science data at a relatively low cost. Treating Category 3 missions the same as Category 1 or 2 missions erodes this benefit, places a substantial burden on the Category 3 mission project team, and reduces the benefit to NASA and its stakeholders.

Table 1. Category 1, 2, 3 Definitions from NPR 7120.5E
[Table body not legible in the scan; it assigns CAT 1, CAT 2, or CAT 3 based on mission priority and life cycle cost.]

Table 2. Class A, B, C, D Definitions from NPR 8705.4

                     Class A    Class B    Class C    Class D
  Priority           High       High       Medium     Low
  Acceptable Risk    Very Low   Low        Medium     High
  [Remaining rows, covering national significance, complexity, lifetime, and cost, are not legible in the scan.]

This paper addresses the benefits of the lower cost Category 3, Class C/D missions, looks at relative failure rates of similar organizations that manage and build Category 3 missions, and provides recommendations for potentially streamlining the review process for Category 3 missions to maintain or improve quality while retaining the benefits of these low cost missions.

2. BENEFITS OF CATEGORY 3 MISSIONS

Overview

NASA employs a mix of missions within its portfolio to accomplish desired science objectives. Large, high priority Category 1 flagship missions, such as the Hubble Space Telescope, are required to answer unique science questions that only a large scale telescope can address. Other missions, like the Cassini and Galileo orbiters, are the most cost effective way to operate a large number of scientific instruments orbiting a distant planet. Medium sized Category 2 missions are also important as they can provide focused platforms for science that requires either multiple instruments for simultaneous observations or single medium to large sized instruments that have a unique scientific objective. Less costly Category 3 missions are also necessary, however, to conduct initial observations or fill gaps in knowledge for certain science disciplines. They can also be significantly more focused in their science objectives, yielding a small, but significant, scientific result.

Data Collection and Mission Categorization

To assess the benefit of each category of missions, data were collected for NASA missions launched within the last 15 years. The intent was to compare Category 1, 2 and 3 missions to characterize their mission cost, failure rate and overall benefit. The result is a set of 62 NASA missions listed individually in the Appendix. The missions included in the study represent a wide range of mission categories and classes. Figure 1 shows the distribution of the Life Cycle Cost (LCC) in FY12$ as compared to the category and class of the mission. All real year mission cost data were inflated to fiscal year 2012 dollars (FY12$) so as to represent, as best as possible, the real year dollar guidance for LCC categories as stated in NPR 7120.5E and shown in Table 1. As can be seen, Category 1 missions consist of Class A and B missions, Category 2 missions consist of a balanced mix of Class B and C missions, and Category 3 consists of Class C and D missions. Category 3 missions include the lower cost half of the Class C missions launched within the last 15 years.
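The FY12$ normalization described above is a straightforward cost-index multiplication; a minimal sketch follows, with illustrative placeholder indices rather than the actual NASA inflation factors used in the study.

```python
# Sketch of normalizing real-year mission costs to FY12 dollars.
# The index values below are illustrative placeholders only; the study
# would have used official NASA inflation indices.
INDEX_TO_FY12 = {2005: 1.18, 2008: 1.09, 2012: 1.00}  # assumed factors

def cost_in_fy12(cost_by_year):
    """Inflate a {fiscal_year: real-year $M} spending profile to FY12 $M."""
    return sum(cost * INDEX_TO_FY12[year] for year, cost in cost_by_year.items())

# A hypothetical mission that spent $100M in FY05 and $50M in FY08:
lcc_fy12 = cost_in_fy12({2005: 100.0, 2008: 50.0})
print(round(lcc_fy12, 1))  # 172.5
```

Summing the inflated yearly spending gives the single FY12$ LCC figure used to place each mission in Figure 1.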
Figure 1 - Life Cycle Cost Characterization of Mission Category & Class for the Study Data Set [figure: LCC in FY12$M for each mission in the data set, ordered by class from D through A]

One of the objectives of the paper was to identify the relative science cost/benefit of each mission class. As such, only NASA science missions that meet certain criteria were considered. Missions that have recently been launched but have yet to begin their science missions, such as the Radiation Belt Storm Probes (RBSP) and Nuclear Spectroscopic Telescope Array (NuSTAR) missions, were not considered because their success had yet to be determined at the time of writing. Missions that relied heavily on international contributions, such as the Tropical Rainfall Measuring Mission (TRMM) and CALIPSO missions, were also eliminated from consideration due to the difficulty of assessing foreign costs. Additionally, only full science missions were considered; "instrument only" science experiments, where the instrument was provided to another organization, were not considered. Technology demonstrators, such as NanoSail-D and the Demonstration of Autonomous Rendezvous Technology (DART), were also excluded given that their primary focus is technology demonstration, not science. Finally, operational missions like the Geostationary Operational Environmental Satellite (GOES) series were excluded in order to focus on more typical, one of a kind NASA science missions.

Cost Categorization

The average cost for each mission category is shown in Figure 2. Given that one of the primary criteria for categorization of missions per NPR 7120.5E is cost, the result is as expected, with Category 1 missions being substantially more costly than Category 2 missions, which are in turn more costly than Category 3 missions.

Figure 2 - Average Cost (FY12$M) per Category

Failure Rate Categorization

The average failure rate for each mission category was also calculated, as shown in Figure 3, with Category 3 missions, being mostly comprised of Class C and D missions, experiencing a much higher failure rate than Category 1 or 2 missions. A mission failure is defined as a launch vehicle or spacecraft failure. An interesting note, which is discussed more in Section 3, is that Category 3 missions also have a much higher non-confirmation rate.

Figure 3 - Average Failure Rate per Category [figure: bar chart; Category 3 at roughly 26%, far higher than Categories 1 and 2]

"Science Return" Categorization

Assessing the benefit of a science mission is typically very subjective, as the "value" of any given data returned to one scientist will not be the same as to another scientist in another discipline. There is no perfect way to judge scientific instrument value. To assess benefit for the purposes of the study, two "science return" metrics were investigated to provide an objective measure. The first "science return" metric calculates the number of instruments operating over their lifetime and was originally proposed as an objective quantification of overall science value. [6] In addition, a second metric was defined to look at the total data returned from all science instruments over the lifetime of the mission. Other metrics were considered but found to have issues that were hard to overcome. For example, the number of papers published by mission scientists was considered but was believed to disproportionately favor large, prolific teams that publish multiple papers over a small team that publishes a few, very significant papers. Similarly, a metric based on the number of "significant" findings that resulted from the mission would be challenging to use given that the term "significant" is very subjective and difficult to quantify. It has also been suggested that the science value of an instrument is proportional to its mass. This metric suggests, however, that planetary missions are inherently less valuable than Earth orbiting missions, because planetary missions typically carry much less payload mass due to the difficulty of getting a payload to its final destination. Combined, the two proposed metrics should provide a reasonable assessment of the benefit of the different classes.

The first proposed metric uses the number of instruments on-board the satellite multiplied by the length of time the instruments take data at their final destination and is measured in terms of "instrument-months." [6] Multiplying by the duration that the instrument operates provides a surrogate for the quantity and depth of information gathered by the instrument. The proposed metric also accounts for full and partial mission failures, because a failed mission's instrument duration of operation, and corresponding science return, would be zero. When this metric is applied to the dataset used for this study, the results are as expected, with Category 1 missions returning more science than Category 2 missions, which return more value than Category 3 missions, the results of which are shown in Figure 4.

Figure 4 - Average Instrument Months per Mission [figure: bar chart for CAT 1, CAT 2, CAT 3]

The second metric closely parallels the instrument-months metric but utilizes the instrument data rate and operating durations to calculate the data returned from each mission in the data set. When this metric is applied, the results are also as expected, with Category 1 missions returning more data than Category 2 missions, which return significantly more data than Category 3 missions, the results of which are shown in Figure 5.

Figure 5 - Average Data Rate (Mbps) per Mission [figure: bar chart for CAT 1, CAT 2, CAT 3]

The value of each instrument that NASA launches cannot be overstated. NASA employs a severely competitive science selection process using a peer review board of scientists to select the most valuable science from all proposals submitted. Table 3 identifies that, over the last 15 years of Small Explorer (SMEX) and the no longer existing University Explorer (UNEX) proposals submitted, only the top 6% were selected for implementation. Given the number of proposals submitted and the thoroughness of the evaluation process, it is believed that the science of these missions is among the best that can fit within the cost constraints. Additionally, given that mass, power and volume resources on a spacecraft are always tight and extremely valuable, each instrument has to "buy" its way onto the spacecraft such that the selection of each is warranted. For those missions that can be implemented within Category 3 funding constraints, the competitive process achieves high value science with the selected missions and instruments.

Table 3. NASA SMEX/UNEX Program Selections [table body not legible in the scan]

"Science Return" Metrics Caveats & Limitations

These metrics provide a different perspective of value for each mission class, but each has its limitations. A primary assumption of the first metric is that all instruments provide equal science value. The basic rationale for this assumption is that each instrument is placed on-board a satellite to achieve a specific scientific objective and that all scientific objectives are considered of equal value. The obvious limitation of such a metric is that all instruments are not created equal. The metric treats each instrument the same even though a very sophisticated imaging radar instrument is much more complex to develop than a simple magnetometer. It could be argued, however, that the value to the scientist utilizing the magnetometer data is the same as to the scientist utilizing the data from a radar image, as each is answering a relevant science question with the data obtained.

The primary assumption of the second metric is that every data bit generated by a science mission is of the same value as any other bit from another mission. This is a limiting assumption, however, as an instrument that collects significant amounts of data, such as a Synthetic Aperture Radar (SAR), would be deemed inherently more valuable than instruments that collect less data.

"Science Value" Cost Effectiveness Metrics

The results shown in Figures 2 through 5 are as expected; although Category 1 and 2 missions are more expensive than Category 3 missions, they also fail less often and provide more science return per mission. This is a straightforward outcome given that Category 3 missions are made up of less reliable Class C and Class D missions, while the reduced scope required to meet funding guidelines limits the number of instruments and years of operations, thereby reducing the science return, as indicated by the instrument-months and data returned metrics.

Given that Category 3 missions are shown to provide such little science return and fail more often, why should Category 3 missions be attempted at all? The answer lies in their being cost effective building blocks for future discoveries. Given that science return has been defined as instrument-months or as data returned, it is a simple task to divide the two proposed metrics by the total mission LCC to determine the cost-effectiveness of each mission. When viewed this way, these cost-effectiveness metrics measure the mission's "bang for the buck," or the amount of science gathered per dollar. This cost-effectiveness approach can be extended to each mission class by summing the total instrument-months or data returned for missions in a given class and dividing by the total LCC of the missions in that class. Computing these values, the data indicate that Category 3 missions are either the most cost effective category, as shown in Figure 6 relative to instrument-months per dollar, or essentially equally as cost-effective, as shown in Figure 7, based on the data returned per dollar.

Figure 6 - I-M Science Mission Cost Effectiveness [figure: instrument-months per $M (FY12) for CAT 1, CAT 2, CAT 3]

Figure 7 - TB Science Mission Cost Effectiveness [figure: TB returned per $M (FY12) for CAT 1, CAT 2, CAT 3]

As stated previously, there are certain scientific objectives that cannot be accomplished given the funding and subsequent size and complexity constraints of a Category 3 mission. For those scientific objectives that can be implemented within the constraints of a Category 3 mission, however, the two metrics developed indicate that the mission implementations provide a fairly cost effective acquisition strategy. The data indicate that Category 3 missions can serve as a cost-effective, critical building block for a balanced portfolio of missions.

3. FAILURE RESULTS SUMMARY

As shown previously in Section 2.4, satellite failures have occurred more often in Category 3 missions than in Category 1 and 2 missions. As shown in Table 4, the majority of these failures have been in the satellite flight system. In addition, Category 3 missions have a greater probability of being non-confirmed. Of the eight missions that were cancelled or not confirmed from 1997 to 2011, five were Category 3 missions. Given the 23 Category 3 missions launched, the five missions represent an 18% cancellation rate (i.e., 5 out of 28 total missions), which is significant. Combined with the failure statistics, this represents a probability of cancellation or failure for missions that are selected on the order of 39%. A further breakdown of the missions that have failed and the possible causes of failure or cancellation/non-confirmation is provided in Table 5. More detail on each mission is contained in the Appendix.

Table 4. Causes of Mission Failure [table body not legible in the scan]

Table 5. Causes of Mission Failure or Cancellation

  Mission      Category  Failure/Event    Year  Reference
  Lewis        CAT 3     Spacecraft       1997  7, 8
  WIRE         CAT 3     Spacecraft       1999  9
  [illegible]  CAT 3     Spacecraft       1999  10
  MCO          CAT 3     Spacecraft       1999  11
  MPL          CAT 2     Spacecraft       1999  12
  CONTOUR      CAT 3     Spacecraft       2002  13
  Clark        CAT 3     Cancelled        1998  14
  ST-4         CAT 2     Cancelled        1999  15, 16
  [illegible]  CAT 3     Cancelled        2001  17
  IMEX         CAT 3     Cancelled        2001  18
  FAME         CAT 2     Cancelled        2002  19
  SPIDR        CAT 3     Cancelled        2002  20
  CATSAT       CAT 3     Cancelled        2002  21
  SIM          CAT 1     Cancelled        2008  22
  QuikTOMS     CAT 3     Launch Vehicle   2001  23
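The instrument-months metric and its per-dollar cost-effectiveness extension described above can be sketched as follows; the mission records here are hypothetical stand-ins, not entries from the study's 62-mission data set.

```python
# Sketch of the study's first science-return metric on hypothetical records
# (category, n_instruments, months_operating, lcc_fy12_in_$M, failed).
missions = [
    ("CAT 1", 10, 120, 3000.0, False),
    ("CAT 3", 3, 24, 250.0, False),
    ("CAT 3", 2, 0, 180.0, True),   # failed mission: zero months of operation
]

def instrument_months(m):
    # First metric: instruments x months operating at the final destination.
    # A mission that failed contributes zero, since its instruments never operate.
    _, n_inst, months, _, failed = m
    return 0 if failed else n_inst * months

def class_cost_effectiveness(ms):
    # Class-level "bang for the buck": total instrument-months for the class
    # divided by the class's total LCC (instrument-months per FY12 $M).
    return sum(instrument_months(m) for m in ms) / sum(m[3] for m in ms)

cat3 = [m for m in missions if m[0] == "CAT 3"]
print(instrument_months(missions[0]))            # 1200
print(round(class_cost_effectiveness(cat3), 3))  # 0.167
```

On the paper's own numbers, the cancellation and failure arithmetic works the same way: 5 cancellations out of 28 selected Category 3 missions gives the quoted 18% cancellation rate, and (5 cancellations + 6 launch or spacecraft failures) / 28 gives the roughly 39% combined probability of cancellation or failure.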
4. NASA REVIEW PROCESS & RECOMMENDATIONS FOR STREAMLINING

Overview

NPR 8705.4 provides guidance for the distinction of Class A, B, C and D missions for a variety of different elements. As previously described, Category 3 missions are usually covered by Class C & D guidance. Although NPR 8705.4 provides good guidance on many aspects of mission development relative to Class A, B, C and D missions, such as testing, parts/materials, safety, reliability, and risk management, there is very little guidance relative to the implementation of reviews. As shown in Figure 8, guidance for reviews is provided in one line. Although this line provides general guidance for overall reviews, it does not address specific requirements for reviews at a lower level.

Figure 8 - Review Requirements for Class A, B, C, D Missions as Stated in NPR 8705.4
  Class A: Full formal review program. Either IPAO external independent reviews or independent reviews managed at the Center level with Directorate participation. Include formal inspections of software requirements, design, verification documents, and code.
  Class B: Full formal review program. Either IPAO external independent reviews or independent reviews managed at the Center level with Directorate participation. Include formal inspections of software requirements, design, verification documents, and peer reviews of code.
  Class C: Full formal review program. Independent reviews managed at the Center level with Directorate participation. Include formal inspections of software requirements, peer reviews of design and code.
  Class D: Center level review program with participation of all applicable Directorates. May be delegated to Projects. Peer reviews of software requirements and code.

NPR 7120.5E also provides some guidance relative to Category 1, 2 and 3 missions, as shown in Figure 9, and states that NASA Centers have the sole Technical Authority for Category 3 missions. Given that is the case, Center Directors can work with the Mission Directorate Associate Administrator (MDAA) to identify an acceptable review approach. Although Category 3 reviews are governed by NASA Centers, current policy does not provide good, universal guidance on the streamlining of reviews for Category 3 missions. [26]

Figure 9 - Convening Authorities for Standing Review Boards as Stated in NPR 7120.5E [table only partially legible in the scan; NASA CE = NASA Chief Engineer; one approval applies only to Category 2 projects of $250 million or above]

An example of the growing review requirements for Category 3 missions can be seen in the experience of the Small Explorer (SMEX) Aeronomy of Ice in the Mesosphere (AIM) mission. Initially the AIM team, as part of their proposal and Concept Study Report for the competed SMEX mission, proposed ten reviews for the major milestones. Due to circumstances originating with the initial concern about the cost of the mission and spacecraft, the AIM project was required by the Independent Integrated Review Team (IIRT) to hold over fifty reviews prior to mission CDR. As shown in Figure 10, the 3 original reviews that were to encompass the Systems Requirement Review (SRR), Preliminary Design Review (PDR) and Confirmation Readiness Review activities expanded to include 29 reviews during that timeframe. [27] Although it is difficult to quantify the complete cost impact of such an increased review requirement, at minimum there was a significant disruption of project activities and progress.

Figure 10 - Expansion of Reviews Experienced by AIM as Presented by the AIM Principal Investigator [27] [figure: per the IIRT plan of 4/1/03, 3 planned reviews grew to 29, with 50+ reviews from the 5/03 SRR to the 11/04 MCDR, including 3 SRRs and 2 SOFIE PDRs]

Although the NASA AIM Project Manager also commented on the escalation of reviews that AIM experienced, he noted the benefit of internal peer reviews in the following statements: "One type of review - peer reviews - were of significant value, particularly in the early phases of the project when technical input and critique were incorporated for a modest investment in time. Some of the most effective peer reviews were small and informal with a modest number of expert participants. On more than one occasion, however, the peer reviews were preceded by a dry run peer review to enable the design team to work issues. As these reviews become more broadly attended with increased formality, they lose the original intent. One wonders how this trend to formality might be reversed." [28]

Considerations for Streamlining

To more fully understand the possibility of streamlining reviews, the practices of two United States Air Force (USAF) organizations were investigated. Although these organizations launch a variety of different types of missions, both launch a subset of missions that are equivalent in scope to NASA Category 3 missions and, for this subset, they experience a relatively high mission success rate. It must be noted, however, that the missions developed by these organizations are primarily short term technology demonstration missions and, therefore, have different overall objectives than NASA science missions. Some consideration should be given to the fact that technology demonstration missions may be able to take liberties that a NASA science mission cannot. Given their relatively high success rate, however, an assessment of the review practices of these organizations was conducted.

The Department of Defense (DOD) Space Test Program (STP) is chartered by the Office of the Secretary of Defense to serve as "...the primary provider of mission design, spacecraft acquisition, integration, launch, and on-orbit operations for DOD's most innovative space experiments, technologies and demonstrations". [29] The Space Test Program has been providing access to space for the DOD space research and development community since 1965, and has a long history and well developed expertise in mission design, spacecraft bus acquisition, payload integration and testing, and launch and on-orbit operations.

The Air Force Research Laboratory's Space Vehicles Directorate (AFRL/RV) leads the nation in space supremacy research and development. Their mission is to develop and transition innovative high-payoff space technologies supporting the warfighter, while leveraging commercial, civil and other government space capabilities to ensure America's advantage. [30]

Figure 11 shows the failure rate of the STP and AFRL organizations as compared to the failure rate of NASA Category 3 missions. As can be seen, the combined failure rate of STP and AFRL missions is significantly lower than that of NASA missions. In the same time period from 1997 to 2011, STP and AFRL launched fourteen missions that are relatively equivalent to NASA Category 3 missions. Of those 14 missions, only one experienced a spacecraft failure and only one experienced a launch vehicle failure, for an overall failure rate of 14%, as compared to the one launch vehicle and five spacecraft failures of the 23 NASA Category 3 missions launched in that same time period.
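The failure-rate comparison above reduces to simple ratios; a quick check using the counts quoted in the text:

```python
# Failure-rate arithmetic for the STP/AFRL vs. NASA Category 3 comparison.
# STP/AFRL: 14 equivalent missions, 1 spacecraft + 1 launch vehicle failure.
stp_afrl_rate = (1 + 1) / 14
# NASA Category 3: 23 launched missions, 5 spacecraft + 1 launch vehicle failures.
nasa_cat3_rate = (5 + 1) / 23

print(f"{stp_afrl_rate:.0%}")   # 14%
print(f"{nasa_cat3_rate:.0%}")  # 26%
```

These are the 14% and 26% bars shown in Figure 11.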
Figure 11 - Failure Rate for Equivalent Category 3 Missions [figure: bar chart; STP & AFRL at 14% vs. NASA Category 3 at 26%]

Based on the relative success of their missions, Aerospace personnel supporting these organizations were asked to provide comments on their review process in order to identify differences in STP's and AFRL's review approach relative to NASA's.

STP and AFRL Experiences [31]

When discussing the STP and AFRL review processes, it was clear that the reviews by themselves were not the primary contributing factor to the success of their missions. Both STP and AFRL have a unique, streamlined mission assurance approach that relies on identifying high risk elements early and then focusing on these risk areas with greater scrutiny while minimizing review of the lower or accepted risk items. Both STP and AFRL rely on the contractor's normal best practices while focusing on the high risk areas throughout the project and utilizing the major reviews as a discriminating gate for passage to the next phase. Although both STP and AFRL start with standard entry and exit criteria for major reviews, they streamline these criteria, tailored to each mission, based on the initial and continuing risk assessment. This provides an environment where the limited review resources are focused on the items that matter the most.

STP normally conducts the following formal reviews for each spacecraft it acquires: System Requirements Review (SRR), Independent Baseline Review (IBR), Preliminary Design Review (PDR), Critical Design Review (CDR), Integration Readiness Review (IRR), Test Readiness Review (TRR), Space Flight Worthiness Certification (as part of the MRR), Pre-Ship Review (PSR), Mission Readiness Review (MRR), Flight Readiness Review (FRR), and Normal Operations Readiness Review (NORR). The PDR and CDR are usually about three days in length; other reviews are several hours to one day in length. Reviews are normally chaired by the Program Manager, except for the MRR and NORR, which are chaired by the Space Division/Space Test Program (SD/STP) Director, and the FRR, which is chaired by the Space and Missile Systems Center Commander (SMC/CC).

The Aerospace Corporation (Aerospace) applies a process to tailor design review criteria for STP missions. All missions, Class A through D, start with the design review criteria outlined in the Aerospace Space Vehicle Systems Engineering Handbook. Aerospace then sends out a tailored version of the design review criteria, with the tailoring based on how much funding has been invested in the project. In some cases the criteria may be significantly tailored by taking a quick look at each of the major subsystems so that there is a level of confidence that the supplying organization is following good practices. After approval by STP, the list is sent to the contractor for acknowledgement that these areas will or will not be addressed in the review. At this point, the contractor is provided the opportunity to negotiate scope. Once the criteria are decided upon, Aerospace attends the review and provides verbal comments and action items. STP has approval authority at major milestones and utilizes the verbal comments and action items as input. If the project does not pass the Design Review, then STP requires that either all of the liens be properly closed or that a Delta Design Review be held in order to enter the next phase.

STP also relies on many informal reviews conducted by the contractor as part of their normal practices. Technical Issue Reviews (TIRs) are informal. Peer reviews of subassemblies are occasionally conducted and led by the contractors during development of the subsystems and software systems. An Independent Readiness Review Team (IRRT), comprised of 4-6 reviewers independent of the program being evaluated plus Subject Matter Experts (SMEs) as needed, is organized by SMC. The IRRT is usually comprised of personnel from several organizations.

Similarly, AFRL routinely conducts only the following reviews: SRR, PDR, CDR, PSR, and MRR, and participates in the SMC/CC FRR. In the past few years AFRL has also begun conducting an Operations Readiness Review (ORR) before the MRR. This focuses almost exclusively on the operations strategy, on-orbit and operations risks, and operations personnel readiness. The SRR, PDR, and CDR are usually 2 days in length and are chaired by the Program Manager with AFRL/RV leadership in attendance as spectators. The MRR is specifically chaired by the AFRL/RV Director.

In addition, AFRL routinely conducts an Independent Readiness Review (IRR), comprised of 4-6 reviewers independent of the program being evaluated plus subject matter experts (SMEs) as needed and as directed by the AFRL/RV Director. The IRR is usually supported by The Aerospace Corporation, with SMEs usually drawn from The Aerospace Corporation's engineering group.

Peer reviews are routinely and frequently conducted by the contractors during development of the subsystems and software systems. Occasionally Internal Design Reviews (IDRs) are conducted at the contractors, but such reviews are rare.

AFRL minimizes their documentation and distracting reviews. The reviews they do conduct have proven their worth over the years. They have occasionally tried to add others, but the value never justifies the expense.

In terms of documentation, AFRL again produces only documentation that has been proven by AFRL to be of value. They produce SRR, PDR, CDR and PSR briefing slides but no other accompanying documentation. There is one exception: the System Requirements that are given to the contractor(s) are formally and contractually documented. There are no system specifications beyond what the contractor(s) consider in their contracts to be "normal, best practices" for them. The Government produces no such documents on AFRL projects. At the system level, test planning is formalized, but documentation is kept in engineering notebooks. Test procedures and results are published in formal, policy-compliant, short reports. A caution is that this "minimal documentation" can sometimes lead to an under-delivery of material that aids in the transitioning of technologies, which can be a hindrance to technology adoption. Design specifications (especially "as built" specs) and test reports that focus on what has been learned about the various technologies are distinctly lacking. However, it is the culture of AFRL to proceed cautiously in terms of adding tasks, reviews, or documentation without careful and controlled proof that the benefits outweigh (usually AFRL seeks "far outweighs" rather than just "outweighs") the costs in dollars, time, and skills.

An additional, unique difference for AFRL missions involves the requirements development process. Virtually all requirements at AFRL are mutable; they are not "written in stone." The only requirements at AFRL that are typically "written in stone" are the mission-level experiment requirements. These are the highest level of requirements and are established at the beginning of the project when it is formed. All lower-level requirements are mutable and "negotiable" throughout the program to balance cost, schedule and technical performance.

A product of the MAIW (Mission Assurance Improvement Workshop) relates to the review process as recommended for Class C and D missions - i.e., Category 3 missions - to ensure that the review process is commensurate with the level of accepted mission risk. [32] The following is a summary of the recommendations made by the MAIW relative to review requirements for Class C and D missions.

Class C Review Guidelines: Risk-accepting Class C program reviews may not include the full suite of reviews. Early in the project-definition phase, less critical reviews may be eliminated to balance cost containment against the risk of late issue identification. Planned reviews are typically documented in the project plan. Key reviews such as SRR, CDR, and MRR are generally held in compliance with contractor or industry standards. Review material generally follows standards for such reviews, with some modification allowed to manage review cost. Items eliminated are those perceived as low risk to the project.

Class D Review Guidelines: High-risk tolerant projects typically hold only a few key milestone reviews during their lifecycle. Key milestones include requirements definition, design determination, prefabrication, and post hardware fabrication prior to transfer to the customer. These reviews are typically internal to the contractor, with contractor personnel who have similar project experience. External customers may be invited but are not required to participate. Review material is less formal in content and is often less than fully compliant with industry standards for such reviews. Early planning for all projects, including Class D projects, should include a discussion regarding the reviews to be held during the project's lifecycle.

A primary lesson learned is that, prior to any review, it is beneficial for a project to perform an internal readiness review to verify that they are ready to start and complete the review at hand. In addition, prior to conducting an independent review, the development of all entrance and
AFRL works hard to exit criteria for each review to determine Mission Class A-D ensure that requirements do not ever force them into specific entrance and exit criteria· would be useful to set the compromised schedules or costs. expectations for that specific review. Also, required program Independent Reviews should be defined in the Recommendations Project Management Plan· during project kick-off. Lastly Independent Review (IR) criteria should be defined early on Mission Assurance is a combination of many processes and to understand in the context of contractor policies. factors of which the reviews and review process is a limited part. The 2010-2011 Mission Assurance Improvement All reviews are given the following consideration: Workshop (MAlW) program addressed this issue in more detail in developing the ''Mission Assurance Guidelines for 1. Requirement: A-D Mission. Risk Classes" which is based on Required, Recommended, Discretionary recommendations from a team comprised of government 2. Independence: and industry team partuers and interviews with different External, Internal, Developer supporting organizations such as STP and AFRL. The goal 3. Completeness (see following paragraphs) of the team was to develop guidelines to define characteristic profIles for mission assurance processes for a Review Completeness Guidelines for Class C Missions given space vehicle risk Class (A, B, C, or D) to serve as a Only core mission assurance topics described in the exit recommended technical baseline suitable to meet program criteria will be reviewed. The Independent Review Team needs based on programmatic constraints and mission (IRT) works with program management to determine and needs. Appendix B2 of the MAlW document specifically 9 review the high and medium-high risk/mission critical areas. 
Interviews are conducted with key players in the high and medium risk/mission critical areas. Class C requires the program to prove completion by review of examples; 100% physical review is not required. Tailoring of Independent Review (IR) criteria is permitted, and review of a summary analysis of evidence is acceptable through agreement between the IRT leadership and the program office. IRs are typically performed on an ad hoc basis.

Review Completeness Guidelines for Class D Missions: Reviews are performed only on the core mission assurance items described in the exit criteria that are required by launch safety or that could potentially impact any higher-class payload (if in a rideshare configuration). The IRT will work with project management to determine and review the high risk/mission critical areas. Interviews should be conducted with a subset of the key players, only in the high risk/mission critical areas. Work is performed through a scaled-down checklist pre-defined by agreement between the IRT leadership and the project. Class D uses word of mouth or sampling as sufficient evidence, not necessarily requiring physical review of objective evidence. Significant tailoring of IR criteria is acceptable through agreement between the IRT leadership and the project, which may not include SMEs from all technical disciplines (the focus is on critical requirements of the mission). The IR is mostly considered an ad hoc function.

IR requirements are stated as Required, Recommended or Discretionary. Required reviews are formally part of the project per contract requirement, following an internal/external standard. Recommended reviews are highly suggested, following an internal/external standard that can be tailored from the suggested levels of independence and completeness. Discretionary reviews will be at the discretion of the project, contractor and/or customer, following a defined process which, at a minimum, should include the independence and completeness levels as indicated in the matrix.

The levels of independence are stated as Externally Independent, Internally Independent and Developer Independent. Externally Independent reviewers are organizations or personnel that are technically, managerially, and financially independent of the contractor. Internally Independent reviewers are organizations or personnel within the contracting organization - i.e., the NASA Center - that are technically, managerially, and financially independent of the project. Developer Independent reviewers are organizations or personnel within the contractor that are technically independent of the review subject's developer team.

A summary of the recommended reviews for Class C and D missions is shown in Table 6. For Class C missions, CDR and MRR are the only reviews that are proposed to be required, while SRR/SDR/MDR, PDR, PSR and FRR are recommended, with SIR and PER being discretionary. For Class D missions, given the high level of risk accepted and the minimal consequence of failure, no system-level reviews are proposed to be required, with only CDR, PSR and MRR being recommended. All other reviews for Class D missions are considered discretionary. [32] Based on the MAIW recommendations presented in Table 6, one interesting consideration is that the project's baseline confirmation review may need to be postponed until after CDR, given that CDR is required for Class C missions, with PDR only recommended, whereas PDR is listed as fully discretionary for Class D missions. This would be in direct conflict with NPR 7120.5E, however, and would have to be given significant further thought.

Table 6. Recommended Review Process Streamlining for Class C and D Missions

Review         Class C Requirement   Class D Requirement
SRR/SDR/MDR    Recommended           Discretionary
PDR            Recommended           Discretionary
CDR            Required              Recommended
SIR            Discretionary         Discretionary
PER            Discretionary         Discretionary
PSR            Recommended           Recommended
MRR            Required              Recommended
FRR            Recommended           Discretionary
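The requirement levels summarized in the text amount to a simple lookup table. The following minimal sketch (the dictionary and function names are illustrative inventions for this example, not part of the MAIW guidelines) shows how a project might check a planned review set against those levels:

```python
# Illustrative encoding of the MAIW-recommended requirement levels for
# Class C and D missions (requirement levels only; names are this sketch's own).
MAIW_REVIEW_MATRIX = {
    #  review         Class C           Class D
    "SRR/SDR/MDR": ("Recommended",   "Discretionary"),
    "PDR":         ("Recommended",   "Discretionary"),
    "CDR":         ("Required",      "Recommended"),
    "SIR":         ("Discretionary", "Discretionary"),
    "PER":         ("Discretionary", "Discretionary"),
    "PSR":         ("Recommended",   "Recommended"),
    "MRR":         ("Required",      "Recommended"),
    "FRR":         ("Recommended",   "Discretionary"),
}

def missing_required_reviews(mission_class, planned_reviews):
    """List 'Required' reviews for the given class that are absent from a review plan."""
    col = {"C": 0, "D": 1}[mission_class]
    return [review for review, levels in MAIW_REVIEW_MATRIX.items()
            if levels[col] == "Required" and review not in planned_reviews]

# A Class C plan that omits MRR is flagged; Class D proposes no required reviews.
print(missing_required_reviews("C", {"SRR/SDR/MDR", "PDR", "CDR", "PSR"}))  # ['MRR']
print(missing_required_reviews("D", {"CDR"}))                               # []
```

Such an encoding makes the asymmetry between the classes explicit: the check can never fail for a Class D mission, reflecting that all of its reviews are merely recommended or discretionary.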
5. SUMMARY

The study provides an assessment of NASA's Category 3 missions relative to Category 1 and 2 missions over the last 15 years. The data collected indicates that, although Category 3 missions cost less than Category 1 and 2 missions, they deliver less science and fail more often. When looked at as a whole, however, Category 3 missions provide a cost-effective acquisition approach for missions that can fit within the defined cost constraints. Simply stated, Category 3 missions are not effective for delivering much of NASA's mission portfolio but, when used in appropriate situations, they can be a very cost-effective tool
