Standardized Functional Verification

Alan Wiemann
San Carlos, CA, USA

ISBN 978-0-387-71732-6    e-ISBN 978-0-387-71733-3
Library of Congress Control Number: 2007929789
© 2008 Springer Science+Business Media, LLC

Preface

It's widely known that, in the development of integrated circuits, the amount of time and resources spent on verification easily exceeds that spent on design. A survey of current literature finds numerous references to this fact. A whitepaper from one major CAD company states that "Design teams reportedly spend as much as 50 to 70 percent of their time and resources in the functional verification effort." A brief paper from Design and Reuse observes that "70 percent of the overall design phase is dedicated to verification," and that "as designs double in size, the verification effort can easily quadruple." In spite of all this effort, another whitepaper from yet another CAD company observes that "two or three very expensive silicon iterations are the norm today." Couple these observations on verification effort with the fervent quest for functional closure taking place in the industry, and it becomes clear that a breakthrough in functional verification is greatly needed. Standardized functional verification is this breakthrough.

The title of this book suggests that standardized functional verification has already been accomplished. However, at the time of publication this standardization is but an exciting vision. Great strides have been made with the availability of commercial test generators and coverage analysis tools, but the organizing principles that enable direct comparisons of verification projects and results have been, until now, undiscovered and undefined.

One leading software vendor refers to coverage of scenarios, stating that a "test generator [can] approach any given test scenario from multiple paths." However, there is no consistent method available for using lists of scenarios as a basis for comparing verification projects. In Chapter 3 we will learn that any given scenario can be reduced to the specific values of standard variables and, more precisely, to one or more arcs in a value transition graph, arcs that connect these values. There is usually a multitude of paths to any given function point, and we must travel each and every one to exercise our target exhaustively. Moreover, current literature does not explain how to produce an exhaustive list of these scenarios, so risk assessment is based on hopeful assumptions that the few scenarios listed in the verification plan were 1) enough, and 2) the very scenarios that are likely to result in faulty behavior caused by functional bugs.
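As a purely illustrative sketch (not taken from the book), the notion of a value transition graph can be pictured in a few lines of Python: nodes stand for values of hypothetical standard variables, arcs for the transitions a scenario exercises, and every distinct arc-path to a given function point is a distinct way the target must be exercised. The variable values below ("idle", "cmd_read", and so on) are invented solely for illustration.

  # Sketch: a value transition graph as an adjacency list, with a helper that
  # enumerates every cycle-free path of arcs from one value to another.
  from collections import defaultdict

  graph = defaultdict(list)                 # value -> values reachable by one arc
  graph["idle"] = ["cmd_read", "cmd_write"]
  graph["cmd_read"] = ["resp_ok", "resp_retry"]
  graph["cmd_write"] = ["resp_ok"]
  graph["resp_retry"] = ["cmd_read"]

  def all_simple_paths(src, dst, path=None):
      """Yield every cycle-free sequence of values (arcs) from src to dst."""
      path = (path or []) + [src]
      if src == dst:
          yield path
          return
      for nxt in graph[src]:
          if nxt not in path:               # do not revisit a value
              yield from all_simple_paths(nxt, dst, path)

  # Each printed path is a distinct route to the same function point.
  for p in all_simple_paths("idle", "resp_ok"):
      print(" -> ".join(p))

Even this toy graph offers more than one path to the function point "resp_ok", which is the point made above: exhaustive exercise of a target means covering every such path, not merely reaching each value once.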
One often encounters odd complaints along the lines of "it is impossible to 'think' of all the possible bugs when writing the functional test plan." Indeed, bugs frequently remain where we do not think to look for them.

One vendor defines functional coverage as "explicit functional requirements derived from the device and test plan specifications." However, they do not explain how to derive these explicit requirements or what specific form they have once derived. Standard variables and their ranges, and the rules and guidelines that govern their relationships, provide the analytical foundation upon which this can be achieved. They continue, stating that "Test plans consisting of lists of test descriptions were used to write a large number of directed tests," and that their verification software enables "exhaustive functional coverage analysis." Later on they say that "functional coverage items are derived from a written verification plan," and that "conditions to satisfy the verification plan are identified and coded into functional coverage objects." However, they do not explain how to achieve this "exhaustive functional coverage."

This same vendor also remarks (accurately) that "… one of the most difficult and critical challenges facing the designer is to establish adequate metrics to track the progress of the verification and measure the coverage of the functional test plan," and that "… coverage is still measured mainly by the gut feeling of the verification manager, and eventually the decision to tape out is made by management without the support of concrete qualitative data." They conclude that "Perhaps the most frustrating problem facing design and verification engineers is the lack of effective metrics to measure the progress of verification."

This book reveals for the first time the organizing principles that govern the relationships among the many variables defining the vast universe of digital designs, and it defines precise means for exploiting them in IC development and verification. A rigorous examination of the standard variables described in this book with regard to applicable concepts in linear algebra and graph theory may blaze the trail to improved techniques and tools for the functional verification of digital systems. This book also proposes a set of specific measures and views for comparing the results obtained from verifying any digital system, without regard for size or functionality. It also describes how these standard results can be used in objective, data-driven risk assessment for correct tape-out decisions.

Finally, the intellectual property (IP) industry needs a level playing field so that integrators can express their quality requirements clearly, and so that providers can declare unambiguously how those quality requirements are met and what level of risk is entailed in integrating their products. Standardized functional verification makes the IP market a safe place to transact business. Integrators need to be able to "peek behind the curtains" to understand the results of the IP provider's verification effort. IP providers need to safeguard proprietary processes. Establishing standard measures and views will enable this industry to thrive, much as interchangeable parts enabled manufacturing industries to thrive. These standards must be:

1. applicable to any and all digital systems,
2. genuine indicators of the degree of exercise of the IP, and
3. economical to produce.
The specific measures and views described within the covers of this book meet these requirements. They are by no means the only possible measures or views, but they constitute a much-needed beginning. They will either thrive or perish on their own merits as their usefulness grows or diminishes. This is the natural advance of science in a field where years of accumulated data are needed to discover what works well and what does not. Additionally, as the standardized approach described in this book gains acceptance, more precisely defined industry standards will gain approval and be reflected in commercially available verification software.

The experienced verification engineer will find many of the concepts familiar and wonder that they have not been organized in a standardized fashion until now. But, as John H. Lienhard explains in his recent book How Invention Begins (Oxford University Press, 2006), the relentless priority of production can shove invention to the side. "Too much urgency distracts inventors from their goal; you might say it jiggles their aim. Urgency makes it harder for inventors to find the elbow room–the freedom–that invention requires."

What this book is about

This book is about verifying that a digital system works as intended and how to assess the risk that it does not. This is illustrated in Figure 1.

Fig. 1 What this book is about

What this book is not about

This book is not about:

• Programming: Verification engineers practice a particularly nettlesome craft, one which requires highly refined programming skills coupled with a detailed and nuanced understanding of the hardware that must endure the strenuous exercising by these programs. Excellent books and training courses on programming are already in great supply.

• Programming languages: Verification engineers will most likely work with one of several commercially available programming languages, such as the e language from Cadence or the Vera language from Synopsys. There are several good books that explain how to write programs in these commercially available languages. See the references at the end of Chapter 4 for books on verification programming languages.

• Modeling and test generation: Verification languages are especially well suited for modeling digital systems. In addition, verification IP that provides ready-to-use models for industry-standard interfaces (USB, PCI-Express, etc.) is available commercially. See the references at the end of Chapter 4 for books on modeling and writing testbenches.

• Test environment architecture: Modern verification languages often have an implied architecture for the test environment and how it incorporates the models for the devices being verified. In addition, there are already many good books that deal extensively with this topic, for example, Writing Testbenches: Functional Verification of HDL Models (Bergeron 2003).

• Formal verification: This emerging technology represents an orthogonal approach to dynamic verification and is not treated in this book. However, an analysis in terms of standard variables may prove to be of use in formal verification as well.
Who should read this book

The people who will benefit from reading this book and applying its concepts are:

• Verification engineers, who will learn a powerful new methodology for quickly transforming multiple specifications for an IC (or any digital system) into a clear, concise verification plan, including a foundation for architecting the testbench regardless of verification language. A working knowledge of programming, verification languages, modeling, and constrained random verification is assumed.

• Verification managers, who will learn how to plan projects that meet agreed-upon risk objectives consistent with scope, schedule, and resources. Most verification managers also have highly relevant experience as verification engineers, but a working knowledge of effective project management is assumed.

• Department-level and higher-level managers, who will learn how to manage multiple IC developments with a data-driven approach to assessing risk. Higher-level managers may or may not have a strong background in verification, and this book will give them the working knowledge they need to communicate effectively with their subordinates as well as to sort through the many vexing issues and contradictory information that reach them from inside their organization as well as outside. In particular, the manager who must approve the expenses for tape-out will benefit greatly from learning how to assess risk from data rather than from rosy reassurances from exhausted staff.

Scope and Organization of this book

The well-known techniques of constrained pseudo-random verification are the means by which the target is exercised thoroughly. Commercially available software for test generation, and often in-house designed software as well, is readily available to meet this need and is beyond the scope of this book.

This book deals only with functional bugs, i.e., bugs that manifest themselves as faulty or sub-optimal behavior. There are, however, many other kinds of bugs that are outside the scope of this book, including:

• Syntactical errors (such as those reported by VeriLint)
• Synthesis errors such as inferred latches (such as those reported by DesignCompiler)
• Design goodness errors, such as using both positive-edge and negative-edge triggered flip-flops in a design, lack of suitable synchronization between clock domains, and the presence of dead code or redundant code
• Noncompliance with language conventions (such as those reported by LEDA, Spyglass, etc.)

Finally, this book is organized into seven chapters as follows:

1. Introduction to Functional Verification
2. Analytical Foundation
   - standard framework
   - standard variables
3. Exploring Functional Space
   - size of the space
   - structure of the space
4. Planning and Execution
   - standard interpretation
   - verification plan
   - standard results
5. Normalizing Data
6. Analyzing Results
   - standard measures
   - standard views
7. Assessing Risk

Acknowledgements

Any book that attempts to advance scientific understanding is invariably the product of many minds, and this book is no exception. Without the contributions of others, this work would not have been possible. A debt of gratitude is owed to reviewers of my early drafts: Marlin Jones, Kris Gleason, Bob Pflederer, and Johan Råde. I thank Eric Hill for insightful comments on a later manuscript. I am especially grateful to Anne Stern, whose meticulous reading of a final manuscript revealed errors and ambiguities and passages greatly in need of further clarification.
Without her diligent attention to detail, this book would be in want of much improvement. I also thank my editors at Springer for their unflagging support, especially Katelyn Stanne, who guided this project to publication. Finally, I must acknowledge the contributions of my many colleagues over the years, both engineers and managers, who ardently embraced a culture of engineering excellence. Any remaining errors in this book are mine and mine alone.