
Graph Separators, with Applications

FRONTIERS OF COMPUTER SCIENCE
Series Editor: Arnold L. Rosenberg, University of Massachusetts, Amherst, Massachusetts

ASSOCIATIVE COMPUTING: A Programming Paradigm for Massively Parallel Computers
Jerry L. Potter

INTRODUCTION TO PARALLEL AND VECTOR SOLUTION OF LINEAR SYSTEMS
James M. Ortega

PARALLEL EVOLUTION OF PARALLEL PROCESSORS (A book in the Surveys in Computer Science series, edited by Larry Rudolph)
Gil Lerman and Larry Rudolph

GRAPH SEPARATORS, WITH APPLICATIONS
Arnold L. Rosenberg and Lenwood S. Heath

A Continuation Order Plan is available for this series. A continuation order will bring delivery of each new volume immediately upon publication. Volumes are billed only upon actual shipment. For further information please contact the publisher.

Graph Separators, with Applications

Arnold L. Rosenberg
University of Massachusetts, Amherst, Massachusetts

and

Lenwood S. Heath
Virginia Polytechnic Institute, Blacksburg, Virginia

KLUWER ACADEMIC PUBLISHERS
NEW YORK, BOSTON, DORDRECHT, LONDON, MOSCOW

eBook ISBN: 0-306-46977-4
Print ISBN: 0-306-46464-0

©2002 Kluwer Academic Publishers
New York, Boston, Dordrecht, London, Moscow

Print ©1999 Kluwer Academic / Plenum Publishers, New York

All rights reserved

No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher.
Created in the United States of America

Visit Kluwer Online at: http://kluweronline.com
and Kluwer's eBookstore at: http://ebooks.kluweronline.com

Preface

Theoretical computer science is a mathematical discipline that often abstracts its problems from the (hardware and software) technology of “real” computer science. When these problems are solved, the results obtained often appear in journals dedicated to the motivating technology rather than in a “general-purpose” Theory journal. Since the explosive growth of computer science makes it impossible for anyone to stay up to date in all areas of the field, many widely applicable theoretical results never get promulgated within the general Theory community, hence get re-proved (and republished) numerous times, in numerous guises. When a subject area develops a sufficiently rich, albeit scattered, mass of results, one can argue that the Theory community would be well served by a central, theory-oriented (rather than application-oriented) repository for the mass of results. The present book has been written in response to our perception of such a need in the area of graph separators. This need is all the more acute given the multitude of notions of graph separators that have been developed and studied over the past (roughly) three decades. The need is absolutely critical in the area of lower-bound techniques for graph separators, since these techniques have virtually never appeared in articles having the word “separator” or any of its near synonyms in the title.

Graph-theoretic models naturally abstract a large variety of computational situations. Among the areas that give rise to such models are the problems of finding storage representations for data structures, finding efficient layouts of circuits on VLSI chips, finding efficient structured versions of programs, and organizing computations on networks of processors. In addition, numerous specific computational problems, say involving decomposition of problem domains, can fruitfully be formulated as problems of manipulating and/or partitioning graphs in various ways, including myriad problems that employ the well-known divide-and-conquer paradigm. A striking feature of all of the cited areas is that they exploit the same major structural feature of their graph-theoretic models, namely the decomposition structure of the graphs as embodied in various notions of graph separator.
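To give one quick, informal taste of why such decomposition structure pays off (an illustrative aside, not drawn from the book's own text): suppose that every n-node graph in a family can be split into two pieces of roughly n/2 nodes each by removing only on the order of √n nodes or edges, and that the combining work of a divide-and-conquer procedure on such a graph is proportional to the size of that separator. The procedure's cost then obeys a recurrence of the shape

    T(n) ≤ 2·T(n/2) + c·√n,

which, by the standard recursion-tree (or Master Theorem) argument, solves to T(n) = O(n). Rigorous versions of arguments in this spirit are what the book develops in the chapters that follow.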
All variations on the theme of graph separation involve removing either edges or nodes from the subject graphs in order to chop each graph into subgraphs—usually, but not always, disjoint—whose sizes must be within certain prespecified absolute or relative bounds. In all of the cited areas, the complexities of either procedures (e.g., algorithm timing) or structures (e.g., circuit areas) can be bounded by bounding the sizes of graph separators. Although we do not have the machinery to be formal, or even precise, at this stage of the exposition, we can describe at an intuitively evocative level a couple of scenarios that benefit from abstractions involving graph separators.

Consider first the problem of laying integrated circuits out on the chips that control our watches, calculators, computers, washing machines, cars, etc. The hallmark of integrated-circuit technology is that the world of integrated circuits is populated by only two types of objects, transistors and the wires that interconnect them. (The capacitors, resistors, etc., of the days of yore have all been replaced by transistors that can play multiple roles.) Thus, integrated circuits almost cry out to be viewed as graphs: transistors become nodes, and wires become edges.* Two problems that loom large in the layout of integrated circuits are the dearness of silicon real estate—chips are small—and the slowness of long wires—there are definite physical limitations on the speed of signal propagation. We shall see in Section 2.4 that one can obtain good upper and lower bounds on the amount of silicon needed to implement a given circuit design and on the length of the longest wire in the implementation by analyzing the separation characteristics of the graph that abstracts the circuit.

* Our discussion is only a first-order approximation to reality, in that it ignores the “multipoint” nets that are used in advanced circuit designs. However, the preponderance of “two-point” nets in circuits renders our approximation a valuable one.

Our second example concerns programs in a procedural programming language. It has long been the practice in the design of compilers and other devices for mapping programs into computers (e.g., assemblers, schedulers) to represent a program awaiting mapping by a set of graphs that represent the flow of data and/or control and/or “communication” in the program. A typical control-flow graph, for instance, views each straight-line block of code in the program as a node in a graph and views deviations from straight-line flow of control as arcs that interconnect the nodes; for instance, a k-way branch would engender k arcs, each leading from the block that contains the branch to one of the blocks that branch might lead to. (One might want to refine the blocks so that all arcs enter a block at its top.) A typical data-flow graph or communication graph might begin with a partition of the program into node-chunks (called tasks) and install arcs that originate at task-nodes in which a variable x is defined or modified and end at a task-node in which x is used with no intervening modification. We shall see several examples in Chapter 2 of how various mapping problems for programs can be solved efficiently if—and sometimes, only if—the graph(s) associated with the program can be recursively decomposed efficiently, i.e., the graph(s) have small separators.
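To make the notion of a small separator concrete before the formal treatment of Chapter 1, here is a small, purely illustrative Python sketch (not taken from the book). It brute-forces a smallest set of nodes whose removal splits a 3×3 grid graph into pieces none of which contains more than half of the graph's nodes; the helper names (grid_graph, smallest_balanced_separator) and the “half the nodes” balance criterion are choices made only for this example.

```python
from itertools import combinations

def grid_graph(rows, cols):
    """Adjacency lists for a rows x cols grid graph, vertices numbered row by row."""
    adj = {v: set() for v in range(rows * cols)}
    for r in range(rows):
        for c in range(cols):
            v = r * cols + c
            if c + 1 < cols:          # edge to the neighbour on the right
                adj[v].add(v + 1)
                adj[v + 1].add(v)
            if r + 1 < rows:          # edge to the neighbour below
                adj[v].add(v + cols)
                adj[v + cols].add(v)
    return adj

def component_sizes(adj, removed):
    """Sizes of the connected components left after deleting the vertices in `removed`."""
    seen = set(removed)
    sizes = []
    for start in adj:
        if start in seen:
            continue
        stack, count = [start], 0
        seen.add(start)
        while stack:                  # depth-first search of one component
            v = stack.pop()
            count += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        sizes.append(count)
    return sizes

def smallest_balanced_separator(adj, balance=0.5):
    """Exhaustive search for a smallest vertex set whose removal leaves every
    remaining connected component with at most balance * n vertices."""
    n = len(adj)
    for k in range(n + 1):            # try separators of size 0, 1, 2, ...
        for cand in combinations(adj, k):
            if all(s <= balance * n for s in component_sizes(adj, cand)):
                return set(cand)

if __name__ == "__main__":
    g = grid_graph(3, 3)              # the 3 x 3 mesh: 9 vertices, 12 edges
    sep = smallest_balanced_separator(g)
    print("separator:", sorted(sep))
    print("piece sizes:", component_sizes(g, sep))
```

On the 3×3 grid no two nodes suffice, so the search returns a three-node separator (the middle column {1, 4, 7} is one such set, splitting the grid into two three-node halves). Exhaustive search of this kind is feasible only for toy graphs, which is precisely why the upper- and lower-bound techniques developed in this book matter.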
Section 2.2 uses an efficient recursive graph decomposition to craft an efficient divide-and-conquer implementation of an abstract program; Section 2.3 uses an efficient recursive graph decomposition to map the communication structure of a program efficiently onto the interprocessor communication network of a parallel computer; Section 2.6 uses the efficiency of a graph's decomposability to bound the number of memory registers that must be available in order to execute the program with maximum efficiency.

The current book is devoted to techniques for obtaining upper and lower bounds on the sizes of graph separators, upper bounds being obtained via decomposition algorithms. While we try to survey the main approaches to obtaining good graph separations, our main focus is on techniques for deriving lower bounds on the sizes of graph separators. This asymmetry in focus reflects our perception that the work on upper bounds, or algorithms for graph separation, is much better represented in the standard Theory literature than is the work on lower bounds, which we perceive as being much more scattered throughout the literature on application areas. A secondary motive is the first author's abiding personal interest in lower-bound techniques, which allows this book to slake a personal thirst.

The book is organized in four chapters and an appendix. Chapter 1 gives a technical overview of the graph theory that we need in order to study the lower-bound techniques of interest. We survey there the various types of graph separators that have been studied and their relationships. We introduce families of graphs that have proven important in many of the problem areas mentioned. We then introduce two technical topics that are needed to develop or appreciate the lower-bound techniques: we introduce the field of graph embeddings, which is at once a client of the techniques we develop and a facilitator of those techniques; and we introduce the notion of quasi-isometry of graphs, which is a formal notion of equivalence of graphs “for all practical purposes.” Chapter 2 surveys a number of problem areas that have important abstractions to graph-theoretic problems that center on graph separation. This chapter should help motivate the reader for the highly technical development of the chapters on upper- and lower-bound techniques. Chapters 3 and 4, respectively, introduce and develop the upper- and lower-bound techniques that are our major focus. As we develop the techniques, we illustrate their application to the popular graph families of Chapter 1. Chapter 3, on upper bounds, can be viewed as an overview of the field with pointers to later and more specialized developments. Chapter 4, on lower bounds, covers that aspect of the field almost exhaustively, as of the date of the book's completion. Finally, Appendix A is somewhat of a reprise of Chapters 2, 3, and 4, in that it illustrates how the separator-oriented techniques of Chapters 3 and 4 apply to the applications surveyed in Chapter 2. We hope that this sampler of applications of the abstract development will suffice to illustrate how the techniques can be brought to bear on a large range of the problem areas mentioned.

Throughout, we have attempted to make the coverage adequate for the expert and the exposition careful enough for the novice. Thus, we hope that the book will prove useful as both a reference and a text. Toward this end, we conclude each chapter with an annotated list of references to the literature.
Most obviously, we cite the sources where the material we cover originated; in addition, though, we list a variety of sources whose material does not appear in the book; indeed, we list many sources that are only indirectly relevant to our subject, in the hope of fanning whatever flames of interest we have been able to kindle in the reader.

We share credit for whatever quality the reader perceives herein with many people. First, and foremost, no words suffice to express our debt to our collaborators, whose work—over a period spanning literally decades—is inextricably imbedded in the technical developments in this book. While we wish to avoid listing these numerous friends and colleagues explicitly, for fear of inadvertently omitting one, three stand out so prominently for the first author that they must be mentioned. My long-standing collaboration with Sandeep Bhatt, Fan Chung, and Tom Leighton, for well over 15 years, has so profoundly influenced my research that their influence touches virtually every word of this book. Next, we are grateful to the many colleagues (and their various publishers) who graciously permitted us to paraphrase excerpts from their technical papers. We owe special thanks to the first author's former students Fred Annexstein, Miranda Barrows, Bojana Vittorio Scarano, and Julia Stoyanovich for their careful reading of portions of various versions of this work; many improvements to the original presentation are due to them. Finally, we thank all of the (present and former) students at Duke University, the University of Massachusetts at Amherst, the University of North Carolina, Virginia Tech, and the Technion (Israel Institute of Technology) who suffered with patience and good will through seminars and courses in which the material herein was developed, sharing myriad helpful comments and suggestions. While we thank all of these, we acknowledge sole responsibility for the errors that inevitably escape detection in large works.

We thank the companies and agencies that have supported both the research that enabled this project and the preparation of the book. We thank the International Business Machines Corporation, where much of the first author's early research was done; the National Science Foundation for continuing support for more than 18 years; the Lady Davis Foundation for support in spring 1994, when much of the first author's initial writing was done; and Telcordia Technologies, which nurtured the multiyear Bhatt–Chung–Leighton–Rosenberg collaboration. Finally, we thank our wives, Susan and Sheila, for their support throughout the period during which this book was written: especially for putting up with the mental absence that seems inevitably to accompany immersion in a large intellectual project.

Arnold L. Rosenberg
Amherst, Massachusetts

Lenwood S. Heath
Blacksburg, Virginia

Contents

1. A Technical Introduction 1
   1.1. Introduction 1
   1.2. Basic Notions and Notation 2
   1.3. Interesting Graph Families 4
   1.4. Graph Separators 12
   1.5. Graph Embeddings 27
   1.6. Quasi-Isometric Graph Families 33
   1.7. Sources 44

2. Applications of Graph Separators 47
   2.1. Introduction 47
   2.2. Nonserial Dynamic Programming 49
   2.3. Graph Embeddings via Separators 53
   2.4. Laying Out VLSI Circuits 68
   2.5. Strongly Universal Interval Hypergraphs 82
   2.6. Pebbling Games: Register Allocation and Processor Scheduling 92
   2.7. Sources 94

3. Upper-Bound Techniques 99
   3.1. Introduction 99
   3.2. NP-Completeness 101
   3.3. Topological Approaches to Graph Separation 109
   3.4. Geometric Approaches to Graph Separation 121
   3.5. Network Flow Approaches to Graph Separation 130
