Computational Geometry: Methods and Applications
Jianer Chen
Computer Science Department
Texas A&M University
February 19, 1996

Chapter 1  Introduction

Geometric objects such as points, lines, and polygons are the basis of a broad variety of important applications and give rise to an interesting set of problems and algorithms. The name geometry reminds us of its earliest use: for the measurement of land and materials. Today, computers are being used more and more to solve larger-scale geometric problems. Over the past two decades, a set of tools and techniques has been developed that takes advantage of the structure provided by geometry. This discipline is known as Computational Geometry.

The discipline was named and largely started around 1975 by Shamos, whose Ph.D. thesis attracted considerable attention. After a decade of development the field came into its own in 1985, when three components of any healthy discipline were realized: a textbook, a conference, and a journal. Preparata and Shamos's book Computational Geometry: An Introduction [23], the first textbook solely devoted to the topic, was published at about the same time as the first ACM Symposium on Computational Geometry was held, and just prior to the start of a new Springer-Verlag journal, Discrete and Computational Geometry. The field is currently thriving. Since 1985, several texts, collections, and monographs have appeared [1, 10, 18, 20, 25, 26]. The annual symposium has steadily attracted 100 papers and 200 attendees. There is evidence that the field is broadening to touch geometric modeling and geometric theorem proving. Perhaps most importantly, the first students who obtained their Ph.D.s in computer science with theses in computational geometry have graduated, obtained positions, and are now training the next generation of researchers.

Computational geometry is of practical importance because Euclidean space of two and three dimensions forms the arena in which real physical objects are arranged. A large number of application areas such as pattern recognition [28], computer graphics [19], image processing [22], operations research, statistics [4, 27], computer-aided design, robotics [25, 26], etc., have been the incubation bed of the discipline since they provide inherently geometric problems for which efficient algorithms have to be developed. A large number of manufacturing problems involve wire layout, facilities location, cutting-stock and related geometric optimization problems. Solving these efficiently on a high-speed computer requires the development of new geometrical tools, as well as the application of fast-algorithm techniques, and is not simply a matter of translating well-known theorems into computer programs. From a theoretical standpoint, the complexity of geometric algorithms is of interest because it sheds new light on the intrinsic difficulty of computation.

In this book, we concentrate on four major directions in computational geometry: the construction of convex hulls, proximity problems, searching problems, and intersection problems.

Chapter 2  Algorithmic Foundations

For the past twenty years the analysis and design of computer algorithms has been one of the most thriving endeavors in computer science. The fundamental works of Knuth [14] and Aho-Hopcroft-Ullman [2] have brought order and systematization to a rich collection of isolated results, conceptualized the basic paradigms, and established a methodology that has become the standard of the field.
It is beyond the scope of this book to review in detail the material of those excellent texts, with which the reader is assumed to be reasonably familiar. It is appropriate, however, at least from the point of view of terminology, to briefly review the basic components of the language in which computational geometry will be described. These components are algorithms and data structures. Algorithms are programs to be executed on a suitable abstraction of actual "von Neumann" computers; data structures are ways to organize information, which, in conjunction with algorithms, permit the efficient and elegant solution of computational problems.

2.1 A computational model

Many formal models of computation appear in the literature. There is no general consensus as to which of these is the best. In this book, we will adopt the most commonly used model. More specifically, we will adopt random access machines (RAM) as our computational model.

Random access machine (RAM)

A random access machine (RAM) models a single-processor computer with a random access memory. A RAM consists of a read-only input tape, a write-only output tape, a program, and a (random access) memory. The memory consists of registers, each capable of holding a real number of arbitrary precision. There is also no upper bound on the memory size. All computations take place in the processor. A RAM can access (read or write) any register in the memory in one time unit when it has the correct address of that register. The following operations on real numbers can be done in unit time by a random access machine:

1) Arithmetic operations: *, /, +, -, log, exp, sin.
2) Comparisons.
3) Indirect access.

2.2 Complexity of algorithms and problems

The following notations have become standard:

- O(f(n)): the class C1 of functions such that for any g in C1, there is a constant c_g such that f(n) >= c_g * g(n) for all but a finite number of n's. Roughly speaking, O(f(n)) is the class of functions that are at most as large as f(n).

- o(f(n)): the class C2 of functions such that for any g in C2, lim_{n→∞} g(n)/f(n) = 0. Roughly speaking, o(f(n)) is the class of functions that are less than f(n).

- Ω(f(n)): the class C3 of functions such that for any g in C3, there is a constant c_g such that f(n) <= c_g * g(n) for all but a finite number of n's. Roughly speaking, Ω(f(n)) is the class of functions which are at least as large as f(n).

- ω(f(n)): the class C4 of functions such that for any g in C4, lim_{n→∞} f(n)/g(n) = 0. Roughly speaking, ω(f(n)) is the class of functions that are larger than f(n).

- Θ(f(n)): the class C5 of functions such that for any g in C5, g(n) = O(f(n)) and g(n) = Ω(f(n)). Roughly speaking, Θ(f(n)) is the class of functions which are of the same order as f(n).

Complexity of algorithms

Let A be an algorithm implemented on a RAM. If for an input of size n, A halts after m steps, we say that the running time of the algorithm A is m on that input. There are two types of analyses of algorithms: worst case and expected case. For the worst case analysis, we seek the maximum amount of time used by the algorithm for all possible inputs. For the expected case analysis we normally assume a certain probabilistic distribution on the input and study the performance of the algorithm for any input drawn from the distribution.
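To make the worst-case versus expected-case distinction concrete, here is a small Python sketch (not from the text; the linear_search function and the uniform input distribution are illustrative assumptions) that counts the comparisons made by linear search: the worst case costs n comparisons, while the expected cost under a uniformly chosen key is roughly (n + 1)/2.

import random

def linear_search(lst, x):
    """Count the comparisons made while scanning lst for x."""
    comparisons = 0
    for item in lst:
        comparisons += 1
        if item == x:
            break
    return comparisons

n = 1000
data = list(range(n))

# Worst case: the key is the last element, so all n comparisons are made.
worst = linear_search(data, n - 1)

# Expected case: average over keys drawn uniformly at random,
# which comes out close to (n + 1) / 2 comparisons.
trials = 10000
expected = sum(linear_search(data, random.randrange(n))
               for _ in range(trials)) / trials

print(worst)            # 1000
print(round(expected))  # roughly 500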
Mostly, we are interested in the asymptotic analysis, i.e., the behavior of the algorithm as the input size approaches infinity. Since expected case analysis is usually harder to tackle, and moreover the probabilistic assumption sometimes is difficult to justify, emphasis will be placed on the worst case analysis. Unless otherwise specified, we shall consider only worst case analysis.

Definition. Let A be an algorithm. The time complexity of A is O(f(n)) if there exists a constant c such that for every integer n >= 0, the running time of A is at most c * f(n) for all inputs of size n.

Complexity of problems

While time complexity for an algorithm is fixed, this is not so for problems. For example, Sorting can be implemented by algorithms of different time complexity. The time complexity of a known algorithm for a problem gives us the information about at most how much time we need to solve the problem. We would also like to know the minimum amount of time we need to solve the problem.

Definition. A function u(n) is an upper bound on the time complexity of a problem P if there is an algorithm A solving P such that the running time of A is u(n). A function l(n) is a lower bound on the time complexity of a problem P if any algorithm solving P has time complexity at least l(n).

2.3 A data structure supporting set operations

A set is a collection of elements. All elements of a set are different, which means no set can contain two copies of the same element. When used as tools in computational geometry, elements of a set usually are normal geometric objects, such as points, straight lines, line segments, and planes in Euclidean spaces. We shall sometimes assume that elements of a set are linearly ordered by a relation, usually denoted "<" and read "less than" or "precedes". For example, we can order a set of points in the 2-dimensional Euclidean space by their x-coordinates.

Let S be a set and let u be an arbitrary element of a universal set of which S is a subset. The fundamental operations occurring in set manipulation are:

- MEMBER(u, S): Is u in S?
- INSERT(u, S): Add the element u to the set S.
- DELETE(u, S): Remove the element u from the set S.

When the universal set is linearly ordered, the following operations are very important:

- MINIMUM(S): Report the minimum element of the set S.
- SPLIT(u, S): Partition the set S into two sets S1 and S2, so that S1 contains all the elements of S which are less than or equal to u, and S2 contains all the elements of S which are larger than u.
- SPLICE(S, S1, S2): Assuming that all elements in the set S1 are less than any element in the set S2, form the ordered set S = S1 ∪ S2.

We will introduce a special data structure: 2-3 trees, which represent sets of elements and support the above set operations efficiently.

Definition. A 2-3 tree is a tree such that each non-leaf node has two or three children, and every path from the root to a leaf is of the same length.

The proof of the following theorem is straightforward and left to the reader.

Theorem 2.3.1  A 2-3 tree of n leaves has height O(log n).
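Theorem 2.3.1 follows from the fact that a 2-3 tree of height h has at least 2^h and at most 3^h leaves. The short Python sketch below (illustrative only; the function name is ours, not the author's) computes by exact integer arithmetic the smallest and largest heights possible for a tree with n leaves, both of which are Θ(log n).

def height_bounds(n):
    """Range of possible heights for a 2-3 tree with n >= 2 leaves.

    A tree of height h has between 2**h and 3**h leaves, so the
    height lies between ceil(log3 n) and floor(log2 n).
    """
    lo = 0
    while 3 ** lo < n:           # smallest h with 3**h >= n
        lo += 1
    hi = 0
    while 2 ** (hi + 1) <= n:    # largest h with 2**h <= n
        hi += 1
    return lo, hi

for n in (2, 100, 10_000, 1_000_000):
    print(n, height_bounds(n))   # e.g. 1_000_000 -> (13, 19)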
A linearly ordered set of elements can be represented by a 2-3 tree by assigning the elements to the leaves of the tree in such a way that, for any non-leaf node of the tree, all elements stored in its first child are less than any elements stored in its second child, and all elements stored in its second child are less than any elements stored in its third child (if it has a third child). Each non-leaf node v of a 2-3 tree keeps three pieces of information for the corresponding subtree:

- L(v): the largest element stored in the subtree rooted at its first child.
- M(v): the largest element stored in the subtree rooted at its second child.
- H(v): the largest element stored in the subtree rooted at its third child (if it has one).

2.3.1 Member

The algorithm for deciding the membership of an element in a 2-3 tree is given as follows, where T is a 2-3 tree, t is the root of T, and u is the element to be searched for in the tree.

Algorithm MEMBER(T, u)
BEGIN
  IF T is a leaf node THEN report properly
  ELSE IF L(t) >= u THEN MEMBER(child1(T), u)
  ELSE IF M(t) >= u THEN MEMBER(child2(T), u)
  ELSE IF t has a third child THEN MEMBER(child3(T), u)
  ELSE report failure
END

Since the height of the tree is O(log n), and the algorithm simply follows a path in the tree from the root to a leaf, the time complexity of the algorithm MEMBER is O(log n).

2.3.2 Insert

To insert a new element x into a 2-3 tree, we proceed at first as if we were testing membership of x in the set. However, at the level just above the leaves, we shall be at a node v that should be the parent of x. If v has only two children, we simply make x the third child of v, placing the children in the proper order. We then adjust the information contained in the node v to reflect the new situation.

Suppose, however, that x is the fourth child of the node v. We cannot have a node with four children in a 2-3 tree, so we split the node v into two nodes, which we call v and v'. The two smallest elements among the four children of v stay with v, while the two larger elements become children of the node v'. Now, we must insert v' among the children of p, the parent of v. The problem now is solved recursively.

One special case occurs when we wind up splitting the root. In that case we create a new root, whose two children are the two nodes into which the old root was split. This is how the number of levels in a 2-3 tree increases.

The above discussion is implemented as the following algorithms, where T is a 2-3 tree and x is the element to be inserted.

Algorithm INSERT(T, x)
BEGIN
  1. Find the proper node v in the tree T such that v is going to be the parent of x;
  2. Create a leaf node d for the element x;
  3. ADDSON(v, d)
END

where the procedure ADDSON is implemented by the following recursive algorithm, which adds a child d to a non-leaf node v in a 2-3 tree.

Algorithm ADDSON(v, d)
BEGIN
  1. IF v is the root of the tree, add the node d properly.
     Otherwise, do the following.
  2. IF v has two children, add d directly
  3. ELSE
     3.1. Suppose v has three children c1, c2, and c3. Partition c1, c2,
          c3 and d properly into two groups (g1, g2) and (g3, g4). Let v
          be the parent of (g1, g2), create a new node v', and let v' be
          the parent of (g3, g4).
     3.2. Recursively call ADDSON(father(v), v').
END

Analysis: The algorithm INSERT can find the proper place in the tree for the element x in O(log n) time, since all it needs to do is to follow a path from the root to a leaf.
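The MEMBER procedure translates almost directly into executable code. The Python sketch below is a hypothetical rendering, not the author's: a single Node class stands in for both leaves and internal nodes, and a list of "guides" plays the role of L(v), M(v), and H(v).

class Node:
    """2-3 tree node: a leaf holds one element; an internal node holds
    2 or 3 children plus the largest element under each child
    (these guides correspond to L(v), M(v), H(v) in the text)."""
    def __init__(self, element=None, children=None):
        self.element = element              # used only by leaves
        self.children = children or []      # 2 or 3 children for internal nodes
        self.guides = [max_leaf(c) for c in self.children]

def max_leaf(node):
    """Largest element stored in the subtree rooted at node."""
    return node.element if not node.children else max_leaf(node.children[-1])

def member(node, u):
    """True iff u is stored in the 2-3 tree rooted at node.

    One child is visited per level, so the running time is O(log n)."""
    if not node.children:                   # leaf: compare directly
        return node.element == u
    for guide, child in zip(node.guides, node.children):
        if u <= guide:                      # descend into the first child whose guide is >= u
            return member(child, u)
    return False                            # u exceeds every guide, so it is not in the tree

# A small tree over {1, 2, 3, 4, 5}: leaves grouped as (1, 2) and (3, 4, 5).
leaves = [Node(element=x) for x in (1, 2, 3, 4, 5)]
root = Node(children=[Node(children=leaves[:2]), Node(children=leaves[2:])])
print(member(root, 4), member(root, 6))     # True False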
Step 2 in the algorithm INSERT can be done in constant time. The call to the procedure ADDSON in Step 3 can result in at most O(log n) recursive calls to the procedure ADDSON, since each call jumps at least one level up in the 2-3 tree, and each recursive call takes constant time to perform Steps 1, 2, and 3.1 of the algorithm ADDSON. So Step 3 in the algorithm INSERT also takes O(log n) time. Therefore, the overall time complexity of the algorithm INSERT is O(log n).

2.3.3 Minimum

Given a 2-3 tree T, we want to find the minimum element stored in the tree. Recall that in a 2-3 tree the numbers are stored in the leaf nodes in ascending order from left to right. Therefore the problem is reduced to going down the tree, always selecting the leftmost link, until a leaf node is reached. This leaf node should contain the minimum element stored in the tree. Evidently, the time complexity of this algorithm is O(log n) for a 2-3 tree with n leaves.

Algorithm MINIMUM(T, min)
BEGIN
  IF T is a leaf THEN min := T
  ELSE call MINIMUM(child1(T), min);
END

2.3.4 Delete

When we delete a leaf from a 2-3 tree, we may leave its parent v with only one child. If v is the root, delete v and let its lone child be the new root. Otherwise, let p be the parent of v. If p has another child, adjacent to v on
