STATISTICAL QUALITY CONTROL
A Loss Minimization Approach

SERIES ON APPLIED MATHEMATICS

Editor-in-Chief: Frank Hwang
Associate Editors-in-Chief: Zhong-ci Shi and U. Rothblum

Vol. 1   International Conference on Scientific Computation, eds. T. Chan and Z.-C. Shi
Vol. 2   Network Optimization Problems — Algorithms, Applications and Complexity, eds. D.-Z. Du and P. M. Pardalos
Vol. 3   Combinatorial Group Testing and Its Applications, by D.-Z. Du and F. K. Hwang
Vol. 4   Computation of Differential Equations and Dynamical Systems, eds. K. Feng and Z.-C. Shi
Vol. 5   Numerical Mathematics, eds. Z.-C. Shi and T. Ushijima
Vol. 6   Machine Proofs in Geometry, by S.-C. Chou, X.-S. Gao and J.-Z. Zhang
Vol. 7   The Splitting Extrapolation Method, by C. B. Liem, T. Lu and T. M. Shih
Vol. 8   Quaternary Codes, by Z.-X. Wan
Vol. 9   Finite Element Methods for Integrodifferential Equations, by C. M. Chen and T. M. Shih
Vol. 10  Statistical Quality Control — A Loss Minimization Approach, by D. Trietsch
Vol. 11  The Mathematical Theory of Nonblocking Switching Networks, by F. K. Hwang

Series on Applied Mathematics, Volume 10

STATISTICAL QUALITY CONTROL
A Loss Minimization Approach

Dan Trietsch
MSIS Department
University of Auckland
New Zealand

World Scientific
Singapore • New Jersey • London • Hong Kong

Published by World Scientific Publishing Co. Pte. Ltd.
P O Box 128, Farrer Road, Singapore 912805
USA office: Suite 1B, 1060 Main Street, River Edge, NJ 07661
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

Library of Congress Cataloging-in-Publication Data
Trietsch, Dan.
  Statistical quality control : a loss minimization approach / by Dan Trietsch.
    p. cm. -- (Series on applied mathematics ; v. 10)
  Includes bibliographical references and index.
  ISBN 9810230311
  1. Quality control -- Statistical methods. I. Title. II. Series.
  TS156.T75 1998
  658.5'62--dc21        98-10545 CIP

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

Copyright © 1999 by World Scientific Publishing Co. Pte. Ltd.

All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

Printed in Singapore by Uto-Print

Dedicated to my parents

Preface

This book originated as part of a proposed textbook in Total Quality (TQ). The main theme was that TQ should be studied as a system. By now it's almost universally agreed that TQ should be applied as widely as possible, i.e., to the whole system, but looking at TQ itself as a system is less prevalent. My own premise is that TQ is a system whose aim is to maximize quality by continuously increasing it. Therefore, once we define quality, we should be able to judge what tools and philosophies belong to the TQ system. The test is simple: does it support the system aim (increase quality)? But unless we define quality broadly, maximizing it may cause suboptimization. Some definitions of quality are not wide enough for this purpose. For example, defining quality as conformance to specifications implies that conformance should be sought without regard to cost.
Further, it suggests that the design to which we try to conform is necessarily good. Fitness for use, Juran's definition, implies both good design and good conformance, but neglects the cost element. Deming — who inspired me more than any other quality teacher — wrote in his last book: "A product or service has quality if it helps someone and enjoys a good and sustainable market." This includes fitness for use and economic cost. As such it is less amenable to suboptimization. But it may still happen that third parties will suffer as a result of a deal that is good for both seller and buyer (a classic example is pollution). To capture the interests of third parties, known as externalities in economics, I adopted the following definition: Total Quality is the pursuit of maximal net utility to society. Another side of the same coin is: Total Quality is the continuous reduction and elimination of any waste, including waste of opportunities. Thus quality is net utility to society — value, if you will. Since intangibles also have utility, this two-sided definition takes into account the value of leisure, culture, etc. Furthermore, it is practically impossible to measure societal utility -- at least not without destroying it! -- so we'll have to assume that societal utility is the sum of the individual utilities of the members of society. And we'll try to increase societal utility by methods that demonstrably increase that sum, rather than by transferring monies within society -- a respectable endeavor, perhaps, or maybe not, but firmly outside our scope.

Having said that, we must recognize that within the sub-field of statistical quality control (SQC) quality is measured by the degree to which design objectives are achieved. That is to say, quality is measured by conformance. But conformance no longer means merely falling within preset tolerances. Rather, conformance is measured by the distance between the design ideal and the actual performance. More than anyone else, Taguchi was instrumental in making this point. The Taguchi loss function, often a quadratic function, measures quality (of conformance) by the loss imparted to society by deviating from the ideal design. One objective of SQC, then, is to minimize this loss (a brief sketch of the quadratic form follows below). But in addition to the loss to society, we have to take into account the cost of production, which is also part of societal net utility. While Taguchi definitely recognized that production costs and conformance loss are equally important, he chose to define quality by loss to society alone. The definition adopted here is wider, and considers both elements as part of quality. Often, however, the cost of production is not a direct function of the conformance, and in such cases decreasing Taguchi's loss function is operationally equivalent to increasing quality.

While many texts on quality today espouse the Taguchi loss function, they rarely draw the conclusions that follow from this endorsement. Furthermore, many consider reducing this loss to be a higher priority than reducing production costs. These deficiencies motivated me to take a closer look at established SQC methods. It turns out that using the Taguchi loss function implies some changes in the way we practice SQC, and sheds new light on questions relating to SQC. One example is the correct relationship between statistical capability and tolerance setting. An economical tolerance setting technique suggested by Taguchi highlights this connection.
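A brief sketch of the quadratic loss function mentioned above, in generic notation that is mine rather than necessarily the book's: y is the realized value of a quality characteristic, T the design target, k a positive cost constant, and the process producing Y has mean μ and standard deviation σ. The standard quadratic form and its expectation are

    \[
      L(y) = k\,(y - T)^2,
      \qquad
      \mathbb{E}\bigl[L(Y)\bigr] = k\bigl[\sigma^2 + (\mu - T)^2\bigr].
    \]

Read this way, expected loss splits into a variance term and an off-target term, which is the sense in which both dispersion and centering contribute to the quality of conformance.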
Another example is a myth according to which Taguchi's loss function approach calls for perfect centering. We'll discuss process adjustment issues and show that the myth is false. While doing so we'll also extend Taguchi's tolerance setting technique to processes with drifts that cannot be designed away economically. Deming devoted a lot of energy to eradicating improper adjustment methods that cause tampering and increase losses due to inferior centering. But he did not spell out how to adjust properly. I attempt to do so here, and it's one of the issues on which I had to go against Deming's teachings, at least as they are usually presented.

The book covers other issues that are usually neglected in the SQC literature, and come to light when we look at SQC as a (sub)system -- rather than a mere collection of techniques. Minimizing loss, or maximizing quality, is the driver of the system, but it has many interactions that need to be discussed explicitly, e.g., measurements, determining how many points to sample to obtain reliable control charts, and more. We also examine Deming's kp rule -- a clear economic loss reduction application (sketched briefly just before the chapter overview below) -- and discuss appropriate ways to inspect incoming lots when the rule is not applicable. Also, where conformance is measured by a continuous loss function, the kp rule has to be changed: we outline how.

Taguchi's most famous contribution is the use of Design of Experiments (DOE) to reduce the loss function. Unfortunately, Taguchi's techniques leave much to be desired, and this created much controversy. Western statisticians, led by George Box, have shown that there are better ways to achieve his ends. In particular, his use of external arrays and signal-to-noise ratios has been criticized. Dorian Shainin developed yet other alternatives. Personally, I acknowledge Taguchi as a leader in terms of where to go, but not necessarily how to get there. DOE itself, although fascinating, is outside our scope. So are many other engineering techniques that are indispensable today. Chief among them are poka-yoke (error-proofing) and setup reduction techniques devised by Shigeo Shingo. All these vital methods are concerned more with breakthrough than with control. And they can easily fill another book of this size.

The book is intended for both practitioners and theoreticians, and it can also be used for teaching. To gain full benefit, however, readers should have at least an introductory level knowledge of probability and statistics. The aim to serve practitioners led me to concentrate on basic techniques, and to organize the coverage so that some mathematical details can be skipped without loss of continuity. The service to theoreticians, therefore, is not by reviewing complex models, but rather by covering the theoretical underpinnings of the basic ones in more detail than the norm. The service to both communities is by addressing the issues in a systemic way, stressing connections and interactions, and covering key subjects that are missing in practically all other SQC books.

Nine chapters and seven supplements provide the coverage. In general, supplements are attached to chapters, and they either discuss optional material or deal with mathematical details that are not necessary for practitioners. Some smaller mathematical expositions that are not vital for practitioners are given in unnumbered boxes ("Technical Notes"), which may be skipped without loss of continuity. In contrast, numbered boxes are used occasionally to facilitate cross-reference.
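The following is only a sketch of the all-or-none logic behind Deming's kp rule mentioned above; the function name and cost figures are hypothetical, and the finer points (lots near the break-even point, processes not in statistical control, continuous loss functions) are left to the book's treatment of inspection. With k1 the cost of inspecting one incoming item and k2 the cost incurred downstream when a defective item escapes into production, the decision compares the incoming fraction defective p with the break-even ratio k1/k2.

    # Hypothetical sketch of the all-or-none logic behind Deming's kp rule.
    # k1: cost of inspecting one incoming item.
    # k2: cost incurred when a defective item slips into production.
    # p:  (worst-case) incoming fraction defective.
    # The function name and the numbers below are illustrative, not the book's.

    def kp_decision(p: float, k1: float, k2: float) -> str:
        """Return the inspection policy implied by comparing p with k1/k2."""
        break_even = k1 / k2
        if p > break_even:
            return "100% inspection"
        if p < break_even:
            return "no inspection"
        return "indifferent (either policy costs the same)"

    if __name__ == "__main__":
        # Example: inspection costs $0.25 per item; a defective item that escapes
        # costs $12.50 downstream, so the break-even fraction defective is 0.02.
        for p in (0.005, 0.02, 0.06):
            print(p, "->", kp_decision(p, k1=0.25, k2=12.50))

Under these simplified assumptions, intermediate sampling plans are never optimal; the preface's remarks about lots where the rule is not applicable are exactly where the book departs from this simple form.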
The first chapter introduces classical control charts, Shewhart charts, which generally follow Shewhart's concepts. Since this text concentrates on statistical control methods, and not on graphic improvement tools and so on, control charts are our main focus. Chapter 2 discusses measurements. Supplement 2 elaborates on some technical details associated with calibration and efficient measurement. Chapter 3 introduces loss functions, and covers Taguchi's economical tolerance setting method. It also shows how to change the kp rule where the quadratic loss function applies. Supplement 3 discusses asymmetrical loss functions. Chapter 4 presents economical adjustment methods that avoid tampering. It also extends Taguchi's tolerance setting technique to processes with drift. Chapter 5 is the first in a block of four that present various aspects of classic control charts. It deals with control charts for attributes. Motivated by Deming's vehement objections to doing so, Supplement 5 compares control charts with hypothesis testing, and concludes that the only major difference is that with control charts we should not claim we know the statistical risks exactly. Chapter 6 is devoted to charting continuous variables. Three supplements to this chapter deal with mathematical details: Supplement 6.1 compares the efficiency of various dispersion statistics, and shows that a recommendation made by Shewhart should be reversed; Supplement 6.2 shows how control chart constants are derived; Supplement 6.3 discusses optimal subgroup size when we want to detect a process deviation of a given size by ±3σ control limits (a small numerical sketch appears after the acknowledgement). Chapter 7 deals with pattern tests that apply to control charts. Some results in this chapter correct prevalent errors in the literature. Supplement 7 briefly discusses SQC extensions to processes with statistical dependence. Chapter 8 introduces a new graphic tool, diffidence charts, that can help determine how many items we should sample to establish a reliable control chart (it's more than many authorities claim). Another classic SQC subject — inspection — is presented in Chapter 9.

A traditional coverage of SQC could be limited to Chapters 1, 5, 6, 7, and 9. I hope readers will find that even these chapters are not completely conventional. Chapters 2, 3, and 4 provide necessary parts that are not covered traditionally. As for Chapter 8, I believe that studying diffidence charts is highly conducive to understanding variation. As Deming said, understanding variation is imperative. Thus, even if diffidence charts do not become part of the tool-kit of every SQC practitioner, the chapter is still useful in a practical sense.

Acknowledgement

While I'm responsible for the results, I did not work in a vacuum. Many helped me generously with their time, advice, and more. Some reviewers of the material that found its way into this book were anonymous, but their impact is considerable. In contrast, quite a few of those I know by name are also my relatives or good friends. As such, they'll understand how difficult it is to draw the line after naming a finite number of people. Thank you all, and please forgive me for taking the easy way out.
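A small numerical footnote to the subgroup-size question noted in the overview above (the topic of Supplement 6.3), offered only as a sketch in generic notation; the function names and figures are illustrative and not taken from the book. For a subgroup of size n and a sustained mean shift of δ process standard deviations, the probability that a single subgroup mean on an X-bar chart falls outside ±3σ limits is Φ(-3 - δ√n) + 1 - Φ(3 - δ√n), and its reciprocal is the average run length to detection.

    # Illustrative sketch (not from the book): per-subgroup detection probability
    # of a sustained mean shift on an X-bar chart with +/-3-sigma control limits.
    # delta: shift size in units of the process standard deviation.
    # n:     subgroup size.
    from math import erf, sqrt

    def normal_cdf(x: float) -> float:
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def detection_probability(delta: float, n: int) -> float:
        """Probability that one subgroup mean falls outside the 3-sigma limits
        after the process mean has shifted by delta sigma."""
        shift = delta * sqrt(n)
        return normal_cdf(-3.0 - shift) + (1.0 - normal_cdf(3.0 - shift))

    if __name__ == "__main__":
        # A 1-sigma shift is hard to see with n = 4 but easy with n = 16;
        # the reciprocal of the probability is the average run length to detection.
        for n in (1, 4, 9, 16):
            p = detection_probability(delta=1.0, n=n)
            print(f"n={n:>2}  P(signal)={p:.4f}  ARL={1.0 / p:.1f}")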
