
EMBEDDED VISUAL SYSTEM AND ITS APPLICATIONS ON ROBOTS

By De Xu

eBooks End User License Agreement

Please read this license agreement carefully before using this eBook. Your use of this eBook/chapter constitutes your agreement to the terms and conditions set forth in this License Agreement. Bentham Science Publishers agrees to grant the user of this eBook/chapter a non-exclusive, nontransferable license to download and use this eBook/chapter under the following terms and conditions:

1. This eBook/chapter may be downloaded and used by one user on one computer. The user may make one back-up copy of this publication to avoid losing it. The user may not give copies of this publication to others, or make it available for others to copy or download. For a multi-user license contact [email protected]

2. All rights reserved: All content in this publication is copyrighted and Bentham Science Publishers owns the copyright. You may not copy, reproduce, modify, remove, delete, augment, add to, publish, transmit, sell, resell, create derivative works from, or in any way exploit any of this publication's content, in any form by any means, in whole or in part, without the prior written permission from Bentham Science Publishers.

3. The user may print one or more copies/pages of this eBook/chapter for their personal use. The user may not print pages from this eBook/chapter or the entire printed eBook/chapter for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained from the publisher for such requirements. Requests must be sent to the permissions department at E-mail: [email protected]

4. The unauthorized use or distribution of copyrighted or other proprietary content is illegal and could subject the purchaser to substantial money damages. The purchaser will be liable for any damage resulting from misuse of this publication or any violation of this License Agreement, including any infringement of copyrights or proprietary rights.

Warranty Disclaimer: The publisher does not guarantee that the information in this publication is error-free, or warrant that it will meet the users' requirements or that the operation of the publication will be uninterrupted or error-free. This publication is provided "as is" without warranty of any kind, either express or implied or statutory, including, without limitation, implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the results and performance of this publication is assumed by the user. In no event will the publisher be liable for any damages, including, without limitation, incidental and consequential damages and damages for lost data or profits arising out of the use or inability to use the publication. The entire liability of the publisher shall be limited to the amount actually paid by the user for the eBook or eBook license agreement.

Limitation of Liability: Under no circumstances shall Bentham Science Publishers, its staff, editors and authors, be liable for any special or consequential damages that result from the use of, or the inability to use, the materials in this site.
eBook Product Disclaimer: No responsibility is assumed by Bentham Science Publishers, its staff or members of the editorial board for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, advertisements or ideas contained in the publication purchased or read by the user(s). Any dispute will be governed exclusively by the laws of the U.A.E. and will be settled exclusively by the competent Court at the city of Dubai, U.A.E. You (the user) acknowledge that you have read this Agreement, and agree to be bound by its terms and conditions.

Permission for Use of Material and Reproduction

Photocopying Information for Users Outside the USA: Bentham Science Publishers Ltd. grants authorization for individuals to photocopy copyright material for private research use, on the sole basis that requests for such use are referred directly to the requestor's local Reproduction Rights Organization (RRO). The copyright fee is US $25.00 per copy per article, exclusive of any charge or fee levied. To contact your local RRO, please contact the International Federation of Reproduction Rights Organisations (IFRRO), Rue du Prince Royal 87, B-1050 Brussels, Belgium; Tel: +32 2 551 08 99; Fax: +32 2 551 08 95; E-mail: [email protected]; URL: www.ifrro.org. This authorization does not extend to any other kind of copying by any means, in any form, and for any purpose other than private research use.

Photocopying Information for Users in the USA: Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by Bentham Science Publishers Ltd. for libraries and other users registered with the Copyright Clearance Center (CCC) Transactional Reporting Services, provided that the appropriate fee of US $25.00 per copy per chapter is paid directly to Copyright Clearance Center, 222 Rosewood Drive, Danvers MA 01923, USA. Refer also to www.copyright.com.

CONTENTS

Foreword  i
Preface  ii
Contributors  iii

CHAPTERS

1. Introduction of robot vision on the aspects from configuration to measurement and control methods (D. Xu)  1
2. Hardware and software design of an embedded vision system (J. Liu)  15
3. Embedded vision positioning system based on ARM processor (W. Zou, D. Xu and J. Yu)  30
4. Collaboration based self-localization algorithm for humanoid robot with embedded vision system (Y. Liu, R. Xiong, S. Chen and J. Chu)  47
5. Application of vision sensor to seam tracking of butt joint in container manufacture (Z. J. Fang and D. Xu)  56
6. Vision system design and motion planning for table tennis robot (Z. T. Zhang, P. Yang and D. Xu)  83
7. Object recognition using local context information (N. Sang and C. Gao)  103
8. The structured light vision system and application in reverse engineering and rapid prototyping (B. He and S. Chen)  119

Subject Index  132

FOREWORD

It has long been known that vision systems are essential for autonomous robots: they allow a robot to recognize the environment it is in and to detect and measure the objects it needs to track or avoid. A vision system is to a robot what eyes are to a person, and by now almost all robots are equipped with one. Traditionally, a vision system consists of cameras and a computer, with an image grabber card inserted in the computer to capture images from the cameras.
The large size and high energy cost of the traditional vision system keep it out of micro robots and other autonomous robots that require a small, light vision sensing system. A vision system is, after all, a kind of sensing system that provides the particular information a robot needs; ideally, like other sensors such as distance, position, and velocity sensors, it should have a compact structure and present exactly the specified sensing information. Thanks to developments in electronics and optical engineering, the compact vision system, that is, the embedded vision system integrating the camera and the processing unit, has emerged in recent years. Of course, the computing power of an embedded vision system is not as strong as that of the computer in a traditional vision system, so how to make the most of its limited computing capability is a question that needs to be investigated.

This e-book, edited by Prof. De Xu, provides a broad overview of the embedded vision system and addresses the questions above. Chapters written by experts in their respective fields expose the reader to a variety of topics ranging from system configuration to algorithm design and applications. I believe that this e-book will be very useful to basic investigators and engineers interested in the latest advances in this exciting field.

Professor Qinglin Wang
Beijing Institute of Technology
Beijing 100190, China

PREFACE

A vision system is very important for robots: it lets them sense the environments in which they work and detect the objects on which they will operate. An effective vision system can greatly improve a robot's flexibility, adaptability, and intelligence. Vision systems have been widely applied on various robots, such as mobile robots, industrial robots, underwater robots, and flying robots. However, most vision systems currently used by robots consist of traditional cameras and image capture devices, with the image processing algorithms executed on PC-based processors. The separate components make the traditional vision system large and heavy, which keeps it out of many applications that require a small, light vision system. Recently, embedded vision systems such as smart cameras have developed rapidly: vision systems are becoming smaller and lighter while their performance keeps growing. The algorithms in an embedded vision system have their own characteristics because of resource limitations such as CPU clock frequency, memory size, and architecture.

The motivation of this e-book is to provide a platform for the engineers, researchers, and scholars in the robotics, machine vision, and automation communities to exchange their ideas, experiences, and views on embedded vision systems. The chapters cover the configuration and algorithm design of embedded vision systems and the applications of smart cameras on different autonomous robots. We invited eminent scientists and engineers in the field of visual measurement and control for robotics and automation to contribute their current work, and the actual effectiveness in practice is emphasized for all the methods and systems presented. Our goal is to provide an excellent e-book on embedded vision systems that can serve as a guide and an advanced reference for readers ranging from postgraduates in universities to engineers in industry.
I would like to thank all my colleagues and friends who have contributed to this e-book.

De Xu
Institute of Automation, Chinese Academy of Sciences
Beijing 100190, China

CONTRIBUTORS

Shengyong Chen: College of Information Engineering, Zhejiang University of Technology, Hangzhou 310014, P. R. China
Shouxian Chen: State Key Laboratory of Industrial Control Technology, and Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, P. R. China
Jian Chu: State Key Laboratory of Industrial Control Technology, and Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, P. R. China
Zao Jun Fang: Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, P. R. China
Changxin Gao: Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan 430074, P. R. China
Bingwei He: Fuzhou University
Jia Liu: Robotics Institute, Beihang University, Beijing 100083, P. R. China
Yong Liu: State Key Laboratory of Industrial Control Technology, and Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, P. R. China
Nong Sang: Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan 430074, P. R. China
Rong Xiong: State Key Laboratory of Industrial Control Technology, and Institute of Cyber-Systems and Control, Zhejiang University, Hangzhou 310027, P. R. China
De Xu: Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, P. R. China
Junzhi Yu: Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, P. R. China
Zheng Tao Zhang: Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, P. R. China
Wei Zou: Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, P. R. China

CHAPTER 1

Introduction of Robot Vision on the Aspects from Configuration to Measurement and Control Methods

De Xu
Key Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, P. R. China. E-mail: [email protected]

Abstract: Robot vision is a science and technology that draws on multiple disciplines to enable a robot to see. The aspects of it that attract the attention of researchers in the robotics community include the architecture and calibration of visual systems, visual measurement methods, and visual control approaches. Each of these aspects is surveyed and analyzed in light of current work, and its likely development is also predicted. Visual measurement principles ranging from parallax to knowledge, and visual control strategies ranging from traditional control methods to humanoid approaches, are regarded as having a promising future.

Keywords: Architecture, calibration, visual measurement, visual control, robot vision.

INTRODUCTION

Vision is an important sensing modality through which a robot obtains information about the environment in which it is located; it is as important to a robot as eyes are to a person. Acting as the robot's eyes, the robot vision system consists of cameras and performs its measurements based on perspective geometry, so the results depend directly on the camera parameters.
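To make this dependence concrete, the following is the standard pinhole projection model from the computer vision literature (a textbook formulation added here for reference, not taken from the chapter). A world point (X, Y, Z) is mapped to pixel coordinates (u, v) through the intrinsic parameters collected in the matrix K and the extrinsic parameters (rotation R, translation t):

```latex
% Pinhole projection: world point (X, Y, Z) -> pixel (u, v).
% K holds the intrinsic parameters (focal lengths f_x, f_y and the
% principal point (c_x, c_y)); [R | t] holds the extrinsic ones;
% s is an arbitrary projective scale factor.
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = K \begin{bmatrix} R & t \end{bmatrix}
    \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}.
```

Estimating K is exactly the intrinsic calibration problem, and estimating R and t the extrinsic one, that the Camera Calibration subsection below surveys.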
Unfortunately, the eyes of a robot are not yet so flexible or adaptable that they can be compared with human eyes on the same level.

Robot vision is considered a branch of computer vision, and published definitions of computer vision read much like definitions of robot vision; see the Britannica Concise Encyclopedia [1] and Wikipedia [2]. The Britannica Concise Encyclopedia defines computer vision as follows: "Field of robotics in which programs attempt to identify objects represented in digitized images provided by video cameras, thus enabling robots to 'see.' Much work has been done on stereo vision as an aid to object identification and location within a three-dimensional field of view. Recognition of objects in real time, as would be needed for active robots in complex environments, usually requires computing power beyond the capabilities of present-day technology." [1] Wikipedia defines computer vision as "the science and technology of machines that see."

The difference between computer vision and robot vision is shown in Fig. 1 [3]: robot vision can be regarded as control-oriented computer vision. Real-time measurement and control based on the vision system are what essentially distinguish robot vision from computer vision and machine vision. From this real-time control standpoint, robot vision can be defined as follows: robot vision is the science and technology of providing the positions and orientations of objects, or information about the environment, extracted in real time from camera images, to robot controllers in image space and/or three-dimensional (3D) Cartesian space, in order to control the robot's motions or behaviors.

In this chapter, the discussion of robot vision focuses on the main aspects of vision system configuration and calibration, visual measurement, and visual control. The other aspects of robot vision, such as image processing and object recognition, should also take the requirements of real-time measurement into account; they are omitted here because of length limitations and because their principles are similar to those of computer vision.

The rest of this chapter is arranged as follows. In Section 2, the development of vision system configuration and calibration is discussed. In Section 3, current visual measurement methods are reviewed. Visual control methods and strategies are analyzed in Section 4. The future direction of robot vision is predicted in Section 5. Finally, Section 6 presents the conclusion.

[Figure 1 is a diagram relating robot vision, computer vision, machine vision, cognitive vision, and biological vision to neighboring fields such as artificial intelligence, automatic control, signal processing, robotics, machine learning, optics, physics, image processing, statistics, geometry, optimization, mathematics, neurobiology, imaging, and smart cameras.]

Figure 1: Relation between computer vision and various other fields [3].

VISION SYSTEM CONFIGURATION AND CALIBRATION

Vision System Configuration

A typical vision system consists of cameras, an image grab card, and a computer, as shown in Fig. 2(a). Several kinds of cameras are available for traditional vision systems. The most common camera output signal is analog, in the Phase Alternating Line (PAL) or National Television System Committee (NTSC) mode. Digital cameras, which output digital images over an IEEE 1394 interface, are also often used. The image grab card captures the images from the cameras and must be selected according to the camera type: with analog cameras, it converts the analog signals to digital with A/D conversion and captures images of a specified size; with digital cameras, it captures the images directly via the IEEE 1394 interface. The card is inserted into the computer via a PCI or ISA bus, and one card can generally connect at most four cameras. The computer accesses the images from the grab card, processes them, and extracts the desired features. The image processing algorithms are executed on PC-based processors, and the separate components make the traditional vision system large and heavy, which keeps it out of many applications that require a small, light vision system.

[Figure 2(a) shows cameras feeding an image grab card and a computer, with image processing on the computer; Figure 2(b) shows the chain image sensing, image processing, feature extraction, result output.]

Figure 2: Vision system configuration, (a) traditional vision system, (b) embedded vision system.

One alternative is to select an embedded computer such as a PC104; however, a vision system built around an embedded computer is not essentially changed. Another choice is to develop an embedded vision system. As shown in Fig. 2(b), an embedded vision system integrates image sensing, image processing, and feature extraction, making it much more like an image sensor that outputs image features directly. A DSP is often employed as the processor that controls the image sensing and executes the image processing algorithms, while an FPGA executes general-purpose processing steps such as Gaussian filtering and Canny edge detection in order to improve the real-time performance of the image processing. Obviously, the computing capability of an embedded vision system is weaker than that of a traditional PC-based vision system, so the algorithms used in embedded vision systems must be carefully designed to keep image processing efficient.
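To make the Fig. 2(b) pipeline concrete, here is a minimal functional sketch in Python that chains the stages named in the text: image sensing, Gaussian filtering and Canny edge detection as the image processing step, feature extraction, and result output. OpenCV stands in for the DSP/FPGA implementations, and the camera index and filter thresholds are illustrative assumptions, so treat this as a sketch of the data flow rather than embedded code.

```python
# A minimal functional sketch of the Fig. 2(b) pipeline:
# image sensing -> image processing -> feature extraction -> result output.
import cv2
import numpy as np

def sense_image(camera_index: int = 0) -> np.ndarray:
    """Image sensing stage: grab one frame and convert it to grayscale."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def process_image(gray: np.ndarray) -> np.ndarray:
    """Image processing stage: the two general-purpose steps named in the
    text, Gaussian filtering followed by Canny edge detection."""
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.0)
    return cv2.Canny(smoothed, 50, 150)

def extract_features(edges: np.ndarray) -> np.ndarray:
    """Feature extraction stage: here simply the edge-pixel coordinates;
    a real system would compute task-specific features."""
    ys, xs = np.nonzero(edges)
    return np.column_stack((xs, ys))

if __name__ == "__main__":
    features = extract_features(process_image(sense_image()))
    # Result output stage: an embedded vision system would send features,
    # not whole images, to the robot controller.
    print(f"{len(features)} edge-pixel features extracted")
```

The design point the chapter makes is visible in the structure: only the compact feature array leaves the system, which is what lets an embedded vision system behave like a sensor rather than a computer.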
Camera Calibration

The intrinsic parameters of the cameras are required by all the vision measurement methods except 2D measurement, and the extrinsic parameters are additionally needed for stereo vision and structured-light vision. It is therefore necessary to investigate the calibration of both the intrinsic and the extrinsic camera parameters. In addition, the relation between the cameras and the manipulator's end-effector, known as the hand-eye relation, can be regarded as the extrinsic parameters of the camera relative to the end-effector frame; calibrating this relation is called hand-eye calibration.

Camera calibration methods fall into two categories: given-pattern methods and target-free methods. In the given-pattern category, a prepared pattern, such as a cubic or planar chessboard pattern, is chosen as the calibration target. For example, Faugeras and Toscani [4] proposed a method using a cubic target to calibrate the intrinsic parameters, including the normalized focal lengths and the principal point in the image plane; the extrinsic parameters of the camera relative to the target are obtained at the same time. Faugeras' method adopts the linear pinhole model and ignores lens distortion. Tsai [5] provided a linear method to calibrate the focal length, the radial lens distortion, and the extrinsic parameters. The targets used in the above methods must be manufactured in 3D with high-precision features, including the given points and frame axes; the difficulty of manufacturing cubic targets has led many researchers to develop new calibration methods based on planar targets. Zhang [6] developed a linear method with a planar grid pattern that calibrates the radial distortion and the linear intrinsic parameters separately: the distortion is corrected roughly, and then the linear parameters are calculated. Because the correction and the linear calibration yield different estimates of the camera's optical center, the two steps are carried out alternately several times, and images from multiple views are needed to improve the calibration accuracy [6].
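As a concrete illustration of plane-based calibration in the spirit of Zhang's method [6], the sketch below uses OpenCV's calibrateCamera, whose plane-based approach follows Zhang: chessboard corners detected in several views are matched to their known planar coordinates, and the intrinsic matrix, distortion coefficients, and per-view extrinsics are estimated jointly. The board dimensions, square size, and image paths are assumptions for the example, not values from the chapter.

```python
# Plane-based calibration sketch in the spirit of Zhang's method [6],
# using OpenCV. Board size, square size, and image paths are assumed
# for illustration; at least one chessboard view must be found.
import glob
import cv2
import numpy as np

CORNERS = (9, 6)      # inner corners per row and column (assumed board)
SQUARE_MM = 25.0      # chessboard square edge length in mm (assumed)

# Model points: the grid lies in the Z = 0 plane of the target frame.
model = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
model[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2) * SQUARE_MM

object_points, image_points = [], []
for path in glob.glob("calib/*.png"):       # multiple views, as [6] requires
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:
        # Refine the detected corners to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        object_points.append(model)
        image_points.append(corners)

# Jointly estimate the intrinsic matrix K, the distortion coefficients,
# and the per-view extrinsics (rotation and translation of the target).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix K:\n", K)
```

The planar target is exactly the manufacturing simplification the text describes: the model points need only be accurate within a plane, which a printed chessboard achieves easily, whereas a cubic target requires precise 3D machining.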
Kim et al. [7] presented a method to calibrate the intrinsic and extrinsic parameters using a planar pattern with concentric circles. In [8], a calibration method based on a single view of a planar grid pattern was proposed for nonlinear-model cameras with large lens distortion. It takes the intersections of the grid pattern as feature points: the distortion correction factors are adjusted with an iterative algorithm, and the imaged positions of the intersections are modified until they satisfy line equations. When the points on each curve in the image of the grid pattern fit a line equation in image space, the correction is complete and the distortion correction factors are determined. The camera's optical center is obtained through a Hough transform, and then a group of linear equations and a simple cubic equation are established in the corrected image space, from which the remaining camera parameters are deduced.

In the target-free category, no special target is used for camera calibration. One sub-category consists of motion-based methods, also known as self-calibration, which use special camera motions to calibrate the intrinsic and extrinsic parameters. For example, Basu et al. [9] realized camera self-calibration with four groups of camera motions, including two translations in orthogonal directions; nonlinear equations formed from the specified motions are solved for the intrinsic parameters. Du et al. [10] conducted camera self-calibration by rotating the camera around specified axes. Ma [11] proposed a self-calibration method with two groups of translating motions in 3D space, in which one group consists of three translations along three orthogonal axes. Hu, Wu, et al. [12-17] presented a self-calibration method based on planar second-order curves and camera rotations, and also developed a method with multiple groups of orthogonal in-plane translations. Hartley [18] provided a calibration method for a stationary camera based on three camera rotations at the same position. Another sub-category consists of environment-based methods that exploit special features. For example, Benallal et al. [19] used the
