Communications in Computer and Information Science 234
Yanwen Wu (Ed.)
Computing and Intelligent Systems: International Conference, ICCIC 2011, Wuhan, China, September 17-18, 2011, Proceedings, Part IV
Volume Editor Yanwen Wu Huazhong Normal University 152 Luoyu Road Wuhan, Hubei, 430079, China E-mail: [email protected]
ISSN 1865-0929 e-ISSN 1865-0937 ISBN 978-3-642-24090-4 e-ISBN 978-3-642-24091-1 DOI 10.1007/978-3-642-24091-1 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: Applied for CR Subject Classification (1998): C.2, H.4, I.2, H.3, D.2, J.1, H.5
© Springer-Verlag Berlin Heidelberg 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface
The present book includes extended and revised versions of a set of selected papers from the 2011 International Conference on Computing, Information and Control (ICCIC 2011), held in Wuhan, China, September 17–18, 2011. The ICCIC is the most comprehensive conference focused on the various aspects of advances in computing, information and control, providing a chance for academic and industry professionals to discuss recent progress in the area. The goal of this conference is to bring together researchers from academia and industry as well as practitioners to share ideas, problems and solutions relating to the multifaceted aspects of computing, information and control. Being crucial for the development of this subject area, the conference encompasses a large number of related research topics and applications. In order to ensure a high-quality international conference, the reviewing was carried out by experts from home and abroad, with all low-quality papers being rejected. All accepted papers are included in the Springer CCIS proceedings. Wuhan, the capital of the Hubei province, is a modern metropolis with unlimited possibilities, situated in the heart of China. Wuhan is an energetic city, a commercial center of finance, industry, trade and science, with many international companies located here. Having scientific, technological and educational institutions such as Laser City and Wuhan University, the city is also an intellectual center. Nothing would have been achieved without the help of the Program Chairs, organization staff, and the members of the Program Committees. Thank you. We are confident that the proceedings provide detailed insight into the new trends in this area. August 2011
Yanwen Wu
Organization
Honorary Chair Weitao Zheng
Wuhan Institute of Physical Education, Key Laboratory of Sports Engineering of General Administration of Sport of China
General Chair Yanwen Wu
Huazhong Normal University, China
Program Chair Qihai Zhou
Southwestern University of Finance and Economics, China
Program Committee Simon Pietro Romano
Azerbaijan State Oil Academy, Azerbaijan
International Program Committee
Ming-Jyi Jang        Far-East University, Taiwan
Tzuu-Hseng S. Li     National Cheng Kung University, Taiwan
Yanwen Wu            Huazhong Normal University, China
Teh-Lu Liao          National Cheng Kung University, Taiwan
Yi-Pin Kuo           Far-East University, Taiwan
Qingtang Liu         Huazhong Normal University, China
Wei-Chang Du         I-Shou University, Taiwan
Jiuming Yang         Huazhong Normal University, China
Hui Jiang            WuHan Golden Bridge e-Network Security Technology Co., Ltd., China
Zhonghua Wang        Huazhong Normal University, China
Jun-Juh Yan          Shu-Te University, Taiwan
Dong Huang           Huazhong University of Science and Technology, China
JunQi Wu             Huazhong Normal University, China
Table of Contents – Part IV
The Impact of Computer Based Education on Computer Education . . . 1
    Yang Bo, Li Yingfang, Li Junsheng, and Sun Jianhong
Factors Affecting the Quality of Graduation Project and Countermeasures . . . 10
    Xu Haicheng, Sun Jianhong, Xiao Tianqing, and Fu Jinwei
Social Network Analysis of Knowledge Building in Synergistic Learning Environment . . . 18
    Wang Youmei
Query Rewriting on Aggregate Queries over Uncertain Database . . . 25
    Dong Xie and Hai Long
Ranking Tags and Users for Content-Based Item Recommendation Using Folksonomy . . . 32
    Shimin Shan, Fan Zhang, Xiaofang Wu, Bosong Liu, and Yinghao He
Research on Repair Algorithms for Hole and Cracks Errors of STL Models . . . 42
    Hu Chao, Yang Li, and Zhang Ying-ying
“Polytechnic and Literature Are All-Embracing”—Training and Practice of Game Software Talents on Comprehensive Quality . . . 48
    Yan Yu, Jianhua Wang, and Guoliang Shi
Design of Mobile Learning Scenario Based on Ad Hoc . . . 54
    Zong Hu
SET: A Conceptual Framework for Designing Scaffolds in Support of Mathematics Problem Solving in One-to-One Learning Environment . . . 59
    Wang Lina, Chen Ling, and Kang Cui
Study on Multi-agent Based Simulation Process of Signaling Game in e-Commerce . . . 66
    Qiuju Yin and Kun Zhi
Research on Estimation of Nanoparticles Volumes on Rough Surface . . . 73
    Yichen Song and Yu Song
Main Factors Affecting the Adoption and Diffusion of Web Service Technology Standards . . . 81
    Caimei Hu
The Application of Software Maintainability Design in the Intelligent Warehouse Archives System . . . 88
    Fei Yang
An E-Business Service Platform for Agreement Based Circulation of Agricultural Products of Fruits and Vegetables . . . 93
    Liwei Bao, Luzhuang Wang, Zengjun Ma, Jie Zhang, and Qingchu Lv
Two Improved Proxy Multi-signature Schemes Based on the Elliptic Curve Cryptosystem . . . 101
    Fengying Li and Qingshui Xue
Online Oral Defense System Based on Threshold Proxy Signature . . . 110
    Fengying Li and Qingshui Xue
Happy Farm an Online Game for Mobile Phone . . . 120
    Quanyin Zhu, Hong Zhou, Yunyang Yan, and Chuanchun Yu
Analysis and Intervention on the Influencing Factors of Employee’s Job Insecurity . . . 129
    Qiong Zou
An Analysis on Incentive Mechanism for Agents under Asymmetric Information Condition . . . 136
    Zhao Chenguang and Xu Yanli
A Study on Managerial Performance Evaluation . . . 144
    Zhao Chenguang, Xu Yanli, and Feng Yingjun
A Study on Contribution Rate of Management Elements in Economic Growth . . . 151
    Zhao Chenguang, Xu Yanli, and Feng Yingjun
An Analysis on Real Contagion Mechanism of Financial Crisis . . . 159
    Xu Yanli and Jiang Hongmei
Teaching and Learning Reform of Visual Foxpro Programming . . . 167
    Xiaona Xie and Zhengwei Chang
Numerical Simulation of a New Stretch Forming Process: Multi-Roll Stretch Forming Process . . . 172
    Haohan Zhang, Mingzhe Li, Wenzhi Fu, and Pengxiao Feng
Research on the Maturity of the CDIO Capability Evaluation System for Engineering Students . . . 181
    Liang Hong and XingLi Liu
The Web Data Extracting and Application for Shop Online Based on Commodities Classified . . . 189
    Jianping Deng, Fengwen Cao, Quanyin Zhu, and Yu Zhang
Image Registration Algorithm Using an Improved PSO Algorithm . . . 198
    Lin-tao Zheng and Ruo-feng Tong
The Research of Network Intrusion Detection Based on Danger Theory and Cloud Model . . . 204
    Zhang Ruirui, Li Tao, Xiao Xin, and Shi Yuanquan
A Network Security Situation Awareness Model Based on Artificial Immunity System and Cloud Model . . . 212
    Zhang Ruirui, Li Tao, Xiao Xin, and Shi Yuanquan
A Research into the Tendency of Green Package Design . . . 219
    Zhang Qi, Jiang Xilong, and He Weiqiong
Time-Frequency Filtering and Its Application in Chirp Signal Detection . . . 224
    Xiumei Li and Guoan Bi
Hunting for the “Sweet Spot” by a Seesaw Model . . . 233
    Haiyan Li, Jianling Li, Shijun Li, and Zhaotian Liu
Multi-objective Optimization Immune Algorithm Using Clustering . . . 242
    Sun Fang, Chen Yunfang, and Wu Weimin
A Novel Hybrid Grey-Time Series Filtering Model of RLG’s Drift Data . . . 252
    Guo Wei, Jin Xun, Yu Wang, and Xingwu Long
The Final Sense Reverse Engineering Electroencephalography . . . 260
    Mohammed Zahid Aslam
Eutrophication Assessment in Songbei Wetlands: A Comparative Methods . . . 265
    Han Bingxue
Capital Management of Real Estate Corporations under Tightening of Monetary Policy . . . 273
    Liu Qingling and Li Xia
Scotopic Visual Image Mining Based on NR-IQAF . . . 280
    Fengbo Tian, Xiafu Lv, Jiaji Cheng, and Zhengxiang Xie
Extraction of Visual-Evoked Potentials in Rat Primary Visual Cortex Based on Independent Component Analysis . . . 289
    Zhizhong Wang, Hong Wan, Li Shi, and Xiaoke Niu
A Novel Feature Extraction Method of Toothprint on Tongue in Traditional Chinese Medicine . . . 297
    Dongxue Wang, Hongzhi Zhang, Jianfeng Li, Yanlai Li, and David Zhang
Stability and Bifurcation of an Epidemic Model with Saturated Treatment Function . . . 306
    Jin Gao and Min Zhao
Study of Monocular Measuring Technique Based on Homography Matrix . . . 316
    Jia-Hui Li and Xing-Zhe Xie
An AHP Grey Evaluation Model of the Real Estate Investment Risk . . . 325
    Ba Xi, Zhang Yan, and Wu Yunna
The Error Analysis of Automated Biochemical Analyzer . . . 335
    Chen Qinghai, Wu Yihui, Li Haiwen, Hao Peng, and Chen Qinghai
LOD-FDTD Simulation to Estimate Shielding Effectiveness of Periodic Structures . . . 342
    Chen Xuhua, Yi Jianzheng, and Duan Zhiqiang
Lossless Compression of Microarray Images by Run Length Coding . . . 351
    A. Sreedevi, D.S. Jangamshetti, Himajit Aithal, and A. Anil Kumar
Wavelet-Based Audio Fingerprinting Algorithm Robust to Linear Speed Change . . . 360
    Jixin Liu and Tingxian Zhang
Design and Implementation for JPEG-LS Algorithm Based on FPGA . . . 369
    Yuanyuan Shang, Huizhuo Niu, Sen Ma, Xuefeng Hou, and Chuan Chen
A Comparative Study on Fuzzy-Clustering-Based Lip Region Segmentation Methods . . . 376
    Shi-Lin Wang, An-Jie Cao, Chun Chen, and Ruo-Yun Wang
A Novel Bandpass Sampling Architecture of Multiband RF Signals . . . 382
    Fachang Guo and Zaichen Zhang
Classification of Alzheimer’s Disease Based on Cortical Thickness Using AdaBoost and Combination Feature Selection Method . . . 392
    Zhiwei Hu, Zhifang Pan, Hongtao Lu, and Wenbin Li
A Robust Blind Image Watermarking Scheme Based on Template in Lab Color Space . . . 402
    YunJie Qiu, Hongtao Lu, Nan Deng, and Nengbin Cai
Digital Circuit Design and Simulation of a New Time-Delay Hyperchaotic System . . . 411
    Zhang Xiao-hong and Zhang Zhi-guang
Hospital Information System Management and Security Maintenance . . . 418
    Xianmin Wei
Design and Research of a Multi-user Information Platform Interface in Rural Areas . . . 422
    Xianmin Wei
Research on Fluorescence Detection System of Ca2+ . . . 427
    Bao Liu, Sixiang Zhang, Wei Zhou, and Yinxia Chang
Channel Estimation for OFDM Systems with Total Least-Squares Solution . . . 433
    Tongliang Fan and Dan Wang
Detection of Double-Compression in JPEG2000 by Using Markov Features . . . 441
    Zhao Fan, Shilin Wang, Shenghong Li, and Yujin Zhang
High Speed Imaging Control System Based on Custom Ethernet Frame . . . 450
    Dawei Xu, Yuanyuan Shang, Xinhua Yang, and Baoyuan Han
Design and Implement of Pocket PC Game Based on Brain-Computer Interface . . . 456
    Jinghai Yin and Jianfeng Hu
Authentication Service for Tactical Ad-Hoc Networks with UAV . . . 464
    Dong Han, Shijun Wang, and Laishun Zhang
An Adjustable Entropy Interval Newton Method for Linear Complementarity Problem . . . 470
    Hu Sha and Yanqiang Wu
Applied Research of Cooperating Manipulators Assignments Based on Virtual Assembly Technology . . . 476
    Shizong Nan and Lianhe Yang
Kusu Cluster Computing Introduction and Deployment of Applications . . . 484
    Liang Zhang and Zhenkai Wan
Protein Function Prediction Using Kernel Logistic Regression with ROC Curves . . . 491
    Jingwei Liu and Minping Qian
A Novel Algorithm of Detecting Martial Arts Shots . . . 503
    Zhai Guangyu and Cao Jianwen
Method of Bayesian Network Parameter Learning Based on Improved Artificial Fish Swarm Algorithm . . . 508
    Yan Wang and Liguo Zhang
A Research of the Mine Fiber Communication System Based on GA . . . 514
    ZuoMing
Research of Resource Management and Task Scheduling Based on the Mine Safety Grid . . . 521
    Xuxiu
Author Index . . . 527
The Impact of Computer Based Education on Computer Education* Yang Bo, Li Yingfang, Li Junsheng, and Sun Jianhong Engineering College, Honghe University, Yunnan Mengzi, China, 661100 [email protected]
Abstract. It is well known that computer technology impacts modern education deeply, and the first area to be affected is computer education. In this paper, the results of a case study on the impact of computer based education (CBE) on computer education are presented. For better results in teaching computer knowledge, the teaching method and curriculum program should be reformed as computer technology develops. Some detailed reference countermeasures are discussed. Keywords: Computer based education, CBE, teaching method, learning method, computer education.
1 Introduction Computer technology is developing rapidly; in these circumstances, “computer science education research is an emergent area and is still giving rise to a literature [1].” What is the challenge in computer education? Reference [2] considered that the real challenge in computer education is to avoid the temptation to re-invent the wheel. Computers are a revolutionary human invention, so we might think that teaching and learning about computers requires a new kind of education, different from traditional education, and that computing education can therefore ignore the hundreds of years of work in education, cognitive science, and the learning sciences; this is exactly the temptation to be avoided. Over the past decade the teaching model and learning method have changed a lot, mostly due to the introduction of information technology, especially in higher education. Computer based education (CBE) is usually used as an assistant education method, helping students learn after class. Furthermore, CBE works as a major education method in distance learning and is used to replace conventional classroom teaching. As a result, many educational institutions use the Internet for collaborative learning in a distributed educational process. Students can learn at any time, simply facing their computer, no matter where they stay. Computer education has taken the lead in adopting CBE, and on a scale far beyond that of other subjects.
* Pecuniary aid of the Honghe University 《Fundamentals of Computer》 Course Construction Fund (WLKC0802).
Many excellent methodologies for computer education have been proposed by top computer education researchers, such as those in [1][2][3][4]. In this paper, we focus on the impact of CBE on computer education based on a case study. The results of a case study carried out in the computer department of the Engineering College of Honghe University (HU) will be presented. The research examined the current education methods of the computer department of HU via a questionnaire investigation. The research work of this paper is one part of a series of our research works at HU.
2 Background HU is situated in Mengzi, Yunnan province, and serves a diverse population of full- and part-time, national and international students. In the 2009–2010 academic year, more than ten thousand students were enrolled in 12 undergraduate colleges with 33 undergraduate specialties [5]. In the same year, 367 students were enrolled in the major of Computer Science and Technology, one of those specialties, with 28 full-time instructors supporting the routine instruction work. The structure of technical post titles is shown in Fig. 1.
Fig. 1. The title of technical post structure (Professor 4%, Associate Professor 4%, Lecturer 81%, Assistant 11%)
As a local university, cultivating applied talents for society is the goal of HU, and to this end HU’s administration is promoting instructional technology as a crucial part of higher education for faculty and students. In response, the administrators of the computer department felt obliged to reform the teaching model and restructure the curriculum program as the educational circumstances change. Because CBE is widely adopted in computer education, a strong possibility existed that the students’ learning model was changing, especially as the Internet and multimedia technology greatly expand access to knowledge sources. For better results in computer education, the teaching method and curriculum program should be reformed as the students’ learning model changes [6]. How should we reform the teaching model and restructure the curriculum? And why did we investigate the impact of CBE on computer education? Because the impact of CBE on computer education is changing the students’ learning method, and the change in students’ learning method is an important factor impacting computer education.
3 The Questionnaires The questionnaire, which took about 5 minutes to complete, consisted of four parts focusing on each student’s general information, perceptions of CBE, opinions on the effects of CBE, and opinions on the shortcomings of CBE. The questionnaire was constructed from close-ended questions (multiple choice, scaled questions) and open-ended questions. We issued a total of 150 questionnaires, a number accounting for 41 percent of the total enrolled students; among them, 142 valid responses were received. A. Student’s General Information The first question, “What is your sex?”, surveys the general information of students for analyzing the relationship between gender and computer education. The results are shown in Fig. 2.
Female
46% 54%
Fig. 2. Students Gender
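As a quick arithmetic check on the survey administration figures above, the following minimal Python sketch recomputes the sampling fraction and the valid response rate; the counts come from the text, while the variable names are ours:

    # Survey figures reported in Sections 2 and 3.
    enrolled = 367  # students enrolled in the major in 2009-2010
    issued = 150    # questionnaires issued
    valid = 142     # valid responses received

    sampling_fraction = issued / enrolled  # ~0.409, the "41 percent" in the text
    valid_rate = valid / issued            # ~0.947, so about 95% of forms were usable

    print(f"sampling fraction: {sampling_fraction:.1%}")
    print(f"valid response rate: {valid_rate:.1%}")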
B. The Daily Computer Time of Male and Female Students From Fig. 3 we can see that the male students spend more time on the computer than the female students. Most female students spend 2 to 4 hours on the computer a day, but nearly 40% of male students spend 4 to 6 hours on the computer a day, and the ratio of male students spending more than 6 hours per day reaches 13%.
Fig. 3. Daily computer time of the students (Male: within 2 hours 6%, 2 to 4 hours 44%, 4 to 6 hours 38%, more than 6 hours 13%; Female: within 2 hours 18%, 2 to 4 hours 82%, 4 to 6 hours 0%, more than 6 hours 0%)
C. The Computer Time Spent on Learning It is well known that many students spend much of their computer time on things other than study. As reference [7] found, many students are addicted to the Internet; they spend most of their computer time watching movies, playing online games, shopping, and so on. In computer-related majors this problem is particularly acute. As Fig. 4 shows, the female students do better than the male students in that they spend more of their computer time on study, but the difference is not significant.
Fig. 4. The percentage of computer time spent on learning, by gender (categories: less than 10%, 10% to 30%, 30% to 50%, more than 50%)
D. Computer Based Learning Approach With this question, we attempted to survey which kinds of computer based learning approaches are the students’ favorites. The surveyed results are shown in Fig. 5. Offline courseware was chosen by 72% of students; although most offline courseware was downloaded from Excelsior Course websites or commercial education websites, at the least we understand that the students prefer offline learning. It is important to consider offering a downloadable edition if we publish our courseware online.
Fig. 5. Computer based learning approaches (Offline courseware 72%; learning forums, Excelsior Course websites and commercial education websites between 19% and 33%)
E. Reasons for Using Computer-Assisted Learning
How can we know whether CBE is accepted by students? To answer this question, we designed scaled questions; the questions and surveyed results are shown in Table 1. F. What Teaching Method Is the Students’ Favorite? The purpose of this question is to investigate which teaching method the students favor. As Fig. 6 shows, most students like learning methods that combine the traditional teaching method with CBE. However, we find that the traditional classroom teaching method still cannot be replaced.
Fig. 6. The students’ choice of teaching method (Traditional classroom teaching method 12%, Pure autonomic learning 23%, Combination of traditional teaching method with CBE 65%)
G. What Types of Courses via Computer-Assisted Learning Will Be Better? One question in our questionnaire was: “What types of courses via computer-assisted learning will be better?” The curriculum of the Computer Science and Technology major of HU can be divided into four categories: Humanities and Arts courses; Basic Theoretic Foundation courses; Software Application Skill courses (such as Photoshop and 3D Max); and Programming Language and Development Platform courses. The surveyed result is shown in Fig. 7. From the figure, we can see that many students like computer-assisted learning for every category except the Basic Theoretic Foundation courses, especially for the Software Application Skill courses. Many students also noted that learning English or another foreign language assisted by computer is very useful.
Fig. 7. What types of courses via computer-assisted learning will be better (Software Application Skill courses highest at 88%, Basic Theoretic Foundation courses lowest at 12%; the remaining categories fall between 42% and 51%)
4 Countermeasure Analysis From the preceding description it is easy to see that CBE has a significant effect on computer education. The traditional education model is being challenged by CBE in computer education. As Fig. 6 shows, our surveyed results indicate that 23% of interviewees like the pure autonomic learning method through CBE; on the contrary, only 12% of surveyed students still favor the traditional classroom teaching method. Obviously, CBE is effecting and promoting change in computer education and the education structure. Under the new environment, how should the teaching model respond to such change? Based on the preceding description, we propose the following countermeasures: A. Reforming the Curriculum Program We met a ticklish problem when we discussed how to design the curriculum program for the major of Computer Science and Technology. Students complained that their courses included too many theoretic courses, which are not directly conducive to real work. On the other hand, as undergraduate education, the talent training objectives require students to master a solid theoretical foundation. Courses such as the Software Application Skill courses and the Programming Language and Development Platform courses enhance the students’ practical abilities, but we cannot arrange too many of them in the instruction plan because the curriculum is limited by credit hours. If we offer courseware for these courses to support students studying after school, the problem can be solved; as Fig. 7 shows, delivering these kinds of courses using computer based instruction is acceptable to most students. B. Assignment and Students’ Grades Assignments provide practice and can expand content that is taught during class time, allowing it to be reinforced for deeper understanding. Assignments help students become active learners, in charge of their own learning, goals, and outcomes. For students of universities and colleges, assignments are a necessary way to offer teachers another form of assessment by which they can gauge students’ understanding and, as importantly, students’ misunderstandings. Assignments are usually considered in assessing students’ grades. However, many students no longer complete their assignments independently; they directly search for the answers through Google, Baidu and Yahoo. As Fig. 8 shows, 20% of surveyed students always use a search engine to find answers for their assignments, and 34% of students frequently do the same. Therefore, teachers should consider the type of assignment to prevent the occurrence of such situations, and attempt to find a fair way to assess students’ grades.
Fig. 8. Do you often use a search engine to get answers for your assignments? (Always 20%, Frequently 34%, Sometime 46%, Never 0%)
C. About Instruction Method Reforming The traditional teaching method in China emphasizes lecturing, with little interaction with students in the classroom. As described above, if we want to emphasize the instruction of theoretical foundation knowledge in the classroom, we must expect students to spend more time studying practice courses such as software application skills and programming. It is important to let students understand the connection between theory and practical application. Reference [8] proposed a method addressing the role of participatory teaching methods in the computer science classroom. The list of methods includes brainstorming, directed dialogues, small discussion groups, role playing, games, debates, panel discussions, and Socratic dialogues. The author has used such methods in Computers and Society classes and, to a limited degree, in Compiler Design, Computer Architecture and Operating Systems classes, and believes that such techniques have a place in the computer science classroom. As a reference method it may not be suitable for every teacher, but we can modify it to fit our teaching. In short, a good teaching method is not unique; it will differ for different students and different teachers. D. Teacher Team Building Students with strong practical ability are a basic requirement of computer-related majors. However, while academic teachers have a wealth of theoretical knowledge, most of them lack practical experience in software development [9]. How can this contradictory problem be solved? Reference measures are as follows:
• Teacher training must be intensified. The teacher training plans of general universities and colleges usually send their teachers to famous universities to study. However, for teachers of computer-related majors, it is better to go to famous software development companies or computer manufacturers to improve their practical ability.
• We can consider hiring senior engineers from companies and enterprises with cooperative relationships as the students’ advisors [9].
Table 1. Reasons for Using Computer-Assisted Learning (Disagree / Acquiesce / Agree / Strongly agree)
• Able to develop classroom teaching to enhance the level of individual knowledge: 0% / 16% / 49% / 35%
• In-depth study and improve their learning ability: 7% / 16% / 53% / 23%
• Extensive after-school learning, increase learning efficiency: 7% / 23% / 42% / 28%
• The need for completing homework: 14% / 21% / 56% / 9%
• Free choice of learning content, according to their own learning goals: 2% / 12% / 51% / 35%
• They can choose their own study time and decide how long to learn: 7% / 19% / 49% / 26%
• Excellent teaching courseware is more suitable than classroom teaching for computer courses education: 26% / 23% / 30% / 21%
• Personal interests / study habits: 9% / 26% / 40% / 26%
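One convenient way to read Table 1 is to collapse the two positive categories (“Agree” plus “Strongly agree”) into a single agreement score per item. The short Python sketch below does exactly that; the percentages are copied from the table, while the abbreviated item labels are ours:

    # Top-2-box summary of Table 1: agreement = Agree + Strongly agree.
    # Each row: (item, disagree, acquiesce, agree, strongly_agree), in percent.
    items = [
        ("Extends classroom teaching",          0, 16, 49, 35),
        ("In-depth study / learning ability",   7, 16, 53, 23),
        ("After-school learning efficiency",    7, 23, 42, 28),
        ("Completing homework",                14, 21, 56,  9),
        ("Free choice of learning content",     2, 12, 51, 35),
        ("Free choice of study time",           7, 19, 49, 26),
        ("Courseware better than classroom",   26, 23, 30, 21),
        ("Personal interests / study habits",   9, 26, 40, 26),
    ]

    # Sort items from most to least agreed with and print the scores.
    for name, _, _, agree, strong in sorted(items, key=lambda r: -(r[3] + r[4])):
        print(f"{name:35s} {agree + strong:3d}% agreement")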
5 Conclusions In this paper, we have presented the results of a case study showing the impact of CBE on computer education. It is well known that computer technology impacts modern education deeply, and the first area to be affected is computer education. For better results in teaching computer knowledge, we have discussed countermeasures in this paper. Closely observing the effects of our reform measures and improving on deficiencies in a timely manner will be the focus of our future work.
References
[1] Fincher, S., Petre, M.: Computer Science Education Research, p. 1. Taylor & Francis Group, London (2004)
[2] Almstrum, V.L., Hazzan, O., Guzdial, M., Petre, M.: Challenges to computer science education research. In: Proceedings of the 36th SIGCSE Technical Symposium on Computer Science Education, SIGCSE 2005, pp. 191–192. ACM Press, New York (2005)
[3] Randolph, J., Julnes, G., Sutinen, E., Lehman, S.: A Methodological Review of Computer Science Education Research. Journal of Information Technology Education 7, 135–162 (2008)
[4] Holmboe, C., McIver, L., George, C.: Research Agenda for Computer Science Education. In: 13th Workshop of the Psychology of Programming Interest Group, Bournemouth, UK, pp. 207–223 (April 2001)
[5] http://iro.uoh.edu.cn/Aboutus.asp (May 2010)
[6] Sun, J., Zhu, Y., Fu, J., Xu, H.: The Impact of Computer Based Education on Learning Model (in press)
[7] Sun, J.: Solving Strategies Research for the Negative Impact of Computer Technology on Education. In: 2010 Second International Workshop on Education Technology and Computer Science, ETCS 2010, vol. 1, pp. 671–674 (2010)
[8] Jones, J.S.: Participatory teaching methods in computer science. In: Proceedings of the Eighteenth SIGCSE Technical Symposium on Computer Science Education, St. Louis, Missouri, United States, pp. 155–160 (1987)
[9] Xu, H., Sun, J., Xiao, T., Fu, J.: Factors Affecting the Quality of Graduation Project and Countermeasures (in press)
Factors Affecting the Quality of Graduation Project and Countermeasures Xu Haicheng, Sun Jianhong, Xiao Tianqing, and Fu Jinwei Engineering College, Honghe University, Yunnan Mengzi, China, 661100 [email protected]
Abstract. The graduation project (thesis) and defense play an important role in guaranteeing educational quality and implementing the desired educational goals. In recent years, many universities and colleges have had to face a serious problem: the quality of graduation projects is declining, and the graduation project and defense risk becoming a formality. In this paper, based on a case study, we analyze the main causes of the decline in graduation project quality and then propose several corresponding countermeasures. Keywords: Graduation project, Graduation thesis, Educational quality, Education.
1 Introduction The graduation project (thesis) and defense play an important role in guaranteeing educational quality and implementing the desired educational goals. According to the Department of Higher Education of China (DHEC) document No. 14 of 2004, “The Graduation Project (Thesis) is an important part of teaching in order to achieve training objectives. The Graduation Project (Thesis) has an irreplaceable role in training students to seek truth, training students’ ability in scientific research, and strengthening students’ social awareness to improve their capacity and quality of general practice and so on. The Graduation Project (Thesis) is an important form of expression combining education with productive work and social practice, and is an important practical aspect of training students’ creative ability, practical ability and pioneering spirit. Meanwhile, the Graduation Project (Thesis) is also an important measure for evaluating the quality of teaching; it is a basic requirement for students to graduate and get a degree. [1]” From this document, it is clear that the highest educational authority of China has strict requirements for, and attaches great importance to, the graduation project (thesis). The graduation projects (theses) of computer-related majors generally have practical value and usually take the completion of a project as the goal; therefore, we only discuss the situation in which a project is taken as the graduation project (thesis) in this paper. In recent years, we have had to face a serious problem: the quality of graduation projects in Computer Science and Technology at Honghe University (HU) is declining. The administrators of the Department of Computer felt obliged to conduct a survey aimed at
changing this negative situation. In this paper, we will present the results of our survey and then discuss the countermeasures.
2 Background HU is situated in Mengzi, Yunnan province, the capital city of the Honghe Hani & Yi Nationality Autonomous Prefecture. HU serves a diverse population of full- and part-time, national and international students. In the 2009–2010 academic year, more than ten thousand students were enrolled in 12 undergraduate colleges with 36 undergraduate specialties. Cultivating applied talents for society is the goal of HU, and to this end HU’s administration is promoting instructional technology as a crucial part of higher education for faculty and students [2]. As a local university, HU focuses on cultivating applied talents for the local society and promoting the development of the local economy. Therefore, students with strong practical ability are a basic requirement of the Department of Computer at HU. As a developing university, there are still many problems awaiting solution at HU, some of which are related to the focus topic of this paper. They are:
• Lack of education funding. This is a serious problem common to many universities and colleges in China. Like other developing universities and colleges, HU invested heavily in the construction of infrastructure facilities in recent years to prepare for the Undergraduate Education Evaluation, and has had to take on heavy liabilities for this. The lack of education funding has had a great impact on daily instruction.
• The structure of the faculty is irrational. The ratio of instructors with extensive education experience and research ability is still low, and it will take many years to change this.
• The educational environment has changed significantly in recent years, but policy has not changed in time.
In the next section we will analyze how these issues affect the quality of graduation projects.
3 Factors Affecting the Quality of Graduation Project A. The Problem of Students’ Attitude 1) They know that they will graduate, regardless of how their graduation project goes: In order to align with international universities, more and more universities and colleges in China have begun to use the credit system to replace the traditional fixed four-year education system. Adopting an academic credit system enables excellent students to finish their studies as soon as possible. However, the academic credit system is a new issue at institutions of higher learning in our country; not only the teachers and students but also the administrators retain an ideology that has not yet been changed by the academic credit system. Most universities and colleges in China are still “strict entry, easy out”. After four years of study, almost every student can graduate, regardless of whether their academic performance is good or bad. Accordingly, most students do not treat their graduation project seriously.
2) Positivity affected by the employment pressure: Although the credit system has been implemented in HU, the proposal of the graduation project is usually arranged in the seventh semester (the eighth semester usually being the last), and the graduation project itself is usually arranged in the study plan of the last semester. A few years ago, almost all students could get a job contract before the last semester; students could thus throw themselves into the graduation project in the last semester, their jobs having already been settled. Obviously, the quality of the graduation projects was guaranteed in that situation. Now the situation has changed: students have to take part in various recruitment examinations, and competition for a steady job is very severe. Under this severe employment pressure, students cannot focus on their graduation project, especially in the last semester. In HU, the administrators of the Department of Computer Science and Technology noticed that the quality of graduation projects has been declining year by year; this result is closely related to the above two reasons. B. Problems Caused by Rapid Development 1) Problems of admission policies: It is well known that the colleges and universities of China do not have the right of autonomous enrolment. The strict enrolment system is defined by the DHEC. The universities and colleges are divided into three levels: the key universities; the general universities; and the third-level universities and colleges (including private colleges and universities). After the unified exam held once a year, the key universities cream off the highest achievers first. HU, as a developing general university, finds it hard to attract excellent students. Especially now that the institutions have expanded enormously after successive years of enrollment expansion, we have to admit that the quality of the newly enrolled students has been declining. As Fig. 1 shows, the promotion rate of senior school graduates rose from 27.3% in 1990 to 72.7% in 2008 [3]. Meanwhile, teaching quality has become a problem that the universities find hard to guarantee.
Fig. 1. Promotion rate of senior school graduates, 1990–2008 (data from the Ministry of Education of the People’s Republic of China [1]). Rate (%): 1990: 27.3; 1991: 28.7; 1992: 34.9; 1993: 43.3; 1994: 46.7; 1995: 49.9; 1996: 51; 1997: 48.6; 1998: 46.1; 1999: 63.8; 2000: 73.2; 2001: 78.8; 2002: 83.5; 2003: 83.4; 2004: 82.5; 2005: 76.3; 2006: 75.1; 2007: 70.3; 2008: 72.7. Note: the promotion rate of senior secondary school graduates is the ratio of the total number of new entrants admitted to HEIs (including those admitted into the regular full-time courses run by TRVUs) to the total number of graduates of regular senior secondary schools of the current year.
2) Problems of the current graduation policy: Since 1977, when the entrance examination system of institutions of higher education was restored, the graduation policy has pursued “strict entrance, easy out”. Up to now, almost no student at HU has failed to graduate because a graduation project did not pass the defense, and the same is true of most universities and colleges in China. The graduation project defense is just a formality if no one ever fails it. This kind of policy was acceptable under the old admission policy of elite education. Now the situation has changed: as Fig. 2 shows, the gross enrolment rate of institutions of higher education (IHEs) rose sharply from 3.5% in 1991 to 23.3% in 2008, which means that higher education in China has been transformed from elite education to mass education. The graduation policy of “strict entrance, easy out” has become a factor causing the quality of education to decline. Many papers have discussed this issue, such as [4][5][6][7]. Sharing these authors’ view, we think the current graduation policy should be reformed to adapt to the new education environment.
Fig. 2. Gross enrolment rate of IHEs (ratio of the age 18–22 cohort), 1991–2008 (data from the Ministry of Education of the People’s Republic of China [1])
C. Lack of Qualified Advisors During the rapid development of higher education, the construction of the teaching team has found it difficult to keep pace; especially in many general universities and colleges, this is still a difficult problem. The overall structure of the faculty of the Department of Computer Science and Technology of HU is shown in Fig. 3. The total number of faculty is 28. From Fig. 3 we can see that most of them are lecturers; there are only 3 professors and associate professors combined. From the academic degree view, most teachers hold only a bachelor degree, as Fig. 4 shows. HU, as a developing university, cannot offer good enough payment to make outstanding teachers stay. The shortage of qualified advisors is one of the factors affecting the quality of graduation projects; building the teacher team still needs to be strengthened first in developing universities and colleges such as HU.
Fig. 3. The title of technical post structure (Professor: 1, 4%; Associate Professor: 2, 7%; Lecturer: 21, 75%; Assistant: 4, 14%)
Fig. 4. Academic degree structure of faculty (PhD: 2, 7%; Master: 8, 29%; Bachelor: 18, 64%)
D. Education Funding Shortage It is an indisputable fact that most universities and colleges in China are saddled with huge debt. This problem is due to rapid development: to pass the “Evaluation of Undergraduate Education Levels” launched by the Ministry of Education, many universities and colleges had to make large investments in infrastructure. Another serious problem is that the educational funding offered by the government was obviously insufficient: in Japan, 1.5% of GDP was used for higher education, and in America the ratio is likewise 1.5%, but China’s higher education funding accounts for only 1% of GDP [8]. The impact of the funding shortage on the graduation project is mainly as follows:
• The education funding shortage leads to poor treatment of graduation project advisors, whose enthusiasm for the work is therefore not high.
• Because of the education funding shortage, many aspects of graduation projects cannot be implemented according to plan as required.
4 Countermeasures Analysis How can we guarantee the quality of education? In the previous section, we discussed the factors affecting the quality of graduation projects. In this section, we propose some countermeasures for enhancing the quality of graduation projects.
A. The Flexible Scheduling of Graduation Project As described above, the students cannot pay attention to their graduation projects in the last semester under employment pressure. The best solution to this problem is to implement the credit system in place of the traditional four-year study plan. It is time to gradually change people’s view that university study is a four-year plan. In HU, although the credit system had been in place for many years before 2009, its implementation was in fact not perfect. Since 2009, freshmen have been charged by credits, which is a significant sign of a real credit system. Under the credit system there is no longer a common “last semester”: students make their own study plans to accomplish their studies, and the study plan relies heavily on the students. Reference measures are as follows:
• Assign an instructor for each freshman. We must note that the class advisor is not an advisor for study; the class advisor usually takes charge of a whole class.
• Provide an opportunity for students to apply for the proposal and final defense of graduation projects each semester.
• Combine implementation with professional practice. The graduation projects should be related to actual requirements.
B. Strict Policy Enforcement From the policy of the DHEC, we know that the top level of management of higher education attaches great importance to the graduation project. However, the implementation stage of the policy is also a determinant of whether graduation projects play an important role in guaranteeing educational quality. Every step of the educational procedure should be strictly controlled to guarantee educational quality. The reference strategies are as follows:
• Guarantee the quality of the proposal defense. The proposal, as an important part of the graduation project, specifies what the student will do, how he will do it, and how he will interpret the results. If the committee of the graduation project (thesis) lets a student pass the proposal defense, it must ensure that the student has done sufficient preliminary reading/research in the area of his choice, has thought about the issues involved, and is able to provide more than a broad description of the topic which he plans to investigate.
• More effort should be made to control and supervise the procedure of graduation projects. Students should meet regularly with their advisors to discuss the progress of the project and address problems in time.
• The quality of the final defense of graduation projects should be well controlled. Some noticeable practical measures should be taken to prove that the defense of the graduation project is not just a formality. Projects that do not reach the required standard should have their defense postponed.
C. Choose a Topic with Application Value The graduation projects of computer-related majors usually have practical value. Certainly, this depends on the choice of project topic and the quality of the project.
Consequently, we should ensure that graduation project topics come from the requirements of actual work. A project topic can be to update an existing system or to develop a new project for a work requirement, but not a hypothetical application project. D. Teacher Team Building The advisors of graduation projects are usually experienced teachers. However, although academic teachers have a wealth of theoretical knowledge, most of them lack practical experience in software development; because of this, many good teachers cannot act as good advisors of graduation projects. To solve this problem, teacher training must be intensified; alternatively, we can consider hiring senior engineers from companies and enterprises with cooperative relationships as the students’ advisors. The second method has two advantages:
• The students may have an opportunity to participate in actual project development to complete their graduation project.
• A student’s job may be settled thanks to his outstanding performance during the graduation project work.
5 Conclusions and Future Works In recent years, we have noticed that the quality of graduation projects of the computer major of HU is declining. Addressing this serious problem, we have discussed the reasons causing it and proposed several corresponding countermeasures. However, there are still many difficulties in solving the problem. The most typical one is the shortage of education funding, which is also a common problem of many developing universities and colleges in China; because of the lack of funds, many good measures cannot be effectively implemented. This is also an issue worthy of research in the future.
References
[1] Ministry of Education P.R.C.: With regard to the strengthening of common institutions of higher learning graduation project (thesis). Department of Higher Education (14) (2004)
[2] Sun, J., Zhu, Y., Xu, H.: The Impact of Computer Based Education on Learning Model (in press)
[3] Ministry of Education of the People’s Republic of China (June 2010), http://www.moe.edu.cn
[4] Shihua, D.: From ‘Strict Entry’ to ‘Strict Out’: new ideas to get rid of the stubbornness of exam-oriented education. Research on Education Development, 27–30 (June 2005) (in Chinese)
[5] Wang, Z.W.: After Enlarging Enrollment, Colleges and Universities Need ‘Easier Entrance and Stricter Exam Marking’ Schemes. Journal of Luoyang Teachers College 19(3), 77–79 (2000) (in Chinese)
[6] Wen, M.: Expand Enrollment and Easy Entry, Strict Out, pp. 61–63. Beijing Education, Higher Education (July 2003) (in Chinese)
[7] Zheng, D., Gong, B., Pan, X., Fei, C., Jiang, Y., et al.: Strict Entry, Strict Out, Imperative Way. Researches in Higher Education of Engineering 5, 32–34 (2002)
[8] Wang, J.-h.: Competitive and Non-competitive–Analytical Framework of Funding for Higher Education Provided by Government. Journal of China University of Geosciences (Social Sciences Edition) 10(1), 13–19 (2010) (in Chinese)
Social Network Analysis of Knowledge Building in Synergistic Learning Environment Wang Youmei Dept. of Educational Technology, Wenzhou University, Wenzhou city, China [email protected]
Abstract. Knowledge building is a meaning-making process of social negotiation and dialog. In a new environment adapted to the group learning context and supported by synergistic learning toolkits, this study explores the community structure, social relations and participation characteristics of classroom knowledge building by social network analysis. The results show that a social relation network with knowledge convergence comes into being when learners’ knowledge building is led by teachers, which makes the views and ideas of learners converge to support knowledge building; the paper also describes the participation characteristics of the learning community. This study can provide a new way of innovating learning technology in different cultural contexts.
Keywords: Knowledge building, Synergistic learning, Social Network Analysis.
1 Introduction Knowledge building is actually a term with different research contexts. While the constructivists highlight construction from the angle of the subjects of knowledge processing, Scardamalia and Bereiter (2003) focused on the procedure of knowledge processing; they defined knowledge building as “the continuous generation and development of valuable knowledge in the community” [1]. Social constructivism believes that knowledge is a constructive process of interaction with community members, and that it cannot exist apart from the social cultural context in which the individuals live. During the process of knowledge building, the learners and team members have common learning targets and key points, and share learning achievements via synergistic learning. Therefore, the community structure and relationships formed by the learners during knowledge building are a critical research perspective for enhancing the quality of knowledge building. Based on the new learning environment built by the synergistic learning toolkit, which adapts to the collective learning context, this paper uses the method of social network analysis, namely sociograms and social network matrix analysis, to explore the community relationships and participation features formed by the learners during interaction. These participation features and the interactive structure of the community to a large extent determine the quality of synergistic learning and the effects of collective knowledge building.
2 Research Review Since the anthropologist Barnes (1954) first used the concept of the “social network” to analyze the social structure of a Norwegian fishing village, the social network has been regarded as one of the simplest, clearest, and most persuasive research perspectives for analyzing social structure [2]. A “social network” is defined as a community of cooperative or competitive entities among which there is a certain linkage. In the cooperative community, each participant is called an actor, shown as a node in the graph, and the relationships between the participants are shown as the connection lines between the nodes. Social network analysis (SNA) is a useful tool for studying the relationships between people; it focuses on the social system level and pays attention to the functions of the entire interaction sphere and social context. Theoretical perspectives that use social network analysis center on the relationships between actors (network topology) rather than the features of the actors, and stress the mutual influence and dependence between actors which lead to the emergence of collective behaviors. The social network features of actors can be further observed and understood via social network information. Such graph analysis of social relations is widely used in social science, information science, economics, political science, computer networks, and other disciplines. In recent years, SNA has also been applied in networked synergistic learning, especially in knowledge building and interaction analysis of learning communities. For example, Garton, Haythornthwaite and Wellman recommended studying online networks, in particular networks of learners, with the method of social network analysis; their studies proved the applicability of social network analysis in an explicit learning context. Martinez and his colleagues compared the centrality of inter-community and intra-community information communication. Haythornthwaite analyzed the power distribution features of several relationship networks, and pointed out that time changes and media communication channels depend on the distribution of network power. Cho, Stefanone and Gay found that participants who owned more power in network information communication also had greater influence. Reffay and Chanier concentrated on the evolution of cohesion in the learning community. De Laat combined social network analysis and content analysis to demonstrate the centralized interactive pattern in the learning process, suggesting that the knowledge building process mainly focused on the information sharing and comparison phases (namely the first phase). The domestic scholar Chen Xiangdong (2006) systematically introduced the application of social network analysis in online learning research. Li Jiahou et al. (2007) deemed social network analysis a new method for educational communication research in the network age. Wang Lu (2006, 2007) studied the quality of knowledge building in interactively asymmetric network synergistic learning via social network analysis, with a focus on content analysis of interactive learning communication, and also explored the question from the angle of individual cases.
The studies mentioned above mainly address unstructured networked learning environments; few studies have adopted social network analysis to examine knowledge building in a well-structured, information-based classroom teaching environment, especially class knowledge building in a specific cultural context.
3 Technological Environment

To explore knowledge building in a specific cultural context, this study is based on the technological system of synergistic learning. After investigating current learning technology systems, Professor Zhu Zhiting and his team pointed out that the prevailing framework embodied a discrete way of thinking: educators and learners acted in a divided pedagogic framework, a concept of teaching in isolation that can hardly meet the demands of society. This isolation is manifested at several layers: at the interaction layer, learners and educators lack deep interaction; at the communication structure layer, an information aggregation mechanism is missing; at the information processing layer, collective thinking is absent; at the knowledge building layer, there are no tools for division of labor, cooperation and integration; and at the practice layer, information, knowledge, behaviors, emotions and values have no efficient linkage. Such isolation severely reduces the efficacy of teaching. Professor Zhu Zhiting therefore proposed the new synergistic learning framework and meta-model; the synergistic learning technological system is a practice-oriented innovative model [3].

The synergistic learning technological system is a well-structured, collective teaching-oriented learning environment. To support synergistic learning, the team developed synergistic learning tools comprising two sub-tools, synergistic marking and synergistic building [4]. This study adopted the synergistic building tool to analyze the social network of knowledge building; the tool realizes collective knowledge building and the schematic presentation of collective memory. Understanding multiple solutions to a problem helps inspire students' thinking; in particular, peers' solutions can more easily arouse the interest of other students in the class. For example, when students get their corrected exercise books back from the teacher, they want to know how others solved the problems they themselves got wrong. In addition to providing the reference answer, the teacher should also display correct solutions produced by other students. Though the teacher can adopt various display or discussion forms in class, once the class is over these short-lived collective memories disappear. If individual memories can be gathered into a collective memory and saved, students can be reminded of previous class discussions when reviewing for the final exam and be prevented from repeating the same mistakes. The synergistic building tool can be used for gathering, processing and saving this collective memory.

In a learning environment supported by the synergistic learning tools, the interactive structure and participation features of the community not only influence the learning atmosphere but also largely determine the quality of synergistic learning and the effects of collective knowledge building. This paper analyzes, via social network analysis, the relationships among learners, and between learners and others, during the networked synergistic learning process, and further analyzes and reflects on how the relationships between members affect synergistic cooperation.
Fig. 1. Teacher-Side of Synergistic Building Tool
4 Design of Research

A. Objective and content of research
This study adopted an experimental method, with the synergistic learning tools as the technological context, to explore the functions and effects of the visualized synergistic knowledge building tools in supporting knowledge building. By taking part in the synergistic class learning process and using the method of social network analysis, the study evaluated the effects of the synergistic building tools, analyzed the main path of knowledge building and processing, and provided a reference for realizing deep knowledge building in class.

B. Design and implementation of research
The experiment was carried out with the class teaching of an undergraduate course at a university as the experimental scenario. We conducted network teaching once a week and used the synergistic technological tools to support class teaching and students' learning. The purpose of the data collection was not disclosed before the experiment, to ensure the authenticity and reliability of the data and to reduce interference. Data were collected in two randomly selected classes; the teacher managed the process and used the synergistic building tools for discussion. The questionnaire, questions and topics were all designed before class. The discussion topics included open questions such as procedural evaluation and the course portfolio.
5 Analysis of Results

C. Features of the sociogram of the learning community
In SNA, if there are not many nodes, a sociogram can be used to represent the relationships between the actors, in which the nodes represent the
actors and the connection lines represent the linkages between the nodes. By contrast, if there are many nodes, a matrix is generally used to represent the social network. The subject of this study was a class of 38 students, so we adopted the graphic method (the sociogram) to analyze the relationships between the learners and the community structure of the class. We used circles No.1 to No.38 to represent the 38 community members, and arrowed connection lines to represent the relationships between them. The arrow of a connection line points to the sender of the content that was replied to, and a double-headed arrow represents a mutual reply. Based on the collected material, the sociogram is as follows:
Fig. 2. Sociogram of Members Participating in the Synergistic Learning
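The construction of such a sociogram can be sketched in a few lines of code. In the fragment below, a minimal sketch in which the reply records and the convention that member 0 denotes the teacher are hypothetical illustrations rather than the experiment's actual data, each reply becomes a directed edge from the replier to the sender of the content replied to, and simple degree counts recover the participation features read off the figure:

from collections import defaultdict

# Hypothetical reply records (replier, sender_replied_to); member 0 is
# the teacher. A record (12, 6) means learner 12 replied to learner 6.
replies = [(3, 0), (5, 0), (6, 0), (9, 0), (26, 0), (12, 6), (17, 26)]

out_edges = defaultdict(set)   # whom each member replied to
in_degree = defaultdict(int)   # how many replies each member received
for replier, sender in replies:
    out_edges[replier].add(sender)
    in_degree[sender] += 1

members = range(39)            # 0 = teacher, 1..38 = learners
# Learners whose contributions drew replies from peers (cf. No.6, No.26)
replied_to = [m for m in members if m != 0 and in_degree[m] > 0]
# Learners who never took part in the discussion at all
silent = [m for m in members
          if m != 0 and m not in out_edges and in_degree[m] == 0]
print("received replies:", replied_to)   # [6, 26]
print("non-participants:", len(silent), "learners")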
From the sociogram above we can directly see the features of the learning community in the synergistic building environment when visualized knowledge building tools are used for class teaching: the teacher plays the dominant role in the entire learning process and controls the whole teaching process. Most learners thought about and replied to the questions posed by the teacher; during the discussion, the teacher controlled the synergistic building tools, shared representative answers and questions with all the learners, and posed them as new questions. At the same time, some learners replied to questions raised by others: in the figure, learners No.6 and No.26 received replies from others. Several learners did not participate in this class learning at all. The collected material also shows that most learners only replied to problems posed by the teacher or by another student; only a few raised questions or added complements. Learners No.6 and No.26 put forward questions about the problems to be solved, which stimulated other learners' thinking; such learners are relatively more active thinkers.

D. Social network analysis of subject-oriented community
In the following, we analyze the social networks formed by the two discussion topics (procedural evaluation and the course portfolio) and draw the corresponding sociograms and matrices for comparison, as shown in Figures 3-5:
Fig. 3. Matrix Formed by the Discussion of Procedural Evaluation
Fig. 4. Sociogram of Members Participating in the Learning of Course Portfolio
Fig. 5. Matrix Formed by the Discussion of Course Portfolio
From the figures above we can see that most learners participated in the discussion of procedural evaluation, with learners No.4, No.6, No.21 and No.26 showing a higher degree of participation than the others. Participation in the discussion of the course portfolio was lower: 28.9% of the learners did not participate, and the participating learners had a lower connectivity degree than in the former topic discussion. There may be three reasons for this phenomenon: first, the learners' enthusiasm declined as time went by; second, the learners originally knew more about procedural evaluation than about the course portfolio and thus had more ideas about it; third, the main content of the two topics differs: for procedural evaluation the learners mainly answered "why", "which methods" and "differences", whereas for the course portfolio they mainly answered "what it is" and "what it contains".
6 Conclusions and Discussion

Synergistic learning is a new learning framework that reorganizes the current learning technology system based on the sociality of the cognitive subject, the dynamic theory of cognitive processes, and the ecological theory of knowledge building, in order to support class teaching and learning activities in a technological context. This study examined the effects of the visualized synergistic learning tools on the knowledge building process and analyzed the knowledge building procedures and the relationships formed between learners. The study found that using these tools for class teaching forms a social network of participants different from that of a common online discussion platform, one that can gather learners' thoughts more easily and rapidly, mainly via the teacher. However, some problems appeared at the same time: there are only a few direct connections between learners, and the learners do not have a strong sense of participation in synergistic knowledge building. Moreover, factors such as the experimental subjects, the topics discussed, and the time of the experiment introduce a certain instability. Since this study addresses a specific experimental subject, its validity and pertinence need to be further improved.
References [1] Scardamalia, M., Bereiter, C.: Knowledge Building. In: Encyclopedia of Education, 2nd edn. Macmillan Reference, New York (2003) [2] Reuven, A., Zippy, E., Gilad, R., Aviva, G.: Network analysis of knowledge construction in asynchronous learning networks (2003) [3] Zhu, Z.: Synergistic Learning: A New Learning Paradigm in Connected Age. In: Keynotes on Advanced Seminar of 1st Global ET Summit Conference, Shanghai, China, July 30 (2006) [4] Wang, Y.: Usability Test on Synergistic Learning Technology System. Research on e-education (03), 62–66 (2009)
Query Rewriting on Aggregate Queries over Uncertain Database Dong Xie and Hai Long Department of Computer Science and Technology, Hunan University of Humanities and Science and Technology, Loudi, Hunan Province, China [email protected]
Abstract. Integrity constraints effectively maintain the certainty of data, but in practice a database may fail to satisfy its integrity constraints. This paper introduces the notions of uncertain databases and candidate databases. Because tuple values in an uncertain database are uncertain and aggregation queries return different results for every candidate database, the work adjusts the query semantics to certain range semantics and employs query rewriting to compute the maximum and minimum of the aggregation attributes; the method can be executed efficiently as an SQL query in the database. The experiments show that the execution times of rewritten queries are longer than those of the original queries, but the overhead is acceptable. Keywords: Relational database, Uncertain data, Aggregation query, Query rewriting.
1 Introduction

Integrity constraints (ICs) effectively maintain data consistency and validity, enabling the data to conform to the rules governing real-world entities. Current commercial DBMSs support ICs and rely on them to keep every database certain. However, a real-world entity frequently corresponds to inconsistent data in the database, and a database may become uncertain with respect to a given set of integrity constraints when data from different sources are integrated. In this situation it can be difficult or undesirable to repair the database to restore consistency: the process may be too expensive, and useful data may be lost. One strategy for managing uncertain databases is data cleaning [1], which identifies and corrects data errors. However, these techniques are semi-automatic and infeasible for applications in which a user wants to adopt different cleaning strategies or needs to retain all the uncertain data. The trend toward autonomous computing is making the need to manage uncertain data more acute: an increasing number of applications must use data subject to a set of independently designed constraints, so a static approach in which a fixed set of constraints is enforced by data cleaning may not be appropriate. Current database technologies assume certain databases and do not support certain query results over uncertain data; when a database violates its ICs, the conflicting tuples do not
effectively convey the uncertain semantics of the data. An alternative approach is to employ the techniques of certain query answering to resolve inconsistencies at query time over uncertain databases [2]. In the uncertain-data setting, certain query answering is the problem of retrieving "certain" answers over uncertain databases with respect to a set of integrity constraints and query conditions. A relevant decision problem concerns aggregation queries under the range semantics. Aggregation is common in data warehousing applications, where uncertainties are likely to occur and keeping uncertain information may be useful. In the presence of aggregation operators, the notion of certain answers needs to be slightly adjusted: an aggregation query returns a different answer in every possible world and therefore has no single certain answer. Under the range semantics, a minimum and a maximum value of a numerical attribute of a relation can be computed. This paper employs an appropriate method that returns certain results with respect to user queries and integrity constraints. We propose the relevant concepts of uncertain database and candidate database. Since tuple values are uncertain and aggregation queries may return different results over an uncertain database, the query semantics are adjusted to certain range semantics, and the maximums and minimums of aggregation queries are computed by query rewriting: an original query is rewritten into an SQL query that returns certain results efficiently. The experiments show that the execution times of rewritten queries are longer than those of the original queries, but the overhead is acceptable.
2 Preliminary Example 1. Table 1 denotes balances of customers as guest(guestkey,balance), guestkey attribute is primary key. Relational tuples violate the primary key constraint. Given a query Q1 returns customers whose balances are greater than 900. Table 1. Guest guestkey
balance
t1
g1
1900
t2
g1
500
t3
g2
2600
t4
g3
2100
t5
g3
2600
Q1 returns {g1, g2, g3, g3}, but these answers are uncertain. g1's balance may be less than 900 (tuple t2), even though t1, which satisfies Q1, also contains g1; and g3 appears twice in the results, although, since guestkey is the primary key, the results should contain no repeated values. Consequently, {g2, g3} are the certain results of Q1: g2 because it is the single tuple with that key and it satisfies Q1, and g3 because, even though its key appears twice, both of its tuples satisfy Q1. We may therefore rewrite Q1 to obtain certain results.
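One standard way to realize such a rewriting for a selection query over a key-violating relation is to keep a key value only when no tuple sharing that key falsifies the condition. The self-contained sketch below (Python with the standard sqlite3 module, using the data of Table 1) illustrates the principle; it is an illustration of the idea, not necessarily the exact rewriting produced by the method of this paper:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE guest (tid TEXT, guestkey TEXT, balance INTEGER);
INSERT INTO guest VALUES ('t1','g1',1900), ('t2','g1',500),
                         ('t3','g2',2600), ('t4','g3',2100), ('t5','g3',2600);
""")

# Original query Q1: uncertain answers, with a duplicate for g3.
q1 = "SELECT guestkey FROM guest WHERE balance > 900"

# Rewritten query: a key value is certain only if no tuple with that key
# violates the condition, so g1 (because of t2) is filtered out.
q1_certain = """
SELECT DISTINCT guestkey FROM guest g
WHERE g.balance > 900
  AND NOT EXISTS (SELECT 1 FROM guest g2
                  WHERE g2.guestkey = g.guestkey AND g2.balance <= 900)
"""
print([r[0] for r in conn.execute(q1)])         # ['g1', 'g2', 'g3', 'g3']
print([r[0] for r in conn.execute(q1_certain)]) # ['g2', 'g3']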
Candidate data are subsets of the uncertain data that contain certain tuples with respect to the integrity constraints. The concept keeps all tuples rather than forcing uncertain tuples to be deleted or ignored; candidate data show how the data could be cleaned, helping users understand that the data are uncertain.

Definition 1. Let I be an instance of database D. Given a set of integrity constraints Σ of D, if I satisfies Σ, this is denoted I ╞ Σ. If every instance I ╞ Σ, D is a certain database; otherwise D is an uncertain database.

Definition 2. Candidate database (CDB). Given a set of integrity constraints Σ and an instance I of database D, if an instance I′ ╞ Σ and there is no I* ╞ Σ such that (I − I′) ∪ (I′ − I) ⊃ (I − I*) ∪ (I* − I), then I′ is a candidate database of I with respect to Σ. For a query Q, if every CDB D′ ∈ Rep(D, IC) satisfies D′ ╞ Q(t′), then the tuple t′ = (t1, …, tn) is a certain result; that is, x1, …, xn are assigned t1, …, tn respectively. If n = 0, Q is a Boolean query: for every CDB D′ ∈ Rep(D, IC), if D′ ╞ Q(t′) the query is true, otherwise it is false.
Example 2. Table 1 has four candidate databases: D1 = {t1, t3, t4}, D2 = {t1, t3, t5}, D3 = {t2, t3, t4}, D4 = {t2, t3, t5}. Every candidate database is a certain database, and each is as close as possible to the uncertain database of Table 1. If I′ is a candidate database of instance I with respect to a set of integrity constraints Σ, there is no I* with I′ ⊂ I* ⊆ I and I* ╞ Σ. If such an I* existed, there would be a tuple t ∈ I* with t ∉ I′; since I* ╞ Σ, any tuple t′ conflicting with t satisfies t′ ∉ I′ and t′ ∉ I*, which yields △(I, I′) ⊃ △(I, I*), so I′ would not be a candidate database according to Definition 2.
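For a key constraint, the candidate databases can be enumerated mechanically: group the tuples by key value and choose exactly one tuple per group. A minimal sketch (standard library only) reproduces the four candidates above:

from itertools import product
from collections import defaultdict

# Tuples of Table 1 as (tuple_id, guestkey, balance)
tuples = [('t1','g1',1900), ('t2','g1',500), ('t3','g2',2600),
          ('t4','g3',2100), ('t5','g3',2600)]

# Group tuples by the (violated) primary key; every candidate database
# keeps exactly one tuple per key value, so it satisfies the constraint
# while staying as close to the original instance as possible.
groups = defaultdict(list)
for t in tuples:
    groups[t[1]].append(t)

candidates = [set(choice) for choice in product(*groups.values())]
for i, cdb in enumerate(candidates, 1):
    print(f"D{i}:", sorted(t[0] for t in cdb))
# Four candidates: {t1,t3,t4}, {t1,t3,t5}, {t2,t3,t4}, {t2,t3,t5}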
3 Aggregation Query

Since tuple values in uncertain databases are uncertain, an aggregation query returns a different result for every candidate database and thus has no single certain result. Therefore, this work employs the range semantics to compute the minimum and maximum values of a numerical attribute of a relation.

Definition 3. Interval. Given a set of integrity constraints Σ and an instance I of database D, if an aggregation query q returns values ranging from a to b over every candidate database, denoted v ∈ [a, b], and the certain results of q over I lie in the interval [a, b], this is denoted I ╞ q ∈ [a, b]; a is a lower bound and b an upper bound. If no proper subinterval also contains the certain results, [a, b] is the optimal certain result: a is the maximum lower bound and b the minimum upper bound.
An aggregation query q is written as:

SELECT G, agg1(e1) e1, …, aggn(en) en FROM R [WHERE W] GROUP BY G

where G is a list of grouping attributes and agg1(e1), …, aggn(en) are aggregation expressions. qG denotes the query obtained from q by removing aggregation and grouping:

SELECT G FROM R [WHERE W]

Definition 4. Query range. Let D be a database, q an aggregation query, Σ a set of integrity constraints, and qG the aggregation- and grouping-free version of q. Over every candidate database I, if t is a certain result of qG over D and every aggregation value v ranges from glb (the maximum lower bound) to lub (the minimum upper bound), then (t, glb, lub) is a certain result of q over D.
Example 3. Consider a relation R(K1, K2, K3, K4), shown in Table 2, in which K1 is the (violated) key. Its candidate databases are the following: I1 = {t1, t3, t5}, I2 = {t1, t3, t6}, I3 = {t1, t4, t5}, I4 = {t1, t4, t6}, I5 = {t2, t3, t5}, I6 = {t2, t3, t6}, I7 = {t2, t4, t5}, I8 = {t2, t4, t6}.

Table 2. Table R

     K1   K2   K3   K4
t1   c1   n1   h    1000
t2   c1   n2   h    100
t3   c2   n2   h    2000
t4   c2   n2   l    200
t5   c3   n2   l    3000
t6   c3   n1   l    NULL
Consider a query summing K4. The sums over the eight candidate databases, {6000, 3000, 4200, 1200, 5100, 2100, 3300, 300}, are not certain results; they are expressed as the interval [300, 6000]. Now suppose a query that sums K4, grouped by K2, over the tuples whose K3 value is 'h'. Tuples t1, t2 and t3 satisfy the original query. The query range covers the K2 values n1 and n2, and the boundary values of the sum must be obtained for each K2 value. The interval values over the candidate databases are {(n1,1000),(n2,2000)}, {(n1,1000),(n2,2000)}, {(n1,1000),(n2,0)}, {(n1,1000),(n2,0)}, {(n1,0),(n2,2100)}, {(n1,0),(n2,2100)}, {(n1,0),(n2,100)} and {(n1,0),(n2,100)}, respectively. As a result, the query range is {(n1, 0, 1000), (n2, 0, 2100)}. Over the candidate databases, the summed value for K2 = 'n1' ranges from 0 (the maximum lower bound) to 1000 (the minimum upper bound); the minimum upper bound is attained in I1, I2, I3
and I4, and the maximum lower bound in the other candidate databases. For K2 = 'n2' the summed value ranges from 0 (the maximum lower bound) to 2100 (the minimum upper bound); the minimum upper bound is attained in I5 and I6, and the maximum lower bound in the other candidate databases. We compute the maximum lower bound and the minimum upper bound by query rewriting: given a set of integrity constraints Σ and an SQL query q over a relational database, the method rewrites the original query q into a new SQL query that obtains certain results. We next give the rewriting algorithm for original queries.

Algorithm 1. RewriteAgg(q, Σ)
Input: aggregation query q; a set of integrity constraints Σ
Output: a rewritten query Q for q
BEGIN
  cand = projection sets of R ∪ subsets of the maxima and minima of the aggregation DISTINCT attributes and the PRIMARY KEY;
  min_cand1 = KEY projection sets of cand without repeated KEYs;
  filter_cand1 = the tuples of min_cand1 whose key value is the key value of a tuple of R with a negated selection predicate or a NULL selection predicate value;
  min_cand2 = min_cand1 − filter_cand1;
  min_cand2 = projection KEY subsets of min_cand2;
  min_cand3 = projection subsets of G and the minimum of the aggregation attributes after joining cand with min_cand2;
  max_cand = the summation maxima of the aggregation attribute subsets in cand, grouped by G;
  end_cand = maxima and minima of max_cand left-joined with min_cand3 by G;
  Q = cand + min_cand1 + min_cand2 + min_cand3 + max_cand + end_cand;
END
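The result that RewriteAgg is designed to compute can be cross-checked on this tiny instance by brute force, since all eight candidate databases can be enumerated. The sketch below illustrates the range semantics directly rather than the rewriting's internal candidate sets, and it reproduces the query range {(n1, 0, 1000), (n2, 0, 2100)} of Example 3:

from itertools import product
from collections import defaultdict

# Table 2 rows as (tid, K1, K2, K3, K4); K1 is the violated key.
rows = [('t1','c1','n1','h',1000), ('t2','c1','n2','h',100),
        ('t3','c2','n2','h',2000), ('t4','c2','n2','l',200),
        ('t5','c3','n2','l',3000), ('t6','c3','n1','l',None)]

groups = defaultdict(list)
for r in rows:
    groups[r[1]].append(r)

bounds = {}   # K2 value -> (glb, lub) of SUM(K4) over tuples with K3 = 'h'
for choice in product(*groups.values()):        # the 8 candidate databases
    sums = defaultdict(int)
    for r in choice:
        if r[3] == 'h':
            sums[r[2]] += r[4] or 0             # a NULL K4 contributes 0
    for k2 in ('n1', 'n2'):
        s = sums[k2]                            # 0 if the group is absent
        lo, hi = bounds.get(k2, (s, s))
        bounds[k2] = (min(lo, s), max(hi, s))

print(bounds)   # {'n1': (0, 1000), 'n2': (0, 2100)}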
4 Experimental Analysis

The experimental setting was as follows: OS: Windows XP; CPU: AMD Athlon(tm) 64 X2 2.01 GHz; memory: 1 GB; RDBMS: SQL Server 2005. Query rewriting is implemented in Java. The experiments employ the first and the sixth queries (Q1 and Q6) of TPC-H [7], generate uncertain data of different sizes, and consider the following parameters: (1) Data size (s): s = 0.1 GB, 0.5 GB, 1 GB and 2 GB; 1 GB of data contains approximately 8,000,000 tuples. (2) In the uncertain databases, n tuples violating the key constraint share a common key value; for example, if n = 2, every such key value appears twice.
(3) The percentage of uncertainty (p): if p = 50%, 1 GB of data contains 4,000,000 illegal tuples. Every relation has the same p value; we set p = 0, 1, 5, 10, 20 and 50, including high p values to stress the method. We also consider the number of projection attributes and aggregation attributes in the queries (Table 3).

Table 4 shows that the rewritten queries take longer to execute than the original queries. Since Q1 has more aggregation and projection attributes, its execution time is longer than that of Q6. The overhead is affected by the result size of the original query without grouping: removing all groupings from Q1 yields result sets larger than those of the other queries. Algorithm 1 computes intermediate result sets to produce the candidate set filter_cand1, and this set produces further intermediate sets. Although the rewritten queries run longer than the original ones, the overhead is acceptable. Table 5 gives the overheads of the rewritten query of Q6 for different data sizes (100 MB, 500 MB, 1 GB, 1.5 GB and 2 GB) with n = 2. To compare certainty, we keep the number of violating tuples constant: for 400,000 violating tuples, p = 2.5, 3.3, 5, 10 and 50 corresponds to 2 GB, 1.5 GB, 1 GB, 500 MB and 100 MB, respectively. The overhead changes linearly with the database size.

Table 3. Features of queries

      number of projection attributes   number of aggregation attributes
Q1    10                                8
Q6    1                                 1

Table 4. Execution time (s = 1 GB, p = 5%, n = 2)

      original query   rewritten query
Q1    35               512
Q6    32               59

Table 5. The rewritten query of Q6 with different data sizes

         original query   rewritten query
s = 0.1  2                2
s = 0.5  16               30
s = 1    32               59
s = 1.5  44               87
s = 2    56               120
5 Conclusion

This paper employs an appropriate method to return certain results over uncertain databases. The query semantics are adjusted to certain range semantics, and the boundary values of aggregation queries are computed by query rewriting: an original query is rewritten into a new SQL query that returns certain results efficiently. The experiments show that the execution times of rewritten queries are longer than those of the original queries, but the overhead is acceptable. Future work will consider aggregation queries with joins over multiple relations in uncertain databases.
References [1] Dasu, T., Johnson, T.: Exploratory Data Mining and Data Cleaning. John Wiley, New York (2003) [2] Motro, A., Smets, P.: Uncertainty Management in Information Systems. From Needs to Solutions. Kluwer Academic Publishers, Boston (1997) [3] Xie, D., Yang, L.M., Pu, B.X., et al.: Aggregation query rewritings based on clusters in inconsistent databases. Journal of Chinese Computer Systems 29(6), 1104–1108 (2008) [4] Decan, A., Wijsen, J.: On First-order query rewriting for incomplete database histories. In: Proc. of the Int’l Conf.on Logic in Databases, pp. 72–83 (2008) [5] Soliman, M.A., Chang, K.C., Ilyas, I.F.: Top-k Query Processing in Uncertain Databases. In: Proceedings of the IEEE International Conference on Data Engineering, pp. 896–905 (2007) [6] Ruzzi, M.: Efficient Data Integration under Integrity Constraints: a Practical Approach: [PhD Thesis]. Roma: University of Rome La Sapienza (2006) [7] Transaction Processing Performance Council. TPC BENCHMARK H (Decision support) standard specification (2010), http://www.tpc.org/tpch
Ranking Tags and Users for Content-Based Item Recommendation Using Folksonomy

Shimin Shan 1, Fan Zhang 1, Xiaofang Wu 1, Bosong Liu 1, and Yinghao He 2

1 School of Software, Dalian University of Technology, Dalian, Liaoning Province, China
2 School of Management, City Institute, Dalian University of Technology, Dalian, Liaoning Province, China
[email protected]
Abstract. Selecting tags to describe a given item is the key to developing practical content-based item recommender systems using folksonomy. A novel strategy is proposed in this paper in which tags are selected on the basis of an analysis of users' tagging behavior patterns. Following the strategy, an algorithm was implemented to rank users by the representativeness of their tagging behaviors. The results of statistical experiments show that the proposed strategy and algorithm can rank tagging users and can be used to discover tagging solutions that are widely accepted by the majority of users; they can therefore be utilized to refine the tags used to describe items. Keywords: folksonomy, content-based recommendation, tagging behavior.
1 Introduction

Collaborative tagging systems allow internet users to classify and manage online resources with custom annotations. Tagging is not new, but the aggregation of personalized tags all over the web exhibits interesting features. For example, the distribution of tags in collaborative tagging systems has been shown to converge to a stable power-law distribution over time, without any centrally controlled vocabulary constraining the individual tagging actions [1]. Owing to its convergent features and simplicity, collaborative tagging has become one of the most useful ways of categorizing or indexing content when there is no "librarian" to classify items or there are too many items to be classified by a single authority [2]. In contrast with a taxonomy, the unstructured vocabulary and the network formed by inter-related users, resources (items) and tags are commonly referred to as a folksonomy [3]. As the product of proactive tagging behaviors performed by users, tags can be regarded as meaningful identifiers for classifying item collections. On this assumption, many researchers agree that tags can be used to improve personalized information retrieval and recommendation; personalized recommendation using folksonomy has thus become an important issue in the field of E-Commerce.
Corresponding to the two classical strategies of personalized recommendation, content-based filtering and collaborative filtering, tags have been utilized to facilitate recommendation in two ways. On the one hand, tags are employed as meaningful metadata describing the content of items and serve as universal descriptors across multiple domains for content-based recommendation. On the other hand, tags are treated as semantic indicators of users' preferences and as the foundation for measuring user similarity in collaborative filtering recommendation. Although some work has been done, practical personalized recommendation systems based on folksonomy remain a long way off. Among all the obstacles, the thorniest is the semantic ambiguity of tags, a problem central to developing tag-related applications of all kinds. In particular, for content-based item recommendation, semantic ambiguity makes it hard to judge whether a tag is suitable for describing a given item. The semantic ambiguity of tags has many origins: besides the multiple meanings inherent in polysemy, users' different perceptions of the same meaning of a term are also a key source. It is not difficult to see that, in tagging applications, a user's perception of resources is supported by his or her background knowledge and is revealed by the user's tagging behavior patterns. Corresponding to this diversity of perception, different users may assign the same tag to multifarious items, or assign irrelevant tags to a given item, according to their personal understandings. Because of perception, the tag-based item model differs from models based on natural attributes. The core problem, and the basis of content-based item recommendation, is how to construct a feature space in which item similarity can be computed against a common standard. From this perspective, whether a tag is proper for identifying a given item does not depend on whether the tag describes the natural essence of the item, but on whether the tag-item pair is concordant with the common sense held by the majority of tagging users. Therefore, a strategy is proposed in this paper for discovering representative tag-item pairs. In the strategy, a tag-item pair is referred to as a tagging behavior, and tagging users are employed as mediators for locating representative tagging behaviors. Thus, candidate tags can be refined, and the tag-based space used for finding similar items can be constructed more efficiently. Furthermore, an algorithm is introduced to realize the strategy. The rest of the paper is organized as follows: Section 2 provides a brief review of content-based filtering recommendation using folksonomy; the proposed algorithm is presented in Section 3, and statistical experiments are reported in Section 4. Finally, Section 5 gives concluding remarks and future work.
2 Related Works

The increasing volume of information on the web has motivated the wide use of recommender systems to support users in getting the information they need [4]. According to the strategy used for collecting and evaluating ratings, recommender systems can be classified into content-based, collaborative filtering, and hybrid recommender systems [5].
Comparatively speaking, content-based recommendation has the longer history. With this strategy, recommender systems analyze the features of the user's preferred item collection to estimate his preferences, and items whose features are similar to the user's preference model are recommended. In contrast with collaborative filtering, basic content-based filtering has the disadvantage that recommendations can only be made within a specialized domain, because computing item similarity requires a consistent attribute space. Folksonomy, also known as collaborative tagging, social classification, social indexing and social tagging, is formed by all web users through contributing tags, and it may be one of the right ways to handle the bottleneck of content-based recommendation mentioned above. In a folksonomy, item descriptions are made on the basis of users' perception, in the form of tagging, so resources of diverse domains can be compared in the same conceptual space. Folksonomy has therefore been proposed to support content-based recommendation, and several works have been reported. In [6], users and items are modeled with tag vectors, and the TF*IDF measure is used to obtain the relevance of tags with similar meanings; a hierarchical agglomerative clustering algorithm is further utilized to fight tag ambiguity and discover tag clusters for inferring resource relevance. Similarly, Marco de Gemmis et al. agreed that folksonomies can be valuable sources of user preference information and proposed a strategy that enables a content-based recommender to infer user interests from both the "raw" item attributes and tags [7]. Recently, a probabilistic framework attempting to generalize the personalization of social media systems was introduced by Jun Wang et al. [8]; that paper identified 12 basic tasks that qualify for personalization in tagging systems and formally studied and modeled three of them, including collaborative item search. It is noteworthy that all the works above depend on the underlying assumption that all tags in the tag-based model are of the same value, no matter how idiosyncratic their contributors are. This assumption does not hold in every situation: for instance, some people annotate items with custom tags for personal classification and item indexing, and such tags are too special to be used as stable item descriptors widely accepted by others. Therefore, to eliminate this negative impact, the candidate tags for constructing the item model and item feature space should be evaluated according to the corresponding users' tagging behavior patterns. Unfortunately, little work has addressed this problem. Hence, a strategy and a corresponding algorithm are proposed in this paper for refining the candidate tags.
3 Proposed Strategy and Algorithm

A. Strategy Description
The proposed strategy is very simple: the tags used by representative tagging users should be considered with high priority when constructing an item's tag-based model. The representative tagging users are those whose tag usage solutions, that is, the tag-item pairs referred to as tagging behaviors, are concordant with the common sense held by the majority of all tagging users. To apply the strategy, the definitions and the algorithm below are introduced. Unlike the ranking algorithm proposed in [9], a bipartite graph is introduced here to model the folksonomy.
B. Definitions

Definition 1. Tagging Behavior (TB): a tag usage solution assigning a certain term to a given item. Tagging behavior is used to model the folksonomy as a bipartite graph in this paper, for the sake of keeping the relation between tags and items. For instance, as shown in Fig. 1, tags T1, T2 and T3 assigned to item I1 correspond to three tagging behaviors. Likewise, the relations between three items (I1, I2 and I3) and tag T1 can also be viewed as three TBs.

Definition 2. Tagging Behavior-Tagging User model (B-U model): A graph G(V, E) is bipartite with two vertex classes X and Y if V = X ∪ Y with X ∩ Y = ∅ and each edge in E has one endpoint in X and one endpoint in Y. In this paper, the folksonomy is denoted by a bipartite graph G(X, Y, E), where X represents the set of tagging users, Y represents the set of tagging behaviors, and E represents the set of relations between tagging users and their tagging behaviors.

Definition 3. Exemplary Coefficient (EC): the degree of representativeness of a tagging behavior or tagging user. The goal of the proposed strategy is to select tags by ranking the representativeness of tagging solutions and of tagging users' behavior patterns; the tagging solutions accepted by the majority can then be used to support selecting proper tags that describe items with common sense.
Fig. 1. Definition of tagging behavior
Therefore, the Exemplary Coefficient is introduced. The higher the EC, the more representative a tagging behavior (or user) is. In other words, a tagging behavior with a high EC is one widely used and accepted by the majority of users; similarly, a tagging user with a high EC is one whose tagging behaviors are widely agreed with by others. According to the proposed strategy, two heuristic rules are introduced:
1) Rule 1: representative users tend to perform representative tagging behaviors;
2) Rule 2: representative tagging behaviors tend to be performed by representative users.

C. Algorithm Description
Given two vectors X and Y, where X(u) stands for the EC of tagging user u and Y(ti) stands for the EC of tagging behavior ti, X(u) and Y(ti) are updated with the operations below, which embody the two heuristic rules.
Operation I:   X(u) = Σ_{ti ∈ ti(u)} Y(ti)        (1)

Operation O:   Y(ti) = Σ_{u ∈ u(ti)} X(u)         (2)
In Operation I, ti(u) stands for all the tagging behaviors performed by user u, so the EC of a given user u, X(u), is obtained by summing the ECs of all tagging behaviors performed by u. In the same way, the EC of a given tagging behavior ti, Y(ti), is obtained by summing the ECs of the users performing it, as shown in Operation O, where u(ti) stands for all the users performing tagging behavior ti. Similarly to PageRank [10] and HITS [11], the algorithm initializes all the elements in the bipartite model with the same EC. As shown below, the whole procedure iterates the updating of the elements' ECs according to the two heuristic rules, based on the network structure.

Algorithm: Calculate the Exemplary Coefficients of Tagging Users and Tagging Behaviors
Input: B-U model; k, the number of iterations
Output: Xk and Yk, the values of X and Y after performing k iterations
Steps:
1. Initialize X and Y as X0 and Y0 respectively; all elements of X0 and Y0 are set to 1.
2. For j = 1 to k:
   a. Get X′j using Operation I and Yj−1; that is, X′j(u) = Σ_{ti ∈ ti(u)} Yj−1(ti)
   b. Get Y′j using Operation O and X′j; that is, Y′j(ti) = Σ_{u ∈ u(ti)} X′j(u)
   c. Get Xj by normalizing X′j
   d. Get Yj by normalizing Y′j
3. End for
4. Return Xk and Yk
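A compact runnable version of the iteration follows, written as a sketch: the function is ours, and the mini-model fed to it is a hypothetical example in the spirit of Fig. 2, not the figure's exact edge set.

from collections import defaultdict

def exemplary_coefficients(edges, k=20):
    """Iterate Operations I and O on the B-U bipartite model.

    edges: (user, tagging_behavior) pairs; k: number of iterations.
    Returns (X, Y): the ECs of tagging users and of tagging behaviors.
    """
    tb_of = defaultdict(list)    # ti(u): behaviors performed by user u
    users_of = defaultdict(list) # u(ti): users performing behavior ti
    for u, ti in edges:
        tb_of[u].append(ti)
        users_of[ti].append(u)

    X = {u: 1.0 for u in tb_of}      # X0: every user starts at 1
    Y = {ti: 1.0 for ti in users_of} # Y0: every behavior starts at 1
    for _ in range(k):
        X = {u: sum(Y[ti] for ti in tb_of[u]) for u in tb_of}        # Op. I
        Y = {ti: sum(X[u] for u in users_of[ti]) for ti in users_of} # Op. O
        sx, sy = sum(X.values()), sum(Y.values())                    # normalize
        X = {u: v / sx for u, v in X.items()}
        Y = {ti: v / sy for ti, v in Y.items()}
    return X, Y

# Hypothetical mini-model: TI4 is shared while TI1 is used by U1 alone,
# so U2 ends up with a higher EC than U1.
edges = [('U1','TI1'), ('U1','TI2'), ('U1','TI3'),
         ('U2','TI2'), ('U2','TI3'), ('U2','TI4'), ('U3','TI4')]
X, Y = exemplary_coefficients(edges)
print(sorted(X, key=X.get, reverse=True))   # ['U2', 'U1', 'U3'] expected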
Taking the model shown in Fig. 2 as an example, the effectiveness of the algorithm can be illustrated intuitively. The following observations are not hard to make by examining the sample model:
1) U2 should be more representative than U1, even though both perform three tagging behaviors and two of those behaviors are the same (TI2 and TI3): TI4, performed by U2, is more popular than TI1, performed by U1.
2) U4 performs the most tagging behaviors in the sample (5 times), and TI4, TI7 and TI8, performed by U4, are also performed by many other users. It is therefore reasonable to conclude that U4 is the most representative user.
3) Although U5 performs more tagging behaviors, U5 should not be more representative than U3: U3's tagging behaviors are also adopted by U2 and by U4, the most representative user, whereas TI9 and TI10 are used only by U5, without acceptance by the other users.
4) TI2 should be more representative than TI1, for the reason that TI2 is performed by more users.
Fig. 2. The Sample model
Fig. 3. Result of the ranking algorithm based on tagging behavior
5) In spite of being used by more users, TI3 should not be more representative than TI6, because TI6 is used by more representative users.
6) Although both are performed only once, TI5 should be more representative than TI9, because the tagging user of the former behavior (U4) is more representative than the tagging user of the latter (U5).
The result of the proposed algorithm is shown in Fig. 3, and all the former observations are supported. For example, the EC of U2 is higher than that of U1, which means that U2 is more representative than U1; additionally, U4 has the highest EC, corresponding to its leading representative position. From the tagging-behavior perspective, the EC of TI2 is higher than the EC of TI1, and the EC of TI5 is higher than that of TI9.
4 Evaluation

The experimental procedure was separated into a representative-user discovery phase and a correlation verification phase. In the former phase, tagging users were sorted by EC in each month, and the portion of users with the highest EC, determined by a given selection ratio, was regarded as the representative users. In the latter phase, the representative users' TBs and the correlations described below were evaluated.
The del.icio.us dataset provided by Robert Wetzker et al. [12] is utilized in the experiments. Considering the data volume, the earlier data (collected from September 2003 to July 2004) were used, comprising 130,213 tags, 426,546 resources and 31,701 users. The dataset is divided into a training set (September 2003 to May 2004) and a testing set (June and July 2004). The training set is used to find the representative users according to the users' representativeness rank and the given selection ratio.

The correlation between the ratio of selecting representative tagging users and their TBs' average EC was evaluated first. The result is shown in Fig. 4 and detailed in Table 1. The two variables are strongly negatively correlated; in other words, the average EC of TBs increases as the ratio of selected representative users decreases. Taking June 2004 as an example, the average EC of TBs is 0.00537590288510 at a ratio of 2/1000; at a ratio of 1/1000, the average EC rises to 0.00881971282008. Moreover, the representative users' average EC is clearly higher than the average EC of all users. Similarly, as shown in Fig. 5 and Table 2, a negative correlation was also found between the percentage of representative TBs among personal TBs and the ratio of selecting representative tagging users. The representative TBs were selected with a threshold determined by the extent of the EC change; Fig. 6 gives an intuitive explanation of the threshold selection.

Table 1. Results of representative users' average EC

Representative user selecting ratio   2004-06            2004-07
1/1000                                0.00881971282008   0.00938925536752
2/1000                                0.00537590288510   0.00527648477112
3/1000                                0.00388170401006   0.00414521581582
4/1000                                0.00339073812395   0.00372271970412
5/1000                                0.00307910272506   0.00347574131248
Average EC of all the users           0.00036985257740   0.0003292316979

Table 2. Results of representative users' representative TB percentage

Representative user selecting ratio   2004-06          2004-07
1/1000                                0.421144148843   0.597924187725
2/1000                                0.257989663537   0.337507353559
3/1000                                0.187054721843   0.266003032500
4/1000                                0.163966942148   0.239701704545
5/1000                                0.148992322456   0.224081745374
Fig. 4. Correlation between the ratio of selecting representative tagging users and their TBs’ average EC
Fig. 5. Correlation between the percentage of representative TB against personal TBs and the ratio of selecting representative tagging users
All the results confirm that the EC is suitable for measuring the representativeness of tagging users and TBs. Furthermore, representative users with a high EC tend to perform representative TBs: the higher a tagging user's EC, the larger the portion of his TBs that are representative. It can therefore safely be concluded that the proposed strategy and algorithm are effective.
Fig. 6. Strategy of selecting representative tagging behavior threshold
5 Discussion and Conclusion

In this paper, a strategy is proposed for discovering valuable tags by ranking tagging users, and the corresponding algorithm was implemented. The experimental results confirmed the hypotheses of the strategy and the effectiveness of the algorithm. More experiments on other datasets are needed to evaluate the method fully. Meanwhile, several problems, including the threshold selection strategy for discovering exemplary tagging behaviors and exemplary tagging users, will be studied in the future.

Acknowledgment. The work was partly supported by the Fundamental Research Funds for the Central Universities and the National Natural Science Foundation of China (Grant Nos. 70972058 and 60873180).
References [1] Halpin, H., Robu, V., Shepherd, H.: The complex dynamics of collaborative tagging. ACM, New York (2007) [2] Golder, S.A., Huberman, B.A.: Usage patterns of collaborative tagging systems. Journal of Information Science 32(2), 198–208 (2006) [3] Marlow, C., Naaman, M., Boyd, D., Davis, M.: HT 2006, tagging paper, taxonomy, Flickr, academic article, to read. ACM, New York (2006) [4] Adomavicius, G., Tuzhilin, A.: Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering, 734–749 (2005) [5] Dattolo, A., Ferrara, F., Tasso, C.: The role of tags for recommendation: A survey. In: 2010 3rd Conference on Human System Interactions, HSI (2010)
[6] Shepitsen, A., Gemmell, J., Mobasher, B., Burke, R.: Personalized Recommendation in Social Tagging Systems Using Hierarchical Clustering. In: RecSys 2008: Proceedings of the 2008 ACM Conference on Recommender Systems, pp. 259–266. Assoc. Computing Machinery, New York (2008)
[7] de Gemmis, M., Lops, P., Semeraro, G., Basile, P.: Integrating Tags in a Semantic Content-based Recommender. In: RecSys 2008: Proceedings of the 2008 ACM Conference on Recommender Systems, pp. 163–170. Assoc. Computing Machinery, New York (2008)
[8] Wang, J., Clements, M., Yang, J., de Vries, A.P., Reinders, M.J.T.: Personalization of tagging systems. Information Processing & Management 46(1), 58–70 (2010)
[9] Hotho, A., Jäschke, R., Schmitz, C., Stumme, G.: FolkRank: A ranking algorithm for folksonomies. In: Proc. FGIR 2006 (2006)
[10] Page, L., Brin, S., Motwani, R., Winograd, T.: The PageRank citation ranking: Bringing order to the web (1998)
[11] Kleinberg, J.: Authoritative sources in a hyperlinked environment. Journal of the ACM (JACM) 46(5), 604–632 (1999)
[12] Wetzker, R., Zimmermann, C., Bauckhage, C.: Analyzing social bookmarking systems: A del.icio.us cookbook. In: Mining Social Data (MSoDa) Workshop Proceedings, ECAI 2008, pp. 26–30 (July 2008)
Research on Repair Algorithms for Hole and Cracks Errors of STL Models Hu Chao, Yang Li, and Zhang Ying-ying School of Mathematics & Physics, Changzhou University, Changzhou, JiangSu Province, China [email protected]
Abstract. In the conversion of STL models, data errors such as holes, cracks, missing positions, reversed normal vectors and redundancy can arise, owing to data option errors and the possible loss of triangles. Such errors cause abnormalities in the subsequent Rapid Prototyping process and affect the application of STL models. This paper obtains the relations of points, edges and surfaces by creating a topology for the STL model and introduces a graph data structure representing the edges that belong to only one triangle. The point list of a hole is obtained by breadth-first traversal of the graph, and the holes caused by cracks are then identified from the characteristics of cracks. Repair algorithms are designed according to the different characteristics of the errors: hole errors are repaired by adding triangles generated, following the minimum-angle principle, from the points recorded by the hole-check algorithm, and cracks are repaired by altering point coordinates. An error-checking system was developed with VC++ and OpenGL, and the check and repair algorithms were verified to be valid. Keywords: STL models, Rapid Prototyping, error check, error repair, hole.
1 Introduction

The STL file is the commonest file format used in data conversion between CAD models and Rapid Prototyping systems. The U.S. 3D Systems Inc. established the STL file format in 1987; it describes the surface of a three-dimensional solid model discretely and approximately, with small triangles as the basic unit. The number of triangular patches greatly affects the degree of contour approximation: a large number gives a high approximation degree but can lead to data overload or data round-off errors, while a small number gives a low approximation degree. Overall, existing research on error repair for STL models shares some common deficiencies: there is no complete description of each error, which can lead to missed checks or errors of other kinds; the different characteristics of each kind of error are not considered during repair, and reasonable strategies and steps are not worked out; and most studies examine only one or several errors rather than all of them, while the repair algorithms proposed are not good enough at checking errors and lack systematization and practicality [1~3]. Most foreign software packages have complicated
functions, are complex to manage, and are very expensive to use. This paper puts forward a method that achieves faster processing and simpler management by classifying the errors that exist in STL data files; in this way we can greatly improve efficiency and provide technical support for the subsequent rapid prototyping.
2 Description of STL Model Errors

The STL file is the de facto standard for data conversion between three-dimensional data models and Rapid Prototyping systems. In CAD modeling, a file is often converted from one CAD package to another, and it is common for information to be lost in the conversion because different CAD packages use different solid surface domains. Various kinds of errors therefore appear in models generated from STL data files. The common errors are holes and cracks, reversed normal vectors, overlaps, and so on [4~6]. We describe the hole and crack errors and analyze their characteristics in this paper.
A. Reversal Normal Vector
A triangle's normal vector is said to be reversed when the orientation implied by the triangle is wrong or the triangle is recorded against the orientation rules required by the STL format, namely, when the triangle's normal vector does not follow the right-hand rule or does not point outward from the solid model. It is often a confusion in the order of the three vertices recorded in the STL file that generates the disagreement between the recorded and the calculated normal vector, which we call a reversed normal vector. When this error happens, the normal vectors of the whole model, or of all triangles in one surface of the model, are reversed. We can calculate the normal vector from the three vertices of a triangle and their order and then compare it with the recorded normal vector; in fact, we identify a reversed normal vector from the dot product of the two vectors. If the result is positive there is no error; otherwise the normal vector is reversed.
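This check takes only a few lines once the vertex order is given. A minimal sketch follows, with the function names and the sample facet chosen by us for illustration:

def triangle_normal(p0, p1, p2):
    """Normal implied by vertex order (right-hand rule), unnormalized."""
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def is_normal_reversed(vertices, recorded_normal):
    """True when the recorded normal disagrees with the vertex order."""
    n = triangle_normal(*vertices)
    dot = sum(a * b for a, b in zip(n, recorded_normal))
    return dot <= 0   # a positive dot product means the facet is consistent

# A facet in the xy-plane whose vertex order implies +z:
tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(is_normal_reversed(tri, (0, 0, 1)))   # False: consistent
print(is_normal_reversed(tri, (0, 0, -1)))  # True: reversed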
B. Hole and Crack
Holes are the commonest error in STL files. They are caused by missing triangles, especially in models composed of many surfaces of large curvature: when such models undergo surface triangulation, small triangles may be lost, and once any triangle is lost a hole error appears. For complex models in particular, when the triangles are very small or very numerous, small triangles are easily lost, which leads to hole errors. Rounding errors, in turn, can lead to crack errors. Holes and cracks have much in common; a crack also produces a hole, the only difference being that a crack-hole contains two edges whose distance from each other is smaller than a given threshold. A hole error is shown in Fig. 1. By analyzing the causes of holes we obtain the following characteristics: the vertices of a hole form a ring when ordered end to end; in the normal situation an edge belongs to two faces, but when a hole exists there are edges belonging to only one face, so a hole is composed of edges that each belong to only one face. Similarly, every edge of a crack belongs to only one face.
Fig. 1. Hole error
3 Error Check and Repair Algorithms for STL Model
A. Topology Creating
The STL model is obtained by triangulating the surface of a CAD solid model, but the triangles carry no topology information: the vertices of every triangle must be stored redundantly, which says nothing about the relations between faces. Topology affects the design of the error check and repair algorithms. We define three classes, Point, Edge and Face, to create the topology of the STL model. The Point class includes the point ID, the IDs of the edges containing the current point, the point coordinates, and the IDs of the faces containing the current point; the Edge class includes the edge ID, the face IDs, and two point IDs; the Face class includes the three point coordinates, the normal vector, and three edge IDs. Creating the topology then amounts to adding the related data to the appropriate variables: first, the information of every triangle read from the STL file is recorded in the Face class, and then the other data are added to the relevant variables. Repeated points and edges are also handled: if a point or edge already exists, it is not stored again.
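A minimal sketch of this topology build-up follows, with names of our own choosing that mirror the Point/Edge/Face classes (an illustration, not the system's actual VC++ code): shared points and edges are stored once and cross-referenced by ID, and edges referenced by exactly one face are the "single edges" used by the hole check below.

def build_topology(facets):
    """facets: a list of triangles, each given as three (x, y, z) tuples."""
    points, edges = {}, {}    # coordinate -> point ID, point pair -> edge ID
    edge_faces = {}           # edge ID -> IDs of the faces sharing it
    faces = []
    for fid, tri in enumerate(facets):
        # Reuse the ID of an existing point instead of storing it again.
        pids = [points.setdefault(p, len(points)) for p in tri]
        eids = []
        for a, b in ((0, 1), (1, 2), (2, 0)):
            key = tuple(sorted((pids[a], pids[b])))   # undirected edge
            eid = edges.setdefault(key, len(edges))
            edge_faces.setdefault(eid, []).append(fid)
            eids.append(eid)
        faces.append({'points': pids, 'edges': eids})
    return points, edges, edge_faces, faces

# Two triangles sharing one edge: 4 points, 5 edges, 4 boundary edges.
tris = [((0,0,0), (1,0,0), (0,1,0)), ((1,0,0), (1,1,0), (0,1,0))]
pts, eds, ef, fcs = build_topology(tris)
print(len(pts), len(eds), [e for e, fl in ef.items() if len(fl) == 1])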
B. Error check algorithms for STL model
We design check algorithms based on the characteristics of each model error. Several of them are introduced below.
1) The check for reversed normal vectors: we compare the actual normal vector, calculated from the three vertices of a triangle, with the one read from the STL file, and decide whether a reversed normal vector error exists.
2) The check for holes and cracks: a crack is also a special hole, in that a crack is a slit-like hole, and both derive from single edges. Since holes and cracks in a model have no fixed shape, we check whether the traversal of a graph forms a ring to identify this error. This can be done through the following steps.
Step 1: Create an undirected graph from all the single edges in the edge array.
Step 2: Obtain the point list and parent-point list of a hole by traversing the undirected graph created in Step 1 using breadth-first traversal.
a) Design an array recording the visit marks of all points, initialized to false.
b) Select a point marked false in the array and add it to the queue; go to the end if no such point can be found.
c) Traverse the adjacent points of all unvisited points and add them to the queue.
d) If the queue is not empty, set the first node of the queue as the current point, delete it from the queue, and go to c).
e) End when the queue is empty.
Step 3: Find all the points of a hole based on the point list obtained in Step 2 and the weights of the graph.
a) Design a hole-point array.
b) Select an unvisited edge of the graph, set its visit mark by altering its weight, and add the two points of the edge to the hole-point array; go to h) if no such edge can be found.
c) Fix one vertex and determine whether the parent node of the other point is not the start node and is the same node as the parent node of the fixed point.
d) Add the parent point of the unfixed point to the hole-point array and make it the unfixed point.
e) If their parent nodes are the same, add the unfixed point to the hole-point array and go to g).
f) If the fixed point is not the start point, add the parent node of the unfixed node to the hole-point array, make the unfixed point fixed and the fixed point unfixed, and go to c).
g) Record the hole information and go to b).
h) End.
Step 4: Traverse all the holes to check whether there are holes containing two points whose distance is smaller than the given threshold. If such a hole exists, add it to the crack-hole array; otherwise add it to a temporary hole array.
Step 5: Add all the hole information in the temporary hole array to the hole array.
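A compact sketch of Steps 1 and 2 follows (standard library only): the single edges form an undirected graph, and a breadth-first traversal groups them into hole loops. Note that the breadth-first order only collects the points belonging to each hole; recovering the ring order along the boundary requires the parent-list bookkeeping of Step 3, which is omitted here.

from collections import defaultdict, deque

def find_holes(single_edges):
    """Group boundary ("single") edges into hole loops via BFS.

    single_edges: (point_a, point_b) pairs belonging to exactly one
    triangle. Returns one list of point IDs per connected boundary loop.
    """
    adj = defaultdict(list)
    for a, b in single_edges:
        adj[a].append(b)
        adj[b].append(a)

    seen, holes = set(), []
    for start in adj:
        if start in seen:
            continue
        loop, queue = [], deque([start])
        seen.add(start)
        while queue:                     # breadth-first traversal
            p = queue.popleft()
            loop.append(p)
            for q in adj[p]:
                if q not in seen:
                    seen.add(q)
                    queue.append(q)
        holes.append(loop)
    return holes

# Two separate triangular holes:
print(find_holes([(1, 2), (2, 3), (3, 1), (7, 8), (8, 9), (9, 7)]))
# [[1, 2, 3], [7, 8, 9]]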
4 The Design of Error Repair Algorithms for STL Model
A. The Repair Algorithm for Hole Error
A hole results from missing triangles, so to repair it we must replenish them. Based on the fact that a triangle is composed of three points [7~9], we redistribute the hole points to construct triangles; there are many ways to pick the three points of each new triangle. We use the minimum-angle principle: we choose the point whose left and right neighbours subtend the minimum angle, compute the normal vector from the three selected points, and add the new triangle to the model, so one fewer triangle is needed to close the hole. The middle point's visit mark is then set to true, since it cannot take part in any further triangle. This is repeated until only two points remain whose visit mark is false (see the sketch below).
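A sketch of this minimum-angle filling, assuming `points` maps point IDs to coordinates and `loop` is an ordered hole boundary (for example from the traversal sketch above):

```python
import numpy as np

def angle_at(points, left, mid, right):
    """Angle at `mid` formed with its left and right neighbours."""
    a = np.asarray(points[left]) - np.asarray(points[mid])
    b = np.asarray(points[right]) - np.asarray(points[mid])
    cosv = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosv, -1.0, 1.0))

def fill_hole_min_angle(points, loop):
    """Repeatedly clip the boundary point whose neighbours subtend the
    minimum angle, emitting one triangle per step, until fewer than three
    unvisited points remain; the clipped middle point is never reused."""
    ring = list(loop)
    triangles = []
    while len(ring) >= 3:
        n = len(ring)
        k = min(range(n), key=lambda i: angle_at(points, ring[i - 1],
                                                 ring[i], ring[(i + 1) % n]))
        triangles.append((ring[k - 1], ring[k], ring[(k + 1) % n]))
        ring.pop(k)                     # middle point's visit mark -> true
    return triangles                    # normals follow from the vertices
```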
Fig. Flowchart of the hole repair algorithm: while unrepaired holes remain, initialize the point visit marks to false; while a hole still has at least three unsettled points, find three points by the minimum-angle rule, set the three vertex coordinates of the face to be added, find one adjacent face, compute the cross products of the adjacent face and the added face, and use the dot product of the two cross vectors to check the orientation of the new face.
Fig. 2. The effect on the ball rebound height when the ball falls at different locations
B. The Model Considering the Torque
According to the above conclusion, the node is the point of smallest energy loss, as shown in Fig. 3. But then there would be three best sweet spots in Fig. 3, which contradicts the actual situation. Therefore, to determine the sweet spot of the baseball bat further, we must consider the torque of the bat.
Fig. 3. The second bending mode
From the perspective of torque, the sweet spot should be close to the end of the bat, where the energy transmitted to the baseball is maximal: both the tangential speed and the torque of the bat are largest there when the player swings at the same speed. To find the real sweet spot, we divide a baseball bat into two parts based on the above analysis of its vibration and torque. One part, called "SEESAW", is the AB segment shown in Fig. 4, whose pivot is near the node (O), that is, a sweet spot. The other part, called "SOV" (the source of the vibration), is the BC segment shown in Fig. 4. The "SEESAW" and "SOV" parts do not vibrate while there is no impact between the bat and the ball. Once a collision occurs, "SOV" starts to vibrate and drives "SEESAW" to swing up and down. In general, the bat has the following
Fig. 4. Basic SEESAW Model
movement model: the impact of the ball and the bat prompts the bat to vibrate, and the vibration in turn affects the total energy of the ball when it bounces back. In Fig. 4, AB is the "SEESAW" part, whose pivot is O, and BC is the vibration source. It follows that there is a point of minimum energy loss in the seesaw part of the bat — the pivot of the seesaw, called the sweet spot.
3 Simulation and Result Analysis
A. Why Isn't the Sweet Spot at the End of the Bat?
The question why the sweet spot is not at the end of the bat now translates into why the pivot of the "SEESAW" is not at the end. Based on the torque balance theory, the seesaw shown in Fig. 5 gives:

F1 × L1 = F2 × L2    (1)
Fig. 5. "SEESAW" model
If the pivot were at the end, then L1 (or L2) would tend to 0, so F1 (or F2) would tend to ∞, which is impossible to achieve. Therefore the pivot of the "seesaw" cannot be at the end; that is, the sweet spot cannot be at the end.
C. "Corking" a Bat Affects the "Sweet Spot"
Some players believe that "corking" a bat enhances the "sweet spot" effect. Here we use the proposed model to discuss this issue. The previous conclusion considers only the sweet spot of the bat in a single vibration mode, but many previous studies [3,4,6] show that the vibration of a baseball bat is very complex, with many bending modes, as shown in Fig. 6. The first and second bending modes have the most significant impact on ball energy; therefore, following the convention used by Rod Cross [6], the sweet zone is defined as the region located between the nodes of the first and second modes of vibration (about 4–7 inches from the barrel end of a 30-inch Little League bat), shown in Fig. 7. Since the vibration motion of the bat is small in this region, an impact here excites little vibration, and a solid hit transfers the maximum energy to the ball.
Fig. 6. Three bending modes of a freely-supported bat
Fig. 7. The sweet zone
In this way, the model is further refined: the sweet spot becomes a sweet zone, and a change in the vibration amplitude of SOV changes the vibration amplitude of SEESAW. In particular, the larger the SOV amplitude is, the larger the SEESAW amplitude will be, and vice versa. We now analyze how the sweet zone changes with the amplitude of SEESAW. To make the analysis more reliable, we assume:
(1) The energy lost in the collision of ball and bat depends on the impact position: at a node there is no energy loss; elsewhere, the greater the local amplitude of the bat, the greater the energy loss of the ball, and vice versa.
(2) For a fixed vibration-source frequency and fixed external factors, the larger the mass, the smaller the oscillation amplitude, and vice versa.
Based on these assumptions, it is clear that "corking" the bat decreases the mass of the SOV, so its amplitude increases. The increased SOV amplitude increases the SEESAW amplitude accordingly. When the SEESAW amplitude increases,
the location at which the bat loses a given amount of energy moves closer to the pivot of the SEESAW, so the sweet zone becomes smaller. From this analysis we conclude that "corking" a bat in the head does not enhance the "sweet spot" effect; hence the reason Major League Baseball (MLB) prohibits "corking" does not rest on this effect.
D. Does the Material of Which the Bat Is Constructed Matter?
We make some further assumptions to predict how the material affects the performance of a bat:
(1) The bats have the same shape and the same volume.
(2) Only the density of the material affects the sweet spot of bats made of different materials.
(3) The baseball bat is solid and its density is uniformly distributed.
(4) The mass of a bat changes with the material density; that is, the mass increases as the density increases, and vice versa.
With these assumptions we analyze the basic vibration model of Fig. 1 in more detail. If the material of the bat changes, its mass changes, and this finally shifts the pivot of the "seesaw" part. We now analyze this shift. For the "seesaw" shown in Fig. 8, we assume:
(1) The seesaw is balanced when it does not swing, and it always stays in equilibrium as long as the material remains unchanged.
(2) The transition from one equilibrium state to another is achieved by shifting the pivot.
(3) We consider only the final result of the transition from one equilibrium to another, ignoring the specific process.
(4) Left is defined as the positive shift direction.
(5) When the material changes, Δm denotes the difference between the mass increment on the left and the mass increment on the right.
Fig. 8. A special "seesaw" model
Hunting for the ″Sweet Spot″ by a Seesaw Model
239
When the "seesaw" is balanced, let M denote the mass on the right; then the mass on the left is M + ΔM with ΔM > 0. With the above assumptions and the theory of torque, we obtain:

(M + ΔM) g L1 = M g L2    (2)

(M + ΔM + Δm) g (L1 − ΔL) = M g (L2 + ΔL)    (3)
Subtracting (2) from (3), we get:

Δm L1 = ΔL (ΔM + Δm)    (4)
As shown in Fig. 8, L1 < L2; when the seesaw is in the equilibrium state, (2) gives ρV1 g L1 = ρV2 g L2, therefore V1 > V2. So if the density of the "seesaw" increases, we get Δm > 0, and because L1 > 0 and ΔM > 0, the condition that keeps (3) valid is ΔL > 0. According to the assumed shift direction of the fulcrum, the fulcrum therefore shifts from right to left (positive direction) as the density of the "seesaw" increases, and from left to right (negative direction) as the density decreases. With these conclusions we can predict the different behavior of wood (usually ash) and metal (usually aluminum) bats. The density of metal is clearly larger than that of wood, so the fulcrum of a wooden seesaw shifts from right to left when the material is replaced by metal. From the two bending modes (the fundamental and the second bending mode shown in Fig. 6) it follows that the sweet spot [3,4] is strongly affected, because the shift movements are not the same. We now work out the difference. From (4) we obtain:

ΔL = Δm L1 / (ΔM + Δm)    (5)
To compare ΔL between the fundamental bending mode and the second bending mode, let L1 in (5) correspond to L11 and L12 in the fundamental and second bending modes, respectively. In the same way, ΔM corresponds to ΔM1 and ΔM2, ΔL to ΔL1 and ΔL2, and Δm to Δm1 and Δm2. Therefore:

ΔL1 = Δm1 L11 / (ΔM1 + Δm1)    (6)

ΔL2 = Δm2 L12 / (ΔM2 + Δm2)    (7)
Then (5) becomes (6) and (7) in the fundamental and second bending modes, respectively. According to our "seesaw" model we know L11 > L12, ΔM1 < ΔM2 and Δm1 ≈ Δm2, so Δm1 L11 > Δm2 L12 and ΔM1 + Δm1 < ΔM2 + Δm2; it follows that:
ΔL1 > ΔL2    (8)
Equation (8) shows that the shift of the fulcrum in the fundamental bending mode is larger than that in the second bending mode when the material changes from wood to metal. With Fig. 7 we can then conclude that the sweet zone of a solid metal bat is smaller than that of a solid wood bat. From the above analysis, for bats of the same shape and volume, the greater the density of the material, the smaller the sweet-spot area; conversely, the smaller the density, the greater the sweet-spot area. This, therefore, is not the reason why MLB prohibits metal bats. We must, however, consider that a metal (usually aluminum) bat is hollow and its mass is even smaller than that of a wood bat. From this point of view, although the density of metal is larger than the density of wood, the average density of a hollow metal bat is smaller than that of a solid wood bat, so the use of metal enlarges the sweet-spot area. When the sweet-spot area increases, achieving the same batting effect requires less batting skill; in other words, the fairness and competitiveness of the competition decrease. Therefore, to ensure the competitiveness and fairness of baseball tournaments, MLB should prohibit metal bats.
4 Conclusions
A vibration model was proposed to simulate the interaction of bat and ball. It simplifies the vibrating bat into two components: a seesaw part, whose pivot is the sweet spot of the bat, and the source of the vibration (SOV). Based on the "SEESAW" model and leverage theory, the model leads to the following conclusions: (1) the sweet spot is a sweet zone close to the end of the bat; (2) "corking" a bat in the head decreases the area of the sweet zone, while "corking" a bat at the end increases it; and (3) the sweet zone of a metal bat is larger than that of a solid wooden bat.
Acknowledgment. This work was supported by the Grant (2008YB009) from the Science and Engineering Fund of Yunnan University, the Grant (21132014) from the Young and Middle-aged Backbone Teacher's Supporting Programs of Yunnan University and the Grant (21132014) from on-the-job training of PHD of Yunnan University.
References
[1] Russell, D.A.: The sweet spot of a hollow baseball or softball bat. Invited paper at the 148th Meeting of the Acoustical Society of America, San Diego, CA, November 15–19 (2004); abstract published in J. Acoust. Soc. Am. 116(4), Pt. 2, p. 2602 (2004)
[2] http://www.exploratorium.edu/baseball/sweetspot.html
[3] Russell, D.A.: Vibrational Modes of a Baseball Bat. Science & Mathematics Department, Kettering University (2003)
[4] Russell, D.A.: Vibrational Bending Modes of a Baseball Bat. Science & Mathematics Department, Kettering University (2003)
[5] Cross, R.: The sweet spot of a baseball bat. American Journal of Physics 66(9), 771–779 (1998)
Multi-objective Optimization Immune Algorithm Using Clustering

Sun Fang, Chen Yunfang, and Wu Weimin

College of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, China
[email protected]
Abstract. In this paper, a Multi-objective Optimization Immune Algorithm using Clustering (CMOIA) is proposed. Its affinity-based mutation operator lets the generated antibodies evolve into a much better group, and it adds the local search ability of evolutionary algorithms by applying crossover and genetic mutation operators to the immune-mutated antibodies. A clustering-based clonal selection operator then maintains a balance between exploration and exploitation. Four widely used multi-objective optimization problems were selected to test the algorithm's performance against four common performance indicators. The Pareto fronts obtained by CMOIA showed better convergence and diversity than those of four classical multi-objective evolutionary algorithms, and the experimental results demonstrate the originality and robustness of the proposed algorithm. Keywords: Multi-objective Optimization, Artificial Immune Systems, Immune Optimization, Clonal Selection.
1 Introduction
The multi-objective optimization problem (MOP) was first formulated by the Italian economist Pareto in 1896. It seeks optimal solutions for several objective functions of the decision variables simultaneously, usually subject to equality or inequality constraints. Because multi-objective problems are ubiquitous and hard to solve, the topic has always been attractive and challenging. Traditional methods transform a complex multi-objective optimization problem into a single-objective one; in practice, however, most multi-objective problems are complex and non-linear, and these methods either fail to converge or converge unacceptably slowly. Heuristic search strategies have therefore been introduced into multi-objective optimization to solve such complex non-linear problems, among which evolutionary algorithms are one of the brightest spots, including the classic MOGA [1], NSGA [2], PAES [3], SPEA [4], etc. These evolutionary algorithms have made important breakthroughs, but they are
always accompanied by shortcomings of one kind or another, so other heuristic strategies have attracted increasing attention. Among them, immune-based heuristic search has risen quickly, because some inherent features of the immune system compensate exactly for the weaknesses of multi-objective evolutionary algorithms. Immune optimization algorithms are heuristic random search techniques that simulate the adaptive processes of a group of antibodies. They can avoid defects of multi-objective evolutionary algorithms such as premature convergence, and they point to a new way of solving complex non-linear multi-objective optimization problems that resist traditional mathematical models. This paper first presents the definition of multi-objective optimization problems and of artificial immune algorithms and reviews the progress of immune optimization research; it then puts forward a novel clustering-based artificial immune algorithm (CMOIA). CMOIA centres on a clonal selection operator, a clustering-based diversity maintenance strategy and an affinity-based mutation operator. A number of widely used multi-objective test problems and performance indicators are then selected for experiments, and the optimal solutions and performance indicators of the results are analyzed in depth.
2 Background Information
Artificial Immune Systems (AIS) are inspired by biological immune systems. An AIS is a complex computing system constructed to solve various difficult problems based on the functionality, principles and basic traits of the biological immune system. Research in this area aims to penetrate the information-processing mechanism of the biological immune system and then construct algorithms and engineering models for the difficult problems we face in reality [5, 6]. In the biological immune process, antibodies always try to recognize an antigen as well as possible, which closely resembles the evaluation of a MOP: the antigen can be seen as the multi-objective problem, an antibody as a solution to that problem, and the antigen–antibody affinity as the quality of that solution. Through such mappings, the biological immune mechanism is brought into the field of multi-objective optimization. The essence of a multi-objective optimization problem is the contradiction between its sub-objectives: improving one sub-objective may worsen the others, so it is nearly impossible to minimize all objectives at once, and concessions among the sub-objectives are required to reach the "best" solution for a specific application. Thanks to its high search efficiency, avoidance of premature convergence, preservation of individual diversity and other qualities, immune optimization remains a research hotspot. According to their underlying principles, existing artificial immune optimization methods fall into two main categories: clonal selection-based and immune network-based approaches.
Immune optimization based on immune networks was first proposed by Jerne [7], who suggested that B-cells are stimulated and suppressed not only by non-self antigens but also by other interacting B-cells. In this theory the B-cells form two subpopulations: one creates an initial immune network, while the other is trained by the non-self antigens. An important feature of Jerne's immune networks is their efficient adaptation to changing environments. Building on Jerne's study, immune networks can be grouped into two sub-classes: De Castro and Timmis' discrete immune network model [8] and Hajela and Lee's immune network model [9]. The basic idea of clonal-selection-based immune optimization is that antibodies which recognize an antigen are selected and reproduced by the immune system, while the other antibodies are neither selected nor reproduced. In recent years many researchers have studied various aspects of this area. CLONALG, first proposed by de Castro and Von Zuben [10], is one of the most widely applied optimization methods in practice; it can be considered a computational implementation of the clonal selection principle and the affinity maturation of the immune response. Whereas a GA considers only the fitness of individual chromosomes for mutation, crossover and selection, CLONALG focuses on the affinities of the chromosomes: the proliferation rate of a chromosome grows with its affinity, while its mutation rate is inversely proportional to its affinity. Chromosomes with lower affinity are thus mutated more strongly, which keeps the search diverse, so the algorithm can efficiently explore a diverse space of local optima to find better solutions. Current research mainly builds on the two models above; improvements and applications can be grouped into two classes by their starting point: one focuses primarily on improving the performance of multi-objective immune algorithms, the other on borrowing more principles from biological immune systems.
3 Computational Framework of CMOIA
Here is the framework for solving multi-objective optimization problems with immune principles. First, treat the feasible solutions of the optimization problem as antibodies and the optimization objectives as antigens, generate random antibodies toward the antigens as the immune system, and take the feasible solutions from these random ones. An immune memory pool then holds the Pareto-optimal solutions found so far; the memory cells are updated constantly with suitable mechanisms to obtain an evenly distributed Pareto front. Finally, after a sufficient number of iterations, the required Pareto front is fetched from the memory pool. In this paper a new clustering-based multi-objective immune algorithm is proposed in order to fully explore the potential of immune algorithms for multi-objective optimization. Preliminary experimental results show that our algorithm obtains competitive results in terms of convergence and diversity.
Table 1. The pseudo code of CMOIA
1. Define the antibody population size, crossover rate, mutation rate and other algorithm parameters.
2. Initialize the population P with randomly generated antibodies.
3. While the maximum number of iterations is not reached:
   a) Calculate the objective values and constraints of all solutions.
   b) Perform enhanced non-dominated sorting on all solutions, then use the clustering-based selection operator to select elite solutions into C(t).
   c) Use our immune clone strategy to clone C(t) into a copy C'(t).
   d) Use our immune mutation strategy to mutate C'(t), giving A(t).
   e) Perform the GA crossover and mutation strategy on A(t), giving A'(t).
   f) Generate some random antibodies and compute their objectives and constraints, giving R(t).
   g) Merge A(t), A'(t) and R(t) into the memory cell M(t).
   h) Use M(t) as the new population for the next iteration.
4. Get the Pareto front from the memory cell M(t).
To avoid the early convergence caused by excessive local search, we adopt a clustering-based clonal selection strategy that balances exploring new solutions against searching locally. Furthermore, to accelerate the information exchange between antibodies, we apply the GA crossover and mutation strategy to the antibodies after immune cloning; whenever a new solution is found, the algorithm can then quickly search locally around that solution point to obtain an even and diverse solution set. Mimicking the adaptive response of the biological immune system to an antigen stimulus, we also generate some random antibodies so that the algorithm has a better chance of finding isolated solutions; these randomly generated antibodies help avoid early convergence and cover the search space with higher probability. Finally, the cloned solution set, the mutated solution set and the newly generated random solutions are all merged into the memory pool for the next iteration. For memory management we adopt the commonly used crowding-distance strategy. Maintaining elite solutions in a memory pool is a common device in multi-objective immune algorithms; normally the algorithm can stop once the pool holds the specified number of solutions, but for a fair comparison of CMOIA with other algorithms we set the termination criterion to a predefined number of iterations, after which the memory pool is generally full.
A. Clonal Selection Strategy
Referring to the results obtained by Wang X. L. et al.'s ACSAMO [11], we define the antibody affinity in CMOIA as follows:

Aff_i = ||x_i − x_c|| + ||x_i − x_g||
The affinity is the sum of the Euclidean distances between the current antibody and the best antibody of the current generation and of the previous generation; x_c and x_g represent the best antibody of the current and the previous generation, respectively. To determine the best antibody of a generation, such as x_c and x_g, we use a weighted aggregation of the objectives: the antibody with the least weighted objective is the best antibody of its generation,

WO = Σ_{i=1}^{n} ω_i x_i, where ω_i ∈ U(0,1) and Σ_{i=1}^{n} ω_i = 1.

A sketch of these computations follows.
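A sketch of the affinity computation under these definitions, assuming NumPy, with `pop` holding decision vectors row-wise, `objs` the corresponding objective values, and `prev_best` the previous generation's best antibody x_g:

```python
import numpy as np

def weighted_objective(objs, rng):
    """Random weights in U(0,1), normalised to sum to 1 (regenerated every
    iteration); returns the weighted objective WO of each antibody."""
    w = rng.random(objs.shape[1])
    w /= w.sum()
    return objs @ w

def affinities(pop, objs, prev_best, rng):
    """Aff_i = ||x_i - x_c|| + ||x_i - x_g||, where x_c is the antibody with
    the least weighted objective in the current generation."""
    x_c = pop[np.argmin(weighted_objective(objs, rng))]
    aff = (np.linalg.norm(pop - x_c, axis=1)
           + np.linalg.norm(pop - prev_best, axis=1))
    return aff, x_c          # x_c becomes x_g for the next generation
```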
The ω_i are regenerated every iteration to avoid shielding certain solutions through concentration on certain objectives. Immune clonal selection can generally be divided into two phases. The first is selection: typical elites are chosen from the current generation, mainly by sorting its solutions. To fully exploit the typicalness of the selected individuals, we extend the existing Pareto dominance concept and introduce a new Pareto dominance condition, which ensures the diversity of the population and resolves constraints at the same time. The extended Pareto dominance comparison sorts the existing population, and the first 30% of individuals are selected; this finishes the selection part of clonal selection. The second phase reproduces the elites chosen in the first phase; it is also called local search, and exploration of unsearched areas for potential elite solutions is carried out at the same time. Generally speaking, how to balance local search against exploring potential Pareto-optimal solutions is a key problem for immune algorithms; CMOIA maintains this balance with a clustering-based diversity strategy.
B. Cluster-Based Diversity Maintenance Strategy
CMOIA uses a clustering-based density measure to keep the non-dominated solutions evenly distributed. This differs from the currently common strategy of measuring an individual's information contribution to the whole population: the clustering-based density measure works in the objective space, whereas the other works in the variable space. Clustering analysis operates on patterns — n-dimensional vectors, i.e., points in an n-dimensional space — so we can analyze the Pareto-front data generated by the algorithm at a given time by clustering it. The whole procedure is as follows: first, cluster the currently generated Pareto front into several groups according to position in the objective space; then use the number of solutions in each group to decide whether more local search or more exploration of potential optimal solutions
should be carried out in that region. If a cluster contains many solutions, we should reduce the search in its area; if a group has few solutions, more search power should be injected there to fully explore the potential Pareto-optimal solutions. In this way the clonally generated optimal solutions maintain the required diversity. We choose the K-means clustering algorithm for its simplicity and effectiveness: its time complexity is only O(kNt), so even massive data sets are processed in time, and it is easy to implement and understand. The k-means algorithm clusters n objects, based on their attributes, into k partitions, k < n. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that both attempt to find the centers of the natural clusters in the data, and it assumes that the object attributes form a vector space. The objective it tries to minimize is the total intra-cluster variance, i.e., the squared error function
Σ_{i=1}^{k} Σ_{x_j ∈ S_i} ||x_j − µ_i||²

where there are k clusters S_i, i = 1, 2, ..., k, and µ_i is the centroid (mean point) of all the points x_j ∈ S_i. The general K-means procedure adopted by CMOIA is described below; a sketch follows Table 2.

Table 2. The pseudo code of the K-MEANS clustering algorithm
1. Initialize all algorithm parameters; for K-means: the number of clusters N.
2. Randomly choose N solutions as N clusters and calculate the cluster center of each cluster.
3. While the center of any cluster still changes between iterations:
   a) select a solution and calculate its distance D_i to each cluster center;
   b) select the shortest distance D_i and add the solution to that cluster;
   c) then update that cluster's center using the mean method.
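A plain K-means plus the size-based budget rule might look as follows; `search_budget` is an illustrative name for the "inject more search power into sparse clusters" rule, not the paper's code:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on Pareto-front points in objective space."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest cluster center
        labels = np.argmin(
            np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2),
            axis=1)
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j)
            else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels

def search_budget(labels, k, total):
    """Crowded clusters get less search effort, sparse clusters more:
    budget inversely proportional to cluster size."""
    sizes = np.bincount(labels, minlength=k).astype(float)
    inv = 1.0 / np.maximum(sizes, 1.0)
    return np.round(total * inv / inv.sum()).astype(int)
```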
After clustering the Pareto-front data, we decide whether to strengthen local search or to explore new optimal solutions. A cluster containing many solutions indicates that its local space has already been searched; putting more search power there would merely be wasted, and the representativeness of the solutions in that area is poor. A cluster containing few solutions indicates a space that is not fully developed, so it is recommended to direct search power into this area, where the representativeness of solutions is better. With these strategies CMOIA achieves a better balance between local search and the exploration of new optimal solutions, searching a wider objective space and obtaining better results. Besides the clustering-based diversity maintenance strategy, CMOIA also draws on the work of Tan et al. [12] for better results: GA crossover and mutation operators are applied to the
immune-mutated solutions so that fully searched local optimal solutions are obtained. Compared with traditional immune algorithms, the antibodies in CMOIA therefore undergo GA crossover and mutation operations, not only the traditional immune mutation. After this procedure, all offspring and the parent generation are archived in the memory.
C. Immune Mutation Strategy
In biological immune systems, somatic hypermutation plays an important role in the adaptive immune response: it maintains a diversified instruction system through mutation and, combined with immune selection, greatly increases the affinity of particular antibodies. Artificial immune systems simulate this feature. The hypermutation operator differs from the GA mutation operator mainly in how the mutation rate is adjusted: a GA usually applies a linear, iteration-dependent mutation rate to all solutions, whereas the immune hypermutation rate varies between antibodies and is decided by the whole population — antibodies close to the optimal solution receive a smaller mutation rate, those far from it a greater one. During the immune reaction phase the antibodies thus undergo different degrees of hypermutation; those with superior affinity mutate much less than those with inferior affinity. CMOIA adapts the general immune mutation strategy: every antibody passing through the immune mutation phase undergoes a perturbation, and every decision variable of the solution varies within the interval [−S, S]. To keep exploring new optimal solutions both at the beginning and in later phases, the mutation intensity is defined as:

S = [U(0,1) − 1/2] × Affinity_i × γ × β    (6)
Here Affinity_i is the affinity of the solution, γ is the mutation rate and β is an adjustment coefficient; in our experiments γ = 0.8 and β = 1. When a mutated decision variable exceeds its bound, we set it to the bound value for simplicity. The mutation intensity couples each solution to the whole population: when the affinity is superior, i.e., Affinity_i is low, the mutation intensity S is small, so optimal solutions are not lost through mutation; when Affinity_i is high, i.e., the affinity is inferior, the mutation intensity is much larger and the solution has more chance to evolve into a superior one. A sketch is given below.
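A sketch of this hypermutation under equation (6); applying the uniform draw per decision variable is one reading of the perturbation described above, and the function name is illustrative:

```python
import numpy as np

def hypermutate(pop, affinity, lower, upper, gamma=0.8, beta=1.0, rng=None):
    """Perturb every decision variable with intensity
    S = [U(0,1) - 1/2] * Affinity_i * gamma * beta, so low-affinity
    (superior) antibodies move little and high-affinity (inferior) ones
    move a lot; out-of-bound values are clipped to the bound."""
    rng = rng or np.random.default_rng()
    u = rng.random(pop.shape)
    s = (u - 0.5) * affinity[:, None] * gamma * beta
    return np.clip(pop + s, lower, upper)
```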
4 CMOIA Algorithm Test and Analysis
A. Benchmark Problems and Quality Indicators
To fully evaluate the efficiency of the CMOIA algorithm, we compared CMOIA with SPEAII, NSGAII, PAES and PESAII. All tests were run on an Intel P4 1.8
GHz computer with 512 MB of physical memory and JDK 1.5.0, and every algorithm was run 30 times independently for accuracy. To evaluate the performance of CMOIA as objectively as possible, we selected six frequently cited benchmark problems, including two unconstrained multi-objective problems (Kursawe, Viennet3) and two constrained problems (Osyczka2, Srinivas). We also used several widely adopted quality indicators to examine the diversity and convergence of the solution set finally generated by CMOIA:
1. Generational Distance (GD): a measure of the distance between the true and the generated Pareto front.
2. Spread (S): a measure of the distribution and coverage of the solution points within the calculated Pareto front.
3. Hypervolume (H): a quality indicator that calculates the volume (in objective space) covered by the members of a non-dominated solution set Q (the region enclosed by the discontinuous line in the corresponding figure), for problems in which all objectives are to be minimized.
B. Comparison of Performance
The shape of a Pareto front only gives a direct impression of performance on a specific problem; to evaluate an algorithm fully and accurately we must measure the various quality indicators and draw conclusions from them — only in this way can performance be measured objectively.
1. Comparison of Time Cost

Table 3. CMOIA performance in terms of time cost
Time Cost (ms) | CMOIA | PAES | NSGAII | PESAII | SPEAII
Kursawe  | 4299  | 5822 | 26529 | 1695 | 10534
Viennet3 | 13366 | 5621 | 40517 | 8057 | 22528
Osyczka2 | 5503  | 6366 | 22634 | 1274 | 10162
Srinivas | 13826 | 5081 | 45329 | 3203 | 13268
The time cost of CMOIA is superior to that of three of the other algorithms on Kursawe and Osyczka2, and it outperforms two of them on Viennet3; only on Srinivas is its cost about average. Judging by time consumption, CMOIA does quite well, and the good results achieved by SPEAII clearly come at the cost of considerable running time.
2. Comparison of GD

Table 4. CMOIA performance in terms of GD
GD | CMOIA | PAES | NSGAII | PESAII | SPEAII
Kursawe  | 0.00033 | 0.00020 | 0.00012 | 0.00017 | 0.00018
Viennet3 | 0.00022 | 0.00238 | 0.00026 | 0.00038 | 0.00029
Osyczka2 | 0.00559 | 0.01371 | 0.00144 | 0.01027 | 0.00152
Srinivas | 0.00022 | 0.00030 | 0.00020 | 0.00024 | 0.00011
Convergence is the most important aspect among the quality indicators. The table shows that CMOIA outperforms the other algorithms on Viennet3 and performs about as well as they do on Osyczka2 and Srinivas; on Kursawe it is inferior to the other algorithms. Generally speaking, CMOIA takes a leading place on the GD indicator.
3. Comparison of Spread

Table 5. CMOIA performance in terms of Spread
Spread | CMOIA | PAES | NSGAII | PESAII | SPEAII
Kursawe  | 0.54317 | 0.84531 | 0.48745 | 0.80124 | 0.21246
Viennet3 | 0.75173 | 0.75701 | 0.73068 | 0.76551 | 0.79225
Osyczka2 | 1.28333 | 1.11589 | 0.56847 | 0.96264 | 0.79529
Srinivas | 0.18115 | 0.61604 | 0.40671 | 0.63171 | 0.17168
On the quality indicator S, the table shows that CMOIA outperforms most of the other four algorithms on Viennet3 and Srinivas and performs about as well as they do on Kursawe and Fonseca; only on Osyczka2 does it fall behind. Generally speaking, CMOIA takes the lead on indicator S.
4. Comparison of Hyper-area

Table 6. CMOIA performance in terms of hyper-area
Hyper-area | CMOIA | PAES | NSGAII | PESAII | SPEAII
Kursawe  | 0.78892 | 0.82088 | 0.83327 | 0.81763 | 0.83315
Viennet3 | 0.83615 | 0.80489 | 0.83259 | 0.83203 | 0.82682
Osyczka2 | 0.57469 | 0.47733 | 0.77765 | 0.57933 | 0.76763
Srinivas | 0.53966 | 0.53542 | 0.53796 | 0.53581 | 0.53999
The quality indicator H measures the coverage of an algorithm. The table above shows that CMOIA outperforms all the other algorithms on Viennet3 and Srinivas, performs about the same on Osyczka2, and does less well than the others on Fonseca and Kursawe; taking all this into account, CMOIA is still dominant on indicator H. Comparing all these indicators, the conclusion is that, in terms of convergence, diversity, coverage and time cost, CMOIA does very well against PAES, PESAII, NSGAII and SPEAII, and is an applicable algorithm for general-purpose multi-objective optimization.
5 Conclusion
In this paper a novel artificial immune multi-objective optimization algorithm, based on a clustering clonal selection strategy, is proposed to make up for the shortcomings of evolutionary and immune algorithms.
Compared with existing research on clonal selection strategies, the algorithm relies on population diversity and so better maintains the balance between exploiting new optimal-solution regions and strengthening local search. The CMOIA mutation strategy lets the algorithm converge to the true Pareto front faster while avoiding premature convergence and diversity loss. Experimental results show that CMOIA has distinct advantages in convergence, diversity and distribution uniformity. On high-dimensional, complex problems CMOIA is still bound by poor performance, but in general CMOIA is a new member of the immune-algorithm family and can fully replace PAES, PESA and NSGAII for solving multi-objective optimization problems.
References
[1] Coello, C., Van Veldhuizen, D., Lamont, G.: Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic and Evolutionary Computation). Kluwer Academic Publishers, Dordrecht (2002)
[2] Deb, K., Agrawal, S., Pratap, A., Meyarivan, T.: A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: Parallel Problem Solving from Nature, Berlin (2000)
[3] Knowles, J.D., Corne, D.W.: Approximating the non-dominated front using the Pareto Archived Evolution Strategy. Evolutionary Computation 8, 149–172 (2000)
[4] Zitzler, E., Thiele, L.: An Evolutionary Algorithm for Multi-objective Optimization: The Strength Pareto Approach. Computer Engineering and Communication Networks Lab, Swiss Federal Institute of Technology, Zurich, Switzerland, Technical Report 43 (1998)
[5] Hart, E., Timmis, J.: Application Areas of AIS: The Past, the Present and the Future. In: International Conference on Artificial Immune Systems. Springer, Heidelberg (2005)
[6] Zheng, J.Q., Chen, Y.F., Zhang, W.: A survey of artificial immune applications. Artificial Intelligence Review (34), 19–34 (2010)
[7] Jerne, N.K.: Towards a Network Theory of the Immune System. Ann. Immunology 125C, 373–389 (1974)
[8] de Castro, L.N., Timmis, J.: An artificial immune network for multimodal function optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation, Honolulu, pp. 699–704 (May 2002)
[9] Hajela, P., Lee, J.: Constrained genetic search via schema adaption: an immune network solution. Structural Optimization 12(1), 11–15 (1996)
[10] de Castro, L.N., von Zuben, F.J.: Learning and optimization using the clonal selection principle. IEEE Transactions on Evolutionary Computation 6(3), 239–251 (2002)
[11] Wang, X.L., Mahfouf, M.: ACSAMO: an adaptive multiobjective optimization algorithm using the clonal selection principle. In: 2nd European Symposium on Nature-inspired Smart Information Systems, Puerto de la Cruz, Tenerife, Spain, November 29 – December 1 (2006)
[12] Tan, K.C., Goha, C.K., Mamuna, A., Eia, E.Z.: An evolutionary artificial immune system for multi-objective optimization. In: Artificial Intelligence Review. Springer, Heidelberg (2002)
A Novel Hybrid Grey-Time Series Filtering Model of RLG's Drift Data

Guo Wei¹, Jin Xun², Yu Wang¹, and Xingwu Long¹

¹ National University of Defense Technology, College of Opto-electronic Science and Engineering, Changsha, China
² China Satellite Maritime Tracking and Controlling Department, Jiangyin, China
[email protected]
Abstract. To suppress the random drift in the output data of a mechanically dithered RLG, a new method named Grey-Time series modeling is proposed, integrating the metabolic GM(1, 1) model with a time series model. A Kalman filter based on the resulting model filters the drift data, and the Allan variance is used to analyze the gyro data before and after modeling and filtering. The results show that this new method suppresses the RLG's random drift better than traditional time-series modeling followed by Kalman filtering; it effectively decreases every random error term of the RLG, with a particularly clear improvement in the quantization error. Keywords: RLG, Grey System, Time series analysis, Kalman filter, Allan variance.
1 Introduction
The ring laser gyro (RLG), one of the core components of the strapdown inertial navigation system (SINS), is widely used in cutting-edge military fields; however, the gyro's drift error, which accumulates over time, is the main factor limiting the accuracy of the navigation system [1][2]. Among the drift errors, the random term is the one that really affects RLG performance. It is generally considered slowly time-varying and weakly nonlinear, and because of the strong randomness of successive starts and interference from external environmental factors, the true signal of the RLG is not easy to capture. Reducing the random drift of the RLG through effective modeling and filtering schemes is therefore one of the keys to improving its accuracy. At present, a common method is to apply time-series analysis to the random drift of the RLG signal, describe it with an AR(n) or ARMA(m, n) model, and then design a Kalman filter based on that model [3][4].
The Chinese scholar Professor Deng Julong created grey system theory in the 1980s, focusing on uncertainty problems such as "small sample" and "poor information" that are hard to handle with probability statistics or fuzzy mathematics. Characterized by "modeling from a small amount of data", the grey system explores the evolution law of data through sequence operators according to information coverage [5]. Grey system theory holds that any random process is a grey quantity varying within a certain amplitude range and time zone, and that a random process is a grey process. By weakening the randomness of a sequence and exploring the law of data evolution, the grey model is an excellent dynamic system modeling method and has been widely used in economics, control and other fields [6][7][8]. Because of the working mechanism of the RLG and interference from the external environment, its output signal also contains random errors, which bring a degree of uncertainty to the signal. This paper therefore adopts grey-time series modeling, fusing the metabolic GM(1, 1) grey model with a time series model, applies a Kalman filter based on the drift-data model, and uses the Allan variance, widely acknowledged by IEEE, to analyze and compare the RLG drift data before and after modeling and filtering.
2 Grey-Time Series Model and Kalman Filter
A. GM(1, 1) Model
GM(1, 1) is a single-variable, first-order grey model. It establishes a continuous dynamic differential-equation model from a discrete data sequence by weakening the randomness of the series with a sequence operator, exploring the potential law of the data, and exchanging between difference and differential equations. Let the original data sequence be X(0) = {x(0)(1), x(0)(2), ..., x(0)(n)}. The first-order accumulating generation operator (1-AGO) transforms the original sequence into the accumulated generating sequence X(1) = {x(1)(1), x(1)(2), ..., x(1)(n)}, in which

x(1)(k) = Σ_{i=1}^{k} x(0)(i), k = 1, 2, ..., n.

The adjacent-mean generated sequence of X(1) is Z(1) = {z(1)(2), ..., z(1)(n)}, in which

z(1)(k) = ½ [x(1)(k) + x(1)(k − 1)], k = 2, 3, ..., n.

The GM(1, 1) model is x(0)(k) + a z(1)(k) = b, and its least-squares parameter estimate satisfies:

â = (BᵀB)⁻¹ Bᵀ Y    (1)
in which

Y = [x(0)(2), x(0)(3), ..., x(0)(n)]ᵀ,  B = [−z(1)(2) 1; −z(1)(3) 1; ...; −z(1)(n) 1],  â = [a, b]ᵀ,

where −a is the developmental factor and b is the amount of grey influence. The developmental factor reflects the developmental trend of the data, and the amount of grey influence reflects the relationship between the changes of the data. The whitenization equation of the GM(1, 1) model is

dx(1)/dt + a x(1) = b    (2)

Solving it yields the time-response sequence of the GM(1, 1) model, and restoring that gives the estimate of the original sequence:

x̂(0)(k + 1) = (1 − e^a)(x(0)(1) − b/a) e^{−ak}, k = 1, 2, ..., n    (3)
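Equations (1)–(3) translate directly into a short fitting routine; the sketch below assumes NumPy and a sequence whose estimated a is nonzero:

```python
import numpy as np

def gm11(x0):
    """Fit GM(1,1) to a short sequence x0 and return the restored estimates,
    following equations (1)-(3): 1-AGO, least squares for [a, b], then the
    restored time response."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                            # 1-AGO sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # adjacent-mean sequence
    B = np.column_stack([-z1, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # â = (BᵀB)⁻¹BᵀY
    k = np.arange(n)
    x0_hat = (1 - np.exp(a)) * (x0[0] - b / a) * np.exp(-a * k)
    x0_hat[0] = x0[0]                             # x̂(0)(1) = x(0)(1)
    return x0_hat, a, b
```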
B. Modeling and Estimation of Metabolic GM(1, 1)
As time goes on, the meaning of old data gradually fades. While the RLG continuously outputs new data, the old information is removed in time, which better reflects the real-time characteristics of the RLG drift. This constant metabolism also avoids the difficulties brought by ever-growing information, such as expanding computer memory and a growing modeling workload, and permits real-time modeling and estimation. GM(1, 1), the core model of grey system theory, can estimate its parameters from as few as four data points with a certain precision. To balance estimation accuracy against real-time behaviour, this paper performs one modeling-and-estimation step on every window of 10 gyro data points and stores the estimate of the 10th point, which contains the real-time features of the preceding 10 points; the data are then fed in metabolically, and modeling and estimation are repeated continuously. The process is shown in Fig. 1, where x(0)(t) is the real-time output of the RLG and x̂(0)(t) is the estimate after GM(1, 1) modeling. Whenever the RLG outputs a new datum, the oldest datum of the 10-element sequence is removed; with the new datum added, a new sequence forms and a new GM(1, 1) model is established, so the metabolic GM(1, 1) model is highly adaptive. To investigate the feasibility of applying the metabolic GM(1, 1) model to the estimation of RLG drift data, the authors used a small amount of RLG output data for modeling and estimation by the above method; the results are shown in Table 1. A sketch of the sliding-window loop follows.
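The metabolic loop is then a sliding window around the `gm11` sketch above; `window=10` follows the 10-point scheme described in the text:

```python
def metabolic_estimates(stream, window=10):
    """Slide a 10-point window over the gyro output: each new sample replaces
    the oldest one, GM(1,1) is refitted, and the estimate of the newest
    point is kept as the real-time estimate."""
    out = []
    for t in range(window, len(stream) + 1):
        x0_hat, _, _ = gm11(stream[t - window:t])
        out.append(x0_hat[-1])        # estimate x̂(0)(t) of the newest point
    return out
```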
The metabolic modeling and real-time estimation scheme of Fig. 1 is:
{x(0)(1), x(0)(2), ..., x(0)(10)} —GM(1,1)→ x̂(0)(10)
{x(0)(2), x(0)(3), ..., x(0)(11)} —GM(1,1)→ x̂(0)(11)
{x(0)(3), x(0)(4), ..., x(0)(12)} —GM(1,1)→ x̂(0)(12)
...
{x(0)(t − 9), x(0)(t − 8), ..., x(0)(t)} —GM(1,1)→ x̂(0)(t)
Fig. 1. The modeling and estimating process of metabolic GM(1, 1)

Table 1. The estimating results of the metabolic GM(1, 1) model for RLG's drift data
Original data x(0) (º/h)    | 6.9960   | 7.0752    | 7.1498   | 7.0917   | 7.0647
Estimated data x̂(0) (º/h)   | 7.0175   | 7.0375    | 7.0828   | 7.1010   | 7.0986
Relative error (%)          | 0.31     | 0.53      | 0.94     | 0.13     | 0.48
Developing coefficient −a   | 4.9×10⁻⁴ | 1.04×10⁻⁴ | 1.3×10⁻³ | 1.8×10⁻³ | 1.5×10⁻³
As can be seen from Table 1, the development coefficient −a of each window model is on the order of 10⁻³ or smaller, and the relative error of the estimates stays below 1%.

T(x, y) = K (O(x, y) − Sita)/Delta;  T(x, y) = 255 where K (O(x, y) − Sita)/Delta > 255;  T(x, y) = 0 where K (O(x, y) − Sita)/Delta < 0    (3)
where T(x, y) and O(x, y) represent the gray values of the target image and the original image at point (x, y), respectively. Sita ∈ [0, 255] is the chosen gray/chroma offset, and Delta ∈ [1, 255] is the gray/chroma range, which decides the gray/chroma hierarchy of the mined image. K is the space-stretching factor; here we let K = 255. A sketch of the transformation follows.
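Under the reconstruction of equation (3) above, the transformation is a few lines (NumPy assumed; `zadeh_x` is an illustrative name):

```python
import numpy as np

def zadeh_x(origin, sita, delta, K=255.0):
    """Zadeh-X transformation: stretch the [sita, sita + delta] gray band
    over the full range and clip the rest to 0 or 255."""
    t = K * (origin.astype(float) - sita) / delta
    return np.clip(t, 0, 255).astype(np.uint8)
```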
3 Image Quality Evaluation Parameters
To ensure that the image quality after mining is optimal, we first need to understand the association between human subjective vision and objective physical quantities. Many factors influence human subjective vision, and they can be described by suitable objective physical quantities: averaging gray, information entropy and averaging contrast. A high-quality image should have appropriate gray, enough information, the right contrast, a low noise level and a gray spectral distribution tending to uniformity.
A. Averaging Gray, AG
Human subjective vision judges an image or photograph to be bad if it is too bright or too dark, so image quality is linked to the proper brightness (gray) of the image [7]. The averaging gray (AG) of an image is calculated as in equation (4):

AG = Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} g(m, n) / (M · N)    (4)
where g(m, n) is the gray value of the pixel at point (m, n). The averaging gray of an image with a uniform-distribution histogram follows from equation (5):

AG = (1 / (M · N)) Σ_{i=0}^{255} i · (M · N / 256) = 127.5 (gray level)    (5)
InEn = −∑ p(i )Log2 p(i ) i =0
(6)
Scotopic Visual Image Mining Based on NR-IQAF
283
where p(i) denotes the probability of pixel distribution at the gray level(i). Let Log2p(i) =0 when p(i)=0. An image with uniform distribution histogram has the biggest information entropy as equation(7): 255
I nEn=-
1
Log ∑ i=0 256
2
1 = 8( bit ) 256
(7)
C. Image Averaging Contrast, AC Human perceiving different things by distinguishing the difference among these things. Without different gray-scale, an image has no contrast. Here the contrast discussed is the simultaneous contrast. There are a variety of definitions about contrast relate to the image processing [8]. Here the definition of the simultaneous contrast is adopted as equation(8) [9]:
Csimul = abs [ Lt − Lb ]
(8)
where Csimul means the simultaneous contrast. Lt and Lb denote the gray level of the target and the background respectively. The following formula is used to calculate the averaging contrast (AC) of an image. N −2 M −2
AC =
∑ ∑ Gray( x, y ) − Gray( x + 1, y ) y =0 x =0
( M − 1 )* ( N − 1 )
(9)
Where Gray(x, y) is the gray of the pixels at a point(x, y). M and N are the number of pixels in the directions of x and y respectively. The above three objective physical quantities are very important to obtain the optimal quality image. A NR-IQAF can be established by them.
4 Acquirement of Optimal Quality Image Consistency is the most important aspect in measuring the effective of image quality assessment method. If the image after mining is consistent with human visual system(HVS), then the optimal quality image could be obtained. The built of NRIQAF is to get the right image which is consistent with the results of visual perception. An image’s quality can be depicted by the above NR-IQA parameters AG, IE, AC. In addition, their product is a function with maximum value. The image with better quality should have right averaging gray under keeping enough information and
284
F. Tian et al.
appropriate contrast. Considering these factors, A Comprehensive Assessment Function (CAF) to obtain the optimal quality is designed as follow:
CAF = InEnα ∗ AC β ∗ NGD γ
(10)
where NGD denotes normalized gray distance (NGD). It can be obtained by the following formula (11):
NGD = ( 127.5 − dist( 127.5 − AG )) / 127.5
(11)
where dist (·) is distance operator. Scotopic visual image input Gray spectrum analysis Obtain the value of Delta and Sita Zadeh-X transformation Is CAF maximum?
Change the value of Delta
N
Y
Image output Fig. 1. Flowchart of image mining process
Here α, β, γ is the weight value of CAF, the image quality is different when the CAF with maximum under different weight value. If the optimal image is right the maximum of CAF under certain α, β, γ weight value. So an objective image quality evaluation function can be obtained. The result of statistical experiments is estimated and reported as α=1, β=1/4 and γ =1/2 respectively. Hence CAF can be also expressed as equation (12):
CAF = InEn ∗ AC 1 4 ∗ NGD 1 2
(12)
Scotopic Visual Image Mining Based on NR-IQAF
285
CAF is the function of theta and delta. The value of CAF and the image quality variation with the value of sita and delta as follows: • • •
For any sita, CAF has only a maximum value when sita increases gradually form 1 to 255. For any sita, the image quality first becomes better, and then becomes worse with delta gradually increases form 1 to 255. The maximum value of CAF corresponding to the relatively optimal quality image. The image quality will become worse with the increase of sita; hence the optimal quality image is the image that CAF has a maximum value when sita equal to 0.
Therefore, to obtain the optimal quality image, Let sita=0. The maximum value of CAF could be gotten by changing the value of delta from 1 to 255. Meanwhile, the optimal quality image can be obtained after mining[10].
5 Experimental Results The mining process is shown in Fig. 1. The results of the image mining are listed in table 1 and shown in Fig. 2 (a)~(f). Fig. 2(a) is the original image, (b) ~ (f) are the images mined by various values of deltas respectively. Each image of (a) ~ (f) consists of three parts, the above is the original image or the image mined, the middle is the gray spectrum corresponding to the above image and the below is the parameters of image quality assessment. As show in Table 1 and Fig.2, the image(c), whose CAF value is maximum (6.184), can be considered the optimal quality image. As shown in Table 2 and Fig. 3, the image quality mined is improved; the results of evaluation and the trends of human visual perception are consistent. Therefore, NR-IQAF(CAF) model is effective to improve the contrast resolution limitations of human visual under the scotopic vision, and achieve the optimal image effect. Table 1. The variations of evaluation parameters with delta Sita
0
0
0
0
0
0
Delta
255
3
6
9
12
15 3.059
InEn
3.154
1.710
2.688
2.947
3.027
AG
3.584
183.3
120.1
85.60
65.92
59.95
AC
0.945
35.34
31.55
23.96
18.69
15.23
NGD
0.028
0.563
0.942
0.671
0.517
0.423
CAF
0.521
3.128
6.184
5.342
4.526
3.932
Image
(a)
(b)
(c)
(d)
(e)
(f)
286
F. Tian et al.
(a) Sita=0, Delta=255, CAF=0.521
(b) Sita=0, Delta=3, CAF=3.128
(c) Sita=0,Delta=6, CAF=6.184
(d)Sita=0,Delta=9,CAF=5.342
(e) Sita=0,Delta=12,CAF=4.526
(f) Sita=0,Delta=15,CAF=3.932
Fig. 2. The variation of an image quality with Delta. Table 2. The evaluation parameters of original images and optimal images Sita Delta InEn AG AC NGD CAF Image
0 255 4.331 8.401 0.958 0.066 1.100 (a)
0 17 4.007 124.5 12.49 0.977 7.444 (b)
0 255 4.141 6.883 0.670 0.054 0.871 (c)
0 13 3.686 121.9 10.93 0.956 6.553 (d)
0 255 4.023 10.38 0.855 0.081 1.103 (e)
0 20 4.018 131.9 10.87 0.966 7.168 (f)
Fig. 3. Original images and optimal images: (a) Sita=0, Delta=255, CAF=0.521; (b) Sita=0, Delta=3, CAF=3.128; (c) Sita=0, Delta=6, CAF=0.871; (d) Sita=0, Delta=9, CAF=5.342; (e) Sita=0, Delta=12, CAF=1.103; (f) Sita=0, Delta=15, CAF=3.932. Note: images (a), (c) and (e) are the original images, and (b), (d) and (f) are the corresponding optimal images, respectively
6 Conclusions

The image quality assessment function (CAF) model built here can be used to assess the quality of images taken under scotopic conditions. The CAF possesses a convex shape, as shown in Fig. 2 and Fig. 3, and its maximum corresponds to the optimal quality image. The assessment results agree well with the results of subjective assessment by human vision. Therefore, the NR-IQAF (CAF) model proposed in this paper is feasible for obtaining optimal images.
References
[1] Li, J., Narayanan, R.M.: Integrated spectral and spatial information mining in remote sensing imagery. IEEE Transactions on Geoscience and Remote Sensing 42(3), 673–685 (2004)
[2] Zhang, J., Hsu, W., Lee, M.: Image mining: issues, frameworks, and techniques. In: Proceedings of the 2nd International Workshop on Multimedia Data Mining (MDM/KDD 2001), pp. 13–20 (August 2001)
[3] Wang, Z.F., Liu, Y.H., Xie, Z.X.: Measuring contrast resolution of human vision based on digital image processing. Journal of Biomedical Engineering 25(5), 998–1002 (2008)
[4] Wang, Z., Li, Q.: Video quality assessment using a statistical model of human visual speed perception. J. Opt. Soc. Amer. 24(12), B61–B69 (2007)
[5] Xie, Z.X., Wang, Z.F., Liu, Y.H.: The theory of gradually flattening gray spectrum. Chinese Journal of Medical Physics 23(6), 15–17 (2006)
[6] Xie, Z.X., Wang, Y., Wang, Z.F.: A method for image hiding and mining based on Zadeh transformation. Chinese Journal of Medical Physics 24(1), 13–15 (2007)
[7] Wang, Z., Simoncelli, E.P., Bovik, A.C.: Multi-scale structural similarity for image quality assessment. In: Proc. IEEE Asilomar Conf. Signals, Syst., Comput., pp. 1398–1402 (November 2003)
[8] Li, W.J., Zhang, Y., Dai, J.R.: Study on the Measurement Techniques of MRC in Visible Imaging System. Acta Metrologica Sinica 27(1), 32–35 (2006)
[9] Agostini, T., Galmonte, A.: A new effect of luminance gradient on achromatic simultaneous contrast. Psychonomic Bulletin and Review 9(2), 264–269 (2002)
[10] Wang, Z., Lu, L., Bovik, A.C.: Video quality assessment based on structural distortion measure. Signal Processing: Image Communication 19(2), 121–132 (2004)
[11] Albonico, A., Valenzise, G., Naccari, M., Tagliasacchi, M., Tubaro, S.: A reduced-reference video structural similarity metric based on no-reference estimation of channel-induced distortion. In: Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Taipei, TW (April 2009)
Extraction of Visual-Evoked Potentials in Rat Primary Visual Cortex Based on Independent Component Analysis

Zhizhong Wang, Hong Wan, Li Shi, and Xiaoke Niu

School of Electric Engineering, Zhengzhou University, Zhengzhou, China
[email protected]
Abstract. Visual-evoked potentials (VEPs) are important and meaningful for studying brain function and the information processing mechanisms of the visual system. In this paper, the characteristics of the electromyography (EMG), electro-oculogram (EOG), electroencephalogram (EEG) and VEPs of rats were first obtained in the time and frequency domains. Then a novel extraction algorithm based on independent component analysis (ICA) was proposed and applied to extract the VEPs from the mixture of these signals under stimulation with different colors. The correlation coefficient between the extracted and original signals is 0.9944. The experiments demonstrate that this new method can extract VEPs correctly and efficiently.

Keywords: Visual-evoked Potentials, Rat, Primary Visual Cortex, Independent Component Analysis.
1 Introduction

Visual-evoked potentials (VEPs) are an important way to assess the functional integrity of the visual pathways in the nervous system [1]. VEPs can be easily recorded from the visual cortex of an experimental animal as it responds to different visual stimuli. VEPs consist of electrical signals generated by the nervous system in response to a stimulus. There are several types of VEPs, including the flash evoked potential (FEP) and the pattern evoked potential. The FEP is produced by visual stimulation with a brief, diffuse flash of light, and is frequently used to evaluate neural activity and sensory processing in the visual system and to identify and characterize changes occurring in the retina and the occipital cortex [2,3]. VEPs can also support therapeutic approaches by monitoring neurophysiological changes related to diseases [4]. Pattern evoked potentials have been used to assess parametric characteristics of visual perception, detect neuronal irritability and diagnose neurological diseases [5]. With the development of brain-computer interfaces (BCI), the electrophysiological activity of the brain can be obtained from electrodes implanted in the cortex. At least five kinds of brain signals have been used for BCI so far: visual evoked potentials,
slow cortical potentials, cortical neuronal activity, β rhythms, and event-related potentials [6]. One of the main issues in designing a BCI is to find patterns of brain activity that can be easily exploited. One such pattern is the VEP, which can be directly stimulated by light and extracted from the brain's electrical activity by a number of methods [7], such as Fourier analysis [8], the Wavelet Transform [9] and independent component analysis (ICA) [10]. Continuous visual stimulation, however, can fatigue or tire the subject's visual system. Therefore, feature extraction from the EEG has been widely used in BCI. ICA is one of the most important methods for obtaining the VEPs in a single extraction. Many studies have indicated that ICA can separate VEPs, EEG, EOG and EMG from mixed signals. However, the VEPs cannot be recognized automatically by the algorithm [11] because of problems such as the irregular ordering and randomness of the separated components when ICA is applied to extract the VEPs. In this paper a novel improved VEP extraction algorithm based on ICA is proposed. First we recorded the signals from the primary visual cortex under red, green, blue and white LED stimulation. Second we analyzed the characteristics of the electromyography (EMG), the electro-oculogram (EOG), the electroencephalogram (EEG) and the VEPs of the rats in the time and frequency domains. Third we obtained the valuable components from the test signals by ICA. Finally we extracted the VEPs from the components according to the characteristics obtained above.
2 Materials and Methods

A. Animals

3-4 month-old male Sprague-Dawley rats weighing 270~300 g were used in the experiments: 8 of the total 16 rats for the VEP testing, and the rest for the EEG recording. The rats were born and raised in the Henan Experimental Animal Center of Zhengzhou University. The animals were maintained on 12 h light-dark cycles at a constant temperature of 23±2 °C, with free access to water and food. All aspects of the care and treatment of the laboratory animals were approved by the Animal Care Center of Zhengzhou University.

B. Surgery

Animals were anaesthetized with sodium pentobarbital (50 mg/kg, i.p.) and fixed on a stereotaxic apparatus. During the experiment, the rectal temperature was maintained at 37 °C using a thermostatically controlled heating pad. For EEG recording, a pair of Teflon-covered tungsten electrodes (0.4 mm diam./3 mm length each) were implanted into the left visual cortex (AP -7.3 mm from bregma, ML 1.5~3, DV 0.7 from the dura mater surface) and served as EEG electrodes. The EMG and EOG electrodes were made of stainless steel wires soldered onto stainless steel springs with a 0.5 mm outside diameter. The springs were sutured onto the nuchal muscles and lateral to the orbita, respectively. The reference electrode was placed on the skull far from the recording electrodes, and the ground electrodes were stitched onto the scalp of the rat. All wires were connected to a 12-pin connector and bonded onto the skull with a dental acrylic mixture.
For VEP testing, silver wires soldered to stainless-steel screws serving as recording electrodes (1.2 mm diam./3 mm length each) were screwed into the skull over the left visual cortex (7 mm posterior to bregma and 2.5 mm lateral to the midline). Similar screws placed over the ipsilateral and the contralateral frontal cortex served as reference and grounding electrodes, respectively. All electrodes were led to an 8-hole plastic cap and secured to the skull with dental acrylic. After surgery, the animals were individually housed in the experimental cage with free access to water and food, and allowed to recover for 1 week before EEG or VEP testing.

C. Visual Stimuli

VEPs were elicited by a flash visual stimulator designed by the Electrical Engineering School of Zhengzhou University. The luminance control device is composed of light emitting diodes (LEDs) of different colors: red, green, blue and white. The stimulator can provide multiform stimulation in different colors and intensities by changing the connection mode or the parameters of the stimulator. For the VEP experiment, four different color flashes (flash intensity: 5 cd·s/m², duration: 100 ms, frequency: 0.5 Hz) were used to stimulate the left eye of the rat in an order of white, red, green and blue, with a 5 min interval between different color flashes.
D. Electrical Recording

Rats were habituated to the experimental environment for at least 24 h prior to recording and connected to long recording leads which allowed them to move freely in a soundproof observation box. Four LEDs, parallel to the level of the rat's eyes, were fitted on each side of the box to deliver the flash stimuli. The EEG, EOG and EMG of the rats were continuously recorded with a Multi-channel Physiological Signal Acquiring System (model RM6240CD, Chengdu Instrument Co.) for at least 30 min (sampling frequency: 10 kHz, high-pass cutoff frequency: 0.8 Hz, low-pass cutoff frequency: 100 Hz) before the testing. Five days after implantation, rats were placed in the stereotaxic frame under anaesthesia (10% chloral hydrate, 35 mg/kg, intraperitoneal) for VEP recording. The pupils of the rat were dilated with a freshly prepared mixture of 0.75% tropicamide and 2.5% phenylephrine drops (Sigma Chemical Co., St. Louis, MO) prior to visual stimulation. Ten minutes later, VEPs were elicited by the flash stimulator located approximately 15 cm in front of the eyes of the rat. VEPs were calculated by averaging 60 electrical responses of extracellular field potentials over the 3 min stimulation period (trains of 100 ms visual stimuli, 0.03 Hz). Evoked signals were amplified (5000×), filtered between 3 Hz and 1 kHz, and collected in the Multi-channel Physiological Signal Acquiring System (model RM6240CD, Chengdu Instrument Co.).

E. Analysis of VEPs in Time and Frequency Domain

The power spectrum was used to analyze the frequency characteristics of the VEPs. The purpose of the VEP spectral analysis is to turn the time-varying amplitude waveform into
the frequency-varying power spectrum. Here, an AR (autoregressive) parameter model [12] was used to compute the power spectrum. The AR parameter model regards the current output x(n) as the weighted sum of the current excitatory input u(n) and the past p outputs, which can be represented as

$$x(n) = -\sum_{k=1}^{p} a_k x(n-k) + u(n)$$  (1)

where p is the order and $a_k$, $k = 1, 2, \ldots, p$, are the weight coefficients. When the parameters of the model have been obtained, the power spectrum is computed by

$$\hat{G}(jw) = \frac{\sigma^2 \Delta t}{\left|1 + \sum_{k=1}^{p} a_k \exp(-j 2\pi f k \Delta t)\right|^2}$$  (2)

where σ² is the variance of the excitatory white noise and Δt is the sampling interval. In the time domain, the key points of the VEPs are located based on the waveform characteristics.
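For concreteness, the following Python sketch estimates the AR coefficients by directly solving the Yule-Walker equations with the biased autocorrelation estimator and then evaluates the spectrum of Eq. (2); model-order selection and numerical safeguards, which the paper does not discuss, are omitted, so this is an illustrative implementation rather than the authors' exact one.

```python
import numpy as np

def ar_power_spectrum(x, p, dt, freqs):
    """AR(p) power spectrum estimate per Eqs. (1)-(2)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0..p].
    r = np.array([np.dot(x[:n - m], x[m:]) / n for m in range(p + 1)])
    # Yule-Walker system for the a_k of Eq. (1): R a = -r[1..p].
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, -r[1:])
    sigma2 = r[0] + np.dot(a, r[1:])       # driving white-noise variance
    # Eq. (2): G(f) = sigma^2 dt / |1 + sum_k a_k e^{-j 2 pi f k dt}|^2
    k = np.arange(1, p + 1)
    phase = np.exp(-2j * np.pi * np.outer(freqs, k) * dt)
    return sigma2 * dt / np.abs(1.0 + phase @ a) ** 2
```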
3 Results

A. VEPs and EEG Recorded from the Rat Primary Visual Cortex

To analyze the signal features of the VEPs, the evoked responses of the rat primary visual cortex were examined under the different color stimulations. VEPs were elicited by the different colors. The VEP waveforms were composed of a negative peak followed by a positive deflection, corresponding to the electrophysiological signals recorded in the visual cortex. Furthermore, the EEG, EMG and EOG were recorded in order to extract the signal features induced by light stimuli in the awake, freely moving condition. The energy distribution above 5 Hz of each signal is shown in Fig. 1.
Fig. 1. The energy distribution histogram above 5 Hz of the different signals (EOG, EMG, EEG, and VEPs under white, green, blue and red stimulation)
As Fig. 1 shows, compared with the VEPs, the EOG, EEG and EMG were all low-frequency signals, with less than 20% of their energy above 5 Hz. The energy distribution of the VEPs was mostly higher than that of the other signals, with over 40% of the energy above 5 Hz. This difference in energy distribution fully illustrates that the diversity between the VEPs and the EOG, EEG, etc. in the frequency domain is obvious. The VEPs elicited by blue, red, green and white light are shown in Fig. 2.
Fig. 2. B. Example of VEPs elicited by blue light; R. Example of VEPs elicited by red light; G. Example of VEPs elicited by green light; W. Example of VEPs elicited by white light.
As shown in Fig. 2, in contrast to the EEG, EOG and EMG, which were series of messy and irregular signals, the latency of the P2 peaks in the VEPs was very steady. The mean values of the P2 peak latency are shown in Table 1.

Table 1. The mean value of the P2 peak latency in our experiments

                                       Blue         Red          Green        White
Mean value of the P2 peak latency (s)  0.093±0.013  0.109±0.014  0.082±0.012  0.094±0.012
Most of the P2 peaks appeared between 0.04 s and 0.16 s, which was regarded as a temporal characteristic of the VEPs.

B. The Improved VEP Extraction Algorithm Based on ICA

The AR parameter model was used to estimate the power spectrum of each independent component separated by ICA. If the latency of the P2 peak in a component is consistent with the range 0.04-0.16 s, the component is regarded as the extracted VEPs. The procedure of the improved VEP extraction algorithm based on ICA is described as follows:
Step 1: Extract the independent components of the mixed signals, and then estimate the power spectrum of each independent component, denoted $P_i(f)$, $i = 1, 2, \ldots, n$.

Step 2: Calculate the energy distribution above 5 Hz of each component:

$$Pg_i = \frac{\int_{5}^{30} P_i(f)\,df}{\int_{1}^{30} P_i(f)\,df}, \quad i = 1, 2, \ldots, n$$  (3)

Step 3: Choose the maximum value among the $Pg_i$, denoted $Pg_j$.

Step 4: Locate the moment of the maximum peak in the j-th component, denoted $t_m$.

Step 5: If $t_m \in [0.04, 0.16]$, export the component and regard it as the expected VEPs. If $t_m \notin [0.04, 0.16]$, return to Step 1.

In this paper, EEG, EOG, EMG and VEPs were mixed in different ways. The mixed signals are shown in Fig. 3. We then extracted the VEPs by the proposed improved extraction algorithm based on ICA; the result is shown in Fig. 4. The correlation coefficient between the extracted and original signals is 0.9944, confirming the correctness of the results.
Fig. 3. The mixed signals
Fig. 4. The result of improved VEPs extraction algorithm based on ICA
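As a concrete illustration, the selection loop of Steps 1-5 can be sketched as below. The sketch assumes scikit-learn's FastICA as the separation stage and takes any power-spectrum estimator with a `psd(component, freqs)` interface (for instance a wrapper around the `ar_power_spectrum` sketch above); the recording specifics are not modeled.

```python
import numpy as np
from sklearn.decomposition import FastICA

def extract_vep(mixed, fs, psd, p2_window=(0.04, 0.16), max_iter=10):
    """Improved VEP extraction (Steps 1-5). `mixed` has shape
    (n_channels, n_samples); `psd(component, freqs)` returns the power
    spectrum of a component at the given frequencies."""
    freqs = np.linspace(1.0, 30.0, 512)
    for _ in range(max_iter):
        # Step 1: independent components of the mixed signals.
        comps = FastICA(n_components=mixed.shape[0]).fit_transform(mixed.T).T
        # Step 2: energy ratio above 5 Hz for each component, Eq. (3).
        ratios = []
        for c in comps:
            spec = psd(c, freqs)
            ratios.append(spec[freqs >= 5.0].sum() / spec.sum())
        # Step 3: component with the largest Pg_i.
        j = int(np.argmax(ratios))
        # Step 4: latency of the largest peak in the j-th component.
        t_m = np.argmax(np.abs(comps[j])) / fs
        # Step 5: accept if the peak lies in the P2 latency window.
        if p2_window[0] <= t_m <= p2_window[1]:
            return comps[j]
    return None

# Example usage with the AR estimator sketched earlier:
# vep = extract_vep(mixed, fs=10000,
#                   psd=lambda c, f: ar_power_spectrum(c, 10, 1/10000, f))
```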
4 Discussion

In this paper a novel improved VEP extraction algorithm based on ICA was proposed. First we recorded the signals from the primary visual cortex under red, green, blue and white LED stimulation in awake, freely moving rats. Second we analyzed the characteristics of the electromyography (EMG), the electro-oculogram (EOG), the electroencephalogram (EEG) and the VEPs of the rats in the time and frequency domains. Third we obtained the valuable components from the test signals by ICA. Finally we extracted the VEPs from the components according to the characteristics obtained above. The experiments demonstrated that this new method can extract VEPs precisely and efficiently; the correlation coefficient between the extracted and original signals is 0.9944. As mentioned in the introduction, methods extracting VEPs based on Fourier analysis always introduce some signal errors because they cannot achieve optimal resolution in the time and frequency domains simultaneously, which can be avoided by using the Wavelet Transform (WT). But the WT also has its own disadvantages: for example, noise at the same frequency cannot be eliminated, and it is difficult to choose the proper basis function, both of which are crucial but difficult issues when analyzing VEPs with WT. All of these problems can be solved by ICA. However, there are some problems when using ICA to extract the VEPs, such as the irregular ordering and randomness of the components. In order to extract the VEPs from the mixed signals recorded from the primary visual cortex, the ICA algorithm was combined with the AR parameter model to overcome these problems. Compared with Fourier-based methods and WT, our method has three advantages. First, the VEPs can be extracted effectively. Second, the signal-to-noise ratio is increased significantly. Third, every component of the extracted VEPs can be detected precisely with single-trial extraction. The improved extraction algorithm based on ICA can also be applied to other brain signals provided the signals are statistically independent.
References
[1] Regan, D.: Human Brain Electrophysiology: Evoked Potentials and Evoked Magnetic Fields in Science and Medicine. Elsevier Science Publishing Company, New York (1989)
[2] Parisi, V., Uccioli, L.: Visual electrophysiological responses in persons with type 1 diabetes. Diabetes Metab. Res. Rev. 17, 12–18 (2001)
[3] Zhou, X., Shu, H.L.: Analysis of visual acuity with VEP technology. Int. J. Ophthalmol. 7, 124–126 (2007)
[4] Guarino, I., Lopez, L., Fadda, A., Loizzo, A.: A Chronic Implant to Record Electroretinogram, Visual Evoked Potentials and Oscillatory Potentials in Awake, Freely Moving Rats for Pharmacological Studies. Neural Plasticity 11, 241–250 (2004)
[5] Boyes, W.K., Bercegeay, M., Ali, J.S., Krantz, T., McGee, J., Evans, M., Raymer, J.H., Bushnell, P.J., Simmons, J.E.: Dose-Based Duration Adjustments for the Effects of Inhaled Trichloroethylene on Rat Visual Function. Toxicological Sciences 76, 121–130 (2003)
[6] Piccione, F., Giorgi, F., Tonin, P., Priftis, K., Giove, S., Silvoni, S., Palmas, G., Beverina, F.: P300-based brain computer interface: Reliability and performance in healthy and paralysed participants. Clinical Neurophysiology 117(3), 531–537 (2006)
[7] Middendorf, M., McMillan, G., Calhoun, G., Jones, K.S.: Brain-computer interfaces based on the steady-state visual-evoked response. IEEE Trans. Rehab. Eng. 8(2), 211–214 (2000)
[8] Duhamel, P., Vetterli, M.: Fast Fourier transforms: a tutorial review and a state of the art. Signal Processing 19, 259–299 (1990)
[9] Quian Quiroga, R., Sakowitz, O.W., Basar, E., Schürmann, M.: Wavelet Transform in the analysis of the frequency composition of evoked potentials. Brain Research Protocols 8, 16–24 (2001)
[10] Lee, P.-L., Hsieh, J.-C., Wu, C.-H., Shyu, K.-K., Chen, S.-S., Yeh, T.-C., Wu, Y.-T.: The Brain Computer Interface Using Flash Visual Evoked Potential and Independent Component Analysis. Annals of Biomedical Engineering 34, 1641–1654 (2006)
[11] Barros, A.K., Vigário, R., Jousmaki, V., Ohnishi, N.: Extraction of event-related signals from multichannel bioelectrical measurements. IEEE Transactions on Biomedical Engineering 47(5), 583–588 (2000)
[12] Faust, O., Acharya, R.U., Allen, A.R., Lin, C.M.: Analysis of EEG signals during epileptic and alcoholic states using AR modeling techniques. IRBM 29, 44–52 (2008)
A Novel Feature Extraction Method of Toothprint on Tongue in Traditional Chinese Medicine

Dongxue Wang, Hongzhi Zhang, Jianfeng Li, Yanlai Li, and David Zhang

School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
[email protected]
Abstract. The toothprint on the tongue is an important objective index for revealing the human sub-health state, and thus the extraction and description of toothprints is of great significance in clinical applications. Current toothprint detection methods are based only on concave points. These methods, however, depend heavily on the accuracy of the segmentation of the tongue from the background, and have difficulty detecting weak toothprints. In this paper, we propose an effective method to make toothprint detection more robust and accurate. The proposed method first extracts both the curvature feature and the color feature around the contour of the tongue, and then detects the toothprints using these two kinds of features. Experimental results show that the proposed method is promising for the detection of both obvious and weak toothprints.

Keywords: tongue diagnosis, toothprint, curvature feature, color feature, contour.
1 Introduction

A tongue with toothprints on its contour is a kind of abnormal tongue appearance, and is of great diagnostic significance in clinical applications [1, 2]. Compared with other diagnostic features, the toothprint is easy to identify in the practice of tongue diagnosis, and is robust against external factors such as food, medicine, and so on. In traditional Chinese medicine (TCM), the toothprint on the tongue is an important objective index for revealing the human sub-health state, and it has attracted many researchers' attention in TCM tongue diagnosis research. With progress in medical science, image processing, and pattern recognition, research on the toothprint on the tongue is moving toward microcosmic, quantitative, and objective analysis. Zhong et al. [3] proposed two feature extraction methods for the detection of the toothprint on the tongue, based on convex closure structure and curve fitting, respectively. These two methods, however, depend seriously on the accuracy of the segmentation of the tongue from the background, and have difficulty detecting weak toothprints. Moreover, these methods also perform poorly when detecting toothprints with serious deformation.
In this paper, we first investigate the characteristics of curvature and color on the contour of the tongue with toothprints, and propose a novel feature extraction method of the toothprint on tongue by using these two features. Then we carry out several experiments to show the effectiveness of the proposed method. The remainder of the paper is organized as follows. Section 2 describes the factors and definition of the toothprint on tongue. Section 3 describes the feature extraction method of the toothprint on tongue using the feature of curvature on the contour of the tongue. Section 4 presents the feature extraction method using the feature of color on the contour of the tongue. Section 5 uses the two methods together to make a comprehensive analysis to the toothprint on tongue. Section 6 provides the experimental results and Section 7 offers the conclusion of this paper.
2 Factors and Definition of Toothprint on Tongue

In TCM, the formation of the toothprint on the tongue is attributed to splenic asthenia: the spleen cannot transmit and distribute the fluids, the fluids are retained in the tongue, the tongue becomes big and fills the alveolus, and finally the contour of the tongue is pressed and the toothprint is left. The substance of the toothprint on the tongue is conjunctive tissue hyperplasia and edema caused by obstruction of the circulation of blood or lymph in the tongue [4]. On one hand, because of the edema of the tongue, it belongs to deficiency of spleen yang. On the other hand, because of the laxity of the tongue's muscle, it belongs to deficiency of spleen qi. From the observation of numerous tongue images with toothprints from the tongue image data set of the Bio-computing Research Center of Harbin Institute of Technology, we summarize the characteristics of the toothprint on the tongue in the following three aspects [5]:
• Location: toothprints are usually found on the two sides of the tongue, sometimes on the tongue tip;
• Shape: a toothprint has obvious prints of tooth pressing, and usually exhibits a dentate contour;
• Color: the toothprint region usually has a dull-red color, whereas the non-toothprint region has a white color.
Fig. 1. Tongue with toothprints
Fig. 1 shows two typical tongue images with toothprints, where obvious prints of tooth pressing can be observed on the two sides of the tongues.
Fig. 2. A contour curve of the left side of a tongue image

Fig. 3. The ci' values along the tongue contour

Fig. 4. The R component values along the tongue contour
3 Curvature Feature Extraction of Toothprint on Tongue
The contour points of the tongue are picked out from tongue images using the snake algorithm developed by Bio-computing Research Centre of Harbin Institute of
Technology [6, 7]. Using the snake algorithm we obtain 120 contour points from a tongue image, and the curve through these points makes up the contour of the tongue. We assume that there is no toothprint on the root of the tongue, and that toothprints appear with higher probability, and with more obvious features, on the two sides of the tongue than on the tongue tip. So we focus on the contour information on the two sides of the tongue. Fig. 2 shows a contour curve of the left side of a tongue with toothprints. The x-axis and y-axis represent the vertical and horizontal coordinates of the pixels of the tongue contour, respectively. If there is no toothprint on the tongue, the curve should be concave; that is to say, the gradients of the curve are monotonically increasing. If there are toothprints, the corresponding regions of the curve may be convex (see the red-framed regions in Fig. 2). We therefore utilize this obvious characteristic to detect toothprint candidates. In order to make the estimation of toothprint candidates more reliable, we assume that for any non-toothprint segment of the curve the second derivative should be lower than 0. If the average curvature of a region is higher than the predefined threshold ThresholdC (empirically determined as 0.002), we call this region a toothprint candidate. The discrete curvature c_i of the point i is defined as follows:
$$c_i = \left(\frac{\Delta x_i}{\Delta s_i} - \frac{\Delta x_{i+t}}{\Delta s_{i+t}}\right)^2 + \left(\frac{\Delta y_i}{\Delta s_i} - \frac{\Delta y_{i+t}}{\Delta s_{i+t}}\right)^2$$  (1)

where $\Delta x_i = x_i - x_{i-t}$, $\Delta y_i = y_i - y_{i-t}$, $\Delta s_i = (\Delta x_i^2 + \Delta y_i^2)^{1/2}$, and t is the step parameter with t = 15. Then we attach the sign of the second derivative y'' to the curvature $c_i$:

$$c_i' = \mathrm{sgn}(y'')\left[\left(\frac{\Delta x_i}{\Delta s_i} - \frac{\Delta x_{i+t}}{\Delta s_{i+t}}\right)^2 + \left(\frac{\Delta y_i}{\Delta s_i} - \frac{\Delta y_{i+t}}{\Delta s_{i+t}}\right)^2\right]$$  (2)
The result of ci’ obtained is shown in Fig. 4. The x-axis represents the vertical coordinates of the tongue contour in the tongue image, and the y-axis represents the curvature value with the sign. Because there is some noise on the contour curve and the contour curve is not absolutely smooth, the concavity and convexity of the contour curve may be unstably. Through the experiments, we found that the toothprints in tongue images are commonly between 12 and 76 pixels. So we set the threshold Threshold1 and Threshold2 12 and 76, respectively, representing the maximum and the minimum of the length of the toothprint candidate. Thus we could obtain four toothprint candidates in Fig. 3 (marked as the red - framed regions), and the positions coincide with the positions of the convex curve segments in Fig. 2.
Fig. 5. Toothprints obtained using only the curvature or color feature: (a) toothprints obtained using the curvature feature; (b) toothprints obtained using the color feature; (c) curvature value of the left side of the tongue; (d) R value of the left side of the tongue; (e) curvature value of the right side of the tongue; (f) R value of the right side of the tongue
4 Color Feature Extraction of Toothprint on Tongue

From the characteristics of the toothprint described in Section 2, we find that the color of the non-toothprint region tends to be white because of edema, while the color of the toothprint region tends to be dull-red because of the obstruction of blood flow. Considering that red is the principal color of tongues, we use only the R component to represent the color of the pixels along the tongue contour. In the proposed method, we use a diamond region which is 5 pixels away from the tongue contour and contains 25 pixels.
By averaging the R component over the diamond region, we obtain a description of the color of the pixels along the tongue contour, which is valuable for toothprint detection. As shown in Fig. 4, the values of the R component become lower when the diamond region is close to a toothprint. On the contrary, the values become higher when the diamond region is far from a toothprint. In Fig. 4, the x-axis represents the vertical coordinate of the diamond region in the tongue image, and the y-axis represents the average value of the R component of the pixels in the diamond region. The R component value is lower in the toothprint region, resulting in a fluctuation. Thus we can utilize this characteristic to select toothprint candidates: if a curve segment is in a fluctuation region, it is more probable that this segment is a toothprint; on the contrary, if a curve segment is not in a fluctuation region, the probability is much lower. That is to say, if the difference between the R component values at the two endpoints of a curve segment and the minimum of the R component values within it is higher than the threshold ThresholdR (with the optimal value 8), and the length of the curve segment is within the range [Threshold1, Threshold2], we take this curve segment as a toothprint candidate. As shown in Fig. 4, we obtain two toothprint candidates (marked as the red-framed regions), whose positions coincide with the positions of the first two convex curve segments in Fig. 2.
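The color pass admits a similarly small sketch. The brute-force scan below flags curve segments whose endpoint-to-minimum R-component drop exceeds ThresholdR and whose length lies in [Threshold1, Threshold2]; the merging of overlapping detections, which the paper does not detail, is left out.

```python
import numpy as np

def color_candidates(r_profile, thr_r=8.0, min_len=12, max_len=76):
    """Flag segments of the averaged R-component profile along the
    contour where the drop from the endpoints to the interior minimum
    exceeds ThresholdR (a brute-force illustrative sketch)."""
    r = np.asarray(r_profile, float)
    cands = []
    for s in range(len(r) - min_len):
        for e in range(s + min_len, min(s + max_len, len(r) - 1) + 1):
            drop = min(r[s], r[e]) - r[s + 1:e].min()
            if drop > thr_r:
                cands.append((s, e))
    return cands
```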
5 Comprehensive Analysis of Toothprint on Tongue

Because the extracted contour curve of the tongue may not be smooth, and the lighting and angle may cause some interference during capture of the tongue image, it is not sufficient to judge a toothprint using only the curvature feature or only the color feature; doing so is likely to produce errors, as shown in Fig. 5. So we use these two features together. Among the toothprint candidate regions obtained from the curvature value, if the average curvature value of a candidate is higher than a high threshold ThresholdC2 (the best value is 0.03 by experiment), that is, if the average curvature value with the sign of the second derivative is lower than the threshold -ThresholdC2, we directly set it as a toothprint. Otherwise, if, among the toothprint candidate regions obtained from the R component value, there is a candidate whose position is close to one obtained from the curvature value, then we set the latter as a toothprint; otherwise we do not. The result obtained is shown in Fig. 6. In Fig. 6, the average curvature values of the second and third toothprint candidates in (a) are higher than the threshold ThresholdC2, so we directly set them as toothprints (as shown in (e)). Since there is no toothprint candidate in (c) close to the first and fourth toothprint candidates in (a), we do not set them as toothprints. There is no toothprint candidate in (b) whose average curvature value is higher than the threshold ThresholdC2, and only the first toothprint candidate in (d) is close to the second one in (b); thus we set the second toothprint candidate in (b) as a toothprint, and the others are not toothprints (as shown in (f)).
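The decision rule just described amounts to a few lines of code. In the sketch below, each curvature candidate is a (start, end, average curvature) triple and each color candidate a (start, end) pair, as produced by the sketches in Sections 3 and 4; the closeness tolerance `max_gap` is an assumed parameter, since the text does not quantify "close".

```python
def fuse_candidates(curv_cands, color_cands, thr_c2=0.03, max_gap=10):
    """Comprehensive analysis of Section 5: accept a curvature candidate
    as a toothprint if its average curvature exceeds ThresholdC2, or if
    some color candidate lies close to it."""
    toothprints = []
    for start, end, avg_c in curv_cands:
        strong = avg_c > thr_c2                         # direct acceptance
        supported = any(abs(start - cs) <= max_gap and abs(end - ce) <= max_gap
                        for cs, ce in color_cands)      # color support
        if strong or supported:
            toothprints.append((start, end))
    return toothprints
```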
6 Experimental Results and Discussion

To evaluate the proposed method, we built a data set of 200 tongue images with toothprints, which includes 534 obvious toothprints and 329 weak toothprints. The results are shown in Table 1. From Table 1, one can see that 507 of the 534 obvious toothprints and 273 of the 329 weak toothprints are correctly detected, with correct rates of 94.9% and 83.0%, respectively.
Fig. 6. The result of the comprehensive analysis: (a) curvature value of the left side of the tongue; (b) curvature value of the right side of the tongue; (c) R value of the left side of the tongue; (d) R value of the right side of the tongue; (e) the result of toothprints of the left side of the tongue; (f) the result of toothprints of the right side of the tongue; (g) the result of toothprints of the tongue
The results above indicate that the proposed method achieves satisfactory detection performance not only for obvious toothprints but also for weak toothprints. Among the toothprints that are not correctly detected, most failures can be attributed to the curvature feature of these toothprints not being very obvious. Fig. 7 shows one tongue image with toothprints that are not correctly detected. As shown in Fig. 7, there is no obvious concave region on the tongue contour, so the toothprints must be detected using only the color feature. However, using only the color feature does not achieve satisfactory results for general toothprints. Thus it is worthwhile to further investigate more effective methods that simultaneously exploit the contour and color features.

Table 1. Result of toothprint detection
Type of toothprints    Actual number    Detected number    Correct rate (%)
obvious toothprints    534              507                94.9
weak toothprints       329              273                83.0
total                  863              780                90.4
Fig. 7. A tongue with toothprints which are not correctly detected
7 Conclusions

The detection of the toothprint on the tongue is a very important component of tongue diagnosis. In order to identify the toothprint on the tongue automatically and accurately, we propose a novel method that analyzes the tongue image using two kinds of features along the contour of the tongue: the curvature feature and the color feature.
The effectiveness of the proposed method has been experimentally demonstrated. The method can act as an assistant diagnostic tool, be applied in TCM clinical practice, and be useful in future computational tongue diagnosis research.

Acknowledgment. This work is partially supported by the NSFC under Contract Nos. 60902099, 60871033 and 61001037, and the National Science & Technology Major Project of China under Contract No. 2008ZXJ09004-035.
References
[1] Shen, Z.: Reference standard of the diagnosis of deficiency syndrome in Traditional Chinese Medicine. Journal of Integrated Traditional and Western Medicine 3, 117 (1983)
[2] Medicine Bureau of Health Ministry: Guiding principles in the clinical research of spleen deficiency treatment by Chinese Medicine. Acta Medica Sinica 3, 71–72 (October 1988)
[3] Zhong, S., Xie, Z., Cai, Q.: Researches on tooth-marked tongue recognition method. Journal of Hanshan Normal University 29, 34–38 (2008)
[4] Li, M.: Clinical researching situation of the formation mechanism of tongues with toothprints and the correlation with diseases. Hunan Journal of Traditional Chinese Medicine 21, 80–82 (2005)
[5] Gong, K.: Research on feature extraction and classification of tongue shape and tooth-marked tongue in TCM tongue diagnosis. Master Thesis, Harbin Institute of Technology, pp. 31–41 (June 2008)
[6] Pang, B., Wang, K.: Time-adaptive Snakes for tongue segmentation. In: Proc. 1st Int'l Conference on Image and Graphics (ICIG 2000), Tianjin, China, pp. 228–331 (August 2000)
[7] Pang, B., Wang, K., Zhang, D.: On automated tongue image segmentation in Chinese Medicine. In: Proc. Int'l Conf. Pattern Recognition, pp. 616–619 (2002)
Stability and Bifurcation of an Epidemic Model with Saturated Treatment Function*

Jin Gao1 and Min Zhao2,**

1 College of Mathematics and Information Science, Wenzhou University, Wenzhou, China
2 College of Life and Environmental Science, Wenzhou University, Wenzhou, China
[email protected]
Abstract. In this paper, we study an epidemic model with nonlinear incidence and treatment. The model is described and analyzed by elementary means, and a limited resource for treatment is introduced in order to understand the effect of the treatment capacity. It is shown that a backward bifurcation takes place if the capacity is small. The dynamical behaviors of the SIR epidemic model with nonlinear incidence and treatment are also studied.

Keywords: Epidemic model, Backward bifurcation, Stability analysis, Treatment.
1 Introduction

The classical Kermack-McKendrick epidemic system models the situation in which the infectivity of an individual depends on the time since the individual became infective [1]. In the literature, the saturated incidence rate has been expressed in various forms, such as $\beta IS$, $\frac{kI^2 S}{1+\alpha I^2}$ and $\frac{kIS}{1+\alpha I^2}$ ([2,3,4,5,6,7,8,9]).
In the paper of Wang and Ruan [10], a model with treatment was studied, and several authors have considered different kinds of treatment ([11, 12, 13, 14]). Giving patients timely treatment will reduce the number of infective patients; we must try our best to avoid delays in treatment by improving medical technology and investing in more medicines, beds and so on.

* This work was supported by the National Natural Science Foundation of China (Grant No. 30970305).
** Corresponding author.

In compartment models for the transmission of communicable diseases there is usually a basic reproductive number R0, representing the mean number of secondary infections caused by a single infective introduced into a susceptible population [15]. Papers [14, 15, 16, 17, 18] found backward bifurcations due to social groups with different susceptibilities, pair formation, nonlinear incidences, and age structures in epidemic models. Backward bifurcation is important for obtaining thresholds for the control of diseases. We restrict our attention to the following model:
$$\begin{aligned} \frac{dS}{dt} &= A - dS - \frac{\lambda SI}{1+\alpha I} + \mu R \\ \frac{dI}{dt} &= \frac{\lambda SI}{1+\alpha I} - (d+r)I - \frac{\varepsilon I}{1+kI} \\ \frac{dR}{dt} &= rI + \frac{\varepsilon I}{1+kI} - (d+\mu)R \end{aligned}$$  (1)
where S(t), I(t), R(t) denote the numbers of susceptible, infective, and recovered individuals at time t, respectively; A is the recruitment rate of the population; d is the natural death rate of the population; μ is the rate at which recovered individuals lose immunity and return to the susceptible class; and r is the natural recovery rate of the infective individuals. The term εI/(1+kI) is the removal rate of infective individuals due to the treatment sites. The organization of this paper is as follows. In the following section, we present the equilibrium analysis, the mathematical analysis of the model, the bifurcation, and some numerical simulation. A brief discussion is given in Section 3.
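Model (1) is also straightforward to explore numerically before the analysis. The sketch below integrates it with SciPy; the parameter values are illustrative placeholders, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, A, d, lam, alpha, mu, r, eps, k):
    """Right-hand side of system (1)."""
    S, I, R = y
    incidence = lam * S * I / (1 + alpha * I)   # saturated incidence
    treatment = eps * I / (1 + k * I)           # saturated treatment
    return [A - d * S - incidence + mu * R,
            incidence - (d + r) * I - treatment,
            r * I + treatment - (d + mu) * R]

# Illustrative trajectory (placeholder parameters, not fitted values):
# (A, d, lam, alpha, mu, r, eps, k)
params = (0.2, 0.1, 0.8, 1.0, 0.05, 0.1, 0.3, 2.0)
sol = solve_ivp(model, (0.0, 400.0), [1.5, 0.1, 0.0], args=params,
                max_step=1.0)
S_end, I_end, R_end = sol.y[:, -1]
```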
2 Main Results

A. Equilibrium Analysis

We consider the existence of equilibria of model (1). For any values of the parameters, model (1) always has a disease-free equilibrium E0 = (A/d, 0, 0). In order to find the positive equilibria, set

$$\begin{aligned} A - dS - \frac{\lambda SI}{1+\alpha I} + \mu R &= 0 \\ \frac{\lambda SI}{1+\alpha I} - (d+r)I - \frac{\varepsilon I}{1+kI} &= 0 \\ rI + \frac{\varepsilon I}{1+kI} - (d+\mu)R &= 0 \end{aligned}$$  (2)
Summing the three equations of (2) yields

$$S = \frac{A}{d} - I - R$$  (3)
We eliminate R using the third equation of (2), substitute it into (3), and then substitute S into the second equation of (2) to obtain

$$aI^2 + bI + c = 0,$$  (4)

where

$$\begin{aligned} a &= k[\alpha(d\mu + rd + r\mu) + \lambda(d + \mu + r) + d^2\alpha], \\ b &= (d+\varepsilon+r)\lambda + d^2(\alpha+k) + \mu\alpha(d+\varepsilon) + r(k+\alpha)(d+\mu) + d(\varepsilon\alpha + dk) - \lambda kA(1+\mu), \\ c &= (\mu+d)(\varepsilon+r+d) - \lambda A(1+\mu). \end{aligned}$$  (5)

Define the basic reproduction number as follows:

$$R_0 = \frac{\lambda A(1+\mu)}{(\mu+d)(\varepsilon+r+d)}.$$  (6)
It means the average number of new infections caused by a single infected individual in a wholly susceptible population [17]. From Eq. (4), we can see the following:

(1) If k = 0, Eq. (4) is a linear equation with the unique solution I = -c/b, which is positive if and only if R0 > 1 (i.e. c < 0).

(2) If k > 0:
(i) If R0 > 1 (i.e. c < 0), then there is a unique nonzero solution of (4) and thus there is a unique endemic equilibrium.
(ii) If R0 < 1 (i.e. c > 0), then, provided b < 0 and b² - 4ac > 0, there are two positive equilibria

$$I_1 = \frac{-b - \sqrt{b^2 - 4ac}}{2a}, \qquad I_2 = \frac{-b + \sqrt{b^2 - 4ac}}{2a}.$$  (7)
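The equilibrium structure is easy to check numerically from Eqs. (4)-(7). A small sketch, assuming the coefficient expressions (5) exactly as printed:

```python
import numpy as np

def equilibria(A, d, lam, alpha, mu, r, eps, k):
    """Return R0 of Eq. (6) and the positive roots of Eq. (4), each
    completed to a (S, I, R) state via Eqs. (2)-(3)."""
    R0 = lam * A * (1 + mu) / ((mu + d) * (eps + r + d))
    a = k * (alpha * (d * mu + r * d + r * mu) + lam * (d + mu + r)
             + d ** 2 * alpha)
    b = ((d + eps + r) * lam + d ** 2 * (alpha + k) + mu * alpha * (d + eps)
         + r * (k + alpha) * (d + mu) + d * (eps * alpha + d * k)
         - lam * k * A * (1 + mu))
    c = (mu + d) * (eps + r + d) - lam * A * (1 + mu)
    roots = np.roots([a, b, c]) if a != 0 else np.array([-c / b])
    states = []
    for I in roots[np.isreal(roots)].real:
        if I > 0:
            R = (r * I + eps * I / (1 + k * I)) / (d + mu)
            states.append((A / d - I - R, I, R))
    return R0, states
```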
B. Mathematical Analysis

To study the dynamics of model (1), we first present a lemma.

Lemma 1. The plane S + I + R = A/d is an invariant manifold of system (1), which is attracting in the first quadrant.

Proof. Denote N(t) = S(t) + I(t) + R(t). Summing the three equations of (1), we have

$$\frac{dN}{dt} = A - dN$$  (8)
It is clear that N(t) = A/d is a solution of Eq. (8), and for any N(t0) ≥ 0 the general solution of Eq. (8) is

$$N(t) = \frac{1}{d}\left[A - (A - dN(t_0))\exp(-d(t-t_0))\right].$$

When t tends to infinity, N(t) → A/d, from which we get the conclusion. This means that the limit set of system (1) is on the plane S + I + R = A/d. Thus, we focus on the reduced system

$$\begin{aligned} \frac{dI}{dt} &= \frac{\lambda I}{1+\alpha I}\left(\frac{A}{d} - I - R\right) - (d+r)I - \frac{\varepsilon I}{1+kI} \triangleq P(I,R), \\ \frac{dR}{dt} &= rI + \frac{\varepsilon I}{1+kI} - (d+\mu)R \triangleq Q(I,R). \end{aligned}$$  (9)
Theorem 1. If α > k, system (9) does not have nontrivial periodic orbits.
Proof. In system (9), taking into account the practical significance, we know that I > 0 and R > 0. Take the Dulac function

$$D(I,R) = \frac{1+\alpha I}{\lambda I}.$$

We have

$$\frac{\partial(DP)}{\partial I} + \frac{\partial(DQ)}{\partial R} = -1 - \frac{\alpha(d+r)}{\lambda} - \frac{(1+\alpha I)(d+\mu)}{\lambda I} - \frac{\varepsilon(\alpha-k)}{\lambda(1+kI)^2}.$$

If α > k, then ∂(DP)/∂I + ∂(DQ)/∂R < 0, and the conclusion follows from the Dulac criterion.