Lecture Notes in Electrical Engineering Volume 754
Series Editors
Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Naples, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Bijaya Ketan Panigrahi, Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, Munich, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, Humanoids and Intelligent Systems Laboratory, Karlsruhe Institute for Technology, Karlsruhe, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Università di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain
Sandra Hirche, Department of Electrical Engineering and Information Science, Technische Universität München, Munich, Germany
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Stanford University, Stanford, CA, USA
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martín, Departament d’Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Sebastian Möller, Quality and Usability Laboratory, TU Berlin, Berlin, Germany
Subhas Mukhopadhyay, School of Engineering & Advanced Technology, Massey University, Palmerston North, Manawatu-Wanganui, New Zealand
Cun-Zheng Ning, Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Graduate School of Informatics, Kyoto University, Kyoto, Japan
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi “Roma Tre”, Rome, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical & Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, Universität Stuttgart, Stuttgart, Germany
Germano Veiga, Campus da FEUP, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Beijing, China
Junjie James Zhang, Charlotte, NC, USA
The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering - quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and application areas of electrical engineering. The series covers classical and emerging topics concerning:
• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS
For general information about this book series, comments or suggestions, please contact [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country:
China: Jasmine Dou, Editor ([email protected])
India, Japan, Rest of Asia: Swati Meherishi, Editorial Director ([email protected])
Southeast Asia, Australia, New Zealand: Ramesh Nath Premnath, Editor ([email protected])
USA, Canada: Michael Luby, Senior Editor ([email protected])
All other countries: Leontina Di Cecco, Senior Editor ([email protected])
** This series is indexed by EI Compendex and Scopus databases. **
More information about this series at http://www.springer.com/series/7818
Pengfei Zhao · Zhuangzhi Ye · Min Xu · Li Yang · Linghao Zhang · Rengao Zhu Editors
Advances in Graphic Communication, Printing and Packaging Technology and Materials Proceedings of 2020 11th China Academic Conference on Printing and Packaging
Editors Pengfei Zhao China Academy of Printing Technology Beijing, China
Zhuangzhi Ye China Academy of Printing Technology Beijing, China
Min Xu China Academy of Printing Technology Beijing, China
Li Yang China Academy of Printing Technology Beijing, China
Linghao Zhang China Academy of Printing Technology Beijing, China
Rengao Zhu China Academy of Printing Technology Beijing, China
ISSN 1876-1100    ISSN 1876-1119 (electronic)
Lecture Notes in Electrical Engineering
ISBN 978-981-16-0502-4    ISBN 978-981-16-0503-1 (eBook)
https://doi.org/10.1007/978-981-16-0503-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Preface
“2020 11th China Academic Conference on Printing and Packaging and Forum of Technology Integration Innovation Development”, one of the series “China Academic Conference on Printing and Packaging” mainly hosted by China Academy of Printing Technology, was held on November 26–29, 2020, in Guangzhou, China. The conference was co-hosted by China Academy of Printing Technology and South China University of Technology, and co-organized by the State Key Laboratory of Pulp and Paper Engineering and the School of Light Industry and Engineering of South China University of Technology, the Key Laboratory of Environmental Protection Technology for Printing Industry and the Editorial Department of Digital Printing of China Academy of Printing Technology, and the Printing Technology Professional Committee of the Chinese Society for Imaging Science and Technology. To date, the “China Academic Conference on Printing and Packaging (CACPP)” and its series of events have been held for eleven sessions since the first session in 2010 and have become the most influential academic exchange activities in the printing and packaging fields in China.

In recent years, the printing and packaging industry of China has maintained stable growth. By the end of 2019, the total output value of China’s printing industry had topped RMB 1.302 trillion, with 97,200 enterprises and 2.583 million employees. The number of large-scale enterprises (with annual operating income of over RMB 50 million) has exceeded 4,000, and their total industrial output value has exceeded RMB 800 billion. Intensive and sustainable development is taking shape, with the industrial structure being optimized, efficiency improving rapidly and new drivers of growth accelerating. Scientific research innovation is playing a leading role in the development of the industry and its enterprises. A multi-level research force composed of universities, research institutions, listed companies and innovative enterprises has been formed, committed to new technologies, new materials and new equipment, and many research results have emerged. The “China Academic Conference on Printing and Packaging and Forum of Technology Integration Innovation Development” sets up a platform for the exchange of scientific research and innovation
achievements, and at the same time explores new ideas and new approaches for the industrial application of scientific research achievements, so as to promote comprehensive collaborative innovation across industry, education, research and application, create an innovation chain for the printing and packaging industry, and realize an all-round improvement of innovation capacity in the field of printing and packaging.

Despite the hardships of the COVID-19 period, this year we specially invited Prof. Kefu Chen, academician of the Chinese Academy of Engineering, and Prof. Honglong Ning, team representative of academician Yong Cao, both from South China University of Technology, together with Prof. Liangpei Zhang of Wuhan University and researcher Zhi Tang of the Wangxuan Institute of Computer Technology, Peking University, to make keynote speeches on topics including cleaner production and water pollution in the pulp and paper industry, fully printed color OLED displays, blockchain data security, and image recognition and artificial intelligence. We also invited Prof. Xiaohui Wang of South China University of Technology, Dr. Haihua Zhou from the Institute of Chemistry, Chinese Academy of Sciences, Dr. Mengmeng Wang of Jiangnan University, and Prof. Kun Hu of Beijing Institute of Graphic Communication to give invited speeches as outstanding young scholars. At the same time, four panel discussion meetings were organized for oral reports and academic exchanges on topics such as color science and image processing technology, digital media technology, mechatronics and information engineering technology, printing innovative materials, and packaging innovative materials and testing technology.

The conference received 202 papers this year, among which 118 were selected for publication in Lecture Notes in Electrical Engineering (LNEE) (ISSN: 1876-1100) by Springer.

We gratefully acknowledge all the organizations that offered great support for the conference: the Printing Technology Association of China, Chinese Society for Imaging Science and Technology, School of Printing and Packaging Engineering of Beijing Institute of Graphic Communication, Faculty of Printing, Packaging Engineering and Digital Media Technology of Xi’an University of Technology, School of Media and Design of Hangzhou Dianzi University, School of Light Industry Science and Engineering of Qilu University of Technology (Shandong Academy of Science), School of Printing and Packaging of Wuhan University, State Key Laboratory of Modern Optical Instrumentation of Zhejiang University, College of Light Industry Science and Engineering of Tianjin University of Science and Technology, College of Bioresources Chemical and Materials Engineering of Shaanxi University of Science and Technology, Light Industry College of Harbin University of Commerce, School of Packaging and Material Engineering of Hunan University of Technology, School of Light Industry and Chemical Engineering of Dalian Polytechnic University, College of Materials Science and Engineering of Beijing University of Chemical Technology, School of Mechanical Engineering of Tianjin University of Commerce, School of Food and Chemical Engineering of Beijing Technology and Business University, School of Materials and
Chemical Engineering of Henan University of Engineering, College of Communication and Art Design of University of Shanghai for Science and Technology, College of Material Science and Engineering of Zhengzhou University, College of Engineering of Qufu Normal University, School of Mechanical Engineering of Jiangnan University, Shanghai Publishing and Printing College, School of Media and Communication of Shenzhen Polytechnic, and College of Communications of National Taiwan University of Arts.

We would like to express our gratitude to the 57 experts from China, Germany, Britain, America, and Japan for reviewing and recommending papers for the conference with strict standards. We also thank Springer for offering us an international platform for publishing. We look forward to our reunion at the 2021 12th China Academic Conference on Printing and Packaging.

October 2020
Edited by
Key Laboratory of Environmental Protection Technology for Printing Industry
China Academy of Printing Technology
Beijing, China
Committees
Sponsors
China Academy of Printing Technology
South China University of Technology
Support
The Printing Technology Association of China
Chinese Society for Imaging Science and Technology
Organizers
State Key Laboratory of Pulp and Paper Engineering, South China University of Technology
School of Light Industry and Engineering, South China University of Technology
Key Laboratory of Environmental Protection Technology for Printing Industry, China Academy of Printing Technology
Editorial Department of Digital Printing, China Academy of Printing Technology
Printing Technology Professional Committee, Chinese Society for Imaging Science and Technology
Co-sponsors
School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication
Faculty of Printing, Packaging Engineering and Digital Media Technology, Xi’an University of Technology
School of Media and Design, Hangzhou Dianzi University
School of Light Industry Science and Engineering, Qilu University of Technology, Shandong Academy of Science
School of Printing and Packaging, Wuhan University
State Key Laboratory of Modern Optical Instrumentation, Zhejiang University
College of Light Industry Science and Engineering, Tianjin University of Science and Technology
College of Bioresources Chemical and Materials Engineering, Shaanxi University of Science and Technology
Light Industry College, Harbin University of Commerce
School of Packaging and Material Engineering, Hunan University of Technology
School of Light Industry and Chemical Engineering, Dalian Polytechnic University
College of Materials Science and Engineering, Beijing University of Chemical Technology
School of Mechanical Engineering, Tianjin University of Commerce
School of Food and Chemical Engineering, Beijing Technology and Business University
School of Materials and Chemical Engineering, Henan University of Engineering
College of Communication and Art Design, University of Shanghai for Science and Technology
College of Material Science and Engineering, Zhengzhou University
College of Engineering, Qufu Normal University
School of Mechanical Engineering, Jiangnan University
Shanghai Publishing and Printing College
School of Media and Communication, Shenzhen Polytechnic
College of Communications, National Taiwan University of Arts
Conference Organizing Committee

Chairmen
Min Zhu, Vice President of South China University of Technology
Pengfei Zhao, President of China Academy of Printing Technology
Vice Chairmen
Hao Zou, Member of the Standing Committee of CPC, Executive Vice Minister of Publicity Department of South China University of Technology
Zhuangzhi Ye, Vice President of China Academy of Printing Technology
Xinghua Jiang, Deputy Director of the Science and Technology Department of South China University of Technology
Honorary Chairmen
Desen Qu, Former President of Beijing Institute of Graphic Communication
Tingliang Chu, Former President of China Academy of Printing Technology, Executive Vice Chairman of the Printing Technology Association of China
Wencai Xu, Former Vice President of Beijing Institute of Graphic Communication, Vice Chairman of the Printing Technology Association of China
Jialing Pu, Chairman of Chinese Society for Imaging Science and Technology
Secretary-Generals
Guangxue Chen, Professor of School of Light Industry and Engineering, South China University of Technology
Min Xu, Director of Industry Development Research Department, China Academy of Printing Technology
Conference Academic Committee

Chairman
Deren Li, Academician of Chinese Academy of Sciences, Academician of Chinese Academy of Engineering, Academician of International Eurasian Academy of Sciences, Academician of New York Academy of Sciences, Academician of International Academy of Astronautics, Professor of Wuhan University, Director of State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing
Vice Chairmen
Kefu Chen, Academician of Chinese Academy of Engineering, Professor of South China University of Technology, Director of Academic Committee of State Key Laboratory of Pulp and Paper Engineering
Jinping Qu, Academician of Chinese Academy of Engineering, Professor of South China University of Technology, Director of National Engineering Research Center of Novel Equipment for Polymer Processing, Director of the Key Laboratory for Polymer Processing Engineering of Ministry of Education
Guangnan Ni, Academician of Chinese Academy of Engineering, Researcher of Institute of Computing Technology, Chinese Academy of Sciences, Board Chairperson of Chinese Information Processing Society of China
Songlin Zhuang, Academician of Chinese Academy of Engineering, Director and Professor of School of Optical-Electrical and Computer Engineering of University of Shanghai for Science and Technology, Optical Expert
Commissioners
Alexander Roos, Professor of Stuttgart Media University
Alexander Tsyganenko, Professor of Media Industry Academy
Anita Teleman, Doctor, Research Manager of Printing Solutions at the Research Institute Innventia, Sweden
Aran Hansuebsai, Doctor, Associate Professor of Chulalongkorn University
Benjamin Lee, Professor, Director of Department of Technology, California State University
Bin Yang, Doctor, Researcher of Peking University
Changqing Fang, Doctor, Professor of Xi’an University of Technology
Congjun Cao, Doctor, Professor of Xi’an University of Technology
Dongming Lu, Doctor, Professor of Zhejiang University
Eduard Neufeld, Doctor, Managing Director of Fogra Research Institute for Media Technologies
Fuqiang Chu, Doctor, Professor of Qilu University of Technology, Shandong Academy of Science
Guangxue Chen, Doctor, Professor of South China University of Technology
Guodong Liu, Doctor, Associate Professor of Shaanxi University of Science and Technology
Guorong Cao, Doctor, Professor of Beijing Institute of Graphic Communication
Haigen Yao, Professor of Shanghai Publishing and Printing College
Haiqiao Wang, Doctor, Professor of Beijing University of Chemical Technology
Haiyan Zhang, Professor of Xi’an University of Technology
Haoxue Liu, Doctor, Professor of Beijing Institute of Graphic Communication
Hong Chen, Professor of Beijing Institute of Graphic Communication
Houbin Li, Doctor, Professor of Wuhan University
Howard E. Vogl, Visiting Professor of Rochester Institute of Technology
Jeroen Guinée, Doctor, Associate Professor of Leiden University
Jialing Pu, Doctor, Professor of Beijing Institute of Graphic Communication
Jimei Wu, Doctor, Professor of Xi’an University of Technology
Jinda Cai, Professor of University of Shanghai for Science and Technology
Jinlin Xu, Professor of Xi’an University of Technology
Jinzhou Chen, Professor of Zhengzhou University
Jon Yngve Hardeberg, Doctor, Professor of Norwegian University of Science and Technology
Jose Maria Lagaron, Doctor, Professor, Leader and Founder of the Group Novel Materials and Nanotechnology for Food Related Applications at the Institute of Agrochemistry and Food Technology of the Spanish Council for Scientific Research
Junfei Tian, Doctor, Professor of South China University of Technology
Kaida Xiao, Doctor, University Academic Fellow of University of Leeds
Kamal Chopra, Professor, President of All India Federation of Master Printers
Lijie Wang, Professor of Shenzhen Polytechnic
Lijing Zhang, Doctor, Overseas Distinguished Expert of Taishan Scholar, Distinguished Professor of Qilu University of Technology, Shandong Academy of Science
Lixin Lu, Doctor, Professor of Jiangnan University
Luciano Piergiovanni, Professor of the Department of Food, Environmental and Nutritional Sciences, Faculty of Agricultural and Food Sciences, University of Milan
Luhai Li, Doctor, Professor of Beijing Institute of Graphic Communication
M. Ronnier Luo, Doctor, Professor of University of Leeds, Director of Color and Image Science Center
Maohai Lin, Doctor, Associate Professor of Qilu University of Technology, Shandong Academy of Science
Martin Dreher, Doctor, Director and General Manager of DFTA Technology Center, Professor of the Hochschule der Medien (HdM)
Martti Toivakka, Doctor, Professor and Head of the Laboratory of Paper Coating and Converting at Åbo Akademi University
Mathias Hinkelmann, Doctor, Professor of Stuttgart Media University
Min Huang, Doctor, Professor of Beijing Institute of Graphic Communication
Ngamtip Poovarodom, Doctor, Associate Professor of Department of Packaging and Materials Technology of Faculty of Agro-Industry, Kasetsart University
Patrick Gane, Doctor, Professor of Printing Technology at the School of Chemical Technology, Aalto University
Pengfei Zhao, Senior Engineer of China Academy of Printing Technology
Phil Green, Professor of Colour Imaging, London College of Communication
Philipp Urban, Head of Emmy-Noether Research Group, Institute of Printing Science and Technology, Technische Universität Darmstadt
Pierre Pienaar, Professor, President of the World Packaging Organization
Punan Xie, Professor of Beijing Institute of Graphic Communication
Qiang Wang, Doctor, Professor of Hangzhou Dianzi University
Roger D. Hersch, Doctor, Professor of Computer Science and Head of the Peripheral Systems Laboratory at the Ecole Polytechnique Fédérale de Lausanne (EPFL)
Shaozhong Cao, Doctor, Professor of Beijing Institute of Graphic Communication
Shijun Kuang, Consultant Engineer of China National Pulp and Paper Research Institute, Chief Engineer
Shisheng Zhou, Doctor, Professor of Xi’an University of Technology
Songhua He, Doctor, Professor of Shenzhen Polytechnic
Stephen W. Bigger, Doctor, Professor, Vice President of Faculty of Engineering and Science of Victoria University
Takashi Kitamura, Doctor, Professor of Graduate School of Advanced Integration Science, Chiba University
Thomas Hoffmann-Walbeck, Professor of the Faculty of Print and Media Technology, Stuttgart University of Media
Tingliang Chu, Professorial Senior Engineer of China Academy of Printing Technology
Wanbin Pan, Doctor, Associate Professor of Hangzhou Dianzi University
Wei Wang, Doctor, Professor of Beijing Institute of Graphic Communication
Wei Wu, Doctor, Professor of Wuhan University
Wencai Xu, Professor of Beijing Institute of Graphic Communication
Xianfu Wei, Doctor, Professor of Beijing Institute of Graphic Communication
Xiaochun Li, Doctor, Professor of Henan University of Engineering
Xiaoxia Wan, Doctor, Professor of Wuhan University
Xiufeng Ma, Professor of Qufu Normal University
Xiulan Xin, Doctor, Professor of Beijing Technology and Business University
Xiuping Zhao, Professor of Tianjin University of Science and Technology
Xuesong Mei, Doctor, Professor of Xi’an Jiaotong University
Yadong Yin, Doctor, Professor of University of California, Riverside
Yan Wei, Doctor, Professor of Tsinghua University, Beijing Institute of Graphic Communication
Yanan Liu, Doctor, Engineer of China Academy of Printing Technology
Yanlin Song, Doctor, Professor of Institute of Chemistry, Chinese Academy of Sciences
Yingquan Zou, Doctor, Professor of Beijing Normal University
Yiqing Wang, Doctor, Professor of Xi’an Jiaotong University
Yuansheng Qi, Doctor, Professor of Beijing Institute of Graphic Communication
Yuemin Teng, Professor of Shanghai Publishing and Printing College
Yunfei Zhong, Professor of Hunan University of Technology
Yungcheng Hsieh, Doctor, Professor of National Taiwan University of Arts
Yunzhi Chen, Doctor, Professor of Tianjin University of Science and Technology
Yuri Andreev, Head of Moscow State University of Printing Arts
Zhen Liu, Professor of University of Shanghai for Science and Technology
Zhengning Tang, Doctor, Professor of Jiangnan University
Zhicheng Sun, Doctor, Professor of Beijing Institute of Graphic Communication
Zhigeng Pan, Doctor, Professor of Hangzhou Normal University
Zhihui Sun, Professor of Harbin University of Commerce
Zhijian Li, Doctor, Professor of Shaanxi University of Science and Technology
Zhijiang Li, Doctor, Associate Professor of Wuhan University
Reviewers
Beiqing Huang, Associate Professor of Beijing Institute of Graphic Communication
Congjun Cao, Doctor, Professor of Xi’an University of Technology
Dehong Xie, Doctor, Teacher of Nanjing Forestry University
Erhu Zhang, Doctor, Professor of Xi’an University of Technology
Fazhong Zhang, Doctor, Engineer of China Academy of Printing Technology
Fuqiang Chu, Doctor, Professor of Qilu University of Technology, Shandong Academy of Science
Guangxue Chen, Doctor, Professor of South China University of Technology
Guodong Liu, Doctor, Associate Professor of Shaanxi University of Science and Technology
Guowei Zhang, Doctor, Teacher of Tianjin University of Science and Technology
Haiqiao Wang, Doctor, Professor of Beijing University of Chemical Technology
Haiwen Wang, Doctor, Associate Professor of Quzhou University
Haoxue Liu, Doctor, Professor of Beijing Institute of Graphic Communication
Hongwei Xu, Doctor, Associate Professor of Xi’an University of Technology
Jiangping Yuan, Doctor, Research Assistant of Institute for Visualization and Data Analysis, Karlsruhe Institute of Technology, Karlsruhe, Germany
Jing Liang, Lecturer of Dalian Polytechnic University
Junfei Tian, Doctor, Professor of South China University of Technology
Junyan Huang, Master, Professor of Dalian Polytechnic University
Leihong Zhang, Doctor, Teacher of University of Shanghai for Science and Technology
Lijiang Huo, Doctor, Professor of Dalian Polytechnic University
Linghua Guo, Doctor, Shaanxi University of Science and Technology
Liqiang Huang, Doctor, Professor of Tianjin University of Science and Technology
Luhai Li, Doctor, Professor of Beijing Institute of Graphic Communication
Maohai Lin, Doctor, Associate Professor of Qilu University of Technology, Shandong Academy of Science
Mengmeng Wang, Doctor, Associate Professor of Jiangnan University
Min Huang, Doctor, Professor of Beijing Institute of Graphic Communication
Ming Mu, Master, Engineer of China Academy of Printing Technology
Ming Zhu, Doctor, Associate Professor of Henan Institute of Engineering
Minghui He, Doctor, Associate Researcher of South China University of Technology
Qiang Liu, Doctor, Associate Professor of Wuhan University
Qiang Wang, Doctor, Professor of Hangzhou Dianzi University
Rubai Luo, Doctor, Teacher of Xi’an University of Technology
Ruizhi Shi, Doctor, Professor of Information Engineering University
Shaozhong Cao, Doctor, Professor of Beijing Institute of Graphic Communication
Shi Li, Doctor, Teacher of Hangzhou Dianzi University
Shuangyang Li, Doctor, Associate Professor of Beijing Technology and Business University
Xianfu Wei, Doctor, Professor of Beijing Institute of Graphic Communication
Xiaozhou Li, Doctor, Associate Professor of Qilu University of Technology, Shandong Academy of Science
Xing Zhou, Doctor, Associate Professor of Xi’an University of Technology
Xinghai Liu, Doctor, Associate Professor of Wuhan University
Xiulan Xin, Doctor, Professor of Beijing Technology and Business University
Xue Gong, Doctor, Harbin University of Commerce
Yalin Miu, Associate Professor of Xi’an University of Technology
Yanan Liu, Doctor, Engineer of China Academy of Printing Technology
Yaohua Yi, Doctor, Associate Professor of Wuhan University
Yi Fang, Doctor, Teacher of Beijing Institute of Graphic Communication
Yuanlin Zheng, Doctor, Associate Professor of Xi’an University of Technology
Yuansheng Qi, Doctor, Professor of Beijing Institute of Graphic Communication
Yufeng Wang, Doctor, Senior Engineer of Tianjin University of Science and Technology
Yunzhi Chen, Doctor, Professor of Tianjin University of Science and Technology
Zhanjun Si, Professor of Tianjin University of Science and Technology
Zhen Huang, Doctor, Professor of Tianjin University of Commerce
Zhengjian Zhang, Doctor, Associate Professor of Tianjin University of Science and Technology
Zhicheng Sun, Doctor, Associate Professor of Beijing Institute of Graphic Communication
Zhijiang Li, Doctor, Associate Professor of Wuhan University
Zhuang Liu, Doctor, Associate Professor of Harbin University of Commerce
Zhuofei Xu, Doctor, Teacher of Xi’an University of Technology
Contents
Color Science and Technology

Development of a Uniform Colour Space for Evaluating Colour Differences Under HDR Viewing Conditions . . . . 3
Qiang Xu and Ming Ronnier Luo
Effects of Lighting Method on Atmosphere and Color Perception in Children’s Clothing Stores . . . . 8
Ze Liu, Jing Liang, Zhisheng Wang, Nianyu Zou, and Yusheng Lian
Influencing Factors of Color Prediction of Cellular Neugebauer Model . . . . 15
Huimin Jiao and Ming Zhu
Preferred Skin Tones on Displays Under Different Ambient Lighting Conditions . . . . 21
Mingkai Cao and Ming Ronnier Luo
Models to Predict Naturalness of Images Including Skin Colours . . . . 30
Dalin Tian, Rui Peng, Chunkai Chang, and Ming Ronnier Luo
Research of Natural Lip Colour and Preferred Lip Colour . . . . 36
Liyi Zhang, Jiapei Chen, and Mengmeng Wang
Spectrum Optimization for Surgical Lighting by Enhancing Visual Clarity . . . . 41
Lihao Xu and Ming Ronnier Luo
Observer Metamerism for Assessing Neutrality on Displays . . . . 47
Hui Fan, Yu Hu, and Ming Ronnier Luo
A Study of Incomplete Chromatic Adaptation of Display Under Different Ambient Lightings . . . . 54
Rui Peng, Qiyan Zhai, and Ming Ronnier Luo
New Colour Appearance Scales Under High Dynamic Range Conditions . . . . 60
Xi Lv and Ming Ronnier Luo
Testing the Performance for Unrelated Colour Appearance Models . . . . 66
Keyu Shi, Changjun Li, Cheng Gao, and Ming Ronnier Luo
Modelling Simultaneous Contrast Effect on Chroma Based on CAM16 Colour Appearance Model . . . . 71
Yuechen Zhu and Ming Ronnier Luo
Developing HDR Tone Mapping Operators Based on Uniform Colour Spaces . . . . 77
Imran Mehmood and Ming Ronnier Luo
Effects of Cone Response Function on Multispectral Data Compression . . . . 82
Qian Cao, Xiaozhou Li, and Junfeng Li
Study on Colorization Method of Grayscale Image . . . . 89
Siyuan Zhang, Ruze Zhuang, Jing Cao, Siwei Lu, and Qiang Wang
Research on Optimal Samples Selection Method for Digital Camera-Based Spectral Reflectance Estimation . . . . 96
Jinxing Liang, Jia Chen, and Xinrong Hu
Viewed Lightness Prediction Model of Textile Compound Multifilament . . . . 103
Yujuan Wang, Jun Wang, Jiangping Yuan, Jieni Tian, and Guangxue Chen
A Representing Method for Color Rendering Performance of Mobile Phone Screen . . . . 110
Yanfang Xu, Zengyun Yan, and Biqian Zhang
Color Measurement and Analysis of Unpacked Jujube in Shelf Life . . . . 116
Danyang Yao, Jiangping Yuan, Xiangyang Xu, and Guangxue Chen
Color Assessment of Paper-Based Color 3D Prints Using Layer-Specific Color 3D Test Charts . . . . 123
Jiangping Yuan, Jieni Tian, Danyang Yao, and Guangxue Chen
Ink Color Matching Method Based on 3D Gamut Visualization . . . . 132
Xu Zhang and Maohai Lin
Study on Color Gamut of UV Ink-Jet Printing Based on Different Wood Substrates . . . . 138
Yongqing Liu and Maohai Lin
Preparation of Photonic Crystals by Rapid Coating Method . . . . 146
Jie Pan, Bin Yang, Yonghui Xi, Min Huang, and Xiu Li
Image Processing Technology

Study on Image Enhancement Method for Low Illumination Target Recognition . . . . 155
Hongli Liu, Jing Cao, Xueying Wang, Anning Yang, and Qiang Wang
Analysis of the Influence of Color Space Selection on Color Transfer . . . . 162
Wenqian Yu, Liqin Cao, Zhijiang Li, and Shengqing Xia
A Detection Method for Definition Quality of Printed Matter . . . . 170
Yanfang Xu, Haiping Wang, and Biqian Zhang
Neutral Color Correction Algorithm for Color Transfer Between Multicolor Images . . . . 176
Ane Zou, Xuming Shen, Xiandou Zhang, and Zizhao Wu
Application of Image Processing in Improving the Precision of a 3D Model . . . . 183
Wei Zi, Xiaoyang Fang, and Runqing Su
A Face Changing Animation Framework Based on Landmark Mapping . . . . 193
Run Zhao, Yehong Chen, and Won-Sook Lee
Research on Embedding Environment of Digital Watermark Resistant to Print-Scan and Print-Camera (PSPC) . . . . 199
Meng Mu, Linghua Guo, Nan Li, Cejian Ma, Jindou Xu, and Tingwen Ding
Ultrasound Image Preprocessing Method for Deep-Learning-Based Fatty Liver Diagnosis . . . . 204
Yongle Hu, Yusheng Lian, Yanxing Liu, Yang Jin, Xiaojie Hu, Zilong Liu, Zixin Lin, Lingbo Li, Guannan He, and Yiyang Chu
Classification of Map Printing Defects Based on Convolutional Neural Network . . . . 213
Siyang Liu and Ruizhi Shi

Digital Media Technology

Users’ Emotion Recognition in Virtual Scenes Based on Physiological Features . . . . 223
Yang Chen, Lichan Zhang, Anda Yong, Yan Zhu, and Qiang Wang
Research on Map Knowledge Visualization Based on Augmented Reality . . . . 230
Shenghui Li, Shuang Wang, and Chun Cheng
Implementation of “AR + Books” Production Process Based on Virtual Simulation Experiment . . . . 238
Ruiling He and Zhanjun Si
A Motion-Driven Large Scene Interactive System for Media Art . . . . 244
Yuanzhuo Yuan, Zizhao Wu, Jinyi Qiao, and Ping Yang
Design and Manufacture of Weight-Loss Meal Replacement Packaging Box Based on AR Technology . . . . 250
Jin Wang, Zhanjun Si, and Yongqin Zhang
Visual Information Transfer Design of Packaging Product Instructions Based on Unity Platform . . . . 257
Lu Qin and Zhanjun Si
User Experience Research and Analysis Based on Usability Testing Methods . . . . 263
Yaqi Li and Caifeng Liu

Printing Engineering Technology

Simulation Study on Water-Based Ink Transfer in Gravure Printing . . . . 271
Li’e Ma, Chaowei Xu, Shanhui Liu, Hongli Xu, Zhengyang Guo, and Jimei Wu
Influence of Digital Printing Paper on Color Reproduction . . . . 277
Xinting Wang, Zhijie Li, Jun Chen, and Yun Chen
Research on the Quality of Digital Printing . . . . 284
Yanbin Wei, Tian Zhang, and Yonghong Qi
Chromaticity Constancy in Printing Output Equipment . . . . 293
Zhangying Jin, Liang Zheng, and Quanhui Tian
Study on the Hi-Fi Replication Technology of the Digital Printing of Silk Scroll Painting . . . . 301
Jing Cao, Yan Zhu, Ruze Zhuang, and Weiyan Zhang
Research on Encryption and Anti-counterfeiting Combining One Thing a Code and Variable Digital Printing . . . . 309
Kangmei Tan, Wenying Li, Pingping Liu, Ting Chen, and Wei Li
Study of High-Definition (HD)—Flexo Halftoning Based on the Region Growth and Segmentation Algorithm (RGSA) . . . . 316
Ping Gu, Quanhui Tian, Yan Liu, and Yun Gong
Development and Application of Inkjet Printing Quantum Dots . . . . 321
Ying Pan, Lulu Xue, Cheng Xu, Tianyi Ding, Yinjie Chen, and Luhai Li
Design of Portable Express Surface Sheet Printer Based on Inkjet Printing . . . . 329
Jinxuan Jiang, Ningyu Xiao, Liyuan Peng, Shanshan Wang, Zijie Cui, Siyuan Chen, and Yunfei Zhong
Printability of Ink for On-Demand Inkjet Printing on Different Paper . . . . 334
Jilei Chao, Ruizhi Shi, Yanling Guo, Fuqiang Chu, and Qian Deng
Analysis of Factors Affecting Flexographic Plate-Making Technology Based on Surface Imaging Stereolithography . . . . 340
Liang Zheng
Printing Time Optimization of Large-Size Powder-Based 3D Printing . . . . 346
Chen Chen, Lei Wang, Xiaochun Wang, Taotao Xiong, and Guangxue Chen
Research on the Anisotropy of Mechanical Properties of Powder-Based 3D Printed Models After Dipping . . . . 352
Xiaochun Wang, Ziyu Hua, and Guangxue Chen
Development Status and Trends of Several Major Flexible Printed Electronics . . . . 358
Jundong Wang, Kun Hu, Weiwei Sun, Haibo Wang, Guijuan Yang, Kunlan Wang, LinXinzheng Guo, Fan Zhang, Guangqin Lin, HanPing Yi, Yen Wei, and Luhai Li
Design and Manufacture of Programmable Electrochromic Digital Tube . . . . 366
Zhenyu Tan, Huiqin Yin, Jie Xu, Futian Jiang, Ting Chen, and Wei Li
Printed Flexible Circuit Based on PVA . . . . 373
Guangping Liu, Fuqiang Chu, Pingping Li, and Jiazhen Sun
Fabrication of Three-Dimensional Graphene Electrodes by Direct-Write Printing . . . . 378
Pingping Li, Fuqiang Chu, Guangping Liu, and Jiazhen Sun
Research on the Morphology and Adhesion Performance of Screen Printed Antenna for RFID Tag . . . . 384
Yi Fang and Yuyan Cai
Study on Preparation of Counter Electrodes for Trans-flexible Dye-Sensitized Solar Cells . . . . 390
Jinyue Wen, Zhicheng Sun, Furong Li, Qingqing Zhang, Weixia Wu, Ruping Liu, and Xuan Ma
Application of Biomass Material in Fused Deposition Molding . . . . 395
Sheng Li, Huimin Lu, Guangxue Chen, and Junfei Tian

Packaging Engineering Technology

A Review on Food Packaging Safety . . . . 405
Xiaofang Wan, Qian He, and Guangxue Chen
Extension of Cheese Shelf-Life Using Chitosan-Procyanidin Composite Film with High Antioxidant Activity . . . . 410
Li Zhang, Hui Liu, Yunzhi Chen, Zhengjian Zhang, and Xiaojun Ma
Development of a Time-Temperature Indicator Based on Lipase and Glycerol Trioleate . . . . 419
Lin Wang and Chuang Liu
Peeling Strength of Solventless Lamination Films for Retort Packaging and Its Bonding Mechanism . . . . 425
Cunxia Hou, Le Li, Dan Yang, Jiazi Shi, Fei Li, and Yabo Fu
Research on Properties of Bio-Based Water and Oil Resistant Paper Made by PVA Modified Beeswax . . . . 431
Huayang Zhu, Banzou You, Xiaoling Sun, Kangmei Tan, Hongbing Cao, and Ting Chen
Influence of Hot Built-In Objects on the Performance of Corrugated Carton . . . . 440
Qiuhong Zhang, Junyan Huang, Qi Guo, and Huizhong Zhang
Research on Compression Performance of Honeycomb Paperboard Based on Finite Element . . . . 452
Gaimei Zhang, Chenyu Liu, Haoyu Wang, Xiaoli Song, Jingjing Hu, and Shasha Li
Research on Polyvinyl Alcohol Reinforcing Board and Corrugated Fiberboard . . . . 458
Xing Yin, Zhiqiang Chen, Shuangshuang Liu, and Xiaoxiu Hao
Environmental Protection Status and Analysis of Pulp Molded Products Based on LCA . . . . 466
Xueqin Ni, Zimin Li, Zijie Cui, Danfei Liu, and Yunfei Zhong
Design of Glass Packaging Container Based on SolidWorks . . . . 471
Lijuan Wang
Greening of Glass Packaging Viewed from Life Cycle and 3R1D Principles . . . . 479
Xuehui Zhu, Yuansheng Qi, Meixiang Yang, and Yanyan Wang

Mechanical Engineering Technology

Research on Fault Diagnosis of Rolling Bearing in Printing Press Based on Convolutional Neural Network . . . . 487
Zhuofei Xu, Dan Yu, Heping Hou, Wu Zhang, and Yafeng Zhang
A Research on Fault Diagnosis of Rolling Bearing in Printing Press Based on Empirical Wavelet Transform and Symbolization . . . . 495
Wu Zhang, Yafeng Zhang, Zhuofei Xu, and Dan Yu
Application of CDIO Mode in the Design and Development of Printability Instrument . . . . 504
Lin Zhu, Xing Guo, Jingyi Sun, Qiyun Yang, and Zhuang Liu
Interactive Design of Post-Press Equipment Based on Virtual Reality . . . . 512
Pufei Yang, Xiaohua Wang, Su Gao, Xahriyar Arken, and Yuansheng Qi
Research on the Influence Factors of Unwinding Tension in Roll-To-Roll Processing . . . . 519
Li’e Ma, Shilin Feng, Yuchen Yang, Zhenlong Zhao, Qiang Wang, and Shanhui Liu
Design of Commodity Positioning Management System Based on Read-Write RFID Tag Printer . . . . 525
Wenbo Deng, Yihao Li, Ningyu Xiao, Jinxuan Jiang, Liyuan Peng, and Yunfei Zhong

Information Engineering and Artificial Intelligence Technology

Intelligent Cartoon Image Generation Based on Text Analysis and MCMC Algorithm . . . . 533
Yutao Li, Dehao Ying, Kuang Yin, and Zhijiang Li
An End to End Method for Scene Text Detection and Recognition . . . . 542
Jing Geng and Congjun Cao
Study on the Virtual Localization System of AGV in Printing Workshop Based on Digital Twin Technology . . . . 548
Hongwei Xu, Xiang Li, He Wang, and Yijun Chen
Research of the SCADA System of Working Status of Equipment in Printing Workshop . . . . 554
Yijun Chen, Wenhang Lu, Xudong Wang, and Hongwei Xu

Printing Material and Related Technology

Research on the Ink Absorption Performance of Coating on Foamed Plastic Surface . . . . 563
Qian Deng, Ruizhi Shi, and Jilei Chao
Research on Surface Modification Technology of Water-Based Aluminum Powder Pigment . . . . 573
Liuxin Zhang, Xianfu Wei, Beiqing Huang, Hui Wang, and Wei Zhu
Influence of Pigment Particle Size on Anti-counterfeiting Effect of Structural Color Ink . . . . 580
Yuhan Zhong, Guangxue Chen, Kaili Zhang, and Qing Wang
Preparation of Silica Encapsulated Fe3O4 Microwave Absorber and Its Application in 3D Printing Inkjet Ink . . . . 584
Yingjie Xu, Wan Zhang, Xianfu Wei, Beiqing Huang, and Qi Wang
Surface Modification of Nano-TiO2 Microspheres . . . . 593
Mengfei Wang, Chunxiu Zhang, Xinyue Zhao, Chenhui Wei, Zhengran Wang, Lina Zhang, Li An, and Chunmei Zhang
Effect of Polymerization Process on Properties of Styrene-Acrylic Microemulsion . . . . 599
Meng Wang, Xiulan Xin, and Chunxu Zheng
Progress in the Preparation and Printability of Water-Based Polyurethane Plastic Printing Ink . . . . 607
Tao Hu, Zehui Zhong, Jiaying Zhong, Peng Gao, Qunhu Wu, and Mingxi Liao
Research on Waterborne Printing Varnish and Anti-adhesion . . . . 614
Qiancheng Liu, Xiaochun Xie, Yunfei Zhong, Hua Fan, and Xiaoyang Qv
Preparation and Characterization of UV Varnish with High Resistance . . . . 620
Chen Zhang, Beiqing Huang, Xianfu Wei, Wei Zhu, and Zixin Lin
Study on the Application of UV Ink in Printing Manufacturing . . . . 629
Qi Lu, Chen Zhang, Beiqing Huang, Xianfu Wei, and Yizhou Wang
Preparation and Characterization of Carbon Based Composite Conductive Ink . . . . 637
Jieni Tian, Jiangping Yuan, Guangxue Chen, and Weili Zhang
Study on Preparation and Printing Evaluation of Composite Aromatic Expansion Microsphere Ink . . . . 646
Qingqing Zhang, Zhicheng Sun, Furong Li, Jinyue Wen, and Shuyi Huang
Preparation and Thermal Storage Performance of Paraffin Phase Change Microcapsule Ink . . . . 651
Furong Li, Zhicheng Sun, Qingqing Zhang, Jinyue Wen, Xiaoyang Du, and Ruping Liu
Preparation and Application of Microdroplet Array . . . . 657
Rong Cao, Wenjuan He, Yu Ding, Beiqing Huang, Xianfu Wei, and Lijuan Liang
Affection to the Properties of Paper by Different Kinds of Pulp and Agents . . . . 664
Zuguang Shen, Banzou You, Guocheng Han, Rui Guo, and Zhaohui Yu
Film and Related Material Technology

Preparation of Chitosan-Based Composite Response Membrane and Its CO2 Response Behavior . . . . 673
Xiaofang Wan, Xinying Wang, Mengzhen Liu, Guangxue Chen, and Wei Chen
Preparation and Characterizations of Apple Pomace Polyphenols Modified Cellulose/Starch Edible Packaging Films . . . . 681
Wenxiong Wang, Qifeng Chen, Guangxue Chen, Yinghan Shi, Xue Li, Jieyi Xiong, and Yanfeng Li
Study of Preparation and Film-Forming Performance of Carbon Nanotube-PDMS Composite Film . . . . 692
Dan Zhao, Guijie Deng, Ruping Liu, Lingya Gu, and Ye Li
Preparation of Freshness Indicator Based on Electrospun Fiber Membrane . . . . 698
Na Wei, Wen Zhang, Yunling Zhou, and Xing Yin
Preparation of Paper-Based PEDOT: PSS Conductive Film Using Gravure Printing . . . . 707
Ling Zheng, Guodong Liu, and Yu Liu
Influence of Processing Parameters on the Impact Property of Polyethylene Hot Shrinkage Film . . . . 712
Qingbin Cui and ShuBao Zhou
Study on the Static Compression of EPP Packaging Materials . . . . 720
Lei Chen and Shibao Wen
Numerical Simulation of Chemical Migration from PET Bottle to Beverage . . . . 727
Qiankun Zhang, Peisen Sang, and Guangxue Chen

Novel Functional Material Technology

Aggregation-Induced Emission of the Hyperbranched Cationic Polymer Based on Bisazopyridines . . . . 735
Lulu Xue, Ying Pan, Ruixin Li, Wenguan Zhang, Yinjie Chen, and Luhai Li
Review: High-Performance Wearable Flexible Capacitive Pressure Sensor . . . . 742
Rubai Luo, Yating Wu, Bin Du, Shisheng Zhou, Haibin Li, Longfei Jiang, and Ling Wu
Synthesis of a Novel Electron Donor-Acceptor Discotic Liquid Crystal Dyad and Its Liquid Crystal Properties . . . . 749
Xinyue Zhao, Chunxiu Zhang, Chenhui Wei, Mengfei Wang, Zhengran Wang, Lina Zhang, Li An, and Chunmei Zhang
Photoelectric Properties of TCNQ/Triphenylene Liquid Crystal System . . . . 754
Chenhui Wei, Chunxiu Zhang, Xinyue Zhao, Mengfei Wang, Zhengran Wang, Lina Zhang, Li An, and Chunmei Zhang
Photopolymerization and Characterization of Tetraphenylethene-Doped Luminescent Polymer . . . . 759
Hui Wang, Zhigang Wang, Xinyuan Wang, Mingxia Zhou, Yan Li, Mingjun Zhu, Beiqing Huang, and Xianfu Wei
Preparation of Novel Cholesteric Liquid Crystal and Its Application in Structural Color . . . . 766
Qi Zhu, Shuhai Tang, Junfei Tian, and Guangxue Chen
Study on Factors Influencing Luminescence Intensity of Rare Earth Complexes . . . . 771
Xiaoxiu Hao and Dandan Wang
Preparation and Performance of Perovskite Oxides LaxSr1−xCo0.2Fe0.8O3−δ for Air Electrode Catalyst of New Type of Energy Storage Equipment . . . . 780
Zhuang Wang, Zhuang Liu, Jiang Chang, Wenping Cao, and Lin Zhu
Research Overview of Electroless Plating Technology for Carbon Nanotubes . . . . 786
Bin Du, Daodao Xue, Rubai Luo, Shisheng Zhou, Huailin Li, Guirong Dong, Kenan Yang, and Wanrong Li
Electrochromic Properties of α-MoO3 Nanorods Fabricated by Hydrothermal Synthesis . . . . 793
Jing Wang, Zhuang Liu, and Wenping Cao
Preparation and Properties of Polyethylene Glycol Modified Nanocellulose/Polyvinyl Alcohol Composite Gel . . . . 801
Xiaoling Sun, Huayang Zhu, Banzou You, Kangmei Tan, Hongbing Cao, and Ting Chen
Preparation and Properties of Hydrogel for 3D-Printed Cartilage Repair . . . . 809
Guijuan Yang, Kun Hu, Jitao Zhang, Jundong Wang, Weiwei Sun, Haibo Wang, LinXinzheng Guo, Fan Zhang, Guangqin Lin, HanPing Yi, Yen Wei, and Luhai Li
Overview of Degradable Polymer Materials Suitable for 3D Printing Bio-stent . . . . 815
Haibo Wang, Kun Hu, Weiwei Sun, Jundong Wang, Guijuan Yang, Linxinzheng Guo, Kunlan Wang, Fan Zhang, Guangqin Lin, HanPing Yi, Yen Wei, and Luhai Li
Advances in Research on Sustained-Release Chlorine Dioxide Solid Preparations . . . . 822
Weiwei Sun, Kun Hu, Haibo Wang, Jundong Wang, Guijuan Yang, Kunlan Wang, Linxinzheng Guo, Fan Zhang, Guangqin Lin, Hanping Yi, Yen Wei, and Luhai Li
Research of Electroluminescent Dielectric Layer of Packaging Products . . . . 830
Zhongmin Jiang, Yingmei Zhou, and Lingjun Kong
Color Science and Technology
Development of a Uniform Colour Space for Evaluating Colour Differences Under HDR Viewing Conditions

Qiang Xu and Ming Ronnier Luo
State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China
[email protected]
Abstract. Two experiments were carried out for evaluating colour differences over a large luminance range, using printed colours in a spectrum-tunable viewing cabinet and self-luminous colours on a display, respectively. For the surface-mode experiment, pairs of samples were assessed under 9 luminance levels ranging from 0.25 to 1128 cd/m². The self-luminous-mode experiment was conducted under 6 luminance levels ranging from 0.25 to 279 cd/m². There were 140 and 42 pairs of samples, judged by 20 observers using six-category scales under each luminance level, for the surface and self-luminous colours respectively. The results were used to test the performance of 7 uniform colour spaces and colour difference equations. Jzazbz was extended to improve the fit to the present data sets.

Keywords: Large Luminance Range · Colour Space · Colour Difference Equation · Surface Colour · Self-luminous Colour
1 Introduction

In recent years, high dynamic range (HDR) displays have become commonplace in the display industry. For the human visual system, the luminance range extends from 10⁻⁶ to 10⁸ cd/m². The luminance of a traditional standard dynamic range (SDR) display runs from 0.1 to several hundred cd/m², but an HDR display can cover from 0.001 to several thousand cd/m². HDR therefore means a much larger range of luminance levels. The conventional uniform colour spaces and colour-difference equations such as CIELAB [1, 2], CIEDE2000 [3], CIECAM02 [4–7] and CAM16 [8–11] cannot be used to evaluate colour reproduction in high dynamic range applications. The new ones specially derived for HDR/WCG applications, such as ICtCp [12] and Jzazbz [13], need visual data to verify their performance. The goals of this research are to provide data to extend the conventional metrics and to test the models' performance in HDR applications.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 P. Zhao et al. (eds.), Advances in Graphic Communication, Printing and Packaging Technology and Materials, Lecture Notes in Electrical Engineering 754, https://doi.org/10.1007/978-981-16-0503-1_1
2 Methods

2.1 Experiment 1: Surface Colour

Experiment 1 was conducted in a viewing cabinet placed in a dark room. The L* of the background was 65. A spectrum-tunable LED viewing cabinet was used; the light in the cabinet was set to CIE D65, and the CIE 1931 standard colorimetric observer was used. The experiment was divided into nine phases to investigate the colour difference at different luminance levels from very dark to very bright (0.25, 0.6, 1.1, 1.9, 3.3, 32, 111, 407 and 1128 cd/m²). Neutral density filters were employed to obtain dark luminance levels lower than 3.3 cd/m². 140 pairs of printed samples were selected from our previous study [14]. They were distributed to surround seven colour centres. Colour pairs in each colour centre included two colour-difference magnitudes (2 and 4 CIELAB units). For each magnitude of each centre, there were 2, 3 and 5 pairs in the L*b*, L*a* and a*b* planes, respectively. The samples were printed around the seven colour centres, and each pair was mounted with no hair-line between the samples. Six categories, including '1' for 'no difference', '2' for 'just noticeable difference', '3' for 'small difference', '4' for 'acceptable difference', '5' for 'large difference' and '6' for 'extremely large difference', were employed for the visual assessment of colour difference. Twenty observers with normal colour vision (ten males and ten females) took part in the experiment. Their ages ranged from 18 to 25 years. It took each observer 3 h to finish the experiment. The nine luminance levels were arranged in a random order for each observer. The sample pairs had a field of view of 3.5°. The illumination:observation geometry was 0°:45°. Observers adapted to the viewing conditions for one minute in each phase and viewed the sample pairs in a random order. The mean category for each pair was calculated to represent the visual data (ΔV). Twenty sample pairs of the grey colour centre were repeated in the formal experiment to test the intra-observer variability. In total, 28,800 observations were accumulated, i.e., (140 + 20) pairs × 9 luminance levels × 20 observers.

2.2 Experiment 2: Self-luminous Colour

Experiment 2 was conducted on a 30-inch calibrated NEC PA302W display placed in a dark room, with a peak white luminance set at 279 cd/m² and a CCT of approximately 6500 K. The display was characterised using the Gain-Offset-Gamma (GOG) model. The experiment was divided into six phases to investigate the colour difference at different luminance levels from very dark to bright (0.25, 2.9, 28, 58, 116 and 279 cd/m²). Neutral density filters were also employed to obtain the dark luminance levels. 42 pairs of samples were selected from the surface data set. The same 6-point category method was used. The experiment took each observer about one hour. The six luminance levels were arranged in a random order. The sample pairs had a field of view of 2.5°. The illumination:observation geometry was 0°:0°. Each phase started with dark adaptation for one minute. Observers viewed the sample pairs in a random order. Six sample pairs of the grey colour centre were repeated in the formal experiment to test the intra-observer variability. In total, 5,760 observations were accumulated, i.e., (42 + 6) pairs × 6 luminance levels × 20 observers.
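For reference, converting the category judgments into the visual data is just a per-pair mean over observers; the following is a minimal sketch with illustrative data (the array shapes, not the numbers, are the point):

```python
import numpy as np

# One category (1..6) per observer and sample pair, for one luminance level
judgments = np.random.randint(1, 7, size=(20, 140))  # 20 observers x 140 pairs (illustrative)
delta_v = judgments.mean(axis=0)                     # visual colour difference dV per pair
print(delta_v.shape)                                 # (140,)
```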
3 Results and Discussion

3.1 Observer Variability

The standard residual sum of squares (STRESS) value [15] calculated from Eq. (1) was used to indicate the disagreement between two compared sets of data:

\mathrm{STRESS} = 100 \left[ \frac{\sum_{i=1}^{n} (A_i - F B_i)^2}{\sum_{i=1}^{n} F^2 B_i^2} \right]^{1/2}, \quad \text{with } F = \frac{\sum_{i=1}^{n} A_i^2}{\sum_{i=1}^{n} A_i B_i} \qquad (1)
where n is the number of sample pairs and F is a scaling factor to adjust the A and B data sets onto the same scale. The percent STRESS values always lie between 0 and 100; values near zero indicate better agreement between the two sets of data. In colour-difference studies, a STRESS value exceeding 35 is typically an indicator of poor performance of a colour-difference formula [15].

Inter-observer variability was expressed as the STRESS calculated between the mean of all observers and each individual observer's results. In Experiment 1, the inter-observer variability ranged from 15 (at the brightest level) to 25 (at the darkest level) with a mean of 19. In Experiment 2, it ranged from 19 (at the brightest level) to 26 (at the darkest level) with a mean of 21. The mean intra-observer variability for Experiments 1 and 2 was 12 and 10, respectively. A clear trend can be seen: observers are less consistent at dark luminance levels.

3.2 Testing Colour Models' Performance

STRESS was again calculated between the predicted ΔE values and the ΔV values to indicate the performance of the 7 colour models, i.e., CIELAB, CIEDE2000, CAM02-UCS, CAM16-UCS, ICtCp, ICtCp′ (a modified version) and Jzazbz. All data at the different luminance levels were combined in the calculation. Comparing all models for both surface and self-luminous colours, Jzazbz performed the worst (STRESS of 67, 50), followed by ICtCp (49, 37), ICtCp′ (47, 36), CIEDE2000 (42, 38), CIELAB (43, 34), CAM16-UCS (32, 36) and CAM02-UCS (32, 35), respectively. The newer ICtCp′ colour-difference equation performed better than the original ICtCp, indicating the effectiveness of introducing the weighting factor of 0.5 to the CT term.

3.3 Improving the Performance of Jzazbz

Further efforts were made to introduce c, kL and γ parametric factors to improve the performance of Jzazbz. The new formula, named Jzazbz-n, is written as Eq. (2):

\Delta E_{J_z a_z b_z\text{-}n} = \left[ \left( \frac{\Delta J_z}{k_L} \right)^2 + \left( C'_1 - C'_2 \right)^2 + \Delta H^2 \right]^{\gamma/2} \qquad (2)
where kL is the lightness parametric factor, γ is an exponential factor [16], and C′i = (1/c)·ln(1 + c·Ci) (i = 1, 2 for the two samples of a pair), Ci being the chroma in the azbz plane. To consider both the surface and self-luminous modes, the c, kL and γ parametric factors were optimised to minimise the STRESS between the predicted ΔE values and the ΔV values. The optimised factors were c = 10, kL = 0.5 and γ = 0.2. The performance of Jzazbz-n improved substantially for both surface (STRESS from 67 to 27) and self-luminous (STRESS from 50 to 20) colours; according to a statistical F-test, the new formula Jzazbz-n is significantly better than Jzazbz.
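As a rough illustration of the computations in Sects. 3.1–3.3, the sketch below implements the STRESS index of Eq. (1) and the extended Jzazbz-n difference of Eq. (2). It assumes the Jz, az, bz coordinates of each sample pair are already available; the hue-difference term is computed in the usual ΔH = 2·√(C1·C2)·sin(Δh/2) form using the compressed chromas, which is an assumption of this sketch rather than a detail confirmed by the paper. Optimising c, kL and γ then amounts to minimising stress(delta_e, delta_v) over the three factors.

```python
import numpy as np

def stress(a, b):
    """Percent STRESS of Eq. (1) between data sets A and B; 0 = perfect agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    f = np.sum(a**2) / np.sum(a * b)                     # scaling factor F
    return 100.0 * np.sqrt(np.sum((a - f*b)**2) / np.sum(f**2 * b**2))

def delta_e_jzazbz_n(jab1, jab2, c=10.0, kL=0.5, gamma=0.2):
    """Extended Jzazbz-n colour difference of Eq. (2) for one sample pair,
    with the optimised factors as defaults. jab = (Jz, az, bz)."""
    J1, a1, b1 = jab1
    J2, a2, b2 = jab2
    C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)          # chroma in the az-bz plane
    C1p, C2p = np.log1p(c*C1)/c, np.log1p(c*C2)/c        # compressed chroma C'
    h1, h2 = np.arctan2(b1, a1), np.arctan2(b2, a2)
    dH = 2.0 * np.sqrt(C1p*C2p) * np.sin((h2 - h1)/2.0)  # hue-difference term (assumed form)
    return (((J2 - J1)/kL)**2 + (C1p - C2p)**2 + dH**2) ** (gamma/2.0)
```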
4 Conclusions

Two psychophysical experiments were carried out to investigate the evaluation of colour difference from very dark to very bright luminance levels using surface and self-luminous colours. The experimental data were used to test the performance of 7 colour models, i.e., CIELAB, CIEDE2000, CAM02-UCS, CAM16-UCS, ICtCp, ICtCp′ and Jzazbz. The results showed that CAM02-UCS and CAM16-UCS performed the best, CIELAB, CIEDE2000, ICtCp and ICtCp′ performed moderately, and Jzazbz performed the worst. By introducing the chroma parametric factor (c), the lightness parametric factor (kL) and the exponential factor (γ) to Jzazbz to form Jzazbz-n, its performance was greatly improved for both the surface and self-luminous datasets.

Acknowledgements. The work is supported by the National Natural Science Foundation of China (Grant number: 61775190).
References

1. CIE 116-1995 (1995) Industrial colour-difference evaluation. CIE Central Bureau, Vienna
2. CIE 142-2001 (2001) Improvement to industrial colour-difference evaluation. CIE Central Bureau, Vienna
3. Luo MR, Cui G, Rigg B (2001) The development of the CIE 2000 colour-difference formula: CIEDE2000. Color Res Appl 26(5):340–350
4. Moroney N, Fairchild MD, Hunt RWG, Li C, Luo MR (2002) The CIECAM02 colour appearance model. In: Proceedings of the Tenth Color Imaging Conference (CIC10), Scottsdale, AZ, IS&T, pp 23–27
5. Li C, Luo MR, Hunt RWG, Moroney N, Fairchild MD, Newman T (2002) The performance of CIECAM02. In: Proceedings of the Tenth Color Imaging Conference (CIC10), Scottsdale, AZ, pp 28–32
6. CIE Publication 159:2004 (2004) A colour appearance model for colour management systems: CIECAM02. CIE Central Bureau, Vienna
7. Luo MR, Cui G, Li C (2006) Uniform colour spaces based on CIECAM02 colour appearance model. Color Res Appl 31:320–330
8. Li C, Li Z, Wang Z, Xu Y, Luo MR, Cui G, Melgosa M, Brill MH, Pointer M (2017) Comprehensive colour solutions: CAM16, CAT16 and CAM16-UCS. Color Res Appl 42:703–718
9. Withouck M, Smet KAG, Ryckaert WR, Hanselaer P (2015) Experimentally driven modelling of the colour appearance of unrelated self-luminous stimuli: CAM15u. Opt Express 23:12045–12064
10. Huang WJ, Yang Y, Luo MR (2017) Verification of the CAM15u colour appearance model and the QUGR glare model. Lighting Res Technol. https://doi.org/10.1177/1477153517734402
11. Li C, Liu X, Xiao K, Cho YJ, Luo MR (2018) An extension of CAM16 for predicting size effect and new colour appearance perceptions. In: 26th Color and Imaging Conference, November 12–16, Vancouver, Canada, pp 264–267
12. Boher P, Leroux T, Blanc P (2018) New ICtCp and Jzazbz colour spaces to analyze the colour viewing-angle dependence of HDR and WCG displays. SID Symp Digest Tech Papers 49:169–172. https://doi.org/10.1002/sdtp.12511
13. Safdar M, Cui G, Kim YJ, Luo MR (2017) Perceptually uniform colour space for image signals including high dynamic range and wide gamut. Opt Express 25:15131. https://doi.org/10.1364/OE.25.015131
14. Mirjalili F, Luo MR, Cui G, Morovic J (2018) A parametric colour difference equation to evaluate colour difference magnitude effect for gapless printed stimuli. In: Color and Imaging Conference, pp 123–127. https://doi.org/10.2352/ISSN.2169-2629.2018.26.123
15. García PA, Huertas R, Melgosa M, Cui G (2007) Measurement of the relationship between perceived and computed colour differences. J Opt Soc Am A 24(7):1823–1829
16. Huang M, Cui G, Melgosa M, Sánchez-Marañón M, Li C, Luo MR, Liu H (2015) Power functions improving the performance of colour-difference formulas. Opt Express 23:597–610
Effects of Lighting Method on Atmosphere and Color Perception in Children’s Clothing Stores Ze Liu1 , Jing Liang1(B) , Zhisheng Wang1 , Nianyu Zou1 , and Yusheng Lian2 1 School of Information Science and Engineering, Dalian Polytechnic University, Liaoning,
China [email protected] 2 School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing, China
Abstract. In this paper, LED light sources were used to simulate lighting environments with three different physical parameters in children's clothing stores. 38 observers were invited to act as consumers. The semantic difference scale method of psychophysical experiment was used to measure the atmosphere of the corresponding lighting environment and four perceptions (preference, attractiveness, color fidelity and softness) of children's clothing in five colors (green, yellow, flower pattern, white and red). The results of principal component analysis showed that the two main dimensions of the lighting atmosphere were discrimination and coziness. Using a complex mixed lighting method to illuminate yellow, white and green children's clothing can increase the positive response of consumers. Consumers' emotional responses to the five colors of clothing in children's clothing stores were generally low under the general lighting method. Under the accent lighting method, the color of red children's clothing appeared more real, which can have a positive impact on consumers' psychology, and consumers felt that the lighting of flower-patterned children's clothing was softer. Keywords: Lighting method · Children's clothing store · Principal component analysis · Atmosphere perception · Color perception
1 Introduction

LED lighting is playing a role in many fields. People's psychology and emotions can change when the atmosphere of the environment is changed by lighting factors. Vogels et al. concluded that coziness and liveliness were the two main dimensions of atmosphere [1]. Li et al. extracted four atmosphere factors: coziness, spaciousness, liveliness and warmth [2]. Changes in clothing colors and environmental atmosphere affect consumers' buying behavior, so it is particularly important to create a lighting atmosphere for the clothing retail environment [3, 4]. Children's clothing stores pay particular attention to atmosphere and color. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 P. Zhao et al. (eds.), Advances in Graphic Communication, Printing and Packaging Technology and Materials, Lecture Notes in Electrical Engineering 754, https://doi.org/10.1007/978-981-16-0503-1_2
This paper invited 38 college students (13 males and 25 females) to simulate consumers, with an average age of 22. They were asked to evaluate the effect of lighting methods on the perception of the atmosphere and children’s clothing color.
2 Experiment

2.1 Lighting Environment for Children's Clothing Store

This experiment was completed in the dark room of the lighting environment laboratory. The size of the laboratory space is 5.5 m × 2 m × 3 m (L × W × H). The three lighting methods were composed of LED down lights, LED spotlights and LED T5 bracket lights, used individually or in combination. An SPIC-200 spectral color illuminometer was used to measure the correlated color temperature (CCT), illumination, color rendering index and relative spectral power distribution. A Konica Minolta CS-2000A spectroradiometer was used to measure the chromaticity coordinates. The basic parameters and relative spectral power distributions of the three light environments are shown in Table 1 and Fig. 1.

Table 1 Basic parameters of lighting environment

Lighting environment   Lighting method    CCT/K   Illumination/lx   x        y        Ra
I                      Mixed lighting     3841    1903              0.3870   0.3790   86.6
II                     General lighting   6663    653               0.3105   0.3249   85.5
III                    Accent lighting    3150    2331              0.4545   0.3969   86.9
2.2 Psychophysical Experiment

This paper used the semantic difference scale method of psychophysical experiment to study consumers' perception of atmosphere and color. Observers were asked to evaluate the lighting environment atmosphere and the color perception of children's clothing in five colors (green, yellow, flower pattern, white and red) under the three lighting methods. The clothing was selected in children's favorite colors. The 11 sets of quantifiers in the atmosphere perception questionnaire were taken from the research of Liu et al. [5]. Both questionnaires used a six-point evaluation form. Each observer was tested for color vision with the Ishihara test. Observers were asked to adapt to the environment for 2 min; the three lighting methods were then presented in turn. Observers evaluated each lighting environment and filled in the corresponding scores on the questionnaire.
Fig. 1 Relative spectral power distribution
2.3 Observer Data Stability Analysis

In this experiment, the coefficient of variation (CV) was used to assess the stability of the observers' subjective evaluation data. A total of 4674 sets of subjective evaluation data were obtained, comprising the atmosphere perception for 11 pairs of quantifiers from 38 observers and 4 subjective ratings for each of the 5 colors of children's clothing. The stability between observers is represented by CV1, whose maximum is 36.012, while the stability of the 38 observers is represented by CV2, whose maximum is 37.983. When the coefficient of variation is less than 40, the experimental data are considered stable [6].
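A minimal sketch of this stability check, with illustrative data (the variable names and matrix sizes are assumptions, not the study's actual data): for each evaluation item, CV = 100·σ/μ over the relevant group of scores, and values below 40 are taken as stable [6].

```python
import numpy as np

def coefficient_of_variation(scores):
    """CV = 100 * standard deviation / mean for one set of evaluation scores."""
    scores = np.asarray(scores, float)
    return 100.0 * scores.std(ddof=1) / scores.mean()

# ratings: (observers x items) matrix of six-point scale scores (illustrative data)
ratings = np.random.randint(1, 7, size=(38, 31)).astype(float)
cv_per_item = [coefficient_of_variation(ratings[:, j]) for j in range(ratings.shape[1])]
print(max(cv_per_item))   # data are treated as stable when CV < 40 [6]
```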
3 Experimental Results and Discussion

The experimental results were analysed through dimensionality reduction: the principal factors were extracted by principal component analysis (PCA). Table 2 shows the two extracted factors. A quantifier is assigned to a factor when its loading is greater than or equal to 0.5 [5], and each factor was named according to this standard. In the first factor, the dim-bright, blurred-clear and low-high fidelity to colour loadings account for a large proportion. These three descriptive word pairs are quantifiers that describe the lighting performance of the atmosphere, i.e. the discrimination of the overall environment of the children's clothing store. Therefore, the first basic dimension was named discrimination. In the second factor, the afflictive-cosy and dazzling-soft loadings account for a large proportion, showing that the atmosphere greatly affects the observer's feeling of comfort. Therefore, the second basic dimension was named coziness. In the children's clothing store lighting environment, the first dimension (discrimination) is different from the basic dimension (liveliness) obtained by Vogels, while the second is the same as Vogels' basic dimension (coziness) [1]. A sketch of the factor extraction follows Table 2.
Table 2 Principal component analysis data of two factors of quantifier (total variance = 53.828%)

Quantifier                    Factor 1   Factor 2
Dim-bright                    0.764      0.137
Blurred-clear                 0.728      0.083
Low-high fidelity to colour   0.716      0.306
Unattractive-attractive       0.651      0.448
Oppressive-active             0.560      0.400
Afflictive-cosy               0.088      0.736
Detached-close                0.322      0.701
Dazzling-soft                 −0.540     0.645
Cool-warm                     0.203      0.631
Tense-relaxed                 0.254      0.587
Boring-funny                  0.339      0.469
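The factor extraction reported in Table 2 can be reproduced in outline with a two-component PCA on the standardized 11-quantifier ratings. The sketch below uses scikit-learn and illustrative data; it applies no factor rotation (some statistics packages do), so its loadings will not match Table 2 exactly.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Each row is one atmosphere evaluation; each column one of the 11 quantifiers.
ratings = np.random.randint(1, 7, size=(114, 11)).astype(float)  # illustrative data

z = StandardScaler().fit_transform(ratings)
pca = PCA(n_components=2).fit(z)

# Loadings of each quantifier on the two extracted factors
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print("total variance explained: %.3f%%" % (100 * pca.explained_variance_ratio_.sum()))
for j, (f1, f2) in enumerate(loadings):
    print("quantifier %2d: %6.3f %6.3f" % (j + 1, f1, f2))
```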
The mean preference evaluations are shown in Fig. 2. Observers had a higher preference for yellow and white children's clothing in the mixed lighting environment, and preferred white and red clothing in the accent lighting environment.
Fig. 2 Mean comparison of observer preference
The mean attractiveness evaluations are shown in Fig. 3. Observers were more attracted by children's clothing in green, yellow and flower patterns than by the others in the mixed
Fig. 3 Mean comparison of observer attractiveness
lighting environment. Observers were more attracted by red children's clothing in the accent lighting environment.
Fig. 4 Mean comparison of observer color fidelity
The mean color fidelity evaluations are shown in Fig. 4. Green, yellow, flower-pattern and white children's clothing gave observers higher color fidelity in the mixed lighting
Fig. 5 Mean comparison of observer softness
environment. Red children's clothing looked most real in the accent lighting environment. The mean softness evaluations are shown in Fig. 5: yellow children's clothing appeared softer in the mixed lighting environment, and flower-patterned children's clothing appeared softer in the accent lighting environment.
4 Conclusions

This paper used a psychophysical experiment to obtain subjective evaluation data. The research found that the basic dimensions for evaluating the lighting environment of children's clothing stores were discrimination and coziness. In children's clothing stores, it was found that using the more complex mixed lighting can increase consumers' positive emotions in color perception; mixed lighting is therefore recommended for yellow, white and green children's clothing. Consumers' emotional responses to the five colors of clothing were generally low under the general lighting method. The color of red children's clothing appeared more real under the accent lighting method, and flower-patterned children's clothing appeared softer in the accent lighting environment.

Acknowledgements. This research is supported by the National Natural Science Foundation of China, grant number 61605012, and the QianBaiHui Research Fund Project (2017–228195).
References

1. Vogels I (2008) Atmosphere metrics: a tool to quantify perceived atmosphere. In: International Symposium Creating an Atmosphere, Grenoble, France
2. Li B, Zhai QS, Hutchings JT, Luo MR, Ying F (2019) Atmosphere perception of dynamic LED lighting over different hue ranges. Light Res Technol 51(5):682–703
3. Kwon YH (1991) The influence of the perception of mood and self-consciousness on the selection of clothing. Cloth Textiles Res J 9(4):41–46
4. Sun MJ (2014) LED applications in lighting environments of fashion shops. Dalian Polytechnic University, Dalian
5. Liu XY, Luo MR, Li H (2015) A study of atmosphere perceptions in a living room. Light Res Technol 47(5):581–594
6. García PA, Huertas R, Melgosa M (2007) Measurement of the relationship between perceived and computed color differences. J Opt Soc Am A 24:1823–1829
Influencing Factors of Color Prediction of Cellular Neugebauer Model Huimin Jiao1,2 and Ming Zhu2(B) 1 School of Light Industry and Engineering, Qilu University of Technology, Jinan, China 2 College of Materials and Chemical Engineering, Henan University of Engineering,
Zhengzhou, China [email protected]
Abstract. The cellular Neugebauer model is at present a research focus in the field of color reproduction. Aiming at color reproduction research, this paper analyzes multiple factors influencing the color prediction of the cellular Neugebauer model in order to derive a parameter optimization scheme. Cellular Neugebauer models based on different cell-levels, different numbers and locations of test samples, and different cellular correction schemes were simulated in MATLAB. The experimental results indicate that the model accuracy increases with the number of cell-levels, but no longer changes significantly at the 4 or 5 cell-level. In addition, the numbers and locations of test samples and the cellular correction schemes have no effective influence on the model accuracy. This paper finally determined to use the five-level cellular division, the cell-center sampling and the uniform correction scheme (all cells share one correction index) as the optimal parameter scheme. The color prediction accuracy of the optimal cellular Neugebauer model was evaluated by comparing it with the i1 Profiler software and a cellular neural network model. Keywords: Model accuracy · Cellular Neugebauer model · Correction index
1 Introduction

Accurate color reproduction is significant for improving printing quality. In the digital printing process, the color conversion accuracy directly decides the color reproduction accuracy [1]. According to the principle of the algorithm, existing color conversion models can be divided into the Neugebauer model, the neural network model [2, 3] and the polynomial regression model [4]. The Neugebauer model can be further divided into models based on tristimulus values [5] and on spectra [6]. These models differ in the number of samples required, the modeling algorithm and the computing speed. With its high color conversion accuracy, the cellular Neugebauer model is at present a research focus in the field of color reproduction. This paper studies multiple factors influencing the color prediction of the cellular Neugebauer model and provides an optimal parameter scheme. With the output device calibration software and the CMYK raster imaging processors, the cellular © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 P. Zhao et al. (eds.), Advances in Graphic Communication, Printing and Packaging Technology and Materials, Lecture Notes in Electrical Engineering 754, https://doi.org/10.1007/978-981-16-0503-1_3
Neugebauer models based on different cell-levels, different numbers and locations of test samples and different cellular correction schemes are simulated in MATLAB. The three influencing factors are evaluated by experiments, and the optimal parameter scheme is determined. The color prediction accuracy of the optimal cellular Neugebauer model is evaluated by comparing it with the i1 Profiler software and the neural network model.
2 Experimental Method

2.1 Experimental Principle and Method

The CMYK cellular Neugebauer equation is corrected with the Yule–Nielsen index. Within a certain range, through incremental and iterative calculation of the correction index, the color difference between the measured values and the calculated (predicted) values of the printed samples is compared, and the index corresponding to the minimum color difference is determined as the optimal correction index of the Neugebauer equation [7]. The color samples of the experiment include training samples, test samples and evaluating samples. The training samples are the vertices of each cell, and the CIEXYZ values of all the vertices of a cell are used to construct the Neugebauer equation of that cell. The test samples are mainly used to test the color prediction accuracy of the Neugebauer equation with different correction indices, so as to determine the optimal correction index. The optimal correction index is then evaluated by testing the accuracy of the corrected cellular Neugebauer equation. In this paper, the CMYK color targets ECI 2002 and IT8.7-3 are used as evaluating samples. The mean color difference between the measured and predicted values of the evaluating samples is used to indicate the color prediction accuracy of the models.

Experiment 1: Determining the optimal cell-level. In theory, the Neugebauer equation of a small cell has higher accuracy than that of a big cell, but the accuracy may not increase linearly with the cell-level. In this experiment, all the cells share one index (uniform correction scheme) and the center point of each cell is used as the test sample. The accuracy of different cell-levels is tested and evaluated.

Experiment 2: Determining the numbers and locations of test samples. The best accuracy may not be obtained by cell-center sampling because each cell then has only one test sample. In order to obtain higher accuracy, we designed two other sampling schemes, as shown in Fig. 1. The accuracy of the three schemes is tested and evaluated in the experiment. Since the four-dimensional CMYK space is not easy to display, the paper uses the three-dimensional CMY space to show the three test-sampling schemes. Scheme (1): taking the unique center point "o" of the cell as the test sample. Scheme (2): dividing the cell into four sub-cells; the five (2² + 1) center points ("o", "a", "b", "c", "d") of the cell and its sub-cells are used as test samples, as shown in Fig. 1(1). Scheme (3): dividing the cell into eight sub-cells; the nine (2³ + 1) center points ("o", "a", "b", "c", "d", "e", "f", "g", "h") of the cell and its sub-cells are used as test samples, as shown in Fig. 1(2). It is noted that there will be seventeen samples (2⁴ + 1)
Fig. 1 The experimental schemes for test-sampling
in the case of a CMYK 4D cell; we can pick nine samples (including "o") randomly from the seventeen.

Experiment 3: Determining the optimal cellular correction scheme. The uniform correction scheme may be inaccurate because all cells share only one correction index. In another correction scheme, each cell may have a different correction index because the cells are corrected individually. That scheme may achieve higher accuracy, but it may also bring a large amount of calculation.

2.2 Experimental Preparation

The main devices, instruments, software and consumable materials used in the paper are listed below: ➀ printing device and consumable material: EPSON Pro 7880 digital inkjet printer, high-gloss paper (230 g/m²); ➁ colorimeter: X-Rite i1 ISIS 2 spectrophotometer; ➂ color management software: X-Rite i1 Profiler; ➃ printer correction software: MtColor-PRO; ➄ CMYK rasterizing processor (used to control the printing device for CMYK four-color output): Mt RIP V6.0. Before the experiment, the repetition accuracy of the printing and measuring devices needed to be verified. According to the test results, all samples were placed in a closed, dark, dry environment for more than 63 h and printed all at once, to avoid repeated errors and to obtain stable and accurate measurement results.
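The correction-index search described in Sect. 2.1 can be sketched as follows: for each trial Yule–Nielsen index n, predict the test samples with the cell's Neugebauer equation, compare against measurement, and keep the index with the smallest mean colour difference. The prediction below is a generic Yule–Nielsen-modified Neugebauer form with Demichel area weights; the function names and the colour_diff helper (assumed to convert XYZ to CIELAB internally and return, e.g., CIEDE2000) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def yn_neugebauer_predict(weights, primaries_xyz, n):
    """Yule-Nielsen-modified Neugebauer prediction for one cell.
    weights: Demichel area coverages of the cell's 16 CMYK vertices (sum to 1);
    primaries_xyz: measured CIEXYZ of those vertices, shape (16, 3);
    n: trial Yule-Nielsen correction index."""
    return (weights @ primaries_xyz ** (1.0 / n)) ** n

def best_index(test_weights, measured_xyz, primaries_xyz, colour_diff,
               candidates=np.arange(1.0, 10.01, 0.1)):
    """Incremental search for the index minimising the mean colour difference."""
    mean_err = [
        np.mean([colour_diff(yn_neugebauer_predict(w, primaries_xyz, n), m)
                 for w, m in zip(test_weights, measured_xyz)])
        for n in candidates
    ]
    return candidates[int(np.argmin(mean_err))]
```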
3 Analysis and Evaluation of Experimental Results

3.1 Influencing Analysis of the Cell-Levels

[Fig. 2 The trend of model accuracy (mean CIEDE2000) with cell-level, for the ECI 2002 and IT8.7-3 targets: 7.19 and 7.22 at level 1; 2.90 and 3.06 at level 2; 1.55 and 1.54 at level 3; 1.04 and 0.96 at level 4; 0.92 and 0.94 at level 5; 0.79 and 0.79 at level 6]

Figure 2 shows the trend of model accuracy with cell-level. We can see that the color prediction accuracy of the model improves significantly as the cell-level increases. Between cell-levels 1, 2, 3 and 4, the color difference varies considerably (by 0.5 to 4 units). However, at the 4, 5 and 6 cell-levels, the variation range is less than 0.2, which is within the range of systematic error. This indicates that the color prediction accuracy gradually stabilizes after the 4 or 5 cell-level. Considering both the model accuracy and the number of samples, the five-level cellular Neugebauer model is the most suitable for calculating the predicted color values.
3.2 Influencing Analysis of the Test-Sampling

[Fig. 3 The accuracy data (mean CIEDE2000) of the three sampling schemes: at the 2 cell-level the three schemes give 2.87–2.90, at the 3 cell-level 1.54–1.58, and at the 4 cell-level 1.04–1.13]
Figure 3 shows the accuracy data of the three sampling schemes under the uniform correction index. It can be seen that the differences among the three schemes are less than 0.1 at the same cell-level, indicating that the two additional sampling schemes have little impact on the model accuracy; we therefore determined to use the center point of each cell as the optimal test sample.

3.3 Influencing Analysis of the Cellular Correction Schemes

The prediction accuracy of the two different cellular correction schemes is shown in Fig. 4. On the basis of Experiment 2, the color difference between the uniform correction and the individual correction is less than 0.2, which is within the system error range. This indicates that the color prediction accuracy of the cellular Neugebauer model is not improved significantly even if each cell is corrected individually.

[Fig. 4 The color prediction accuracy (mean CIEDE2000) of the two cellular correction schemes (uniform/individual): at the 2 cell-level the three sampling schemes give 2.90/2.84, 2.90/2.84 and 2.87/2.80; at the 3 cell-level 1.55/1.37, 1.54/1.43 and 1.58/1.44; at the 4 cell-level 1.04/1.01, 1.13/1.06 and 1.07/1.03]

3.4 Evaluation of Model Accuracy

Based on the above experimental analysis, this paper determined the five cell-level, the cell-center sampling and the uniform correction index as the optimal parameters. In
order to enhance the persuasiveness of the evaluation, under the condition of the same number of test samples, the paper simulated the ICC color conversion of i1 Profiler (a professional color management software) and a neural network model in MATLAB. In addition, we calculated the mean of the largest 4% of the color differences to evaluate the stability of the color prediction.
[Fig. 5 The color prediction accuracy (CIEDE2000) of the different models, given as ECI 2002/IT8.7-3 values: cellular Neugebauer model – mean 0.92/0.94, maximum mean (top 4%) 2.86/2.93; i1 Profiler software – mean 0.86/0.90, maximum mean (top 4%) 2.77/2.83; neural network model – mean 0.80/0.88, maximum mean (top 4%) 2.32/2.37]
Figure 5 shows the accuracy comparison of the three models. The mean color differences of all models are less than 1 CIEDE2000 unit, and the differences among them are less than 0.15, which is within the system error range. The models also show good stability, since their maximum mean color differences are similar.
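The two evaluation statistics used above (the mean colour difference, and the mean of the largest 4% of colour differences) can be computed as in this sketch, assuming the per-sample ΔE00 values have already been calculated; the input data here are purely illustrative.

```python
import numpy as np

def accuracy_stats(delta_e, top_fraction=0.04):
    """Mean colour difference and mean of the largest top_fraction of values."""
    de = np.sort(np.asarray(delta_e, float))
    k = max(1, int(round(top_fraction * de.size)))
    return de.mean(), de[-k:].mean()

mean_de, top4_de = accuracy_stats(np.random.rayleigh(0.8, size=1485))  # illustrative dE00 values
print(round(mean_de, 2), round(top4_de, 2))
```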
4 Conclusions

In order to study the above factors influencing the color prediction accuracy of the cellular Neugebauer model, this paper takes a CMYK inkjet printer as an example and simulates the cellular Neugebauer model with different parameters in MATLAB. The test results show that the model accuracy improves with the number of cell-levels, but gradually stabilizes after the 4 or 5 cell-level. As for the other two influencing factors, neither the choice of test samples nor the cellular correction scheme has a significant effect on the model accuracy. Based on the above experimental analysis, this paper determined the five-level cellular division, the cell-center sampling and the uniform
correction index as the optimal parameters. By comparing with the i1 Profiler software and a cellular neural network model, it is shown that the optimal cellular Neugebauer model can meet the requirements of printing color reproduction.

Acknowledgement. This research is supported by the Lab of Green Platemaking and Standardization for Flexographic Printing (No. ZBKT202005).
References

1. Zhu M, Tian Z (2018) A spatial gamut mapping algorithm based on adaptive detail preservation. J Imaging Sci Technol 62(1):010505
2. Hong L, Li Y, Chu GL et al (2014) Application of RBF neural network in color space conversion of display. Packaging Engineering 35(23):134–137
3. Nie P, Kong LJ (2016) Color separation algorithm of multi-color printer based on space subdivided with interpolation. Chinese J Liquid Crystals Displays 31(3):317–323
4. Zhang ZJ, Liu Z, Wu MG (2013) Polynomial regression multi-color separation method based on subspace partition. Packaging Eng 34(7):65–76
5. Liu P, Liu Z, Zhu M (2015) The channel-signal reproduction of multi-color printer based on gamut partition iterative subdivision. Acta Electronica Sinica 43(1):69–73
6. Wu GY, Shen XY, Liu Z et al (2016) Reflectance spectra recovery from tristimulus values by extraction of color feature match. Opt Quant Electron 48(1):1–13
7. Liu Q, Huang Z, Pointer MR et al (2019) Optimizing the spectral characterisation of a CMYK printer with embedded CMY printer modelling. Applied Sciences 9(24):5308
Preferred Skin Tones on Displays Under Different Ambient Lighting Conditions Mingkai Cao and Ming Ronnier Luo(B) State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China [email protected]
Abstract. The goal of this experiment was to study the influence of the lighting environment CCT on preferred skin tone colours. Skin tone pictures of models from 4 different ethnic groups were captured under lighting environments with 5 different correlated colour temperatures (CCTs). The facial skin tones of each model were adjusted to 25 skin tone colours independent of the background. 28 Chinese observers were asked to view the 100 facial skin tone images under the same image capturing environments and to judge the skin tones as either 'like' or 'dislike'. The results revealed that the preferred skin tone region had CCTs between 6500 and 8000 K around −0.005 Duv. Furthermore, it was found that the chroma of the preferred skin tones slightly increases as the ambient lighting CCT decreases, and the preferred skin tones under all 5 lightings agreed well, i.e. mean values of L*, C*ab, hab of [41.3, 27.7, 47.9°], [66.7, 23.4, 42.8°], [68.4, 22.6, 44.5°] and [59.0, 25.7, 45.7°] for African, Caucasian, Oriental and South Asian, respectively. Also, the preferred gamut boundary, in terms of an ellipse in the a*b* diagram, was established for each ethnic group. Keywords: Preferred skin tone centre · Preferred skin tone boundary · Chromatic adaptation
1 Introduction

Nowadays, the main objects in colour images are usually people, especially facial patterns, and reproducing them with high image quality is crucial in photographic colour reproduction. Many scholars believe that there is a preferred memory colour for human visual perception, and skin tones are moved toward the preferred centre to improve the colour preference in colour reproduction [1]. Since people usually judge the colour reproduction quality of facial objects according to their preference, it is critical to know the preferred skin tone range. Bartleson [2] studied the preferred colour reproduction of memory colours including skin tone, and found that the preferred skin tone was more yellowish and saturated than the real skin tone. His further work with Bray [3] found that the preferred skin tone of Caucasian skin had the same chroma as the mean memory colour of the flesh tone. Sanders [4] also found that the preferred Caucasian face colour was more saturated. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 P. Zhao et al. (eds.), Advances in Graphic Communication, Printing and Packaging Technology and Materials, Lecture Notes in Electrical Engineering 754, https://doi.org/10.1007/978-981-16-0503-1_4
Sanger et al. [5] found that the preferred hues among three different ethnic groups were about the same, with a dominant wavelength of about 590 nm. Hunt et al. [6] studied skin tone in photography, and showed that the preferred Caucasian skin tone of reflection prints had about the same purity as the real skin tone but was a little yellower. Yano and Hashimoto [7] found that the preferred skin tone of Caucasian women was more colourful than that of Japanese women. Kuang et al. [8] found no significant cultural difference among observers of different ethnicities, but found that the capturing illuminants had a significant influence on skin tone preference. From these studies it can be concluded that the preferred skin tone differs from the actual skin tone. However, various studies conducted their experiments on outdated display devices in the dark or under a single light condition. In order to investigate skin tone preference in depth for reliable skin tone enhancement of mobile display images, psychophysical experiments were conducted to verify preferred skin tones, to determine preferred skin tone regions, and to study the influence of the surrounding light environment.
2 Experimental Design

2.1 Image Rendering

Four female models, of Caucasian, Oriental, South Asian and African origin respectively, were invited for the photo shoots. The capturing was conducted under a CIE D65 simulator lighting condition at 1000 lx. As shown in Fig. 1, all models stood in front of a grey fabric background and wore a black jacket to avoid influencing the judgment of skin tone.
Fig. 1 Four captured images of different ethnic groups
As shown in Fig. 2, the captured original images were cropped into two parts. The background part was adjusted from the original colours to 5 CCTs (3000 K, 4000 K, 5000 K, 6500 K, 8000 K), the same as the viewing lighting conditions, to simulate the effect of photos taken in the corresponding light environment. The skin part was transformed pixel by pixel via the CAT02 chromatic adaptation transform [9] into 25 skin tones. The skin part image white was transformed from D65 into 25 different white points,
Fig. 2 Image processing flow: the grey background was transferred to 5 white points with the same CCT as the viewing lighting conditions, and the skin tones were transferred to 25 skin tone centres
specified by 5 CCT levels (4000 K, 5000 K, 6500 K, 8000 K, 10000 K) and 5 Duv levels (−0.01, −0.005, 0, 0.005, 0.01). CAT02 has large predictive errors, as found by Zhai and Luo [10]: it performs inaccurately when display colours are viewed under ambient lighting, especially at low CCTs. The transform can be greatly improved by modifying its incomplete adaptation factor (D), as given in Eq. (1). Figure 3 shows the 25 rendered average skin tones (open circles) of the four images; the coloured filled circles on the blackbody locus are the chromaticities of the lightings at the 5 CCT levels. There were 500 stimuli in total, i.e. 4 ethnic groups × 5 backgrounds × 25 skin tones.

D = 0.723 \left( 1 - \frac{1116}{CCT} + 8.64\,Duv - \frac{49266\,Duv}{CCT} \right), \quad CCT > 2000\ \mathrm{K},\ Duv \in [-0.03, 0.03] \qquad (1)
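A small sketch of Eq. (1) follows. The equation layout in the source is garbled, so the term ordering here is a best-effort reconstruction rather than a confirmed detail:

```python
def degree_of_adaptation(cct, duv):
    """Modified incomplete-adaptation factor D of Eq. (1), as reconstructed;
    intended for CCT > 2000 K and -0.03 <= Duv <= 0.03."""
    return 0.723 * (1.0 - 1116.0 / cct + 8.64 * duv - 49266.0 * duv / cct)

print(degree_of_adaptation(6500, 0.0))   # ~0.60 for a D65-like ambient condition
```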
2.2 Lighting Conditions

Five ambient illuminations were used in the experiment. Their chromaticity coordinates lay on the blackbody locus, with CCTs of 3000 K, 4000 K, 5000 K, 6500 K and 8000 K, respectively. A spectrum-tunable LED illumination system, LEDCube, comprising twelve identical light units, was used. Table 1 lists the measured illuminance, chromaticity coordinates, CCT, Duv and CRI (colour rendering index) Ra [11] of the ambient lighting conditions. In order to study the impact of the ambient lighting environment more accurately, only the skin tone images whose background CCT matched the lighting were observed under each lighting condition.

2.3 Observer and Procedure

Twenty-eight Chinese observers participated in the experiment. They were all students of Zhejiang University: 15 female and 13 male observers with a mean age of 23.07, ranging from 20 to 27 years old. All observers passed the Ishihara colour vision test and had normal colour vision. They were asked to sit around a table covered by a grey cloth, in front of an accurately calibrated smartphone, to do the experiment.
Fig. 3 The transformed skin tone plotted in CIE 1976 u’v’ diagram together with the lightings used in blackbody locus
Table 1 Measured parameters of 5 ambient lightings

No   Luminance [cd/m²]   Illuminance [lx]   u′       v′       CCT [K]   Duv        Ra
1    308                 1055               0.2482   0.5199   3070      −0.0001    91.6
2    302                 1035               0.2254   0.5018   3985      −0.00004   97.7
3    315                 1079               0.2108   0.4836   5069      −0.00006   96.5
4    316                 1083               0.2012   0.4661   6410      −0.00042   97.5
5    312                 1069               0.1944   0.4513   8109      −0.00006   97.4
Observers were required to evaluate 100 random colour stimuli under each of the five ambient lighting conditions in terms of preferred skin tone. The evaluation used a forced-choice method: observers judged whether they preferred the facial skin tone or not (either 'like' or 'dislike'). In total, 28,000 judgments were obtained in the experiment: 4 (images) × 25 (display stimuli) × 5 (ambient illuminants) × 28 (observers) × 2 (repeats). The order of stimuli and ambient illuminants was randomized for each observer to eliminate any sequential effect. When the ambient illuminant changed, the observers were asked to adapt in the room for 1 min. The whole experiment lasted approximately 50 min for each observer.
3 Results and Discussion

3.1 Observer Variations

The results were in the form of 'like' and 'dislike'. They were arranged as a preference rate P%, calculated as the number of 'like' decisions divided by the total number of observers, multiplied by 100. The standardized residual sum of squares (STRESS) given in Eq. (2) was used [12] to evaluate the intra-observer and inter-observer variations:

\mathrm{STRESS} = 100 \left[ \frac{\sum (P_1 - f P_2)^2}{\sum P_1^2} \right]^{1/2} \qquad (2)

f = \frac{\sum P_1 P_2}{\sum P_2^2} \qquad (3)
where P1 and P2 are, respectively, the mean P% over all observers and an individual observer's P% for each image, when evaluating observer variation. When the intra-observer variations were calculated, P1 and P2 were the P% values obtained from the two sets of repeated judgments under the same lighting condition. For perfect agreement between two data sets, STRESS is zero. Each observer evaluated 500 stimuli twice. The intra-observer variation (STRESS) of each observer ranged from 8.4 to 13.8 with a mean of 11.0; the experimental results of most observers were thus quite consistent. The inter-observer variations were calculated between each observer's scores and the mean results of all observers, and ranged from 15.8 to 28.5 with a mean of 20.3. This indicates that the observers had a larger variation and the results were influenced by their personal preferences and characteristics.

3.2 Preferred Rendered Images

Fig. 4a, b show the preference score (0–1000) plotted against different CCTs on the display under different ambient lightings in terms of CCT and Duv, respectively. The results are summarized below: (1) Observers preferred bluish images having chromaticity between CCTs of 6500 and 8000 K, and Duv between 0.000 and −0.005 (see Fig. 4a, b respectively), regardless of the ambient illumination. (2) There is an impact of ambient lighting on preference, i.e. for the bluish images (CCTs above 6500 K), a higher CCT of lighting is preferred; conversely, for the yellowish images (CCTs less than 5000 K), a lower CCT of lighting is preferred (see Fig. 4a).

3.3 Preferred Facial Skin Tone Ellipse

A preferred colour can be defined by its colour centre and colour gamut, for which an ellipse or ellipsoid is frequently used. From the earlier studies, the preferred skin tones
Fig. 4 Preferred decisions plotted against the ambient CCT and a the display CCT and b the display Duv. The total score is 4 images × 28 observers × 2 times × 5 (CCT/Duv) = 1120
were described in CIELAB colour space. Equation (4) shows a logistic function to transform between the model predicted probability (Pp) and the ΔE′ calculated from an ellipse equation in the CIELAB a*b* diagram. The ellipse equation defines the boundary corresponding to Pp equal to 50%, i.e. half of the observers liked the stimulus and the other half disliked it. An optimisation process was established to obtain the 6 coefficients
in Eqs. (4) and (5), i.e. k1, k2, k3, a0, b0 and α, by maximizing the correlation coefficient between Pp and Pv, the preference percentage from the visual results. Note that α is the colour difference calculated from the ellipse equation corresponding to the 50% ellipse boundary.

P_p = \frac{1}{1 + e^{(\Delta E' - \alpha)}} \qquad (4)

where

\Delta E' = \left[ k_1 (a^* - a_0)^2 + k_2 (b^* - b_0)^2 + k_3 (a^* - a_0)(b^* - b_0) \right]^{1/2} \qquad (5)
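The optimisation described for Eqs. (4) and (5) can be sketched with scipy: choose k1, k2, k3, a0, b0 and α to maximise the correlation between the predicted probability Pp and the visual preference rate Pv. All names, starting values and the optimiser choice here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_preference_ellipse(ab, pv):
    """Fit k1, k2, k3, a0, b0, alpha of Eqs. (4)-(5) by maximising the
    correlation between predicted probability Pp and visual preference Pv.
    ab: (N, 2) array of a*, b* stimulus coordinates; pv: preference rates."""
    def predict(theta):
        k1, k2, k3, a0, b0, alpha = theta
        da, db = ab[:, 0] - a0, ab[:, 1] - b0
        de = np.sqrt(np.clip(k1*da**2 + k2*db**2 + k3*da*db, 0.0, None))  # Eq. (5)
        return 1.0 / (1.0 + np.exp(de - alpha))                            # Eq. (4)
    def loss(theta):
        return -np.corrcoef(predict(theta), pv)[0, 1]
    x0 = np.array([0.01, 0.01, 0.0, ab[:, 0].mean(), ab[:, 1].mean(), 3.0])
    return minimize(loss, x0, method="Nelder-Mead").x
```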
Fig. 5 Plot of tolerance ellipses transformed from their original lighting condition in CIELAB a*b* diagram
Fig. 5 plots the preferred skin tone ellipses for the four skin types in the CIELAB a*−b* plane together with their centres. Red, orange, yellow, green and blue ellipses
are the preferred skin range ellipses under the 3000, 4000, 5000, 6500 and 8000 K ambient lightings, respectively. Each ellipse covers the preferred skin tone region judged by observers for each of the 4 original images: skin tones inside the ellipse were preferred, while those outside were disliked. Note that these colours from all the ambient lightings were transformed to 6500 K via CAT02 with the incomplete adaptation function D given in Eq. (1), and the ellipse equations were then optimised as described above. We did not optimise ellipses in CIELAB using its own reference white, due to its poor performance on chromatic adaptation in predicting corresponding colours [13]. The following conclusions can be drawn: (1) the preferred skin tone centres are close to each other under different ambient illuminations; (2) the preferred skin tones are close to each other regardless of the ethnic group; (3) the gamut boundaries are the largest for the Caucasian image, followed by the South Asian and Oriental images, with the African image the smallest. A larger ellipse means a bigger preferred region, i.e. a greater tolerance in accepting skin tones.
4 Conclusions

A psychophysical experiment was conducted to evaluate the preferred skin tones under different ambient lightings defined by CCT and Duv. The results showed that: (1) all observers preferred skin tone images captured under lightings with a high colour temperature and negative Duv; (2) the four types of preferred skin tones agreed well with each other, with mean values of L*, C*ab, hab of [41.3, 29.2, 48.7°], [66.7, 23.7, 44.0°], [68.4, 23.8, 44.2°] and [59.0, 26.9, 46.8°] for African, Caucasian, Oriental and South Asian, respectively, within 4 CIELAB units under D65/10° conditions; (3) the CCT of the lighting did affect colour appearance, i.e. skin appears less colourful under a higher-CCT illumination. These findings could help guide preferred colour reproduction on mobile displays.

Acknowledgements. The authors would like to thank the Chinese Government's National Science Foundation for support (Project Number 61775190).

Compliance with ethical standards

Conflict of Interest The authors state that they have no conflict of interest with the experimental subjects.

Ethical Approval All the procedures carried out in the research involving human participants were in line with the Academic Rules of Engineering Graduates of Zhejiang University and the 1964 Helsinki Declaration and its subsequent revisions or similar ethical standards.

Informed consent All individual participants included in the experiment were informed of and agreed to the use of the results of this study for academic research and publication of the paper.
References

1. Park DS, Kwak Y, Ok H, Kim CY (2006) Preferred skin color reproduction on the display. J Electronic Imaging 15:041203
2. Bartleson CJ (1961) Color in memory in relation to photographic reproduction. Photographic Sci Eng 5:327–331
3. Bartleson CJ, Bray CP (1962) On the preferred reproduction of flesh, blue-sky, and green-grass colors. Photographic Sci Eng 6
4. Sanders CL (1959) Color preferences for natural objects. Illuminating Eng 54:452–456
5. Sanger D, Asada T, Haneishi H, Miyake Y (1994) Facial pattern detection and its preferred color reproduction. In: IS&T/SID 2nd Color Imaging Conference, pp 149–153
6. Hunt RWG, Pitt IT, Winter LM (1974) The preferred reproduction of blue sky, green grass and Caucasian skin in colour photography. J Photographic Sci 22:144–148
7. Yano T, Hashimoto K (1997) Preference for Japanese complexion color under illumination. Color Res Appl 22:269–274
8. Kuang J, Jiang X, Quan S, Chiu A (2005) A psychophysical study on the influence factors of color preference in photographic color reproduction. In: Proc SPIE/IS&T Electronic Imaging: Image Quality and System Performance II 5668:12–19
9. CIE Publication 160:2004 (2004) A review of chromatic adaptation transforms. CIE Central Bureau, Vienna
10. Zhai QY, Luo MR (2018) Study of chromatic adaptation via neutral white matches on different viewing media. Opt Express 26:7724–7739
11. Davis W, Ohno Y (2005) Toward an improved color rendering metric. In: Proceedings of SPIE - The International Society for Optical Engineering, vol 5941, 59411G
12. Melgosa M, Huertas R, Berns RS (2008) Performance of recent advanced color-difference formulas using the standardized residual sum of squares index. J Opt Soc Am A 25:1828–1834
13. Luo MR (2008) A review of chromatic adaptation transforms. Rev Prog Coloration Related Topics 30(1):77–92
Models to Predict Naturalness of Images Including Skin Colours Dalin Tian1 , Rui Peng1 , Chunkai Chang2 , and Ming Ronnier Luo1(B) 1 State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou,
China [email protected] 2 OPPO Camera Group, Shenzhen, China
Abstract. Naturalness is an important attribute for judging image colour quality, especially for images containing memory colours such as skin, grass, etc. Many scientists have attempted to establish naturalness models, but it has been difficult to develop a satisfactory general skin naturalness model for people in different skin groups. In this study, a skin image database including four skin types was first built. The skin colours were precisely extracted from a number of images, and the skin colour data were then divided into a training set and a testing set. Subsequently, two skin naturalness models, skin-type dependent and independent, were derived from the training dataset, and their performance was evaluated using the testing dataset. The results showed that these models, based on ellipsoid and ellipse equations, can predict the naturalness of skin colour images accurately. Keywords: Naturalness Model · Skin Colour · Colour Difference
1 Introduction

High-quality colour images are highly desired with the development of photography technology in intelligent cellphones and high-resolution cameras, especially since in most cases skin colours play the leading role in pictures due to the popularization of portraits and selfies on social networks. Many scientists have concentrated on image quality and have conducted research to build a well-performing image quality model. In our previous study, image quality was evaluated on an OLED display with images rendered in different attributes [1]. The results showed that, compared with lightness-related attributes such as lightness and blackness, the colour-related attributes affect the overall image quality more, including shadow texture, colourfulness and reality, etc. Gong et al. developed a comprehensive image quality model for smartphones [2]. The model included naturalness, colourfulness, brightness and clearness as evaluated attributes. Their results showed that for natural scenes, naturalness and clearness had the largest correlation coefficients with image quality. Choi et al. conducted psychophysical experiments on image quality evaluation between a test display and a reference display [3]. They then developed a reference image quality model including naturalness, colourfulness and contrast. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021 P. Zhao et al. (eds.), Advances in Graphic Communication, Printing and Packaging Technology and Materials, Lecture Notes in Electrical Engineering 754, https://doi.org/10.1007/978-981-16-0503-1_5
From the above studies, naturalness was found to be a key image quality component, and various naturalness models have been proposed. Gong established a naturalness model based on the theory that naturalness has a negative relationship with memory colour shift. The sky, skin and grass memory colours were chosen as common natural objects, and their colour centres were used to calculate the colour differences of those objects in a test image. An exponential equation was derived to fit the colour difference and visual naturalness, and it gave a good prediction to the data. Choi et al. built a new naturalness model [4] including three components, i.e. colourfulness, shadow details and sharpness. The model covered not only memory colours but the whole image content. The model was found to be accurate; however, it requires a reference image as input, which makes it unsuitable for predicting the naturalness of individual images. Above all, there has not yet been a satisfactory model to predict image naturalness. Therefore, the goal of the present study is to develop naturalness models based on a skin database.
2 Development of Database

A database was developed at the Colour Engineering Lab, College of Optical Science and Engineering, Zhejiang University. First, the original images were captured in a professional LED lighting room in the lab. The lighting conditions of this room, such as illuminance, correlated colour temperature (CCT) and uniformity of the light source, can be precisely controlled. In this experiment, the illuminance of the room was set to 550 lx, and the CCT was set to 6500 K. Eight volunteers, covering 4 skin types (Caucasian, Chinese, South Asian, African) and 2 genders (male and female), participated in the image capturing. Subsequently, the original images were rendered to form an image database. The skin colour pixels in each image, including the face and arm areas, were cropped, and the background was not processed. The centres of these skin pixels in CIELAB were moved to different target points, which were uniformly selected in the L*−a*, L*−b* and a*−b* planes. For each plane, 16 targets were selected around the original centre. Thus, in total, there were 392 images ((16 × 3 rendered images + 1 original image) × 4 skin types × 2 genders) to be used in this experiment. Finally, a psychophysical experiment was conducted on a mobile phone. The rendered images were shown randomly on the display, and observers were asked to assess whether the displayed image was 'natural' or 'not natural' using a forced-choice method. 20 Chinese observers, 10 males and 10 females, took part in this experiment. Overall, 7,840 estimations were made, i.e. 392 images × 20 observers. The visual results were transformed into a percentage naturalness N% for each image, which is used in the following data analysis.
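A sketch of the rendering step described above: the masked skin pixels are shifted in CIELAB so that their mean lands on a chosen target centre, leaving the background untouched. The conversion to CIELAB is assumed to be handled elsewhere, and the function name is illustrative.

```python
import numpy as np

def shift_skin_centre(lab_image, skin_mask, target_centre):
    """Translate the masked skin pixels in CIELAB so that their mean colour
    equals target_centre, leaving the background untouched.
    lab_image: (H, W, 3) CIELAB image; skin_mask: (H, W) boolean mask."""
    out = lab_image.astype(float).copy()
    shift = np.asarray(target_centre, float) - out[skin_mask].mean(axis=0)
    out[skin_mask] += shift
    return out
```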
3 Data Analysis

3.1 Skin Type Dependent Model

The first model was developed separately for each skin type. To train the skin naturalness models and test their performance, the processed images were divided into two groups: a training set and a testing set. For each original image, its 49 processed images were randomly divided into two parts: 24 training images and 25 testing images. Therefore, for each skin type there were 48 images (2 originals × 24) in the training set and 50 images (2 originals × 25) in the testing set. The skin colours of each image were selected, averaged and transformed into CIELAB colour space, as mentioned before. The naturalness percentage N% was related to a modified colour difference (ΔE′), which was calculated from the L*, a*, b* values of each image using the ellipse equation; Eq. 1 shows the calculation procedure, where L0, a0 and b0 represent the most natural colour centre in CIELAB colour space. This modified colour difference (ΔE′) can be regarded as the colour shift from the most natural colour centre, representing the level of unnaturalness. The 8 coefficients in Eq. 1, i.e. the colour centre [L0, a0, b0] and the coefficients ki (i = 1, 2, 3, 4) together with α, were optimised by maximising the correlation coefficient between the ΔE′ values from Eq. 1 and N% from the visual results, the probability of an image being evaluated as natural.
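A minimal sketch of this coefficient optimisation is given below. It assumes a quadratic, ellipse-style ΔE′ with a cross term and an exponent α; the exact form of the paper's Eq. 1 is not reproduced here, so the functional form, the variable names and the use of scipy are assumptions, and the data arrays are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def delta_e_prime(lab, p):
    """Assumed ellipse-style colour shift from a natural centre.
    p = [L0, a0, b0, k1, k2, k3, k4, alpha]."""
    L0, a0, b0, k1, k2, k3, k4, alpha = p
    dL, da, db = lab[:, 0] - L0, lab[:, 1] - a0, lab[:, 2] - b0
    q = k1 * dL**2 + k2 * da**2 + k3 * db**2 + k4 * da * db
    return np.abs(q) ** alpha

def neg_correlation(p, lab, n_percent):
    """Objective: maximise |corr(dE', N%)| by minimising its negative."""
    de = delta_e_prime(lab, p)
    r = np.corrcoef(de, n_percent)[0, 1]
    return -abs(r)

# lab: (N, 3) mean skin colours; n_percent: visual naturalness per image.
lab = np.random.rand(48, 3) * [40, 30, 30] + [40, 5, 10]   # placeholder
n_percent = np.random.rand(48)                              # placeholder
p0 = [60, 15, 15, 1, 1, 1, 0, 1]                            # initial guess
res = minimize(neg_correlation, p0, args=(lab, n_percent),
               method="Nelder-Mead")
print(res.x)
```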
Fig. 1 Ellipses from the 4 skin types in the a*–b* plane
Figure 1 shows the ellipses from the 4 skin types. It can be seen that their colour centres are quite close to each other and their ellipses are quite similar in the a*–b* plane. The ellipses had an orientation of around 68° from the a* axis, and the ratio between the semi-major and semi-minor axes was about 2, indicating that observers tolerate larger chroma changes than hue changes when assessing naturalness. Figure 2a, b plot the visual results against the models' predictions for the training and testing sets, with correlation coefficients of 0.96 and 0.95 respectively. This indicates that the models based on each skin type gave an excellent prediction of the data.

3.2 Generic Skin Type Model

Comparing the colour centres of the four skin-type models in Fig. 1, the difference in the a*–b* plane lies within a very limited range, while the main difference between the skin groups is in the lightness distribution of the colour centres. As a consequence, the models are complex and not general across all types. To simplify the process and build a generic model for all types, a lightness-independent model was developed.
Fig. 2 Visual results and models' predictions for: a training set, b testing set
Similarly, the database was divided into a training set and a testing set. For each set, images of all skin types were included equally, i.e. 192 training images and 200 testing images. To build a lightness-independent model, a newly modified colour difference (ΔE′) equation, described only in the a*–b* plane, was proposed. Its form is similar to Eq. 1, with the lightness term removed. To illustrate this modified colour difference (ΔE′) equation directly, Fig. 3 shows the naturalness boundary in the a*–b* plane at a percentage naturalness N% of 50%.
Fig. 3 Ellipse of the generic model in the a*–b* plane
Fig. 4 Visual results and the generic model's predictions for: a training set, b testing set
The training dataset was used to train the model, following the same method as before. Figure 4a shows the relationship between the predictions and the visual data; the correlation coefficient is 0.83. The model accuracy was then tested with the testing dataset mentioned before. Figure 4b shows the relationship between visual naturalness and the naturalness predicted from the testing data; the correlation coefficient is 0.79. Compared with the skin-type dependent model, the accuracy of this model is slightly worse because of the exclusion of lightness, but it is a generic model suitable for images of all skin types. The fitted [a0, b0] values of the skin naturalness centre in this model were [17.8, 17.8], giving a CIELAB hue angle of about 45°, which is close to the previous study [5].
4 Conclusions

In this study, a skin colour database was used to develop naturalness models for skin images. Two models were developed: skin-type dependent and skin-type independent. As expected, the latter performed worse than the former. However, it still gave reasonably accurate predictions and is simpler, being able to handle images of all skin types.

Acknowledgement. The authors would like to thank Guangdong OPPO Mobile Telecommunications Corp., Ltd. for their support.
References

1. Tian D, Xu L, Luo MR (2019) The image quality evaluation of HDR OLED display. Electronic Imaging 309-1. https://doi.org/10.2352/ISSN.2470-1173.2019.10.IQSP-309
2. Gong R, Xu H, Luo MR (2015) Comprehensive model for predicting perceptual image quality of smart mobile devices. Appl Opt 54(1):85–95
3. Choi SY, Luo MR, Pointer MR, Rhodes PA (2008) Investigation of large display color image appearance I: important factors affecting perceived quality. J Imaging Sci Technol 52:040904
4. Choi SY, Luo MR, Pointer MR, Rhodes PA (2009) Investigation of large display colour image appearance III: modelling image naturalness. J Imaging Sci Technol 53:031104-1–12
5. Zeng HZ, Luo MR (2011) Skin colour modelling of digital photographic images. J Imaging Sci Technol 55(3):030201-1–12
Research of Natural Lip Colour and Preferred Lip Colour

Liyi Zhang, Jiapei Chen, and Mengmeng Wang(B)

School of Design, Jiangnan University, Wuxi, Jiangsu, China
[email protected]

Abstract. The lips are an important feature of the face, and many previous studies have examined their colour. They found that lip colour has a significant connection with the perceived health and attractiveness of the face. As a key feature, lip colour, and especially preferred lip colour, is of wide interest to the cosmetic and imaging industries. In this study, the natural colour of the lips and the preferred colour of the lips were investigated to further quantify lip colour preference. The natural lip colour gamut shows small variation in chroma, and its lightness is darker than the average skin colour of the same ethnicities. A psychophysical experiment was carried out to determine the colour properties of this preference, and data from fifty participants were collected. The results show that lip colours with a hue angle similar to that of the natural lips were preferred.

Keywords: Lip Colour · Colour Preference · Psychophysics Experiment
1 Introduction

The human face is an attention-drawing element of an image. Its colour, as a vital factor of image appearance, is frequently used to evaluate product quality in imaging-related industries such as display manufacturing and printing. Many previous studies have focused on preferred facial and skin colours in reproductions in these areas, and skin colour databases were built to investigate this topic further [1]. They found that changes in the hue and whiteness of skin colour significantly impacted the perceived attractiveness, health and youthfulness of the face [2]. Lip colour is also one of the four key colour features of a facial image, alongside skin, eyes and hair. People use cosmetic products, such as lipstick and lip gloss, to change the colour of their lips so that the perception of their faces can be redefined. In the cosmetic beauty industry, lip colour is described as "the colour that defines the face" [3]. Studies have been carried out to determine the relationship between lip colour and the perception of the face. Lip colour was found to be closely related to perceived health and attractiveness, and the redness or darkness of the lips can affect the perceived whiteness of the skin, an important factor in Asian beauty judgement [4]. Previous studies showed that lip colour has a significant impact on facial perception, but limited research has further quantified these impacts. This study aims to determine the preferred lip colour of Chinese women. The relationship between the preferred lip colour and the natural lip colour was investigated.
2 Methodology

The facial images from the Leeds Liverpool skin colour database (LLSD) were used to extract natural lip colours in this study. Ninety lips of Caucasian and Chinese females were extracted from images randomly selected from the database. All teeth and skin areas around the mouth were removed, and a white background was used for colour information extraction. The RGB values of each pixel were first converted into CIELAB (D65, 2°); the mathematical matrix developed in a previous study was used to complete the conversion [2]. Then, the average CIELAB values of all pixels were calculated and used to represent the colour of each lip.

A Munsell colour chart was used to select popular lipstick colours. Five female students with normal colour vision participated. Twenty-three colours were selected, and the 11 most saturated colours were chosen for use in the experiment, as listed in Table 1.

Table 1 The 11 selected Munsell colours

Number          Munsell code
lip colour 1    5YR 7/14
lip colour 2    2.5YR 6/14
lip colour 3    10R 6/16
lip colour 4    7.5R 5/18
lip colour 5    5R 5/18
lip colour 6    2.5R 5/16
lip colour 7    10RP 6/16
lip colour 8    10RP 5/16
lip colour 9    7.5RP 5/18
lip colour 10   5RP 6/18
lip colour 11   5RP 5/18
A forced-choice psychophysical experiment was carried out to investigate the preferred lip colour. The experimental interface is shown in Fig. 1. The illustrated face was filled with the average skin colour of the Caucasian and Chinese skin colour data in the LLSD; this face was used to assist the participants in selecting the preferred lip colour. Three scales related to preference were questioned: attractiveness, health and youthfulness. Fifty female students from Jiangnan University, aged between 18 and 30, participated in this experiment voluntarily.
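The lip-colour extraction pipeline described above can be sketched as follows. This is a minimal illustration: the paper used a camera characterisation matrix from [2] for the RGB-to-CIELAB conversion, which is not available here, so the sketch substitutes the standard sRGB conversion from scikit-image; the file name and the background threshold are hypothetical.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2lab

# Load a cropped lip image on a white background (hypothetical file).
rgb = io.imread("lip_crop.png")[:, :, :3] / 255.0

# Convert to CIELAB (D65, 2 deg observer) - a stand-in for the
# device-specific matrix used in the original study.
lab = rgb2lab(rgb)

# Exclude the white background: keep pixels that are not near-white.
mask = ~((lab[:, :, 0] > 95) & (np.abs(lab[:, :, 1]) < 3)
         & (np.abs(lab[:, :, 2]) < 3))
mean_lab = lab[mask].mean(axis=0)
print("mean lip colour (L*, a*, b*):", mean_lab)
```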
Fig. 1 The experimental interface
3 Results and Analysis

3.1 Caucasian and Chinese Lip Colour Distribution

The distributions of the lip colours in L*C*ab and L*hab are shown in Fig. 2.
Fig. 2 The distribution of the natural lip colours of 90 Caucasian and Chinese females
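The chroma C*ab and hue angle hab used in the following analysis follow directly from the CIELAB a* and b* coordinates by standard colorimetry; a minimal conversion sketch with made-up sample values is shown below.

```python
import numpy as np

# Mean CIELAB values of several lips (hypothetical sample values).
lab = np.array([[35.0, 17.0, 12.0],
                [32.0, 14.5, 10.2],
                [38.0, 16.1, 11.4]])

a, b = lab[:, 1], lab[:, 2]
chroma = np.hypot(a, b)                        # C*ab = sqrt(a*^2 + b*^2)
hue = np.degrees(np.arctan2(b, a)) % 360.0     # hab in degrees

print("chroma range:", chroma.min(), "-", chroma.max())
print("mean hue angle:", hue.mean())
```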
The variation in the chroma of the natural lip colours is small: the chroma values lie between 11.6 and 22.1. The lightness of the natural lip colours is mostly distributed between 30 and 40. Compared with the lightness range of Caucasian and Chinese skin colours, which is mostly between 50 and 70, the lip colour is somewhat darker. The mean hue angle of the lip colours is 35°, which is redder than the hue angle of the skin (average hue angle of 56°). The distribution shows that the natural lip colour of Caucasian and Chinese females is a red-yellowish colour and is darker than their skin colour. In conclusion, the natural lip colours of Caucasian and Chinese females are red-yellowish, darker than the skin, and show only small variation in chroma.

3.2 Preferred Lip Colours

The experimental results are shown in Fig. 3.
Fig. 3 Fifty participants' experimental results for attractiveness, health and youthfulness
The experimental results show that lip colour 4 (7.5R 5/18) is the most preferred lip colour, having the highest percentage of selections for the attractive, healthy and youthful-looking criteria. A high percentage of participants also selected lip colour 5 (5R 5/18), especially for the healthy look. For the youthful-looking criterion, lip colour 11 (5RP 5/18) also had a high percentage of selections, which may be due to current colour fashion trends in cosmetics. Comparing these three colours, lip colours 4 and 5 are red-yellowish, with hue angles close to that of the natural lip colour, while lip colour 11 is red-purplish, with a hue angle that differs more from the natural one. It can therefore be inferred that the preferred lip colours are those with hue angles close to the natural lip colour. The experimental results thus show a clear trend of lip colour preference: colours with hue angles similar to the natural lips were preferred.
4 Conclusions

An investigation of natural lip colour and preferred lip colour was carried out in this study. The natural lip colour is darker than the skin colour of the same ethnicities, and the variation of its chroma values is small. The hue angles of the lips are mostly between 30° and 40°, corresponding to red-yellowish colours. The natural lip colours of Caucasian and Chinese females were found to have small variation, with a fairly constant hue angle. The experimental results showed that the preferred lip colours have hue angles similar to the natural lip colour; the most preferred lip colour is the red-yellowish colour with Munsell code 7.5R 5/18.
Acknowledgements. This study is partly funded by Jiangnan University fundamental research funding (JUSRP119100).

Conflict of Interest The authors declare that they have no conflict of interest.

Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the School of Digital Media, Jiangnan University, and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The images and skin colour information are from the Leeds Liverpool skin colour database.

Informed Consent Informed consent was obtained from all individual participants included in the study.
References

1. Wang M, Xiao K, Luo MR, Pointer M, Cheung V, Wuerger S (2018) An investigation into the variability of skin colour measurements. Color Res Appl 43(4):458–470
2. Wang M, Luo MR, Xiao K (2017) The impact of skin colour on facial impressions. In: AIC 2017, Jeju, pp 89–89
3. McEvoy T, Boyes M (2003) Trish McEvoy: the power of makeup: looking your level best at every age. Simon and Schuster, USA
4. Kobayashi Y, Matsushita S, Morikawa K (2017) Effects of lip color on perceived lightness of human facial skin. i-Perception 8(4)
Spectrum Optimization for Surgical Lighting by Enhancing Visual Clarity

Lihao Xu1(B) and Ming Ronnier Luo2

1 School of Digital Media and Art Design, Hangzhou Dianzi University, Hangzhou, China
[email protected]
2 State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China
Abstract. This paper investigates a spectrum design method to enhance visual clarity for surgical lighting. Three methods were proposed and their results were compared together with two commonly used CIE illuminants, i.e. illuminant A and D65. The results showed that all the optimized spectra gave outstanding performance and had similar peaks at around 430 nm and 525 nm.

Keywords: Surgical Lighting · Spectrum Optimization · LED Lighting
1 Introduction

Surgical lighting has drawn much attention for a long time. It started with the incandescent lamp, went through the period of the fluorescent lamp, and has now entered a new age of LED lighting. LEDs offer tremendous advantages, providing stable, energy-saving and high-quality lighting [1, 2]. The most important characteristic, from our point of view, is the tunable spectrum. By changing the spectrum, visual clarity can be greatly enhanced, which is of vital consequence in surgical lighting [3–7]. Higher visibility greatly assists surgeons in distinguishing different tissue textures and improves efficiency and safety during operations. Hence, a specially designed illuminant with high discrimination capability is highly desirable.

In this paper, three optimization methods for designing the surgical lighting spectrum are proposed. Their procedures are similar and can be briefly summarised as follows. Firstly, a multi-spectral imaging system (MSIS) [8] was applied to obtain the spectral reflectance functions of an organ sample. Subsequently, the organ's characteristic colors were extracted using a color clustering method and were further adopted to perform the spectrum optimization. The optimization objective was specially designed to represent visual clarity when viewing the organ sample under a specific illuminant spectrum. Finally, all the optimized spectra were compared and conclusions were drawn. The results showed that all the optimization methods were effective and outperformed the CIE illuminants tested.
2 Method

The spectrum optimization workflow is summarized in Fig. 1, from multi-spectral image acquisition to obtaining the final spectrum.
Fig. 1 The workflow of the spectrum optimization: organ sample → MSIS → reflectance (R%) image → color clustering → characteristic colors → spectrum optimization
The first step is to obtain a multi-spectral image of the test organ sample. The MSIS in this experiment is composed of an achromatic 14-bit digital camera and 16 narrow-band optical filters [9]. The spectral range of the filters is from 400 to 700 nm with an interval of 10 nm, and the imaging precision is less than one ΔE*ab unit when applied to fabrics. For the spatial dimension, this MSIS offers a resolution of 1040 by 1392 pixels, which is typical for this kind of application. In this study, the test organ sample, a pig's liver, was captured to represent a human organ.

The second step is to obtain the characteristic colors of the test sample. Once acquired, the multi-spectral reflectance image was transformed into a CIELAB image under D65. In this stage, pixels in the reflectance image were divided into several groups using a color clustering algorithm; each group represents a characteristic color of the organ sample. In our previous study, the K-Means clustering method [10] was adopted to perform the color clustering. However, the number of cluster centers (K) was unknown for the specific organ sample, so different numbers had to be tried until no obvious color difference [11] could be observed between the original image and its corresponding color-clustered reproduction. In this study, a more robust color clustering method, Mean-Shift (MS) clustering [12], was also adopted. The MS method does not need the number K, but a bandwidth, i.e. the color difference among the clusters, must be assigned. The problem is then to find a just-noticeable color difference for an organ image, which was roughly 2.5 according to a previous study [13]. In this study, both the K-Means and the Mean-Shift methods were adopted, and their results are compared in a later stage.
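A minimal sketch of the clustering step is shown below, using scikit-learn's MeanShift on CIELAB pixels with the bandwidth set near the just-noticeable difference mentioned above. The input array is a placeholder, and the exact bandwidth used in the study is an assumption.

```python
import numpy as np
from sklearn.cluster import MeanShift

# lab_pixels: (N, 3) CIELAB values of the organ image (placeholder data).
lab_pixels = np.random.rand(5000, 3) * [30, 40, 40] + [20, 20, 10]

# Bandwidth ~ just-noticeable colour difference for organ images (~2.5).
ms = MeanShift(bandwidth=2.5, bin_seeding=True)
labels = ms.fit_predict(lab_pixels)

characteristic_colors = ms.cluster_centers_
print(f"{len(characteristic_colors)} characteristic colors found")
```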
3 Spectrum Optimization

As discussed in Sect. 1, different LEDs can be combined to form different illuminant spectra. In this study, eleven LEDs, consisting of 10 narrow-band LEDs and a white LED (3000 K), were used; their spectra are illustrated in Fig. 2. As shown, most of the visible spectrum is well covered.
Fig. 2 The relative spectral power of the eleven LEDs adopted in this study (400–750 nm)
The combined spectral power distribution (SPD) can be defined as

    SPD = Σ_{i=1,2,...,11} SPD_LED,i

where i represents the serial number of the LED light. By adopting different sets of LEDs, different illuminant spectra can be achieved.

It is generally believed that visual clarity requires the visual responses to distinct surface colors to be different, and that it can be enhanced by a suitable choice of illuminant. In this study, the visual clarity is defined as

    Clarity = Σ_{i,j=1,2,...,n} ΔE_{i,j}

where i and j index the extracted characteristic colors. In other words, the visual clarity is evaluated by the sum of the color differences among the characteristic colors under a given illuminant: the larger the calculated color difference, the higher the visual clarity. Hence, the whole optimization procedure is clear. Two major steps are necessary in the workflow: one is to find the characteristic colors of the organ sample; the other is to perform the spectrum optimization based on the clarity measure.
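The clarity objective can be sketched in plain numpy: combine a weighted set of LED SPDs, compute the CIELAB coordinates of each characteristic color under the resulting illuminant, and sum the pairwise color differences. The input arrays are assumptions (LED SPDs, reflectances and colour matching functions on a common wavelength grid), and Euclidean ΔE*ab is used as a simple stand-in for a full colour-difference formula.

```python
import numpy as np
from itertools import combinations

# Assumed inputs on a common wavelength grid (e.g. 400-700 nm, 10 nm):
# led_spds: (11, W) relative SPDs of the LED channels
# refl:     (n, W) reflectances of the n characteristic colors
# cmfs:     (W, 3) CIE 1931 colour matching functions (xbar, ybar, zbar)

def lab_under(spd, refl, cmfs):
    """CIELAB of each reflectance under the given illuminant SPD."""
    xyz = (refl * spd) @ cmfs                 # integrate S(l)R(l)CMF(l)
    white = spd @ cmfs
    xyz = xyz / white[1]                      # normalise so Yw = 1
    white = white / white[1]
    def f(t):
        return np.where(t > (6/29)**3, np.cbrt(t), t/(3*(6/29)**2) + 4/29)
    fx, fy, fz = f(xyz[:, 0]/white[0]), f(xyz[:, 1]), f(xyz[:, 2]/white[2])
    return np.stack([116*fy - 16, 500*(fx - fy), 200*(fy - fz)], axis=1)

def clarity(weights, led_spds, refl, cmfs):
    """Sum of pairwise dE*ab among characteristic colors (the objective)."""
    spd = weights @ led_spds                  # combined illuminant SPD
    lab = lab_under(spd, refl, cmfs)
    return sum(np.linalg.norm(lab[i] - lab[j])
               for i, j in combinations(range(len(lab)), 2))
```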
4 Optimization Results

As discussed in Sect. 2, two different color clustering methods, K-Means and Mean-Shift, were adopted. They generate different characteristic colors for the same organ sample and thus different optimized spectra. In addition, since the liver has a distinct surface texture, two characteristic colors were simply picked out to form a third dataset: if the color difference between color 1 and color 2 is maximized, the texture should appear clear. As a result, three datasets were obtained in total. Dataset 1 was acquired using the K-Means clustering method, Dataset 2 was acquired using the Mean-Shift clustering method, and Dataset 3 was acquired by selecting 2 characteristic colors from the organ sample, as shown in Fig. 3.
Fig. 3 The selection of characteristic colors for dataset 3
For a realistic illuminant, the optimized spectrum should not be too colorful. Therefore, a lighting quality parameter, Duv, was adopted to control the colorfulness of the optimized spectrum; in this study, the absolute Duv value was constrained to within 0.01 to avoid too large a departure. Figure 4 illustrates the optimized spectra for all three datasets. The three optimized spectra are named SPD1 (using only the 2 selected colors), SPD2 (using the K-Means algorithm) and SPD3 (using the Mean-Shift algorithm). As shown, SPD2 and SPD3 are similar apart from slight differences in the long-wavelength region: both have peaks at around 430 and 520 nm, and the only difference lies at around 660 nm, where SPD2 has more power. SPD1 is quite different from the other two, giving more weight to peaks at 475 nm and 660 nm.

Fig. 4 The optimized spectra and illuminants A and D65
To validate the effectiveness of these methods, the image entropy was calculated for each reproduction using the method proposed by Shen [14], together with the two commonly used illuminants A and D65. Image entropy is a common measure of how much information an image carries and is often used to evaluate image clarity. Figure 5 shows all the reproductions, and their corresponding image entropies are illustrated in Fig. 6. As expected, the optimized spectrum using the Mean-Shift algorithm gave the highest image entropy, indicating the effectiveness of this method. The result using the K-Means algorithm was only slightly worse, followed by that using the two-color dataset. All the optimized SPDs outperformed illuminants D65 and A.
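Shen's entropy measure [14] is not detailed in this paper; as a hedged stand-in, the sketch below computes the common Shannon entropy of an image's grey-level histogram, which captures the same idea of information content per reproduction.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram (bits per pixel)."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins (0*log 0 = 0)
    return -np.sum(p * np.log2(p))

# Example: entropy of a random greyscale "reproduction".
img = np.random.rand(256, 256)
print(f"entropy = {image_entropy(img):.2f} bits")
```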
Fig. 5 Organ reproductions under illuminant A, D65 and the three optimized SPDs
Fig. 6 The bar chart of the image entropies for each reproduction (illuminants A, D65, SPD1, SPD2 and SPD3)
5 Conclusions

Three optimization methods were proposed in this paper and were shown to outperform the traditional CIE illuminants. Three datasets were extracted, either by clustering or by manual selection, and each dataset led to its corresponding optimized spectrum. The results showed that all the optimized spectra outperformed the traditional CIE illuminants.
References

1. Holonyak N (2000) Is the light emitting diode (LED) an ultimate lamp? Am J Phys 68(9):864–866
2. Bergh A et al (2001) The promise and challenge of solid-state lighting. Phys Today 54(12):42–47
3. Kazuhiro G et al (2004) Appearance of enhanced tissue features in narrow-band endoscopic imaging. J Biomed Opt 9(3):568–577
4. Lee M et al (2009) Optimal illumination for discriminating objects with different spectra. Opt Lett 34(17):2664–2666
5. Neil TC et al (2012) Development and evaluation of a light-emitting diode endoscopic light source. In: Proc SPIE
6. Wang H et al (2017) Optimising the illumination spectrum for tissue texture visibility. Lighting Research & Technology 50(5):757–771
7. Xu L, WH, LM (2015) An LED based spectra design for surgical lighting. In: Proc CIE, pp 1473–1446
8. Guolan L, Baowei F (2014) Medical hyperspectral imaging: a review. J Biomed Opt 19(1):1–24
9. Shen H et al (2014) Channel selection for multispectral color imaging using binary differential evolution. Appl Opt 53(4):634–642
10. Hartigan JA, Wong MA (1979) A K-Means clustering algorithm. J Roy Stat Soc: Ser C (Appl Stat) 28(1):100–108
11. CIE (1986) Colorimetry, 2nd edn. CIE Publ. No. 15.2, Vienna
12. Fukunaga K, Hostetler L (1975) The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans Inf Theory 21(1):32–40
13. Liu HX et al (2014) Color-difference threshold for printed images. Appl Mech Mater 469:236–239
14. Shen J et al (2017) Optimising the illumination spectrum for enhancing tissue visualisation. Lighting Research & Technology 51(1):99–110
Observer Metamerism for Assessing Neutrality on Displays Hui Fan, Yu Hu, and Ming Ronnier Luo(B) State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China [email protected]
Abstract. Observer metamerism (OM) refers to the disagreement of color matches caused by differences in color vision between observers. It is a source of uncertainty in color specification and is essential to the design of imaging products. Many experiments have been conducted to quantify the degree of OM. In this paper, two color matching experiments were conducted to study the neutral white on two displays. Based on the experiments, 95% ellipses for the different SPs (starting points) and observers were drawn in the a*b* plane, and the neutrality chromaticities of the 2 displays are reported in terms of u'v'. The intra- and inter-observer variations are reported using the mean color difference from the mean (MCDM) in terms of CIEDE2000 color differences. Intra-observer variation, calculated as the MCDM over 8 repeated matches, was 3.9, ranging from 3.1 to 4.4. Inter-observer variation, calculated between observers under each luminance level or SP, was 5.5, ranging from 4.1 to 7.3. Observer variation also increased slightly when different SPs were involved.

Keywords: Observer Metamerism · Color Matching · MCDM
1 Introduction

Observer metamerism refers to the phenomenon that, under identical viewing conditions, a metameric pair of colors that match for one observer can be mismatched for another [1]. In other words, observers with normal color vision can still have quite different color perceptions. The color perceived by the human eye is determined by the color matching functions (CMFs), which vary from person to person and are affected by many physiological parameters [2]. The difference between CMFs is one of the causes of observer metamerism. In 2006, the CIE proposed the CIEPO06 model [3] to generate CMFs of observers of different ages and field sizes, and it can be used to evaluate observer metamerism. However, observers of the same age and field size still differ in color vision, so psychophysical experiments are necessary to study and measure the color vision differences between observers.

Various studies have been carried out [4–6] and their results were reported in terms of MCDM. Each experiment recruited a number of observers to perform color matching
experiments, and the results were reported in terms of inter- and intra-observer variations. In general, they found that the inter-observer variations (MCDM) were around 2.0–4.0 CIEDE2000 units, with the intra-observer variation smaller by a factor of about 1.5. Those experiments performed color matching using a white as an external reference. The present work differs from them: it is aimed at studying the memory color of the white perception. In other words, color matching was performed against an internal reference, i.e. each observer's own perception of white.

Many studies have investigated the chromaticity of neutral white. Huang et al. [7] studied the white appearance of a tablet display under 17 lighting conditions; 63 observers were asked to judge whether each stimulus could be classified as white and to estimate its whiteness percentage. It was found that the neutrality centers (in u'v') under dark and D65 conditions were (0.2001, 0.446) and (0.1976, 0.4451) respectively. Smet et al. [8, 9] studied the chromaticity of unique white viewed in object mode and illumination mode under dark-adapted conditions; 13 observers conducted unique-white setting and rating experiments. The averages of the neutrality centers over 3 luminance levels were (0.2086, 0.4655) for object mode and (0.2035, 0.4605) for illumination mode. In this study, the chromaticity of neutral white was further investigated. Overall, the goals were to find the neutral chromaticity, to investigate observer metamerism, including intra- and inter-observer variations, and to study the impact of different starting points on observer metamerism.
2 Experimental

The experiment for finding the neutrality color was divided into two parts using two displays, assessed by two independent groups of observers. The aim was to reach a general conclusion on the range of observer variations and to compare the similarities and differences between the two experiments.

2.1 Experiment 1

Experiment 1 was carried out at Zhejiang University on a NEC display, assuming sRGB. Color patterns were presented on a 10-bit 'NEC MultiSync PA272W' LCD display. Observers sat 60 cm in front of the display. After chromatic adaptation for 1 min, a color patch on a black background appeared at the center of the screen, subtending a 4° field of view (FOV). The experiment was carried out in a completely dark room. 25 observers with normal color vision participated. Observers used the keyboard to adjust the test stimulus along the CCT and Duv directions. There were 2 luminance levels (18.42 and 40.75 cd/m2) and 4 starting points. The two parts of Experiment 1 are named Experiment 1L and Experiment 1H for the low and high luminance levels, respectively.

2.2 Experiment 2

Experiment 2 was conducted at the home of the first author due to the COVID-19 outbreak. The experimental conditions closely followed those of Experiment 1. A laptop
computer, assumed to be an sRGB display, was used. Ten observers with normal color vision participated. When matching neutral white, CIELAB a* and b* were adjusted while the lightness was fixed at L* = 61. Each match started at one of 4 starting points: saturated red, yellow, green and blue. For each starting point, observers repeated the match 8 times. In total, 320 matches were made, i.e. 10 observers × 4 SPs × 8 repeats.
3 Results and Discussions

The observer metamerism was analysed in two parts: intra- and inter-observer variations.

3.1 Intra-observer Variation

Only the Experiment 2 results were used to investigate intra-observer variation, because observers in Experiment 1 did not perform repeated assessments. Table 1 shows the MCDM results for each starting point and their mean, together with the overall mean calculated from all 320 available matches. Different SPs clearly impacted observer variation: observers performed most consistently for the blue SP (3.1) and least consistently for the yellow SP (4.4), with a mean of 3.9 MCDM units. When all available data were included, the observer variation became 4.5, an increase by a factor of about 1.2.

Table 1 Intra-observer variations for Experiment 2

Starting point   Experiment 2
SPR              4.3
SPY              4.4
SPG              3.8
SPB              3.1
Mean             3.9
Overall mean     4.5
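MCDM (mean color difference from the mean) can be sketched as follows: average an observer's repeated matches, then average the color differences of each match from that mean. The sketch below uses the colour-science package's CIEDE2000 implementation; the match data are hypothetical placeholders.

```python
import numpy as np
import colour  # colour-science package

def mcdm(matches_lab):
    """Mean color difference from the mean (CIEDE2000) of repeated matches."""
    mean_lab = matches_lab.mean(axis=0)
    de00 = colour.difference.delta_E_CIE2000(
        matches_lab, np.tile(mean_lab, (len(matches_lab), 1)))
    return de00.mean()

# Eight hypothetical repeated neutral-white matches (CIELAB, L* fixed at 61).
matches = np.array([[61, 1.2, -0.5], [61, 0.8, 0.3], [61, -0.4, 1.1],
                    [61, 0.1, -1.2], [61, 1.5, 0.9], [61, -0.9, 0.2],
                    [61, 0.3, 0.6], [61, -0.2, -0.8]], dtype=float)
print(f"intra-observer MCDM = {mcdm(matches):.2f}")
```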
3.2 Inter-observer Variation

Three sets of experimental results were used to investigate inter-observer variation. Table 2 shows the MCDM results for each starting point and their mean, together with the overall mean calculated from all matching data. Again, different SPs seem to impact inter-observer variation; for example, in Experiment 2, observers performed more consistently for the green and blue SPs than for the yellow and red SPs.

Table 2 Inter-observer variations for each experiment

                Experiment 1L   Experiment 1H   Experiment 2
SP1             5.3             6.0             5.7
SP2             5.2             7.3             5.7
SP3             4.9             4.1             5.3
SP4             4.6             7.0             4.6
Mean            5.0             6.1             5.3
Overall mean    5.5             7.1             5.5
The results from Experiment 1L and Experiment 2 were more consistent than those of Experiment 1H. Also, when all available data were included (see the overall mean), the observer variation increased by a factor of about 1.1. Observer variations can also be expressed by plotting 95% confidence ellipses in the a*b* plane. Figure 1a, b show the ellipses for Experiments 1L and 1H respectively. The ellipses in Fig. 1b are in general larger and more scattered than those in Fig. 1a, indicating less observer consistency in Experiment 1H than in Experiment 1L.
Fig. 1 95% confidence ellipses of the four starting points plotted in a*b* diagram for a Experiment 1L and b Experiment 1H, respectively. Note that the green, blue, pink, red ellipses represent SP1 ~ SP4, respectively. Black ellipse represents the overall results
Similar to Fig. 1, Fig. 2 plots the 95% ellipses for the 4 SPs of the Experiment 2 results, and Fig. 3 plots the ellipses for all 10 observers in Experiment 2. Figure 2 shows that all the ellipses were very similar in terms of all three ellipse parameters (size, shape and orientation), meaning that the SPs had little impact. It also implies that the results were more consistent in Experiment 2 than in Experiment 1.
Fig. 2 Experiment 2 ellipses and centers for the 4 starting points plotted in a*b* plane. The red, yellow, green, blue ellipses represent the 4 starting points. Black ellipse represents the overall result
Fig. 3 Experiment 2 ellipses of 10 observers. The red ellipse represents the overall results from all observers
3.3 Neutrality Chromaticity

The goal of the two experiments was to find the neutral chromaticity. For each experiment, all observers' neutrality results were averaged. Figure 4 shows the ellipses and neutral centers for the 2 displays in the u'v' chromaticity diagram, and their neutrality chromaticities are listed in Table 3. As can be seen in Fig. 4, the color centers are reasonably close, and although the ellipses from the two experiments differ, they are both oriented along the blackbody locus. This implies that observers are more tolerant in the yellow-blue direction. The difference between the two centers was about 0.019 in CIE 1976 u'v' units. This discrepancy is due to observer metamerism together with the different display primary sets, as reported by Hu et al. [10].
Fig. 4 Ellipses and neutral centers of the 2 displays in the u'v' plane. Neutral centers are marked with '+'. The areas plotted with dashed lines represent the color gamuts of the 2 displays
Table 3 Neutrality chromaticities in the u'v' diagram for Experiments 1 and 2

                 u'       v'
Experiment 1L    0.1990   0.4536
Experiment 1H    0.1989   0.4553
Experiment 2     0.1887   0.4386
4 Conclusions

In this paper, experiments were conducted to find the neutrality chromaticity on 2 displays. The chromatic difference between the two neutrality points is about 0.019 u'v' units, and the ellipses were oriented along the blackbody locus. Differences in observer metamerism were found between the 2 experiments, and they were measured to quantify the degree of observer metamerism. The intra- and inter-observer variations in terms of MCDM were 3.9 and 5.5 CIEDE2000 units, respectively. These results can provide a reference for the design of displays: a display with less observer metamerism will make observers' color perception more consistent.

Acknowledgements. The work is supported by the National Natural Science Foundation of China (Grant number: 61775190).
References

1. Xie H, Farnand SP, Murdoch MJ (2020) Observer metamerism in commercial displays. J Opt Soc Am A 37:A61–A69
2. Asano Y, Fairchild MD, Blondé L (2016) Individual colorimetric observer model. PLoS One 11
3. CIE (2006) Fundamental chromaticity diagram with physiological axes, Part I. CIE 170-1:2006, Central Bureau of the CIE, Vienna
4. Cho YJ, Cui G, Luo MR et al (2019) The impact of viewing conditions on observer variability for cross-media colour reproduction. Color Technol 135(3):234–243
5. Asano Y, Fairchild MD, Blondé L et al (2014) Observer variability in color image matching on a LCD monitor and a laser projector. In: 22nd Color and Imaging Conference Final Program and Proceedings, pp 1–6
6. Stiles WS, Burch JM (1959) Colour-matching investigation: final report. J Modern Optics 6(1):1–26
7. Huang HP, Wei M, Ou LC (2018) White appearance of a tablet display under different ambient lighting conditions. Opt Express 26(4):5018–5030
8. Smet K, Deconinck G, Hanselaer P (2014) Chromaticity of unique white in object mode. Opt Express 22:25830–25841
9. Smet K, Deconinck G, Hanselaer P (2015) Chromaticity of unique white in illumination mode. Opt Express 23:12488–12495
10. Hu Y, Wei M, Luo MR (2020) Observer metamerism to display white point using different primary sets. Opt Express 28:20305–20323
A Study of Incomplete Chromatic Adaptation of Display Under Different Ambient Lightings Rui Peng, Qiyan Zhai, and Ming Ronnier Luo(B) State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China [email protected]
Abstract. Earlier studies found that ambient illuminations of different chromaticities produce different degrees of chromatic adaptation when viewing displays. This results in a weakness of chromatic adaptation transforms in predicting colour appearance on mobile displays. This study aimed to collect experimental data and to propose an incomplete chromatic adaptation function based on the chromaticities of illuminants. An image containing black text on a white background was rendered, by means of the chromatic adaptation transform CAT02, into 42 different white stimuli displayed under ambient lightings of 5 CCTs (correlated colour temperatures) and 2 illuminance levels. Twenty observers assessed the neutral white for each colour stimulus. Our previous study's results were also combined to fit a model with different scaling factors for the incomplete adaptation factor (D) in CAT02. The optimization based on the neutral white stimulus under each ambient lighting condition suggested that the degree of chromatic adaptation is jointly affected by the illuminance and chromaticity of the ambient illumination.

Keywords: Chromatic Adaptation · Neutral White · Chromatic Adaptation Transform
1 Introduction

Chromatic adaptation (CA) is the human visual system's ability to adjust to changes in illumination so that the appearance of object colours is preserved [1]. Chromatic adaptation transforms (CATs) are models that predict these chromatic shifts, such as the widely used CAT02 [2] in CIECAM02 [3] and the more recent CAT16 [4] in CIECAM16 [4]. They are used to predict corresponding colours (CC) [5], i.e. two colours that have the same appearance under two different illuminants. CAT02 includes an incomplete adaptation factor (D) varying from 0 (no adaptation) to 1 (full adaptation). Several studies have found that D is affected jointly by the illuminance and the chromaticity of the illuminant [1, 6]. Smet et al. [1, 6] conducted an experiment to obtain CC data using memory colour matching of familiar objects [7] as internal references under both neutral and coloured illuminants. It was found that the more chromatic the illuminant, the less the observer adapts to it. At Zhejiang University, Zhai and Luo
[8] investigated chromatic adaptation via neutral white matching using self-luminous colours and optimized the D value of each corresponding-colour pair to derive a model of the effective degree of chromatic adaptation based on the u'v' chromaticities of the illuminants. This is given in Eq. (1):

    D = a1·u′² + a2·u′v′ + a3·v′² + a4·u′ + a5·v′ + a6        (1)

where D is the modelled effective degree of adaptation, a1 to a6 are the optimized model parameters, and u′ and v′ are the CIE 1976 chromaticities of the adapting illuminant. In practical use, a scaling factor on the D values may be added to the equation according to the luminance level, the viewing conditions and the adaptation time.

In this study, a psychophysical experiment was carried out to evaluate neutral white colour stimuli on a display under ambient lighting conditions with different adapting illuminance and CCT levels. The results were used to verify our previously developed incomplete adaptation model based on the chromaticities of illuminants, plus a scaling factor.
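Equation (1) with a scaling factor is straightforward to evaluate; a minimal sketch is given below. The coefficient values are taken from Table 2 later in this paper (the ZJU-Zhai fit), and clamping D to [0, 1] is an assumption added here, since D is defined on that range.

```python
def degree_of_adaptation(u, v, scale=1.0):
    """Effective degree of adaptation D from CIE 1976 u'v' (Eq. 1)."""
    # a1..a6 fitted on the ZJU-Zhai dataset (see Table 2 of this paper).
    a1, a2, a3, a4, a5, a6 = 11.85, -23.45, -65.08, 5.55, 66.15, -15.47
    d = a1*u*u + a2*u*v + a3*v*v + a4*u + a5*v + a6
    d *= scale                       # dataset-specific scaling factor
    return min(max(d, 0.0), 1.0)     # assumption: clamp to the valid range

# Example: D under a roughly 6500 K ambient chromaticity.
print(degree_of_adaptation(0.1978, 0.4683))
```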
2 Experimental Datasets

2.1 ZJU-Peng Dataset

In this study, an image composed of black text on a white background was displayed on a monitor, an NEC PA302W LED-backlit IPS LCD display. The tristimulus values of the original image were transformed by the CAT02 chromatic adaptation transform to produce images with each of 42 different chromaticities, corresponding to 6 CCT values (3000 K, 4000 K, 5000 K, 6500 K, 8000 K, 10000 K) at 7 Duv levels (−0.015, −0.010, −0.005, 0.000, 0.005, 0.010, 0.015). The white background had the same luminance of 100 cd/m2 for all images. Ten ambient conditions were designed to cover the majority of real-world lighting conditions: five chromaticities at 5 CCT levels, all on the Planckian locus (3000 K, 4000 K, 5000 K, 6000 K, 8000 K), each at two illuminance levels (500 and 1000 lx). The experiment was conducted in a windowless room, representing a typical office condition. In total, 20 observers (10 males, 10 females) from Zhejiang University participated, with an average age of 22.4 years and a standard deviation of 1.84 years. The observers were divided into two groups of 10, for the low (500 lx) and high (1000 lx) adapting illuminances; their results are referred to as 'ZJU-Peng 500' and 'ZJU-Peng 1000' in the following. Each observer evaluated 48 colour stimuli (42 display stimuli + 6 repeated stimuli on the Planckian locus for evaluating intra-observer variation) under the five ambient illuminants. They were asked to make a forced-choice decision on whether the colour of each stimulus could be classified as 'white' or 'not white'.

2.2 ZJU-Zhai Dataset

Zhai and Luo [8] carried out an experiment to match neutral whites on a mobile phone display in a viewing cabinet illuminated by a wide range of adapting illuminants covering 58 test lighting conditions, i.e. 9 CCT levels of 3100 K, 3600 K, 4100 K, 4700 K,
5200 K, 6200 K, 6800 K, 10000 K and 16000 K, and 7 Duv levels of −0.0240, −0.0165, −0.0125, −0.0075, 0, 0.0085 and 0.0165. A spectrum-tunable LED illumination system was used to control each light with great precision. Observers navigated CIELAB a* and b* controls at constant lightness to match their internal neutral white. Table 1 summarizes the experimental conditions of all datasets. Note that for a Lambertian surface, the luminance in cd/m2 equals the illuminance divided by π. The parameter defining the surround condition, the surround ratio (SR) [9], was computed by Eq. (2):

    SR = LSW / LDW        (2)
where LSW is the luminance of the surround white and LDW is the luminance of the device white, both measured in cd/m2.

Table 1 The experimental conditions of all datasets

                              ZJU-Zhai        ZJU-Peng 500   ZJU-Peng 1000
Condition                     viewing booth   room           room
Display luminance (cd/m2)     300             100            100
Light illuminance (lx)        920             500            1000
SR                            0.97            1.59           3.18
Viewing distance (cm)         50              75             75
FOV (target)                  3°              25°            25°
Background (L*)               80              63             63
Psychophysical method         matching        rating         rating
3 Results

3.1 Degree of Chromatic Adaptation

Smet et al. [1] proposed the concept of a two-step CAT in order to define D more clearly. A two-step CAT involves an illuminant representing the baseline state between the test and reference illuminants for the calculation [1]. Degrees of adaptation under the other illuminants should then be calculated relative to the adaptation under this baseline illuminant (BI). D values were iterated from 1 to 0 in steps of 0.01 to find the lowest error, in terms of CIEDE2000 chromatic difference, between the BI and the neutral whites under the test illuminant transformed by CAT02. When different chromaticities were used as the BI to optimize Di, the mean errors of the CAT differed. The 8000 K ambient lighting condition, which gave the minimum mean CAT error with Di, was selected as the BI for investigating the degree of chromatic adaptation under the other lighting conditions. Figure 1 shows the optimized D values of the three datasets (ZJU-Zhai, ZJU-Peng 500 and ZJU-Peng 1000). Regardless of the magnitude of the optimized D values in each dataset,
a clear trend can be found: the D value increases from 3000 K up to 6000 K and then starts to fall. Note that a smaller D value means a more incomplete chromatic adaptation. It was found that a higher CCT level introduces a more complete chromatic adaptation, and that D has a peak value according to Zhai's data.
Fig. 1 The optimized D values of three datasets under different ambient lightings (Duv = 0)
3.2 Models of Incomplete Adaptation Factor (D)

Equation (1), as an incomplete adaptation transform based on CAT02, can be used to fit the optimized D values from each dataset. It was first derived by fitting the ZJU-Zhai dataset, and the function was then tested using the ZJU-Peng datasets. Table 2 gives the coefficients of the D function and its performance in terms of the correlation coefficient (r), together with a scaling factor for each dataset. Figure 2 plots the optimized against the predicted D values. The r values between the models and the training samples had a mean of 0.885, ranging from 0.812 to 0.976, suggesting that the effect of incomplete adaptation for displays was accurately modelled. The scaling factor is associated with the viewing conditions involved, such as the ambient lighting conditions, the luminance level of the display, the adaptation period and the psychophysical technique used. The results indicate that the ZJU-Zhai dataset shows a more complete adaptation than the ZJU-Peng datasets. This could be caused by the psychophysical methods used, i.e. matching tasks allow more time than rating tasks, leading to a more complete adaptation. It may also be due to a bright light illuminating a dim display producing less complete adaptation (the ZJU-Peng 500 condition). Figure 3 shows the model characteristics in the u'v' chromaticity diagram.
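The per-condition optimization of D described in Sect. 3.1 can be sketched as a grid search: for each candidate D, apply a von Kries-style CAT02 adaptation of the matched white towards the baseline illuminant and keep the D giving the smallest residual. The sketch below uses the standard CAT02 matrix; the whites are placeholders assumed normalised to equal luminance, and plain Euclidean distance in XYZ stands in for the CIEDE2000 metric used in the paper.

```python
import numpy as np

# CAT02 forward matrix (XYZ -> sharpened cone-like RGB space).
M_CAT02 = np.array([[ 0.7328, 0.4286, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def cat02_adapt(xyz, xyz_w, xyz_wr, D):
    """CAT02 corresponding colour of `xyz` from white `xyz_w` to `xyz_wr`,
    with incomplete adaptation factor D (whites at equal luminance)."""
    rgb, rgb_w, rgb_wr = (M_CAT02 @ v for v in (xyz, xyz_w, xyz_wr))
    gain = D * rgb_wr / rgb_w + (1.0 - D)   # von Kries gains blended by D
    return np.linalg.inv(M_CAT02) @ (gain * rgb)

def optimise_D(white_match, xyz_w, xyz_wr):
    """Grid search D in [0, 1] minimising the residual to the baseline white.
    Euclidean distance in XYZ is a stand-in for CIEDE2000 here."""
    candidates = np.arange(0.0, 1.001, 0.01)
    errors = [np.linalg.norm(cat02_adapt(white_match, xyz_w, xyz_wr, D)
                             - xyz_wr) for D in candidates]
    return candidates[int(np.argmin(errors))]

# Placeholder whites (XYZ, Y = 100): test illuminant, the observer's
# matched white under it, and an approximate 8000 K baseline illuminant.
xyz_w  = np.array([109.85, 100.0,  35.58])   # e.g. illuminant A as test
xyz_wr = np.array([ 94.81, 100.0, 120.0])    # ~8000 K baseline (approx.)
match  = np.array([105.0, 100.0,  45.0])     # observer's matched white
print("optimised D =", optimise_D(match, xyz_w, xyz_wr))
```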
Fig. 2 Correlation between the optimized D values and the modelled D values
Table 2 Fitting parameters and the Pearson correlation coefficient (r) between the optimized and modelled D values for each dataset

                 a1      a2        a3        a4     a5      a6        factor   r
ZJU-Zhai         11.85   −23.45    −65.08    5.55   66.15   −15.47    1.000    0.866
ZJU-Peng 500     –       –         –         –      –       –         0.713    0.812
ZJU-Peng 1000    –       –         –         –      –       –         0.529    0.976

(The ZJU-Peng datasets share the a1–a6 coefficients fitted on ZJU-Zhai; only the scaling factor differs.)
4 Conclusions

Three sets of experimental data describing neutral white appearance were used to investigate the effects of the ambient illuminant on chromatic adaptation for a display. The results were used to obtain a function for the degree of adaptation (D) in CAT02, by minimising the colour difference under each illuminant condition. The results indicate that chromatic adaptation is highly affected by the ambient CCT and luminance levels; in particular, chromatic adaptation is less complete under lower-CCT and dimmer ambient conditions. The results verified our earlier developed D function based on CIE 1976 u'v' chromaticities, which performed well for all the available datasets.
Fig. 3 The D-u’v’ model’s characteristics in u’v’ chromaticity diagram
Acknowledgement. The authors would like to thank the National Natural Science Foundation of China (61775190) for funding.
References

1. Smet KAG, Zhai Q, Luo MR, Hanselaer P (2017) Study of chromatic adaptation using memory color matches, Part I: neutral illuminants. Opt Express 25(7):7732–7748
2. CIE Publication 160:2004, A review of chromatic adaptation transforms. CIE, Vienna, 2004
3. CIE Publication 159:2004, A color appearance model for color management systems: CIECAM02. CIE, Vienna, 2004
4. Li C, Li Z, Wang Z, Xu Y, Luo MR, Cui G, Melgosa M, Brill MH, Pointer M (2017) Comprehensive color solutions: CAM16, CAT16 and CAM16-UCS. Color Res Appl 42:703–718
5. Li C, Luo MR, Rigg B, Hunt RWG (2002) CMC 2000 chromatic adaptation transform: CMCCAT2000. Color Res Appl 27(1):49–58
6. Smet KAG, Zhai Q, Luo MR, Hanselaer P (2017) Study of chromatic adaptation using memory color matches, Part II: colored illuminants. Opt Express 25(7):8350–8365
7. Smet KAG, Ryckaert WR, Pointer MR, Deconinck G, Hanselaer P (2011) Colour appearance rating of familiar real objects. Color Res Appl 36(3):192–200
8. Zhai Q, Luo MR (2018) Study of chromatic adaptation via neutral white matches on different viewing media. Opt Express 26(6):7724–7739
9. Fairchild MD (1998) Color appearance models. Addison Wesley Longman, Inc.
New Colour Appearance Scales Under High Dynamic Range Conditions

Xi Lv and Ming Ronnier Luo(B)

State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China
[email protected]
Abstract. New colour appearance scales that are close to daily experience and useful for image quality enhancement are highly desired, including whiteness, blackness, vividness and depth. This article describes a new experiment to accumulate data under HDR (high dynamic range) conditions. The data were used to test the performance of colour appearance scales such as CIELAB and CAM16-UCS, plus the recent extensions by Berns, Vab* and Dab*. The results showed that Berns' scales gave reasonable performance; however, no scale was capable of predicting colour appearance data covering a wide dynamic range. New scales were therefore developed based on the absolute brightness and colourfulness scales of CAM16-UCS, and they gave accurate predictions of the data.

Keywords: Colour Appearance · HDR · Vividness · Whiteness
1 Introduction

The conventional colour appearance scales based on the Munsell colour order system [1, 2] have been widely used, including lightness, chroma and hue; almost all colour spaces and appearance models include these scales. Colour appearance models, such as CIECAM02 [3] and CAM16 [4], also include the absolute scales of brightness and colourfulness to reflect the viewing conditions. However, each of these scales corresponds to only one dimension. In real life, the colour appearance of objects changes with lighting intensity. This effect can be modelled by the vividness scale, Vab*, proposed by Berns [5]. In painting, it is typical to mix solid pigment with a white base: the more coloured pigment is added, the darker and more colourful the colour becomes. This is a scale of saturation or depth, Dab*, proposed by Berns [5]. Cho et al. carried out experiments in which British and Korean observers assessed colour samples [6], and the visual results were used to test different colour models and to develop new scales based on CIELAB [7] and CAM02-UCS [3]. With the demand for high image quality, these colour scales should be valuable for enhancing image quality. An experiment was conducted here to (1) prepare stimuli over a high dynamic luminance range from 4500 down to 100 cd/m2, (2) perform visual assessments using Chinese observers, (3) compare the scales with those developed by Berns and Cho et al., and (4) develop new scales that cover an HDR luminance range.
2 Methods

2.1 Light Settings

Figure 1 shows the experimental setup. The experiment was carried out in a viewing cabinet whose background colour had an L* of 44. Two multi-channel LED illumination systems supplied by Thouslite Ltd. were used. One was used in the viewing cabinet with a CCT of 6500 K; its spectral power distribution (SPD) is given in Fig. 1. A test chart comprising 24 coloured filters was illuminated by a backlit LED lighting transmitter. A mask of the same background colour matched the size of each stimulus, giving a 2° field of view. Each colour filter had 5 luminance levels (93, 255, 580, 1400 and 4616 cd/m2), defined by measuring the reference white in the bottom left corner of the chart.
Fig. 1 The experimental setup (left) and the SPDs of the 5 light conditions (right)
2.2 Samples

The stimuli used in the experiment were 16 of the 24 colours on the chart. The lighting box had five luminance levels, so there were 80 colour stimuli in total. Figure 2 shows the distribution of all samples in the CIE a*–b* and CIE L*–Cab* planes under 4616 cd/m2; the distributions at the other levels are essentially the same. All stimuli were measured under D65/10° conditions using a JETI 1211 tele-spectroradiometer, whose spectral range is from 250 to 1000 nm at 5 nm intervals.

2.3 Observers

Twenty observers (10 females and 10 males) took part in the experiment. All had normal colour vision, as tested by the Ishihara colour vision test, and received a training session explaining each impression.
Fig. 2 The distribution of the samples in the a*–b* (left) and L*–Cab* (right) planes under 4616 cd/m2
2.4 Experimental

Observers were asked to adapt to the viewing environment for one minute, looking around the inside of the viewing cabinet (e.g. the fruits and the background), prior to the experiment. Four colour appearance scales were assessed: whiteness, blackness, vividness and depth. During the experiment, observers assessed one stimulus at a time. The forced-choice method was used, asking observers to judge whether the stimulus in question was close to white or not white, black or not black, vivid or not vivid, and deep or not deep, respectively.
3 Results

The experimental results were processed to transform the raw 'yes'/'no' data into percentages, which were then transformed into z-scores.

3.1 Testing the Performance of Colour Appearance Scales

The visual results for whiteness, blackness, vividness and depth at the 5 luminance levels were used to test 8 different scales: CIELAB L* and Cab*, Berns' vividness and depth, and the saturation (s), colourfulness (M), lightness (J) and brightness (Q) of CAM16-UCS. Figure 3 plots the 8 scales against the visual whiteness, blackness, vividness and depth results respectively; in each plot, each model has 5 luminance levels. The correlation coefficient between the visual data and the calculated scale values was used to indicate the performance of each scale; a better scale has a larger positive (or more negative) correlation coefficient. The results are summarized below:

• For estimating whiteness, Dab* (−0.82) performed the best, followed by the s scale (−0.68).
• For predicting blackness, the lightness scales (L*, J) and the brightness scale (Q) performed better (−0.72 to −0.77), followed by Vab* (−0.67).
• For vividness, M performed the best (0.81), followed by Cab* (0.76) and Vab* (0.76).
• Finally, for predicting depth results, the lightness L* and brightness Q performed the best (−0.84), followed by J (−0.82).
• Note that Berns' depth and vividness gave only reasonably accurate predictions.

No scale gave a good overall performance across all luminance levels. In general, the 580 and 255 cd/m2 luminance levels gave better performance than the others, presumably because the model scales were mainly derived from data accumulated in this range.
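The percentage-to-z-score step mentioned at the start of this section can be sketched as follows. This is a minimal illustration assuming the usual inverse-normal transform; the counts below are made up, not the experimental data, and the clipping guard is an added assumption to avoid infinite z-scores at 0% or 100%.

import numpy as np
from scipy.stats import norm

def yes_no_to_zscore(yes_counts, n_observers):
    # proportion of "yes" responses per stimulus
    p = np.asarray(yes_counts, dtype=float) / n_observers
    # keep proportions strictly inside (0, 1) so the inverse CDF stays finite
    p = np.clip(p, 0.5 / n_observers, 1.0 - 0.5 / n_observers)
    return norm.ppf(p)  # z-score of each stimulus

z = yes_no_to_zscore([3, 11, 18], n_observers=20)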
Fig. 3 Correlation coefficients (r) of results (a Whiteness, b Blackness, c Vividness, d Depth) versus different models
3.2 New Ellipsoid Model

Similar to the work by Cho et al. [6], ellipsoid-based models for vividness (V), depth (D), blackness (K) and whiteness (W) were developed based on CAM16-UCS using a distance metric, as given in Eq. (1):

(V, D, K or W) = kM + √( kL (Q − Q0)² + kA (a − a0)² + kB (b − b0)² )   (1)

Note that Cho et al. used a lightness scale in Eq. (1), whereas the CAM16-UCS brightness and colourfulness scales were used here. This allows changes in colour appearance at different luminance levels, i.e. HDR viewing conditions, to be considered. Figure 4 plots the new models' predictions against the visual results, and the correlation coefficients of the new scales are summarized in Fig. 5, arranged in the same way as Fig. 3. As shown in Fig. 4, the correlation coefficients of the visual data versus the V, D, K and W scales are high, showing that the new colour appearance scale models fit visual data covering a wide dynamic range very well.
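As a concrete illustration, Eq. (1) can be fitted to the visual z-scores by least squares. The sketch below is an assumption-laden illustration, not the authors' fitting procedure: the parameter names, starting values and non-negativity bounds on kL, kA, kB are choices made here for the example.

import numpy as np
from scipy.optimize import least_squares

def scale_value(params, Q, a, b):
    # Eq. (1): ellipsoid-type distance in CAM16-UCS (Q, a, b)
    kM, kL, kA, kB, Q0, a0, b0 = params
    return kM + np.sqrt(kL * (Q - Q0)**2 + kA * (a - a0)**2 + kB * (b - b0)**2)

def fit_scale(Q, a, b, v):
    # v: visual z-scores; keep the quadratic weights non-negative
    x0 = np.array([0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
    lower = [-np.inf, 0.0, 0.0, 0.0, -np.inf, -np.inf, -np.inf]
    res = least_squares(lambda p: scale_value(p, Q, a, b) - v, x0,
                        bounds=(lower, np.inf))
    return res.x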
Fig. 4 The plot of visual data (z-scores) versus the predictions from the new vividness (V), depth (D), blackness (K) and whiteness (W) scales
Fig. 5 Correlation coefficients (r) of the visual results versus the V, D, K and W scales respectively
4 Conclusions

A psychophysical experiment on colour appearance under high-dynamic-range conditions was carried out, and the results were used to test CIELAB Cab*, CIELAB L*, Berns' vividness and depth, and CAM16-UCS. None of these models predicts colour perception under HDR conditions well. Based on the experimental data, new models of the colour appearance attributes covering a high dynamic range were developed.
References

1. Munsell AH (1905) A color notation, 1st–4th edn. Ellis, Boston; 5th edn. Munsell Color Company, Baltimore, Maryland, pp 1–89
2. Munsell AH (1969) A grammar of color, a basic treatise on the color system of Albert H. Munsell, edited and with an introduction by Faber Birren. Van Nostrand Reinhold, New York, pp 1–96
3. CIE (2004) CIE Publication 159-2004, A colour appearance model for colour management systems: CIECAM02. Central Bureau of the CIE, Vienna, Austria
4. Li C, Li Z, Wang Z et al (2017) Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS. Color Res Appl 42(6):703–718
5. Berns RS (2014) Extending CIELAB: vividness, Vab*, depth, Dab*, and clarity, Tab*. Color Res Appl 39:322–330
6. Cho YJ, Ou L, Luo MR (2016) A cross-cultural comparison of saturation, vividness, blackness and whiteness scales. Color Res Appl 42:1–13. https://doi.org/10.1002/col.22065
7. CIE (2004) CIE 15:2004, Colorimetry. Commission Internationale de l'Eclairage, Vienna, pp 6–20
Testing the Performance for Unrelated Colour Appearance Models Keyu Shi1 , Changjun Li2 , Cheng Gao2 , and Ming Ronnier Luo1(B) 1 State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Zhejiang,
China [email protected] 2 School of Electronics and Information Engineering, University of Science and Technology Liaoning, Anshan, China
Abstract. A comprehensive colour appearance model should be capable of predicting a wide range of viewing conditions, including related and unrelated colours, under a wide range of illuminance levels from the mesopic to the photopic region. In the past, colour appearance models could only predict the colour appearance of related colours in different viewing conditions. More recently, colour appearance models to predict unrelated colours have been developed and are being improved constantly. This paper investigates the performance of different colour appearance models for unrelated colours. Three models were compared: CAM15u, CAMFu and the latest CAM20u. An experimental dataset carefully accumulated by Fu et al. was used for evaluation. The results showed that the latest CAM20u performed best among the three models. CAMFu, the oldest model, performed much worse than the other two in predicting brightness and colourfulness but well in predicting hue composition, while CAM15u gave a moderate performance in predicting brightness and colourfulness but performed a little worse than the others for hue composition. Keywords: Unrelated Colours · Colour Appearance Models
1 Introduction

Unrelated colours are colours perceived to belong to an area seen in isolation from any other colours [1]. A typical example of an unrelated colour is a self-luminous stimulus surrounded by a dark background, such as signal lights, traffic lights and street lights viewed on a dark night. Colour appearance models aim to extend basic colorimetry to specify the perceived colour of stimuli in a wide variety of viewing conditions [2]. Almost all colour appearance models had previously been developed to consider only related colours [3–5], but Hunt [6, 7] proposed a model to predict the colour appearance of both related and unrelated colours in different viewing conditions, and he refined the original model for use with unrelated colours, known as CAM97u [7]. Since then, many colour appearance
models to predict unrelated colours have been developed, such as CAMFu, CAM15u and the latest CAM20u [8]. To verify the performance of these models, various datasets have been generated over the years; only the Fu dataset is used in this paper. The three models to be tested are CAMFu, CAM15u and CAM20u. Their performance is evaluated by calculating the difference between the predictions and the visual data: for the three colour appearance attributes studied, brightness (Q), colourfulness (M) and hue composition (H), the coefficient of variation (CV) is calculated between the results predicted by the models and the visual data.
2 Unrelated Colour Datasets

In 2012, Fu et al. investigated the colour appearance of unrelated colours and set up a dataset, named the Fu dataset, to evaluate the performance of existing colour appearance models [9]. In the experiment, a CRT monitor was used to display the colour stimuli; its peak white was set to the chromaticity coordinates of CIE Illuminant D65 with a luminance of 60 cd/m2. Visual data were obtained from a panel of 9 observers with normal colour vision, who assessed each stimulus in a darkened room in terms of the colour appearance attributes of brightness, colourfulness and hue composition using the magnitude estimation method. Before the experiment, observers adapted in the dark room for 20 min to ensure full dark adaptation. The experiment was divided into 10 phases, each including 50 stimuli on the display, with luminance levels ranging from 0.1 cd/m2 to 60 cd/m2. A piece of black cardboard with a hole in the middle was used to mask the rest of the screen and to control the actual field of view (FOV): 0.5, 1, 2 and 10°.
3 Unrelated Colour Appearance Models

Three colour appearance models were compared in this paper: CAM15u, CAMFu and the latest CAM20u. The CAMFu model [9] was developed from CIECAM02 [10], which predicts related colours, using the Fu et al. dataset. In the CAMFu model, the rod contribution is considered when computing brightness and colourfulness, and the coefficients CA and CM depend on the logarithm of the Y value of the input sample and on the viewing angle θ, respectively. CAMFu gave a better performance than CAM97u, the best unrelated colour appearance model at the time. CAM20u [8] can be considered a refinement of CAMFu. The brightness (Qun) in CAM20u is computed from the computed colourfulness (Mun) to predict the Helmholtz–Kohlrausch effect, i.e. of two colours having the same luminance, the more colourful one appears brighter. The calculation of hue quadrature and hue angle is the same as in CAM16 [11]. CAM15u was developed based on the Ghent dataset [12] and the CIE 10° cone fundamentals [13]. The input to the model is the spectral radiance L(λ), and the attributes predicted are brightness, whiteness, saturation and hue composition.
Note that in CAM15u the input can also be tristimulus values on an absolute luminance scale; a matrix transforming XYZ to the cone response signals is given in [14].
4 Results

The Fu et al. dataset was used to test the performance of the three models: CAMFu, CAM15u and CAM20u. The performance of each model in predicting the visual brightness (Q), colourfulness (M) and hue composition (H) is reported in terms of the CV (coefficient of variation) shown in Eq. (1):

CV = (100 / V̄) · √( Σᵢ₌₁ⁿ (Pᵢ − Vᵢ)² / n )   (1)

where Pᵢ and Vᵢ are the predicted and visual values respectively and V̄ is the mean of the visual data.
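Eq. (1) translates directly into code; a minimal sketch:

import numpy as np

def cv(predicted, visual):
    # RMS prediction error expressed as a percentage of the mean visual value
    P = np.asarray(predicted, dtype=float)
    V = np.asarray(visual, dtype=float)
    return 100.0 / V.mean() * np.sqrt(np.mean((P - V) ** 2))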
Table 1 summarises the CV values between the model predictions and the experimental visual results, in terms of the mean over the individual phases and the total over the combined dataset. From Table 1, CAM20u gave the most accurate predictions of brightness, colourfulness and hue composition for the Fu-combined dataset, while CAMFu performed worst for colourfulness and brightness and CAM15u performed worst for hue composition.

Table 1 Performance of the CAM15u, CAMFu and CAM20u models on the Fu dataset in terms of CV values (mean over the 10 phases and total over the combined dataset)

                CAM15u           CAMFu            CAM20u
Attributes      Q    M    H      Q    M    H      Q    M    H
Mean            32   34   10     16   45   10     23   34   10
T (Total)       34   53   11     37   71   10     26   38   10
From Table 1, CAM20u outperformed the other two models in all three attributes, especially for the colourfulness data. Figures 1a–c show the plots of the predicted CAMFu, CAM15u and CAM20u brightness results against the visual results for all 10 phases: CAM20u markedly outperformed CAMFu and CAM15u, having the smallest scatter and the strongest positive relationship with the visual data. Figures 1d–f show the corresponding colourfulness plots, where CAM20u again had the smallest scatter among the models tested. Figures 1g–i show the hue plots; CAM20u had a smaller scatter than CAMFu, and both outperformed CAM15u by a large margin. CAM15u fitted the results badly in the 2 phases having the darkest luminance (0.1 cd/m2) and FOVs of 2° and 1°.
Fig. 1 Comparisons between predictions of three models and visual data in brightness (top), colourfulness (middle) and hue quadrature (bottom) using the Fu et al. dataset. Models arranged from left to right: CAMFu, CAM15u, CAM20u
5 Conclusions

The aim of this study was to investigate the performance of different models for unrelated colours by testing the colour appearance models CAMFu, CAM15u and CAM20u on the Fu dataset. By evaluating the differences between the model predictions of brightness, colourfulness and hue composition and the visual data, it can be concluded that CAM20u showed the best performance among the three models tested on the Fu-combined dataset. However, colourfulness was predicted poorly by all three models; it is necessary to examine the roles played by the FOV and luminance effects in predicting unrelated colours and then to improve the models accordingly.
References

1. Fu C, Li C, Luo MR, Hunt RWG, Pointer MR (2007) Quantifying colour appearance for unrelated colour under photopic and mesopic vision. In: Final program and proceedings—IS&T/SID color imaging conference 2007, pp 319–324
2. Fairchild MD (2013) Color appearance models. John Wiley & Sons, Inc
3. Luo MR, Hunt RWG (1998) The structures of the CIE 1997 colour appearance model (CIECAM97s). Color Res Appl 23:138–146
4. CIE (1998) Publication 131, The CIE 1997 interim colour appearance model (simple version) CIECAM97s. CIE Central Bureau, Vienna, Austria
5. Fairchild MD (2005) Color appearance models, 2nd edn. Wiley Press, USA
6. Hunt RWG (1991) Revised colour-appearance model for related and unrelated colours. Color Res Appl 16(3):146–165
7. Hunt RWG (1995) The reproduction of colour, 5th edn. Fountain Press, UK
8. Gao C, Li CJ, Luo MR (2020) CAM20u: an extension of CAM16 for predicting colour appearance for unrelated colour (in preparation)
9. Fu C, Li C, Cui G, Luo MR, Hunt RWG, Pointer MR (2012) An investigation of colour appearance for unrelated colours under photopic and mesopic vision. Color Res Appl 37(4):238–254
10. Moroney N, Fairchild MD, Hunt RWG, Li C, Luo MR, Newman T (2002) The CIECAM02 color appearance model. In: The tenth color imaging conference, IS&T and SID, Scottsdale, Arizona, pp 23–27
11. Li C, Li Z, Wang Z, Xu Y, Luo MR, Cui G, Melgosa M, Brill MH, Pointer M (2017) Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS. Color Res Appl 42(6):703–718
12. Withouck M, Smet KAG, Ryckaert WR, Hanselaer P (2015) Experimental driven modelling of the colour appearance of unrelated self-luminous stimuli: CAM15u. Opt Express 23(9):12045–12064
13. CIE (2006) Fundamental chromaticity diagram with physiological axes—part 1
14. Huang W, Yang Y, Luo MR (2019) Verification of the CAM15u colour appearance model and the QUGR glare model. Light Res Technol 51(1):24–36
Modelling Simultaneous Contrast Effect on Chroma Based on CAM16 Colour Appearance Model Yuechen Zhu and Ming Ronnier Luo(B) State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China [email protected]
Abstract. Existing colour appearance models can barely predict the simultaneous colour contrast effect, and our previous studies focused on hue and lightness contrast; there is thus a lack of study on the chroma contrast effect. The overall goal is to develop a comprehensive colour contrast model covering all target/background colour combinations. In this study, a psychophysical experiment was carried out to investigate the simultaneous contrast effect on chroma using Albers' pattern via a colour matching method on a self-luminous display. Five coloured targets were studied; a total of 49 test/background combinations were presented and 1,078 matches were accumulated. The results clearly showed that the chroma of a target moves in the direction opposite to that of the surrounding colours, indicating that the chroma contrast effect should be integrated into the colour appearance model. A chroma contrast function based on the CAM16 colour appearance model was derived from the data with high accuracy. Keywords: Chroma contrast effect · Colour appearance model · Colour matching method
1 Motivation, Specific Objective

Colour appearance models are used to predict various visual phenomena under different viewing conditions, such as illuminant, surround and background [1]. The CIE recommended CIECAM02 in 2002 as a universal colour model, and it is widely used in scientific research and industrial applications to date [2]. However, CIECAM02 has some mathematical problems, so a new CIE technical committee, JTC10, was formed and recommended CAM16 to replace CIECAM02 [3]. CIECAM02 can predict many colour appearance phenomena that depend on the structure and context of the stimuli. However, the model can barely predict the simultaneous colour contrast effect, in which the appearance of a colour is affected by the surrounding colour. The appearance of the colour considered moves toward the opponent colour of the surrounding colours; that is, the colour shifts arising from simultaneous contrast follow opponent colour theory [4]. Over the decades, most studies of the contrast effect have tended to focus on hue and lightness contrast, because earlier studies
concluded that the magnitude of the chroma effect is much smaller than those of the lightness and hue effects, and there is no correction function for the chroma contrast effect in CIECAM02 or CAM16; thus, there is a lack of study on the effect [5, 6]. In the authors' previous studies, Albers' pattern was used to investigate these effects, and a revised CAM16 model was proposed to predict contrast effects on hue and lightness with high accuracy [7, 8]. Thus, in this study, a psychophysical experiment was carried out to investigate the simultaneous chroma contrast effect using Albers' pattern via a colour matching method on a self-luminous display. The goal was to investigate the magnitude of the chroma effect and to derive a chroma contrast function based on the CAM16 colour appearance model to make the model complete.
2 Methods

The experiment was conducted on an EIZO-CG243W display (size: 24.1 inches, luminance level: 124 cd/m2) in a darkened room. The characteristics of the display were investigated: it had a uniformity of 0.91 ΔE*ab between the middle and the other 8 surrounding regions of the screen, and it was characterized using a GOG model, which gave a mean ΔE*ab of 0.72 in predicting the 24 colours of the X-rite ColorChecker target [9]. Observers sat at a distance of 80 cm from the monitor. They first adapted in the darkened room for one minute while viewing a grey background (CCT: 6500 K, luminance level: 23 cd/m2, L*: 50) on the display. A double-cross image (Albers' pattern) was then presented, and the observers were asked to adjust the lightness, chroma and hue of the test cross on the right via a keyboard in CIELAB colour space until it matched the colour of the reference cross on the left (see Fig. 1). Starting bias was minimized by setting the starting point to grey (C*ab: 0, hab: a random value between 0 and 360). Once the observer confirmed the match, the next randomly generated configuration was presented. The background on the left was fixed (CCT: 6500 K, luminance level: 23 cd/m2, L*: 50), while the colour of the background on the right changed.
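As an illustration of the gain-offset-gamma (GOG) characterization mentioned above, a minimal sketch follows. The gain, offset, gamma and primary-matrix values are placeholders for illustration, not the measured parameters of the display used in this study.

import numpy as np

GAIN, OFFSET, GAMMA = 1.02, -0.02, 2.2            # per-channel placeholders
M_PRIMARIES = np.array([[41.2, 35.8, 18.0],        # XYZ of the R, G, B primaries
                        [21.3, 71.5,  7.2],        # (one primary per column)
                        [ 1.9, 11.9, 95.0]])

def gog_channel(d):
    # map a normalized digital count d in [0, 1] to a linear channel signal
    x = GAIN * d + OFFSET
    return np.clip(x, 0.0, None) ** GAMMA

def rgb_to_xyz(dr, dg, db):
    # combine the three linearized signals through the primary matrix
    rgb_lin = np.array([gog_channel(dr), gog_channel(dg), gog_channel(db)])
    return M_PRIMARIES @ rgb_lin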
Fig. 1 Experimental setting: a experimental situation; b operational interface
Fig. 2 The target and background colours plotted in CIELAB a*b* plane (×: Target colours, •: Background colours)
Figure 2 shows the coordinates of the target and background colours in the CIELAB a*b* plane. Five target colours (Red, Yellow, Green, Blue, Magenta) were selected to cover the hue circle. Each coloured target cross was set against 5 to 9 backgrounds differing in chroma (with the same hue and lightness as the target), within the gamut of the display: 'Red' against 9 backgrounds (C*ab of 10, 20, 30, 40, 50, 60, 70, 80 together with neutral grey), 'Yellow' against 7 backgrounds (C*ab of 10, 20, 30, 40, 50, 60 together with neutral grey), 'Green' against 9 backgrounds (C*ab of 10, 20, 30, 40, 50, 60, 70, 80 together with neutral grey), 'Blue' against 5 backgrounds (C*ab of 10, 20, 30, 40 together with neutral grey), and 'Magenta' against 9 backgrounds (C*ab of 10, 20, 30, 40, 50, 60, 70, 80 together with neutral grey). Twenty-two observers with normal colour vision (12 males and 10 females) took part in the experiment. In total, 1,078 matches were accumulated, i.e., (39 combinations + 10 repeats) × 22 observers.
3 Results

The mean colour difference from the mean (MCDM) was calculated to represent observer variation, with all colour differences computed in the CAM16-UCS colour space [3]. The overall MCDM values for inter- and intra-observer variation were 1.8 and 0.6 ΔE'Jab units respectively. To reveal the chroma contrast
effect, the visual results were expressed per test/background combination. Taking the result for C*ab = 30 of each target as a reference, the differences in chroma ΔC'v, representing the visual colour shifts between the results of each test/background combination and the target/reference (target/C*ab = 30), were calculated. The differences in chroma ΔC'b−t were the chroma differences between the background and the target colour (the result of the target/reference). All parameters were calculated using CAM16-UCS. Figure 3 shows the plot of ΔC'b−t versus ΔC'v for all five series of target colours. The chroma effect clearly follows a sigmoidal function, i.e., a saturated background makes the centre appear desaturated. Although the magnitude of the chroma effect is slightly smaller than those of the lightness and hue contrasts found in the previous studies [7], the effect should also be added to the colour appearance model. Because the results of the five series of target colours are mutually consistent, chroma simultaneous colour contrast was modelled without considering the hue angle, as given in Eq. (1). The model was fitted by minimizing the root mean square difference between the visual colour shift (ΔC'v) and the predicted colour shift (ΔC'p). The model fitted the data well, with a correlation coefficient of 0.99, and the revised model reduced the prediction error of CAM16 from 5.24 to 0.72 ΔC'ab units. The fitted sigmoidal function is also plotted in Fig. 3.

ΔC'p = 9.7536 / (0.4396 + e^(−0.1232 × ΔC'b−t)) − 4.3430   (1)
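Eq. (1) can be applied directly; a minimal sketch, with inputs and output in CAM16-UCS ΔC' units (the reading of the garbled exponent variable as the background–target chroma difference follows the surrounding text):

import numpy as np

def predicted_chroma_shift(dC_bt):
    # fitted sigmoidal chroma-contrast function of Eq. (1)
    return 9.7536 / (0.4396 + np.exp(-0.1232 * np.asarray(dC_bt))) - 4.3430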
4 Conclusions

Visual experiments were carried out to investigate the simultaneous chroma contrast effect on a self-luminous display using a colour matching method. The goal was to accumulate a new visual dataset that can be used to extend CAM16 to accurately predict the simultaneous chroma contrast effect. Five coloured targets were studied; a total of 49 test/background combinations were presented and 1,078 matches were accumulated. The results clearly showed that the chroma of the test colour decreased as the chroma of the background colour increased. A sigmoidal function of the background chroma fitted the data well, with a prediction error ΔC'ab of 0.72 and a correlation coefficient of 0.99. The current results indicate that the chroma contrast effect should be integrated into the colour appearance model. Together with the previously developed hue and lightness contrast functions, a comprehensive colour contrast model based on CAM16 was thus developed.
Fig. 3 The visual chroma shift ΔC'v plotted versus the chroma difference ΔC'b−t between the test background and the target colour for all colour series
Acknowledgements. The authors would like to thank the support of the Chinese Government's National Science Foundation (Project Number 61775190).
References

1. Hunt RWG (1982) A model of colour vision for predicting colour appearance. Color Res Appl 7(2):95–112
2. CIE (2004) CIE 159:2004, A colour appearance model for colour management systems: CIECAM02. CIE, Vienna
3. Li CJ, Li Z, Wang Z, Xu Y, Luo MR, Cui GH, Melgosa M, Brill MH, Pointer MR (2017) Comprehensive color solutions: CAM16, CAT16 and CAM16-UCS. Color Res Appl 42(6):703–718
4. Luo MR, Gao XW (1995) Quantifying colour appearance. Part V. Simultaneous contrast. Color Res Appl 20(1):18–28
5. Wu RC, Wardman RH (2010) Proposed modification to the CIECAM02 colour appearance model to include the simultaneous contrast effects. Color Res Appl 32(2):121–129
6. Gao XW, Wang Y, Qian Y, Gao A (2015) Modelling of chromatic contrast for retrieval of wallpaper images. Color Res Appl 40(4):361–373
7. Zhu Y, Luo MR (2019) Proposed modification to the CAM16 colour appearance model to predict the simultaneous colour contrast effect. In: 27th color and imaging conference final program and proceedings, pp 190–194
8. Albers J (1975) Interaction of color. Yale University Press
9. Xu L, Luo MR (2018) Comparison between models, measurement instruments, and testing datasets for display characterization. In: Lecture notes in electrical engineering, pp 13–20
Developing HDR Tone Mapping Operators Based on Uniform Colour Spaces Imran Mehmood and Ming Ronnier Luo(B) State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou, China [email protected]
Abstract. To perform tone mapping from high dynamic range images to standard dynamic range, the scene luminance is typically used; the lightness or brightness channels of uniform colour spaces (UCSs) are hardly used. In this study, the role of UCSs was evaluated by transforming the HDR radiance map into a particular colour space, followed by tone mapping of the brightness, lightness or intensity channel of that colour space. Four UCSs, CIELAB, CIECAM02, IPT and the recently developed Jzazbz, were used in tone mapping. The image was first decomposed into a base layer of large-scale variations and a detail layer. A tone curve was applied to the base layer, while the detail layer was enhanced based on local adaptation. The authors' previously developed method for evaluating TMOs was used to develop new TMOs by minimizing the CIELAB(2:1) colour difference between the tone-mapped and reference images. Keywords: High Dynamic Range · Tone Mapping · Image Quality · Uniform Colour Space
1 Introduction

In our previous study [1], a method for evaluating the image quality of tone mapping operators (TMOs) was proposed. A set of high-quality reference images was obtained from a psychophysical experiment: ten HDR images from the RIT database [2] were chosen to be tone mapped so as to preserve the details in the highlight and shadow regions, and three image attributes, contrast, sharpness and colourfulness, were rendered at five intensity levels. A psychophysical experiment was then conducted in which 20 observers assessed the images as low or high quality. The raw data were analysed, a preference score was calculated for each image, and the rendered image with the highest preference score for each scene was selected as the reference image. All the reference images were found to have high contrast, sharpness and colourfulness compared with the original images. Two colour difference metrics, CIELAB(2:1) [3] and S-CIELAB [4], were used as objective image quality metrics for the evaluation of TMOs; both metrics agreed well with the visual results, with coefficients of determination (R2) of 0.84 and 0.85 respectively. For tone mapping, operators usually preserve the colour information and reduce the dynamic range by compressing the luminance channel [5–9]. In this study, a new tone
mapping model based on bilateral filtering was used to tone map the images in four UCSs: CIELAB, CIECAM02 [10], IPT [11] and Jzazbz [12]. The performance was evaluated using the previously established reference images and the CIELAB(2:1) colour difference, ΔE*ab(2:1).
2 Methodology

The same ten HDR images from the RIT database that were used to develop the reference images were used in this study. Each HDR image radiance map [13] was transformed into the particular colour space. The lightness channel L* of CIELAB was used for tone mapping while keeping a* and b* unchanged; in the IPT colour space, the intensity channel I was used with P and T preserved as the colour information; and the brightness channels Q of CIECAM02 and Jz of Jzazbz were used likewise.

2.1 Image Decomposition

The channel to be tone mapped was first decomposed into large-scale variations, i.e. the base layer, and a texture layer, i.e. the detail layer; the tone mapping was applied to the base layer only. Before decomposition, the image was transformed into the log domain, since the human visual system responds approximately logarithmically. The bilateral filter proposed by Durand et al. [14] was used to decompose the image. The output of the bilateral filter for an image pixel x is given by Eqs. (1) and (2):

A(x) = (1/k(x)) Σℵ f(x, ℵ) g(Iℵ − Ix) Iℵ   (1)

k(x) = Σℵ f(x, ℵ) g(Iℵ − Ix)   (2)
where f() is a Gaussian in the spatial domain with its kernel scale σx set to an empirical value of 2% of the image size, and g() is another Gaussian in the intensity domain with its scale σr set to a constant value of 0.35. Ix is L*, J or I, depending on the particular colour space. To speed up the filter, nearest-neighbour downsampling and a piecewise-linear approximation were used [14]. The detail layer is obtained by subtracting the base layer from the original L*, J, Q or I image, and both layers are transformed from the log domain back to the linear domain for the subsequent processing.

2.2 Tone Mapping Curve

For simplicity, the tone mapping function for the base layer was defined as

Jc = A · Ji^γ   (3)
where Ji is the normalized input obtained from L*, Jz, Q or I, and Jc is the compressed output for the particular input channel. A is a scaling factor, set to 1 to obtain output in the range [0, 1], and γ is a tone-shaping parameter used to change the contrast of the base layer.
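A minimal sketch of the decomposition and tone curve of Sects. 2.1–2.2 is given below. It substitutes OpenCV's standard bilateral filter for the fast piecewise-linear variant used by the authors, so it is an approximation of the scheme rather than the paper's implementation; the sigma values follow the empirical settings quoted in the text.

import numpy as np
import cv2

def tone_map_channel(J, gamma=0.6, A=1.0):
    # work in the log domain, as the text describes
    logJ = np.log10(np.maximum(J, 1e-6)).astype(np.float32)
    sigma_s = 0.02 * max(J.shape)                 # spatial scale: 2% of image size
    base = cv2.bilateralFilter(logJ, d=-1, sigmaColor=0.35, sigmaSpace=sigma_s)
    detail = logJ - base                          # texture layer (log domain)
    base_lin, detail_lin = 10.0 ** base, 10.0 ** detail
    Ji = base_lin / base_lin.max()                # normalized base-layer input
    Jc = A * Ji ** gamma                          # Eq. (3): compressed base layer
    return Jc, detail_lin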
2.3 Details Enhancement

For a true comparison, the details of the images must be manipulated to improve their texture and to reduce the difference from the reference images. The details enhancement is carried out by the following equations:

DetailsEnhanced = Details^((FL + s)^0.25)   (4)

FL = 0.2 k⁴ (5LA) + 0.1 (1 − k⁴)² (5LA)^(1/3)   (5)

k = 1 / (5LA + 1)   (6)

The FL function is a brightness-, lightness- or intensity-dependent appearance factor based on the local adaptation LA, and s is the details alteration parameter: increasing s increases the texture in the detail image. The enhanced details are then multiplied with the compressed base layer to obtain the corresponding tone-mapped channel.

2.4 Colour Saturation Compensation

Tone mapping reduces the saturation of the colours. To minimize the colour difference with the reference images and to improve the appeal of the images, the saturation of the tone-mapped images was compensated.
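Eqs. (4)–(6) translate directly into code; in this sketch the detail layer is assumed to be positive and in linear form, and the reading of the garbled exponent as (FL + s)^0.25 follows the reconstruction above.

import numpy as np

def fl_factor(LA):
    # CIECAM02-style luminance adaptation factor, Eqs. (5)-(6)
    k = 1.0 / (5.0 * LA + 1.0)
    return 0.2 * k**4 * (5.0 * LA) + 0.1 * (1.0 - k**4)**2 * (5.0 * LA) ** (1.0 / 3.0)

def enhance_details(detail, LA, s=0.0):
    # Eq. (4): raise the detail layer by an adaptation-dependent exponent
    FL = fl_factor(LA)
    return detail ** ((FL + s) ** 0.25)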
3 Results and Discussion

To evaluate the performance of the colour spaces, the reference images from the previous study and the CIELAB(2:1) metric were used. The curve-shaping parameter (γ), the details enhancement parameter (s) and the saturation compensation were optimized to achieve the minimum ΔE*ab(2:1). Figure 1 compares two images tone mapped in the four colour spaces with the reference images; the ΔE*ab(2:1) with respect to the reference image is given in parentheses. Images that are visually closer to the reference images have smaller ΔE*ab(2:1); the saturation and lightness of the images appear similar, while the luminance contrast of the image in Jzazbz appears slightly higher than in the other colour spaces. Figure 2 shows the comparison of the UCSs when the three parameters were optimized using CIELAB(2:1) to achieve the lowest colour difference. Image 6 has the lowest colour difference, while the four colour spaces perform similarly for Images 3 and 7; Image 6 is also depicted on the right side of Fig. 1. Table 1 reports the mean CIELAB(2:1) of the ten images for three cases: only tone compression applied; tone compression and saturation optimized with the details kept as in the original HDR image; and all three parameters optimized. In each case the trend was similar, with Jzazbz performing best, followed by IPT, CIELAB and CIECAM02.
Fig. 1 The comparison of four UCSs with saturation and details enhancement
Fig. 2 The comparison of four UCSs with saturation and details enhancement
Table 1 Mean ranking of the uniform colour spaces

Colour space   Only tone compression   Saturation correction   Saturation and details enhancement
CIELAB         10.63                   9.11                    7.60
IPT            10.51                   9.01                    7.51
CIECAM02       11.24                   9.64                    8.03
Jzazbz          9.99                   8.57                    7.14
4 Conclusions

In this study, new TMOs were developed for four UCSs. The HDR images were first transformed into the particular colour space, and the brightness, lightness or intensity channel was used for tone mapping and details enhancement. The specified channel was first decomposed into a base layer and a detail layer using a bilateral filter; the tone curve was applied to the base layer, while the detail layer was enhanced using local adaptation. CIELAB(2:1) was used to optimize the parameters to achieve the minimum colour difference. It was found that, when compensation was made in both colours and details, the mapped images were close to the reference images. Overall, all the TMOs gave similar performance in terms of CIELAB(2:1) units.
References

1. Mehmood I, Khan MU, Mughal MF, Luo MR (2020) Reference images development for evaluation of tone mapping operators. In: Advanced graphic communication, printing and packaging technology. Springer, Singapore, pp 129–135
2. Fairchild MD (2007) The HDR photographic survey. In: Color and imaging conference, vol 2007, pp 233–238. Society for Imaging Science and Technology
3. Luo MR (2002) CIE Division 8: a servant for the imaging industry. In: Color science and imaging technologies, vol 4922, pp 51–55. International Society for Optics and Photonics
4. Zhang X, Wandell BA (1996) A spatial extension of CIELAB for digital color image reproduction. In: SID international symposium digest of technical papers, vol 27, pp 731–734
5. Reinhard E, Stark M, Shirley P, Ferwerda J (2002) Photographic tone reproduction for digital images. In: Proceedings of the 29th annual conference on computer graphics and interactive techniques, pp 267–276
6. Duan J, Qiu G (2004) Fast tone mapping for high dynamic range images. In: Proceedings of the 17th international conference on pattern recognition (ICPR 2004), vol 2, pp 847–850. IEEE
7. Ma K, Yeganeh H, Zeng K, Wang Z (2015) High dynamic range image compression by optimizing tone mapped image quality index. IEEE Trans Image Process 24(10):3086–3097
8. Mantiuk R, Daly S, Kerofsky L (2008) Display adaptive tone mapping. In: ACM SIGGRAPH 2008 papers, p 10
9. Lézoray O (2020) Hierarchical morphological graph signal multi-layer decomposition for editing applications. IET Image Processing
10. Moroney N, Fairchild MD, Hunt RW, Li C, Luo MR, Newman T (2002) The CIECAM02 color appearance model. In: Color and imaging conference, vol 2002, pp 23–27. Society for Imaging Science and Technology
11. Ebner F, Fairchild MD (1998) Development and testing of a color space (IPT) with improved hue uniformity. In: Color and imaging conference, vol 1998, pp 8–13. Society for Imaging Science and Technology
12. Safdar M, Cui G, Kim YJ, Luo MR (2017) Perceptually uniform color space for image signals including high dynamic range and wide gamut. Opt Express 25(13):15131–15151
13. Debevec PE, Malik J (2008) Recovering high dynamic range radiance maps from photographs. In: ACM SIGGRAPH 2008 classes, p 10
14. Durand F, Dorsey J (2002) Fast bilateral filtering for the display of high-dynamic-range images. In: Proceedings of the 29th annual conference on computer graphics and interactive techniques, pp 257–266
Effects of Cone Response Function on Multispectral Data Compression Qian Cao1(B) , Xiaozhou Li2 , and Junfeng Li3 1 Department of Printing and Packaging Engineering, Shanghai Publishing and Printing
College, Shanghai, China [email protected] 2 School of Light Industry Science and Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, Shandong, China 3 School of Packaging and Printing Engineering, Henan University of Animal Husbandry and Economy, Zhengzhou, Henan, China
Abstract. This paper proposes a weighted principal component analysis method based on the cone response functions to preserve more color information. To verify the advantages of the proposed method, the Munsell spectra were used as training samples to determine the conversion model between the high-dimensional spectral space and the low-dimensional space, while the Munsell and ISO SOCS spectra and two multispectral images were used as test samples. Compared with the principal component analysis method, the proposed method can significantly improve the colorimetric accuracy at the expense of a small amount of spectral accuracy. In addition, compared with three other weighted principal component analysis methods based on cone response functions, this method achieves a considerable improvement in colorimetric accuracy. Keywords: Spectral Color Reproduction · Spectral Data Compression · Spectral Dimensionality Reduction · Cone Response Function
1 Introduction

In recent years, spectral color reproduction, which can reduce or even avoid the problem of "metamerism" in traditional color reproduction, has become a research hotspot in color science [1, 2]. However, using high-dimensional multispectral data directly takes up a great deal of storage space, and the subsequent processing is complex. The common solution is to use multivariate statistical methods such as principal component analysis (PCA) [3] to reduce the dimensionality. However, PCA treats all wavelengths in the visible range equally, and the reconstructed spectrum is only a mathematical approximation of the original spectrum, which often leads to a small spectral error but a large color difference. Some researchers have therefore proposed weighted PCA to compress spectral data. Maloney [4] was the first to use weighted PCA in the study of spectral data, taking the photopic luminous efficiency function V(λ) as
the weight function. He Songhua et al. [5] proposed two weight functions based on the cone response functions in 2015, which effectively improved the colorimetric accuracy of the multispectral compression algorithm. Liu Shiwei et al. [6] proposed a weight function based on the cone response functions in 2017. However, when the test samples are changed or other spectral images are used, the colorimetric error of the weighted PCA methods mentioned above remains very high.
2 Method

In 2015, He Songhua et al. [5] proposed two weight functions, named W1 and W2, which can be expressed as Eqs. (1) and (2). Liu Shiwei et al. [6] proposed the weight function W3 shown in Eq. (3):

W1 = L(λ) + M(λ) + S(λ)   (1)

W2 = √( L(λ)² + M(λ)² + S(λ)² )   (2)

W3 = √( L(λ) + M(λ) + S(λ) )   (3)

Here L(λ), M(λ) and S(λ) denote the three cone response functions, i.e. the red-, green- and blue-sensitive ones, respectively. To minimize the colorimetric error, it is very meaningful to find an optimal combination of the cone response functions. We therefore tried using the sum of the square roots of the three cone response functions as the weight function, and found that it improves the colorimetric accuracy significantly compared with the other weight functions. The proposed weight function W4 is given in Eq. (4):

W4 = √L(λ) + √M(λ) + √S(λ)   (4)
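A minimal sketch of the weighted PCA compression pipeline is given below, assuming spectra sampled at 31 wavelengths (rows are samples). This illustrates the general scheme, not the authors' exact implementation; the small floor on the weight is an added guard against division by near-zero values at the ends of the visible range.

import numpy as np

def w4_weight(L, M, S):
    # Eq. (4): sum of the square roots of the cone response functions
    return np.sqrt(L) + np.sqrt(M) + np.sqrt(S)

def weighted_pca_codec(train, test, W, k=6):
    W = np.maximum(W, 1e-6)                 # avoid dividing by ~0 later
    Xw = train * W                          # apply the weight per wavelength
    mean = Xw.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xw - mean, full_matrices=False)
    B = Vt[:k]                              # first k principal components
    coeff = (test * W - mean) @ B.T         # k-dimensional representation
    recon_w = coeff @ B + mean              # reconstruct in weighted space
    return recon_w / W                      # divide the weight back out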
3 Experiments and Process

3.1 Experimental Data

The Munsell [7] and ISO SOCS [8] spectra are used in this experiment, as shown in Fig. 1. A multispectral hair image with a resolution of 512 × 512 pixels (named image-1) [9] and a multispectral landscape image with a resolution of 1024 × 1344 pixels (named image-2) [10] were also used, as shown in Fig. 2.

3.2 Experimental Process

The test samples were compressed into a 6-dimensional space by each compression algorithm and then restored to the 31-dimensional space. The errors between the original and reconstructed spectra are evaluated in terms of spectral accuracy, colorimetric accuracy and the mean color difference under different light sources.
Fig. 1 Colorimetric coordinates a*b* (D50/2°): a Munsell Glossy, b ISO SOCS
Fig. 2 Two multispectral images a image-1, b image-2
4 Results and Discussion

The Munsell spectral data were used as training samples to determine the principal components; the Munsell glossy and ISO SOCS spectra and the two multispectral images were used as test samples.

4.1 Spectral Accuracy

The root mean square error (RMSE) [11] is employed to evaluate the spectral accuracy between the original and reconstructed samples. The RMSE is expressed as Eq. (5):

RMSE = √( (1/n) Σᵢ₌₁ⁿ (S(λᵢ) − Ŝ(λᵢ))² )   (5)
where S(λᵢ) and Ŝ(λᵢ) are the original and reconstructed spectra, respectively, and n is the number of samples. The average spectral errors between the original and reconstructed spectra of the Munsell and ISO SOCS test samples are shown in Table 1, and those of the multispectral images are shown in Table 2.

Table 1 Spectral accuracy (RMSE) of Munsell and ISO SOCS

            PCA      W1PCA    W2PCA    W3PCA    W4PCA
Munsell     0.0086   0.0145   0.0139   0.0116   0.0136
ISO SOCS    0.0132   0.0210   0.0207   0.0186   0.0203
Table 2 Spectral accuracy (RMSE) of multispectral images

           PCA      W1PCA    W2PCA    W3PCA    W4PCA
Image-1    0.0043   0.0056   0.0057   0.0045   0.0056
Image-2    0.0041   0.0048   0.0046   0.0045   0.0045
As shown in Table 1, when the test sample is Munsell, the order of spectral accuracy from good to poor is PCA, W3PCA, W4PCA, W2PCA, W1PCA; when the test samples are ISO SOCS, the multispectral hair image and the multispectral landscape image, the conclusions on spectral accuracy are similar.

4.2 Colorimetric Accuracy

The CIELAB color difference formula [12], recommended by the International Commission on Illumination in 1976, is used to calculate the colorimetric accuracy:

ΔE*ab = √( (L₂* − L₁*)² + (a₂* − a₁*)² + (b₂* − b₁*)² )   (6)

The average color differences between the original and reconstructed spectra of Munsell and ISO SOCS are shown in Table 3, and those of the multispectral images are shown in Table 4.

Table 3 Colorimetric accuracy of Munsell and ISO SOCS

            PCA      W1PCA    W2PCA    W3PCA    W4PCA
Munsell     0.8988   0.5563   0.6720   0.3607   0.2882
ISO SOCS    1.4675   0.9975   1.2937   0.8906   0.5582
Table 4 Colorimetric accuracy of multispectral images

           PCA      W1PCA    W2PCA    W3PCA    W4PCA
Image-1    0.9672   0.4533   0.5354   0.3678   0.2783
Image-2    1.3470   0.7527   0.7775   0.5026   0.2314
As shown in Table 3, when the test sample is Munsell, the colorimetric accuracy of the weighted principal component analysis methods is significantly better than that of the principal component analysis method, the order from good to poor being W4PCA, W3PCA, W1PCA, W2PCA, PCA. When the test samples are ISO SOCS, the multispectral hair image and the multispectral landscape image, the conclusions on colorimetric accuracy are similar.

4.3 Colorimetric Error Under Various Illuminants

The main purpose of spectral color reproduction is to reduce or even avoid the phenomenon of "metamerism", so as to ensure that the original object and the replica produce the same color sensation under any observer and light source. In this paper, the representative light sources D65, A, F2 and D50 were selected to evaluate the color difference between the original and reconstructed spectra. Table 5 shows the average color differences under the CIE standard illuminants D50, D65, A and F2 between the original and reconstructed reflectance spectra of Munsell, and Table 6 shows those of ISO SOCS.

Table 5 Colorimetric accuracy between original and reconstructed spectra of Munsell under the CIE standard illuminants D50, D65, A and F2

        PCA      W1PCA    W2PCA    W3PCA    W4PCA
D50     0.8988   0.5563   0.6720   0.3607   0.2882
D65     0.9235   0.4572   0.5696   0.2911   0.2403
A       0.7771   0.7459   0.8312   0.4136   0.3870
F2      0.9054   0.3004   0.1889   0.2366   0.3013
Mean    0.8762   0.5150   0.5654   0.3255   0.3042
As shown in Table 5, the order of colorimetric accuracy between the original and reconstructed spectra of Munsell under the CIE standard illuminants D50, D65, A and F2 is W4PCA, W3PCA, W1PCA, W2PCA, PCA; as shown in Table 6, the order for ISO SOCS is the same.
Table 6 Colorimetric accuracy between original and reconstructed spectra of ISO SOCS under the CIE standard illuminants D50, D65, A and F2

        PCA      W1PCA    W2PCA    W3PCA    W4PCA
D50     1.4675   0.9975   1.2937   0.8906   0.5582
D65     1.4717   0.8309   1.1260   0.7798   0.4988
A       1.3029   1.2721   1.5039   0.9066   0.6681
F2      1.4396   0.5823   0.4623   0.6401   0.6434
Mean    1.4204   0.9207   1.0965   0.8043   0.5921
5 Conclusions

Compared with the PCA method, the proposed method can significantly improve the colorimetric accuracy at the expense of a small amount of spectral accuracy. In addition, compared with the three other weighted PCA methods based on cone response functions, this method achieves a considerable improvement in colorimetric accuracy.

Acknowledgments. This study is funded by the Lab of Green Platemaking and Standardization for Flexographic Printing (LGPSFP-02, ZBKT201905). This work is also supported by the Key Research and Development Program of Shandong Province (No. 2019GGX105016).
References

1. Cao Q, Wan X, Li J et al (2016) Updated version of an interim connection space LabPQR for spectral color reproduction: LabLab. J Opt Soc Am A 33(9):1860–1871
2. Taplin LA, Berns RS (2001) Spectral color reproduction based on six-color inkjet output system. In: Color and imaging conference
3. Wang Y, Zhai S, Liu J (2013) Dimensionality reduction of multi-spectral images for color reproduction. J Softw 8(5):1180–1185
4. Maloney LT (1986) Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J Opt Soc Am A 3(10):1673–1683
5. He SH, Chen Q, Duan J (2015) The research of spectral dimension reduction method based on human visual characteristics. Spectroscopy Spectral Anal 35(6):1459–1463 (in Chinese)
6. Shi-Wei L, Zhen L, Quan-Hui T et al (2017) Spectral dimension reduction model research based on human visual characteristics and residual error compensation. Spectroscopy Spectral Anal
7. University of Eastern Finland, Spectral Color Research Group. https://www.uef.fi/spectral/spectral-databas
8. ISO TR 16066:2003 (2003) Graphic technology—standard object colour spectra database for colour reproduction evaluation
9. Yasuma F, Mitsunaga T, Iso D et al (2010) Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum. IEEE Trans Image Process 19(9):2241–2253
10. Nascimento SM, Amano K, Foster DH (2016) Spatial distributions of local illumination color in natural scenes. Vision Res 120:39
11. Imai FH, Rosen MR, Berns RS (2002) Comparative study of metrics for spectral match quality. In: Conference on colour in graphics, imaging, and vision. Society for Imaging Science and Technology, pp 492–496
12. Robertson AR (1977) The CIE 1976 color-difference formulae. Color Res Appl 2(1)
Study on Colorization Method of Grayscale Image Siyuan Zhang1(B) , Ruze Zhuang1 , Jing Cao1,2 , Siwei Lu3 , and Qiang Wang1 1 School of Media and Design, Hangzhou Dianzi University, Zhejiang, China
[email protected]
2 Institute of Creative Industries Design, National Taiwan University of Arts, Taiwan, China 3 Wenzhou Branch of China Mobile Group, Zhejiang Co. LTD, Zhejiang, China
Abstract. To address the problem that traditional grayscale image colorization results are not unique, this paper proposes a color conversion method based on luminance and Local Binary Pattern (LBP) texture features, which uses an improved Gaussian Mixture Model (GMM) clustering for image segmentation. A cascading feature matching method performs fast feature matching of the sub-blocks after segmentation, overcoming the low matching accuracy, long feature matching time and color conversion errors of global color conversion algorithms. Subjective and objective evaluation experiments show that the method has clear advantages. Keywords: Image Segmentation · Feature Matching · Color Conversion · Texture Features
1 Introduction

Image colorization is an effective method for highlighting image details and improving visual effects, and it is widely used in research fields such as remote sensing, medicine, culture and art. In the past 10 years, grayscale image colorization has begun to rise in the fields of restoring faded artworks and black-and-white images [1]. Since the new century, domestic and foreign scholars have produced many theoretical and technological innovations in the study of grayscale image colorization. In 2001, Reinhard proposed the color conversion algorithm between color images for the first time; in 2002, Welsh proposed a method for image colorization [2]; in 2010, Wang Shigang determined the optimal number of clusters through the Bayesian Ying-Yang machine model [3]; and in 2016, Iizuka proposed an end-to-end convolutional neural network model that can directly output color images [4]. However, there are still shortcomings such as low color transfer accuracy. To address the problems of existing grayscale image colorization methods, this paper proposes a color conversion algorithm based on luminance and texture features, which achieves high-efficiency, high-quality grayscale image colorization based on an improved GMM algorithm and a cascading feature matching method.
2 Theoretical Basis of Grayscale Image Colorization

2.1 Principle of Color Space Conversion

The core of grayscale image colorization is the change of color space and the transfer of color. Gray is a function of the three primary colors, which can be described by Eq. (1):

Grey = f(RGB)   (1)

where R + G + B = 1. To accurately control the accuracy of the color transformation, the comparison of color differences is performed in the Lab color space. Therefore, before the grayscale image colorization process begins, the color reference image in the RGB color space is converted into an image in the Lab color space. Then, the Lab color space is segmented by a cluster segmentation method to realize the mapping of the color data between the gray image and the color reference image while keeping the luminance unchanged. Finally, the Lab color space is converted back to the RGB color space through the inverse transformation, and the colorized rendering of the grayscale image on the display is realized.

2.2 Colorization Method of Grayscale Image

Because subjective color perception differs among groups of people, the colorization result of a grayscale image is not unique. In this paper, following the requirements of general grayscale image colorization, the luminance and gray-level probability statistics of the grayscale image are extracted, and the color information of the color reference image is transferred to the grayscale image under the constraints of the luminance, standard deviation and texture features of the color reference image and the luminance and gray-level probability statistics of the grayscale image, as shown in Fig. 1.
Fig. 1 Grayscale image colorization process
By analyzing mainstream clustering algorithms and adopting the Gaussian mixture model, a weighted combination of multiple Gaussian models, this paper optimizes the clustering algorithm so that the numbers of sub-blocks of the grayscale image and the color reference image are equal, and obtains a good simulation of the color distribution, or color probability distribution, of the image. Then, feature extraction and matching of
the grayscale sub-blocks and color reference sub-blocks are performed [5], and the matching and color conversion of the sub-blocks are completed according to the requirements of global matching of pixel points to achieve the colorization of grayscale images.
3 Algorithm Design of the Grayscale Image Colorization

3.1 Algorithm Process

In this paper, the algorithm design of the grayscale image colorization method is divided into three steps: clustering segmentation, sub-block relationship establishment and color conversion. The algorithm flow is shown in Fig. 2.
Fig. 2 Flow chart of color conversion algorithm
It can be seen from Fig. 2 that the improved GMM clustering segmentation algorithm is used to achieve the clustering segmentation of the grayscale image and the color reference image after color space conversion, ensuring that the numbers of sub-blocks after segmentation are equal. The luminance, standard deviation, Gabor texture features and Daisy features of the sub-blocks are then extracted, and the cascading feature matching method is used to quickly match the sub-blocks and establish the correspondence between them. Finally, pixels are matched according to luminance and texture features, and the colors of pixels in the color reference image are assigned to the grayscale image to complete the colorization.

3.2 Algorithm Implementation and Data Analysis

Image Segmentation. By combining a penalty function and a likelihood function, the parameters of the GMM can be estimated effectively: the penalty function obtains the optimal solution via an unconstrained problem, and the likelihood function estimates the parameters from the outcomes of known events [6].

Establish Sub-block Association. This paper adopts a cascading feature matching method to establish the sub-block correlation. The method describes sub-blocks by luminance, standard deviation, Gabor texture features and Daisy features, and automatically
finds the sub-blocks with the highest matching degree to match in order, ensuring adequate spatial optimization and matching accuracy of the sub-blocks.

Color Conversion. To address the limitation that traditional color conversion algorithms use only luminance information in pixel matching, this paper proposes a color conversion algorithm using luminance together with the texture features described by the LBP operator. The color conversion steps are as follows (a code sketch of the matching step is given at the end of this subsection):

• Calculate the average luminance values MY^g, MY^c and average texture values MLBP^g, MLBP^c around each pixel of the successfully matched sub-blocks, using a 3 × 3 pixel neighborhood; the LBP value is given by Eqs. (2) and (3):

LBP(P,R) = Σₚ₌₀^(P−1) s(gₚ − g_c) · 2ᵖ   (2)

gₚ = g( X_c + R·cos(2πp/P), Y_c − R·sin(2πp/P) )   (3)

where gₚ is the grayscale value of the p-th neighboring pixel, (X_c, Y_c) are the coordinates of the center pixel, g_c is the gray value of the center pixel, s(·) is the step function (1 for non-negative arguments, 0 otherwise), and LBP(P,R) is the LBP value of the center pixel.

• Find the pixel pairs with the highest similarity between the sub-blocks of the gray image and the color reference image, as given by Eq. (4):

d = √( (MY^g − MY^c)² + (MLBP^g − MLBP^c)² )   (4)

A smaller d represents a higher pixel similarity, and the best matching pixel points can thus be obtained.

• Assign the color value of the most similar pixel of the color reference image to the corresponding pixel of the grayscale image to complete the color conversion.

Experimental Design. To verify the advantages of the proposed method, an experiment was designed to compare the colorization effects of the Welsh method, the Iizuka method and this method. Twenty groups of animal and landscape images with high color contrast and saturation were selected as experimental samples, and each sample was colorized by the 3 algorithms. For the reference pictures, images similar to the grayscale image in content and texture structure were selected for the color conversion. Finally, subjective and objective evaluation methods were used to evaluate the colorization effects of the 3 methods.

Comparison of Experimental Results. Two groups of different types of sample images are displayed; the contrast effect is shown in Table 1.
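As referenced above, a minimal sketch of the matching step (Eqs. (2)–(4)) is given below. It uses scikit-image's LBP implementation and a uniform filter for the 3 × 3 means; the helper names are illustrative, not the authors' code.

import numpy as np
from scipy.ndimage import uniform_filter
from skimage.feature import local_binary_pattern

def pixel_features(Y):
    # Eqs. (2)-(3): 8-neighbor LBP code at radius 1, then 3x3 local means
    lbp = local_binary_pattern(Y, P=8, R=1.0)
    MY = uniform_filter(Y, size=3)          # 3x3 average luminance
    MLBP = uniform_filter(lbp, size=3)      # 3x3 average LBP value
    return MY, MLBP

def best_match(MY_g, MLBP_g, MY_c, MLBP_c):
    # Eq. (4): distance between one grayscale pixel and all reference pixels
    d = np.sqrt((MY_g - MY_c) ** 2 + (MLBP_g - MLBP_c) ** 2)
    return np.unravel_index(np.argmin(d), d.shape)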
Experimental Design. In order to verify the advantages of the method proposed in this paper, an experiment was designed to compare the colorization effects of the Welsh method, the Iizuka method and this method. Twenty groups of animal and landscape images with high color contrast and saturation were selected as the experimental samples. The sample pictures were colorized by the three algorithms at the same time. For the reference pictures, this article selects pictures similar to the grayscale image in content and texture structure for color conversion. Finally, subjective and objective evaluation methods were used to evaluate the colorization effects of the three methods.

Comparison of Experimental Results. This paper selects two groups of different types of sample images for display; the contrast effect is shown in Table 1.

Table 1 Contrast of grayscale color effect

What we can learn from the comparison of experimental effects is that the color saturation of the image colorized by the Welsh method is low and its colorization effect is poor, while the colorization effect of the Iizuka method is better than that of the Welsh method, except that the overall color difference of the image is large. On the whole, the colors produced by the method proposed in this paper are full and real, and its overall perception is better.

Colorized Image Quality Evaluation. This paper designed subjective and objective image quality evaluation methods for comparative experiments [7]. In the subjective evaluation experiment, 10 professional volunteers were selected to score the image quality in a standard observer environment, adopting four dimensions (similarity, authenticity, saturation and aesthetics) and a 5-point scale (5 points the best, 1 point the worst) as the evaluation standard. The subjective evaluation results are shown in Table 2.

Table 2 Subjective evaluation results of grayscale image colorization
| Colorization scheme | Evaluation standard | Welsh method | Iizuka method | This paper method |
| --- | --- | --- | --- | --- |
| Group1 | Similarity | 1 | 3 | 5 |
|  | Authenticity | 3 | 3 | 4 |
|  | Saturation | 2 | 2 | 4 |
|  | Aesthetics | 2 | 4 | 5 |
| Group2 | Similarity | 2 | 3 | 4 |
|  | Authenticity | 2 | 2 | 5 |
|  | Saturation | 1 | 4 | 5 |
|  | Aesthetics | 2 | 3 | 4 |
The objective evaluation experiment selects the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and color similarity (ΔC) as evaluation indicators.
Peak Signal-to-Noise Ratio. As shown in Eq. (6):

MSE = ( Σ_{0≤i<M} Σ_{0≤j<N} (f_ij − f′_ij)² ) / (M × N)    (6)

where M and N respectively represent the length and width of the image, f_ij represents the pixel value of the original color image, and f′_ij represents the pixel value of the colorized image. The calculation formula of PSNR is shown in Eq. (7):

PSNR = 10 × log₁₀(255² / MSE)    (7)

Structure Similarity. The formulas of structural similarity are shown in Eqs. (8)–(12):

SSIM(x, y) = [l(x, y)]^α · [c(x, y)]^β · [s(x, y)]^γ    (8)

l(x, y) = (2 μx μy + c1) / (μx² + μy² + c1)    (9)

c(x, y) = (2 σx σy + c2) / (σx² + σy² + c2)    (10)

s(x, y) = (σxy + c3) / (σx σy + c3)    (11)

σxy = (1/(N − 1)) Σ_{i=1}^{N} (xi − μx)(yi − μy)    (12)
where x and y respectively represent the reference image and the colorized image, and μx, μy, σx², σy², σxy respectively represent the means, variances and covariance of the images x and y. c1, c2 and c3 are small positive constants. The parameters α, β and γ, all greater than 0, are used to adjust the proportions of the three components in the model.

Colorfulness. As shown in Eq. (13):

C = √(σrg² + σyb²) + 0.3 · √(μrg² + μyb²)    (13)

where rg = R − G, yb = 0.5(R + G) − B, and μ and σ represent the mean and standard deviation respectively. The difference ΔC between the reference image's colorfulness Cg and the colorfulness Cc of the colorized image represents the color similarity between the colorized image and the reference image, as shown in Eq. (14):

ΔC = Cc − Cg    (14)

The evaluation results are shown in Table 3. From the experimental data, we can see that the method in this paper is consistently preferred: it has the highest subjective evaluation scores, the best PSNR and SSIM values, and the lowest ΔC value. Therefore, the results fully demonstrate that the grayscale image colorization method in this paper is more reliable and its visual effect is more realistic.
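For illustration, the three indicators can be computed as follows. The SSIM sketch evaluates Eqs. (8)–(12) once over the whole image with α = β = γ = 1, c1 = (0.01 × 255)², c2 = (0.03 × 255)² and c3 = c2/2 (common choices assumed here, since the paper does not state its parameters).

```python
import numpy as np

def psnr(f, f_hat):
    # Eqs. (6)-(7): mean squared error over the image, then PSNR in dB
    mse = np.mean((np.asarray(f, float) - np.asarray(f_hat, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(x, y, c1=6.5025, c2=58.5225):
    # Eqs. (8)-(12) evaluated once over the whole image, alpha = beta = gamma = 1
    x = np.asarray(x, float).ravel()
    y = np.asarray(y, float).ravel()
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    sxy = ((x - mx) * (y - my)).sum() / (x.size - 1)      # Eq. (12)
    c3 = c2 / 2.0
    l = (2 * mx * my + c1) / (mx ** 2 + my ** 2 + c1)     # Eq. (9)
    c = (2 * sx * sy + c2) / (sx ** 2 + sy ** 2 + c2)     # Eq. (10)
    s = (sxy + c3) / (sx * sy + c3)                       # Eq. (11)
    return l * c * s                                      # Eq. (8)

def colorfulness(img):
    # Eq. (13); img: H x W x 3 RGB array (sigma taken as standard deviation)
    r, g, b = (img[..., k].astype(float) for k in range(3))
    rg, yb = r - g, 0.5 * (r + g) - b
    return (np.sqrt(rg.std() ** 2 + yb.std() ** 2)
            + 0.3 * np.sqrt(rg.mean() ** 2 + yb.mean() ** 2))
```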
Table 3 Objective evaluation results of grayscale image colorization

| Colorization scheme | Evaluation standard | Welsh method | Iizuka method | This paper method |
| --- | --- | --- | --- | --- |
| Group1 | PSNR | 28.078 | 28.374 | 29.001 |
|  | SSIM | 0.512 | 0.666 | 0.935 |
|  | ΔC | 16.4 | 15.4 | 3.0 |
| Group2 | PSNR | 18.697 | 17.569 | 22.225 |
|  | SSIM | 0.423 | 0.588 | 0.833 |
|  | ΔC | 16.1 | 13.4 | 6.2 |
4 Conclusions

Grayscale image colorization has become a focus of current study. To address the shortcomings of traditional grayscale image colorization methods, this paper, on the basis of an improved clustering segmentation algorithm and the combined extraction of four features, adopts the optimized cascading feature matching method to quickly match sub-blocks, and realizes a color conversion algorithm based on luminance and the LBP operator to complete grayscale image colorization. The experimental results show that this method has a better clustering segmentation effect and better color conversion accuracy, which provides a new way of thinking and a new method for the fading restoration of artistic works and the colorization of grayscale medical diagnosis and treatment images.

Acknowledgements. This work is funded by Digital Imaging Theory- GK188800299016–054.
References
1. Teng S (2006) Research on colorization of black and white images
2. He X, Qiao Y (2014) Study on the color transformation of grayscale images based on Welsh algorithm. Computer Application and Software 31(12):268–271
3. Wang S, Donghui L (2010) Image colorization method based on optimal clustering number and histogram matching. J Comput Appl 30(2):351–353
4. Iizuka S, Simo-Serra E, Ishikawa H (2016) Let there be color: joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Trans Graphics (TOG) 35(4):110
5. Huang G, Han X, Gong X et al (2019) Grayscale image colorization algorithm based on image segmentation and region matching. Liquid Crystal Display 34(6):619–626
6. Tai YW, Jia J, Tang CK (2005) Local color conversion via probabilistic segmentation by expectation-maximization. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), IEEE, vol 1, pp 747–754
7. Li W, Yang F, Fu A (2012) Research on the evaluation of the effect of gray image colorization. Shanxi Electron Technol 2012(2):78–80
Research on Optimal Samples Selection Method for Digital Camera-Based Spectral Reflectance Estimation Jinxing Liang1,2(B) , Jia Chen1,2 , and Xinrong Hu1,2 1 College of Mathematics and Computer Science, Wuhan Textile University, Hubei, China
[email protected] 2 Engineering Research Center of Hubei Province for Clothing Information, Hubei, China
Abstract. Considering the spectral redundancy and the inconvenience of using a large number of training samples in digital camera-based spectral reflectance estimation applications, we propose an optimal samples selection method. A representative simulation spectral estimation system is first constructed, and based on it the optimal sample subset is selected from the total sample set by minimizing the spectral reflectance estimation error. The proposed optimal samples selection method is tested and compared with existing commonly used methods; the results show that the spectral reflectance estimation error tends to converge as the number of selected samples increases, and that the method outperforms the existing commonly used methods. Keywords: Digital Camera · Spectral Estimation · Optimal Samples Selection
1 Introduction

Digital camera-based spectral estimation is an important way to acquire the surface spectral reflectance of an object. For digital camera-based spectral estimation, a training sample set is needed to calculate the spectral estimation matrix; with the calculated spectral estimation matrix, the surface spectral reflectance of an object can be estimated under the same imaging conditions [1]. A sample set including a large number of samples is often used as the training set; however, previous studies showed that for the spectral estimation purpose a large number of samples always has spectral redundancy, and it is not necessary to use all samples for training [2–6]. Therefore, how to select the optimal samples from a big sample set has become an important research direction in this field. Several methods for optimal samples selection have been proposed, such as the methods of Hardeberg [3], Mohammadi [4], Cheung [5] and Shen [6]. Although these methods have good stability, they are performed based only on the analysis of spectral or color characteristics of the total sample set itself; they do not consider the influence of the characteristics of the spectral reflectance estimation system. In this study, we propose an optimal samples selection method for digital camera-based spectral
estimation, based on a representative simulation spectral estimation system and the minimization of the spectral estimation error.
2 Proposed Method

Different from traditional multi-channel spectral estimation systems, in which both the number of channels and the spectral sensitivity function (SSF) often differ from system to system [7], digital camera-based spectral estimation systems all have three imaging channels to simulate the color perception of the human visual system. Research carried out by Jiang et al. showed that the SSFs of different digital cameras have similar shapes, and that it is good enough to estimate the SSF of a digital camera using only the first two eigenvectors of an SSF database [8]. Therefore, it is reasonable to construct a representative simulation system for digital camera-based spectral estimation. In this study, we use the average result of the SSF database collected by Jiang et al. to construct the representative simulation spectral estimation system; the representative camera SSF is shown in Fig. 1a. The representative simulation spectral estimation system is constructed using the representative camera SSF, the polynomial-based spectral reflectance estimation algorithm [9] and the CIE D65 standard illuminant data. After the representative simulation spectral estimation system is constructed, the optimal samples are selected one by one from the total sample set based on the minimum spectral estimation error (root-mean-square error, RMSE) of the selected sample subset with respect to the total sample set. Details of the optimal sample selection strategy are illustrated as follows.
Fig. 1 a SSF used for representative simulation spectral estimation system construction, b the chromaticity distribution of pigment sample set, c SSF of Nikon D7200
To select the first sample, the corresponding spectral estimation matrix Qi for each sample in the total sample set is calculated as shown in Eq. (1):

Qi = ri · RLS(di)    (1)
where RLS(*) represents the polynomial-based spectral estimation method [9], ri is the spectral vector of the ith sample in the total sample set, di is the corresponding simulation camera response vector of the ith sample, Qi is the corresponding spectral estimation
matrix. Then, the spectral reflectance of the total sample set is estimated using Qi, as illustrated in Eq. (2):

R̂i = Qi · D    (2)

where D is the simulation camera response matrix of the total sample set and R̂i is the estimated spectral reflectance matrix of the total sample set. In the next step, the average RMSE between the estimated and measured spectral reflectances of the total sample set is calculated as shown in Eq. (3):

RMSEi = E{‖R̂i − R‖}    (3)

where R is the ground-truth spectral reflectance matrix of the total sample set, E{‖*‖} is the function that calculates the average RMSE of spectral reflectance (details of the RMSE between any two spectral reflectances are given in Eq. (7)), and RMSEi is the corresponding spectral estimation error when using the ith sample as the training sample. As indicated in Eq. (4), the sample with the smallest RMSE is selected as the first optimal sample:

s1 = arg min_{ri ∈ Ω} RMSEi    (4)

where s1 represents the first selected optimal sample and Ω denotes the total sample set; the selected optimal sample subset Ω1 gets its first sample as shown in Eq. (5):

Ω1 = {s1}    (5)

Using the same rule as for the first optimal sample, when selecting the remaining 2nd to kth optimal samples we just need to repeat the steps shown in Eq. (6), where j indicates the jth sample in the total sample set excluding the already selected samples. Once RMSEj becomes stable or tends to converge, the selection process is complete and the final optimal sample subset Ωk is acquired:

Qj = r_{Ωk−1 ∪ rj} · RLS(d_{Ωk−1 ∪ rj})
R̂j = Qj · D
RMSEj = E{‖R̂j − R‖}    (6)
sk = arg min_{rj ∈ Ω} RMSEj
Ωk = Ωk−1 ∪ {sk}
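A compact sketch of the selection loop of Eqs. (1)–(6) might look as follows; the second-order polynomial expansion stands in for the RLS method of [9], whose exact terms are not given in this paper, so the expansion used here is an assumption.

```python
import numpy as np

def poly_expand(D):
    # second-order polynomial expansion of the 3-channel responses; the
    # exact terms of the RLS method in [9] are assumed here
    r, g, b = D
    return np.vstack([np.ones_like(r), r, g, b,
                      r * g, r * b, g * b, r * r, g * g, b * b])

def train_Q(R_sub, D_sub):
    # least-squares spectral estimation matrix, standing in for Eq. (1)
    return R_sub @ np.linalg.pinv(poly_expand(D_sub))

def greedy_select(R, D, k_max):
    # R: (n_wavelengths, N) reflectances; D: (3, N) simulated camera responses
    N = R.shape[1]
    P_all = poly_expand(D)
    selected, remaining, errors = [], list(range(N)), []
    for _ in range(k_max):                       # Eq. (6): one sample per pass
        best_j, best_err = None, np.inf
        for j in remaining:
            idx = selected + [j]
            R_hat = train_Q(R[:, idx], D[:, idx]) @ P_all
            err = np.sqrt(((R_hat - R) ** 2).mean(axis=0)).mean()  # Eq. (3)
            if err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)                  # Eq. (4): arg-min sample
        remaining.remove(best_j)
        errors.append(best_err)                  # stop once this converges
    return selected, errors
```

The loop is O(N²·k) in the number of samples, which is acceptable for a set of 784 pigments but would need pruning for much larger databases.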
3 Experimental

The effectiveness of the proposed method is verified and compared with existing methods; the linear raw camera response is used in this study. In the experiment, a pigment dataset containing 784 samples and a Nikon D7200 digital camera are used to test
the effectiveness of the proposed method. The chromaticity distribution of the pigment sample set is shown in Fig. 1b, and the SSF of the Nikon D7200, estimated using Jiang's method [10], is shown in Fig. 1c. Once the optimal sample subset is selected, its effectiveness as a training set for spectral reflectance estimation is tested and compared with existing methods. For the practical test, digital images of the pigment sample set are captured by the Nikon D7200, and the mean raw responses of the central part of each color patch, an area of 35 × 35 pixels, are extracted for spectral estimation testing. To evaluate the spectral reflectance estimation accuracy, we use both the RMSE and the CIEL*a*b* color difference (ΔE_ab) as evaluation metrics. The RMSE is calculated as shown in Eq. (7), where r1 is the estimated spectrum of the testing sample, r2 is the ground-truth spectrum, superscript T is the transpose operator, and N is the number of spectral samples in the visible spectrum:

RMSE = √( (1/N) (r1 − r2)^T (r1 − r2) )    (7)

The CIEL*a*b* color difference is calculated as shown in Eq. (8), where L*, a* and b* are the colorimetric values of a sample in the CIEL*a*b* color space:

ΔE_ab = √( (L*1 − L*2)² + (a*1 − a*2)² + (b*1 − b*2)² )    (8)
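Both metrics are straightforward to implement; a minimal sketch of Eqs. (7) and (8):

```python
import numpy as np

def rmse(r1, r2):
    # Eq. (7): RMSE between an estimated and a ground-truth spectrum
    d = np.asarray(r1, float) - np.asarray(r2, float)
    return np.sqrt(d @ d / d.size)

def delta_e_ab(lab1, lab2):
    # Eq. (8): CIELAB color difference between two (L*, a*, b*) triplets
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))
```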
4 Results and Analysis

Before discussing the results of the experiment, it should be noted that the Nikon D7200 used in this study is not among the cameras used to construct the spectral sensitivity function database in reference [8]; the testing of the proposed optimal samples selection method with the Nikon D7200 can therefore be regarded as a random verification. Figure 2 shows the actual spectral estimation error distribution of the selected sample subset with respect to the total sample set for different numbers of selected samples. The horizontal dotted lines in Fig. 2 indicate the spectral estimation error when using the total sample set as the training samples: 3.15 for RMSE and 1.65 for ΔE_ab. It is easy to infer from Fig. 2 that as the number of selected samples increases, the spectral estimation error in terms of both RMSE and ΔE_ab decreases rapidly, and when the number of selected samples reaches about 60, the spectral estimation accuracy is almost at the same level as when using the total sample set for training. As the number of selected samples continues to increase, the spectral estimation error for both RMSE and ΔE_ab tends to stabilize and basically coincides with that obtained using the total sample set as training samples. The trend described above is similar to the previous findings of Mohammadi and Shen et al. [6]. To further verify the performance of the proposed method, under the conditions of selecting 10, 30, 50, 70, 90 and 110 optimal samples, the proposed method is compared with the existing commonly used methods. The spectral estimation errors of the different methods are summarized in Tables 1 and 2 respectively, where 'Total' indicates the spectral estimation result when using the total sample set as training samples.
Fig. 2 The actual spectral estimation error distribution of the selected sample subset to the total sample set under different numbers of selected samples: a RMSE, b ΔE_ab

Table 1 The RMSE (%) of different methods for different numbers of selected samples

| RMSE (%) | 10 | 30 | 50 | 70 | 90 | 110 |
| --- | --- | --- | --- | --- | --- | --- |
| Hardeberg | 23.56 | 7.75 | 8.14 | 9.53 | 8.48 | 8.20 |
| Mohammadi | 56.46 | 6.71 | 4.73 | 4.57 | 4.24 | 3.97 |
| Cheung | 28.58 | 4.27 | 3.97 | 3.83 | 3.88 | 3.88 |
| Shen | 9.35 | 4.53 | 4.52 | 4.20 | 4.33 | 4.30 |
| Proposed | 5.47 | 4.17 | 3.20 | 3.09 | 3.06 | 3.04 |
| Total | 3.15 |  |  |  |  |  |
Table 2 The ΔE_ab of different methods for different numbers of selected samples

| ΔE_ab | 10 | 30 | 50 | 70 | 90 | 110 |
| --- | --- | --- | --- | --- | --- | --- |
| Hardeberg | 7.85 | 3.65 | 2.94 | 3.41 | 2.86 | 2.63 |
| Mohammadi | 17.52 | 3.00 | 2.54 | 2.65 | 2.43 | 1.98 |
| Cheung | 12.72 | 2.17 | 1.93 | 1.89 | 1.92 | 1.92 |
| Shen | 5.92 | 2.33 | 2.40 | 2.17 | 2.13 | 2.05 |
| Proposed | 3.45 | 4.07 | 1.90 | 1.98 | 1.87 | 1.82 |
| Total | 1.65 |  |  |  |  |  |
It can be seen from Tables 1 and 2 that, in terms of RMSE, the proposed method outperformed the existing methods for all numbers of selected samples, and that, in terms of ΔE_ab, except for the groups of 30 and 70 selected samples, the proposed method still outperformed the existing methods. Among the tested
methods, the method proposed by Hardeberg et al. shows the worst overall spectral estimation accuracy, while the method proposed in this study shows the best accuracy. For the application of digital camera-based spectral estimation, the proposed optimal samples selection method is in general superior to the existing commonly used methods. We infer that this superiority stems from the fact that the optimal samples are selected with consideration of the characteristics of the spectral estimation system. In addition, Fig. 3 shows the chromaticity distribution of the 60 selected optimal samples in the L–C plane of the CIELCh color space.
Fig. 3 The chromaticity distribution of 60 selected optimal samples
It can be seen from Fig. 3 that the 60 selected samples are well dispersed in the L–C chromaticity plane and cover the pigment sample set well. This shows that the selected 60 optimal samples have good chromatic diversity, and the proposed method may also be used to select chromatically representative samples of a sample database in order to construct a portable color chart.
5 Summary

A superior optimal samples selection method for digital camera-based spectral estimation is proposed in this study. The method is based on the construction of a representative simulation spectral estimation system and the minimization of the spectral reflectance estimation error. The performance of the method was comprehensively tested and compared with commonly used methods. The results show that, as the number of selected samples increases, the spectral reflectance estimation error tends to converge, and the method exhibited the best accuracy in both the spectral and colorimetric aspects. The method can be applied to develop portable training sample sets in many application fields such
as printing, textiles, cultural heritage and so on. Future research will focus on further analysis and improvement of the proposed method, including the analysis of the relationship between the simulation and actual spectral estimation systems, and the specific chromaticity distribution characteristics of the selected optimal samples.

Acknowledgements. Natural Science Foundation of Hubei Province (2020CFB386), Team plan of scientific and technological innovation of outstanding youth in universities of Hubei province (T201807).
References
1. Ribes A, Schmitt F (2008) Linear inverse problems in imaging. IEEE Signal Process Mag 25(4):84–99
2. Kohonen O, Parkkinen J, Jaaskelainen T (2006) Databases for spectral color science. Color Res Appl 31(5):381–390
3. Hardeberg JY (2001) Acquisition and reproduction of color images: colorimetric and multispectral approaches. Universal-Publishers
4. Mohammadi M, Nezamabadi M, Berns RS, Taplin LA (2004) Spectral imaging target development based on hierarchical cluster analysis. Color Imaging Conf 2004:59–64
5. Cheung V, Westland S (2006) Methods for optimal color selection. J Imaging Sci Technol 50(5):481
6. Shen H, Zhang H, Xin H, Shao S (2008) Optimal selection of representative colors for spectral reflectance estimation in a multispectral imaging system. Appl Opt 47(13):2494–2502
7. Hardeberg JY, Schmitt F, Brettel H (2002) Multispectral color image capture using a liquid crystal tunable filter. Opt Eng 41(10):2532
8. Jiang J, Liu D, Gu J, Susstrunk S (2013) What is the space of spectral sensitivity functions for digital color cameras. In: Workshop on applications of computer vision, pp 168–179
9. Connah D, Hardeberg JY (2005) Spectral recovery using polynomial models. Color Imaging Conf 2005:65–75
10. Jiang J, Gu J (2012) Recovering spectral reflectance under commonly available lighting conditions. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE, pp 1–8
Viewed Lightness Prediction Model of Textile Compound Multifilament Yujuan Wang1,2 , Jun Wang2 , Jiangping Yuan1,3 , Jieni Tian1 , and Guangxue Chen1(B) 1 State Key Laboratory of Pulp and Paper Engineering, South China University of Technology,
Guangzhou, China [email protected] 2 College of Textiles, Donghua University, Shanghai, China 3 Institute for Visualization and Data Analysis, Karlsruhe Institute of Technology, Karlsruhe, Germany
Abstract. To facilitate the design of textile compound multifilament before spinning, a viewed lightness prediction model is proposed. To examine the prediction effect, 7 different color monofilaments were used to simulate 21 multifilaments according to a simplified multifilament model. Then, the mixed lightness of these multifilaments was calculated by adding the weighted tristimulus values of the surface colors of the multifilament, and scaled by 12 observers in a subjective experiment. The average difference between the calculated and scaled lightness was 0.39. In order to determine the significant factors affecting the lightness difference, Pearson's correlation analysis was conducted. The results show that the lightness difference decreased with the increase of the calculated CIE lightness (L*) or yellow-blue value (b*) of the multifilament. Finally, the optimized viewed lightness prediction model was derived by SPSS analysis. The average lightness difference was reduced to −0.01, indicating this model can provide reliable prediction for personalized compound multifilament. Keywords: Lightness Prediction · Textile Filament · Optical Color Mixing · Visual Assessment
1 Introduction

In order to color textiles, dyeing, color spinning or dope dyeing is usually used. However, these methods have many disadvantages, such as color difference, uneven color mixing or waste of raw materials [1]. To address this issue, our team [1, 2] suggested producing a multifilament, named color compound multifilament, by dyeing the monofilaments directly into different colors and combining them with different proportions and arrangements. However, this new method is still limited to trial spinning at present, which is exceedingly laborious and time-consuming. On the other hand, color is ultimately assessed by the human eye. Therefore, there is a strong need to derive a prediction model of the multifilament that can express its viewed color.
The process of fiber color mixing includes both additive color mixing and subtractive color mixing, and there have been many studies on additive and subtractive color mixing models [3, 4]. As for research on color mixing models based on the characteristics of the human eye, there have been more and more reports in recent years. Takako et al. [5] introduced different filters to simulate the spatial filtering characteristics of human eyes, but the calculation process is too complex and the cut-off frequency needs to be determined. Chu et al. [6] developed a model to simulate the spatial color blending process of digital camouflage; however, the color mixing of camouflage clothing is point-to-point, while the color of a spun filament is continuous in the length direction. Chae et al. [7] investigated many factors of individual yarn colors and their blends on the color appearance of woven fabrics and proposed several color appearance prediction models. However, these predictive models need the colorimetric values of physical fabric, and the process of spinning a filament and then weaving it into cloth still belongs to trial spinning; therefore, these models do not bring great convenience to product design before spinning. This paper mainly studies the viewed lightness prediction model of the multifilament; prediction models of other colorimetric parameters can follow the research method used in this paper. Firstly, the surface colors of the multifilament were calculated from the monofilaments. Then the mixed lightness of the surface colors was calculated, and estimated by observers in a psychophysical experiment. Finally, the lightness difference between the calculated and estimated lightness was analyzed, and the viewed lightness prediction model was proposed.
2 Methodology

2.1 Simulating Multifilament

In this study, the monofilament was assumed to be an opaque cylinder (Fig. 1a), and the multifilament was composed of two different color monofilaments arranged in a fan shape (Fig. 1b). Besides, the surface overlaps between the monofilaments were ignored, so the surface color appearance changed from Fig. 1c to Fig. 1d.
Fig. 1 The simulating multifilament: a the monofilaments, b the 3D model, c the surface color appearance, and d the simplified surface color appearance of a single multifilament
Since the monofilament was assumed to be opaque, the surface color of the multifilament was determined by the monofilaments on the surface of the multifilament. The ratio of the fan-shaped angle of the monofilament determines the distribution ratio of the monofilament on the surface. Besides, the surface color is repeated by the basic color unit. Therefore, according to the ratio of the fan-shaped angle of the monofilament, this article designed 42 basic units to simulate the surface colors of the single multifilament (Table 1). The test images were formed by arranging these multifilaments in parallel.

Table 1 Designs of 42 basic color appearance units
| Monofilament arrangements |  |
| --- | --- |
| RY | RRRY |
| RG | RRRG |
| RRB | RRRRB |
| RK | RRRK |
| RRW | RRRRW |
| RRGr | RRRRGr |
| YYG | YYYYG |
| YB | YYYB |
| YYK | YYYYK |
| YW | YYYW |
| YGr | YYYGr |
| GGB | GGGGB |
| GGK | GGGGK |
| GW | GGGW |
| GGr | GGGGr |
| BBK | BBBBK |
| BW | BBBW |
| BGr | BBBGr |
| KW | KKKW |
| KKGr | KKKKGr |
| WWGr | WWWWGr |

Note: The letters R, G, Y, B, K, W, Gr refer to the red, green, yellow, blue, black, white and gray monofilaments, respectively
The colors of these monofilaments are those often used in color spinning factories, and they were measured by spectrophotometry (Datacolor 650 spectrophotometer, USA). The tristimulus values of these monofilaments are summarized in Table 2.

Table 2 Tristimulus values of the monofilaments
|  | Red | Yellow | Green | Blue | Black | White | Gray |
| --- | --- | --- | --- | --- | --- | --- | --- |
| X | 23.61 | 57.91 | 5.38 | 6.1 | 2.06 | 76.77 | 9.61 |
| Y | 13.31 | 60.27 | 9.27 | 4.66 | 2.2 | 81.18 | 10.16 |
| Z | 8.59 | 4.32 | 7.25 | 50.19 | 2.59 | 82.75 | 11.63 |
2.2 Calculating and Estimating Mixed Lightness

Since the multifilaments were displayed on a monitor, the mixed color of the multifilament is the sum of the weighted surface colors of the multifilament. Therefore, the tristimulus values of the mixed color can be calculated by the following formula:

Xm = Σi (ni × Xi)    (1)

Here, Xm represents a tristimulus value of the mixed color, and ni and Xi refer to the weight (surface proportion) and the corresponding tristimulus value of primary color i, respectively.
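A small sketch of Eq. (1), together with the standard XYZ → L*a*b* conversion used in the subsequent analysis; the D65/2° white point is assumed from the 6500 K monitor setting, and the example weights and primaries are illustrative:

```python
import numpy as np

def mix_tristimulus(weights, xyz_primaries):
    # Eq. (1): weighted additive mixture of the primaries' tristimulus values
    # weights: surface proportions n_i; xyz_primaries: (k, 3) array of X, Y, Z
    return np.asarray(weights, float) @ np.asarray(xyz_primaries, float)

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):  # D65/2° white (assumed)
    # standard CIE XYZ -> L*a*b* conversion
    t = np.asarray(xyz, float) / np.asarray(white, float)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

# e.g. an RRRY unit: red weighted 3/4 and yellow 1/4 (illustrative XYZ values)
xyz_mix = mix_tristimulus([0.75, 0.25],
                          [[23.61, 13.31, 8.59], [57.91, 60.27, 4.32]])
print(xyz_to_lab(xyz_mix))
```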
In order to facilitate data analysis, the tristimulus values were converted into CIE L*a*b* and CIE L*C*h° using functions in MATLAB. In order to estimate the mixed lightness, a subjective experiment was carried out in a darkened room. A 21-inch NEC liquid crystal display monitor was used and calibrated by the X-Rite Eye-One Pro display calibration system. The monitor was set to a gamma of 2.2, a white point of 6500 K, and a luminance of 100 cd/m². A graphical user interface (GUI) was established and run in MATLAB R2014a (Fig. 2). By sliding the sliders, the color of the reference image was adjusted until there was no perceivable color difference between the test and reference images. All the elements were placed on a mid-gray background with L* of 50 [7]. The width of a single monofilament was 0.233 mm. The viewing distance was set to 2.33 m. The viewing angle was set to 0° from the normal of the display, and the size of the test image was 256 × 256 pixels.
Fig. 2 The GUI display for color appearance assessment
Thirteen observers, who had normal color vision according to the X-Rite color challenge and hue test, were trained for two hours before the experiment. The test images were presented in random order. The experiment was repeated within one week. The coefficients of variation (CV) of the observer accuracy and repeatability of the lightness were 1.15 and 2.14, which were satisfactory compared with those in other studies [8].
3 Results and Discussion

3.1 Comparison of Estimated and Calculated Lightness

The mean and median of ΔL* between the estimated and calculated lightness were 0.39 and −0.17, respectively, which are small. However, the distribution of ΔL* was
relatively scattered, and there were several outliers. In order to further investigate the effects on the lightness of the multifilament, Pearson's correlation analysis between ΔL* and the calculated CIE lightness L*, redness-greenness a* and yellowness-blueness b* was conducted. The significant factors found at a significance level of 0.01 were the mixed L* and b*.

3.2 Effects on Lightness Difference

The lightness difference ΔL* between the estimated and calculated lightness of the multifilaments is plotted against the calculated mixed CIE lightness L*P in Fig. 3a and the CIE yellowness-blueness b*P in Fig. 3b. From Fig. 3a, b, it can be found that the relationships between ΔL* and L*P and between ΔL* and b*P are similar: in general, ΔL* decreased with the increase of L*P or b*P. This shows that for a multifilament with lower lightness or a bluish color, the viewed lightness was higher than the calculated lightness, while for a multifilament with higher lightness or a yellowish color, the viewed lightness was similar to the calculated lightness.
Fig. 3 The relationship between: a the calculated lightness L*P, b the calculated yellow-blueness b*P and the lightness difference ΔL* between the estimated and calculated lightness
4 Optimizing

In order to obtain a more accurate lightness prediction model, the mixed lightness calculated by the additive color mixing formula was optimized with the stepwise regression analysis of the Statistical Product and Service Solutions (SPSS) software. The optimized lightness prediction model was derived, as shown in Eq. (2):

L*V = −0.611 + 1.001 × L*P − 0.04 × b*P + 0.036 × C*P    (2)
Here, L*P, b*P and C*P are the CIE lightness, yellow-blueness and chroma of the multifilament converted from the tristimulus values calculated by Eq. (1). It can be found in Fig. 4 that the mean and median of ΔL* from the optimized lightness prediction model were −0.01 and −0.04, respectively. Besides, the distribution of ΔL* was more concentrated, and there were no outliers.
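Eq. (2) is directly computable once the mixed color has been converted to CIE lightness, yellow-blueness and chroma; a minimal sketch with illustrative inputs:

```python
def predicted_viewed_lightness(l_p, b_p, c_p):
    # Eq. (2): viewed-lightness model fitted by stepwise regression in SPSS
    return -0.611 + 1.001 * l_p - 0.04 * b_p + 0.036 * c_p

# e.g. a mixed color with L* = 50, b* = 10, C* = 20 (illustrative values)
print(predicted_viewed_lightness(50.0, 10.0, 20.0))
```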
Fig. 4 Comparison of the lightness difference between the estimated lightness and the lightness calculated by the additive color mixing algorithm and by the optimized algorithm
5 Conclusions

In this article, a viewed lightness prediction model was proposed for the textile compound multifilament, expressed as a function of the colorimetric values of the mixed color of the multifilament image. By comparing the viewed and calculated lightness, it was found that for a multifilament with lower lightness or a bluish color, the viewed lightness was higher than the calculated lightness, while for a multifilament with higher lightness or a yellowish color, the viewed lightness was similar to the calculated lightness. The average lightness difference between the lightness estimated and that calculated by the prediction model was −0.01, indicating the model performs very well in predicting the viewed lightness of the multifilament. It is worth noting that the actual distribution of the monofilaments is very complicated; this is exactly what our team is studying.

Acknowledgements. This work has been financially supported by the Natural Science Foundation of China (Grant No. 61973127), Guangdong Provincial Science and Technology Program (Grant No. 2017B090901064), and Chaozhou Science and Technology Program (Grant No. 2020ZX14).
References
1. Wang QH, Li WG, Gan XH et al (2019) Compound filament and its preparation technology: China, 109112655. 01.01
2. Pi FD, Li WG, Liao H et al (2019) The collocation colored yarns of polypropylene and their key preparation techniques research. Technical Textiles 37(3):24–30
3. Wang YJ, Ma C, Liu JY et al (2017) Matching color technology of color blended yarn based on modified Stearns-Noeche model. Journal of Textile Research 38(10):25–31
4. Ma C, Wang YJ, Li JL et al (2017) Theoretical and practical analysis of fiber blend model in gray spun yarn. J Eng Fibers Fabr 12(2):28–38
5. Nonaka T, Matauda M, Hase T (2007) Color mixing simulator for display surfaces based on human color vision. In: Proceedings of the 21st European conference on modelling and simulation, ECMS
6. Chu M, Tian S, Yun J et al (2016) An approach of characterizing the degree of spatial color mixture. Acta Armamentarii 37(7):1306–1311
7. Chae Y (2019) The color appearance shifts of woven fabrics induced by the optical blending of colored yarns. Textile Research Journal 0(00):1–15
8. Luo MR, Clarke AA, Rhodes PA et al (1991) Quantifying colour appearance. Part I. LUTCHI colour appearance data. Color Research & Application 16(3):166–180
A Representing Method for Color Rendering Performance of Mobile Phone Screen Yanfang Xu(B) , Zengyun Yan, and Biqian Zhang Beijing Institute of Graphic Communication, Beijing, China [email protected]
Abstract. The color rendering performance of mobile phone screens is analyzed, and a representation method for their color rendering quality is established. A simple model can relate the RGB drive values of a mobile phone to the CIEXYZ values of its screen colors. From this, the basic color metrics of the trichromatic light and the white field are obtained, and the 3D color gamut in the CIECAM QMh color appearance space and the CIECAM QMh values of eye-sensitive skin, sky blue and some other typical colors are determined under the conditions of use. The application results show that the primary color difference between different brands of mobile phones is mainly reflected in the green primary, that the size of the CIECAM QMh color gamut is closely related to brightness, and that the QMh differences of the typical colors vary. Keywords: Mobile phone · Color rendering · CIECAM
1 Introduction

Mobile phone screens, like all monitors, are additive color systems that use different proportions of red, green and blue light to blend into a rainbow of colors. However, different manufacturers of mobile phones often use different red, green and blue primary colors, which makes the gamut of colors the screens can display and the color perception properties of the images they present differ; this is how the differences in color rendering performance and quality between different brands of mobile phones arise. In this paper, a method for characterizing the color rendering performance of mobile phone screens is proposed, which can represent the three primary colors, the color gamut and the rendering of characteristic colors. In this method, the CIEXYZ chromaticity values for different RGB control values are obtained and the correspondence between the two is established; the primary chromaticity, white-field chromaticity, color gamut and some typical colors of the mobile phone can then be characterized.
2 Color Model of Mobile Phone Screen

We chose three mainstream mobile phones on the market and, for comparison, two professional monitors, and conducted the experiment on all of them simultaneously. As with a monitor, colors displayed on a mobile phone screen are driven by RGB numbers; the three numbers drive the three light-emitting channels respectively to emit different intensities of red, green and blue light. Ideally, the normalized intensity of each channel corresponds to its normalized driving value through an exponential (gamma) relationship, and the chromaticity of a mixed color conforms to the principle of additive color mixing, that is, the X, Y and Z of the mixed color's CIEXYZ are the sums of the X, Y and Z of the red, green and blue light respectively. Thus, the relationship between the RGB values and the CIEXYZ of the corresponding color on the screen satisfies a simple exponential and linear matrix operation [1, 2], here called the simple color model. In this paper, 45 color measurements (15 red, 15 green, 15 blue) were used to build the relationship between the RGB values and the CIEXYZ values, and 105 color measurements (15 red, 15 green, 15 blue, 15 cyan, 15 magenta, 15 yellow and 15 gray) were used to test the relationship via the color difference DE00 (CIE DE2000) [3] between the calculated and measured CIEXYZ values. The DE00 results of the five samples are shown in Table 1.

Table 1 Color difference of the computational model (mean and maximum of DE00)
| Sample | Mobile A | Mobile B | Mobile C | Monitor 1 | Monitor 2 |
| --- | --- | --- | --- | --- | --- |
| DE00_mean | 0.82 | 1.21 | 1.57 | 1.26 | 1.97 |
| DE00_max | 4.23 | 5.41 | 4.70 | 6.64 | 2.26 |
It can be seen from Table 1 that the DE00_mean values are all less than 2, and the DE00_mean values of the three phones are smaller than those of the two monitors. These results mean that the simple color model can be used in practice to predict the chromaticity values of a mobile phone screen.
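A minimal sketch of such a simple color model (per-channel gamma followed by a 3 × 3 primary matrix); the gamma values and matrix entries below are hypothetical placeholders, not measurements of the tested devices:

```python
import numpy as np

def screen_rgb_to_xyz(rgb, gamma, M):
    # simple color model: per-channel gamma, then a 3 x 3 primary matrix
    # rgb: drive values in 0..255; gamma: per-channel exponents; M: matrix
    # whose columns are the CIEXYZ of the full-intensity R, G, B channels
    lin = (np.asarray(rgb, float) / 255.0) ** np.asarray(gamma, float)
    return M @ lin

# hypothetical gamma and primary matrix, for illustration only
M = np.array([[41.2, 35.8, 18.0],
              [21.3, 71.5,  7.2],
              [ 1.9, 11.9, 95.0]])
print(screen_rgb_to_xyz([128, 200, 64], [2.2, 2.2, 2.2], M))
```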
3 Color Characteristics of Mobile Phone Screen

3.1 Fundamental Properties of Chromatic Lights

The brightest red, green and blue light that a screen can display determine, by mixing at different intensities, the range of colors it can render; this is the basic characteristic of screen color rendering. It comprises the CIExy color coordinates of the primary colors [3], and the brightness YW and the color temperature T, or correlated color temperature ccT, of the white field. The results for the three mobile phones and the two professional monitors are shown in Table 2.
Table 2 The chromatic properties of three primary colors and white field

| Characteristic index | Mobile A | Mobile B | Mobile C | Monitor 1 | Monitor 2 |
| --- | --- | --- | --- | --- | --- |
| Red CIExy (x, y) | 0.63, 0.33 | 0.66, 0.34 | 0.63, 0.34 | 0.65, 0.32 | 0.61, 0.33 |
| Green CIExy (x, y) | 0.31, 0.61 | 0.23, 0.71 | 0.31, 0.59 | 0.22, 0.68 | 0.21, 0.68 |
| Blue CIExy (x, y) | 0.16, 0.07 | 0.15, 0.04 | 0.15, 0.05 | 0.16, 0.08 | 0.16, 0.10 |
| White field ccT (K) | 6600 | 7080 | 8800 | 4950 | 4990 |
| White field YW (cd/m²) | 170.50 | 170.66 | 168.90 | 112.80 | 112.50 |
Fig. 1 The 2D_CIExy color gamut
The data in Table 2 form the basis of screen color rendering and determine the 2D CIExy color gamut, as shown in Fig. 1. For comparison, the color gamut of the sRGB standard color space is also given. It is easy to see from Fig. 1 that the color gamut shapes of phone A and phone C are similar to that of sRGB, while that of phone B is obviously larger. The ccT of the white field is also plotted on the graph. These characteristics form the physical basis of the color rendering of a mobile phone screen.

3.2 Three-Dimensional Color Gamut

Based on the color relations obtained in step 2, the color gamut boundary CIEXYZ values and CIECAM color appearance values can be determined from RGB triplets in which at least one of the R, G, B values is 0 or 255; in this paper, the levels used are 0, 32, 64, 96, 128, 159, 191, 223 and 255. Then, the CIEXYZ values of the gamut boundary are converted to CIECAM02 QMh and the equivalent CIECAM02 QaMbM color values [4], where the relationship between aMbM and Mh is that between Cartesian and polar coordinates.
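The boundary sampling described above is easy to enumerate; a short sketch using the drive levels listed in the text:

```python
import numpy as np
from itertools import product

def gamut_boundary_rgb(levels=(0, 32, 64, 96, 128, 159, 191, 223, 255)):
    # keep only RGB triplets with at least one channel at 0 or 255: these
    # drive values sample the surface of the display's color solid
    return np.array([rgb for rgb in product(levels, repeat=3)
                     if 0 in rgb or 255 in rgb])

boundary = gamut_boundary_rgb()
print(boundary.shape)   # (386, 3): each row is one boundary drive triplet
```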
The calculation of CIECAM02 color appearance values requires the lighting environment parameters F, C and Nc. In this paper, the dark environment parameters are selected, with F, C and Nc set to 0.8, 0.525 and 0.8 respectively, in accordance with the fact that the CIEXYZ measurement of the mobile phone screen in step 2 is equivalent to a darkroom environment [4]. The standard illuminant used in the CIECAM02 QMh calculation is D65. The Q, M and h of CIECAM02 are brightness, colorfulness and hue angle respectively, which correspond to the visual perception attributes of human eyes for color. The QaMbM coordinate space is equivalent to QMh, and the resulting closed volume bounded by the color gamut boundary in this space is called the mobile phone's color gamut. The color gamut figures of phone A and phone B are shown in Fig. 2.
Fig. 2 Color gamut comparison between phone A and phone B
As shown in Fig. 2, the Q value at the highest point of the QaMbM color gamut surface reflects the actual brightness of the screen's white field. The L* of CIEL*a*b*, whose maximum value is normalized to 100, does not represent the actual brightness of the white field. Figure 2a shows that the color gamut of phone B is wider than that of phone A in the plane perpendicular to the Q axis. Colors that are farther away from the Q axis are more colorful, so phone B can show some more vivid colors than phone A. In addition, the color gamut of a mobile phone screen can be compared with a standard RGB color gamut with the same white brightness to measure its color performance. As shown in Fig. 2b, the gamut of phone B is compared with the sRGB color space with the same white field brightness. It shows clearly that the color performance of phone B is stronger than that of sRGB; in particular, the green of phone B is more vivid.

3.3 Presentation of Typical Colors

Human eyes are very familiar with and sensitive to some colors, such as skin color, sky blue, grass green, and so on, forming memories of them. Therefore, skin color, sky blue, grass green, rape flower yellow, and some fruit colors are selected as a group of typical colors.
sRGB is usually applied as the screen display standard, so the RGB data of the typical colors are extracted from standard sRGB images from HP, Sony, etc. The method is as follows: for each typical color, within its tonal range, extract a number of RGB values corresponding to different lightness and saturation as the representation of that typical color [5]. For the extracted typical-color RGB values, the corresponding CIEXYZ and CIECAM02 QMh values are calculated from the color relation in step 2. Further, for the arrays of Q, M and h of each typical color, three curves are drawn, with the ordinal number of the typical color (sorted by lightness from small to large) as the horizontal coordinate and Q, M and h as the vertical coordinates respectively; these are used to represent the brightness, colorfulness and hue characteristics of the typical color.
Fig. 3 Comparison of color appearance of typical color-grass green between the phone A and the phone B
Figure 3 shows a set of QMh color values for the typical color grass green for phone A and phone B. The average QMh values for phone A and phone B are [104.15, 62.17, 123.21°] and [99.29, 74.25, 130.34°] respectively, with ΔQ = QA − QB = 4.88, ΔM = MA − MB = −12.08 and Δh = hA − hB = −7.13°. It can be seen from Fig. 3 that, although the white field brightness of the two phones is the same, there are differences in the grass-green color characteristics. Compared with phone B, phone A has higher brightness, lower colorfulness and a yellowish hue. All of these features are consistent with Fig. 2, where the color gamut of phone B is broader in the green zone of the aMbM plane than that of phone A. Similarly, other typical colors can be analyzed and compared in the same way. The experiment also compares the typical colors of phone A and phone B with those of sRGB, and finds that phone A is very close to sRGB, indicating that phone A is better matched to sRGB, while phone B is more colorful.
4 Conclusions

The experimental results show that the color relation of a mobile phone can be constructed with the simple color model. From this color model, the basic chromaticity properties can be determined, including the chromaticity of the primary colors and the brightness and color properties of the white field. The 3D color gamut in the QMh color appearance space and the QMh color appearance characteristics of skin, sky blue, grass green and some other typical colors can also be derived, to characterize the visual color rendering ability of a mobile phone screen in use and its rendering quality for colors to which the human eye is sensitive.

Acknowledgments. This study is funded by Projects of ShiPei Plan "Color appearance characterization method in display application" (No. 03150120001/058).
References
1. Xu Y (2011) Color management principle and application, 2nd edn. Printing Industry Press, Beijing
2. Berns RS (1996) Methods for characterizing CRT displays. Displays 16(4):173–182
3. Liu H (2008) Color science and technology, 2nd edn. China Light Industry Press, Beijing
4. Hu W, Tang S, Zhu Z (2007) Principle and application of modern color technology, 1st edn. Beijing Science and Technology Co., Ltd., Beijing
5. Shao S, Xu Y et al (2018) An evaluation method for the color quality of digital hardcopy output sRGB image. In: Applied sciences in graphic communication and packaging. Lecture notes in electrical engineering, vol 477, pp 147–157
Color Measurement and Analysis of Unpacked Jujube in Shelf Life Danyang Yao1 , Jiangping Yuan1,2 , Xiangyang Xu3 , and Guangxue Chen1(B) 1 State Key Laboratory of Pulp and Paper Engineering, South China University of Technology,
Guangzhou, China [email protected] 2 Institute for Visualization and Data Analysis, Karlsruhe Institute of Technology, Karlsruhe, Germany 3 School of Media and Communication, Shenzhen Polytechnic, Shenzhen, China
Abstract. Color attributes are an important indicator for the perception and evaluation of food quality, especially for unpacked valuable fruits. To this end, the color and weight evaluation of vitamin-C-enriched jujubes was carried out at constant temperature and humidity over the entire shelf life, to provide a new pathway for their dynamic quality evaluation. Four unpackaged jujubes were examined with five random color points on the surface during the 11-day shelf life. Spectral and colorimetric values of each jujube were recorded and correlated with its dynamic weight loss. Finally, the results showed that there was a significant linear correlation between the weight loss and the average chromatic aberration for each unpacked jujube. Meanwhile, the variable intervals of the spectral distribution were obviously located in the 630–690 nm band. Thus, specific spectral compensation can be verified for unpacked jujube display during the shelf life. Keywords: Jujube quality · Fruit display · Color assessment · Spectral analysis · Shelf life
1 Introduction

Recently, the sustainable and organic perception of unpackaged fruits on the shelves of modern supermarkets has attracted a great deal of attention. Supermarket shelves with constant temperature, humidity and light sources are meant to keep fruit fresh and to attract more customers to buy. However, compared with packaged fruit, the shelf period of unpacked fruit is still short; therefore, more advanced interventions are needed to improve the current sales woes [1]. In previous studies, common measures have been fully developed for packaged fruits, ranging from preservatives and respiratory inhibitors to air-modified plastic films, but these suffer from negative environmental effects [2]. Furthermore, consumers' color perception depends on many conditions, including light source attributes, object attributes, receptor mechanisms, genetic and learned responses, the direct environment, the formed image and other factors [3]. Thus, the measurement
and analysis of color dynamics during the fruit shelf cycle are worth investigating under standard illumination and storage conditions [4]. In addition, color attributes are an important indicator for food evaluation, but this feature has mostly been evaluated under a single condition, and most of the tested samples were packaged foods such as chocolate, fruit juice, fruit yogurt and chicken breast [5]. Other research has focused on the correlation between fruit appearance and quality grading; the measured color attributes have been graded by quality criteria including maturity, freshness, stability and shelf-life date for raw meat, vegetables, fruits and other fresh food [6]. However, most unpackaged fruit lacks dynamic color measurement over the entire shelf life. The real-time color of unpacked fruit is determined by the intrinsic surface properties and the color rendering index of the specific integrated fluorescent tube used for fruit display. The respiratory consumption of unpacked fruits during storage will not only affect the weight of the fruit, but also change the surface color, indicating that the lighting source needs to dynamically match the color change of unpacked fruits [7]. In addition, real-time measurement of fruit color is a complex activity for supermarket management compared with weight testing. Due to the lack of unified lighting standards, operators can hardly adjust the light source to the expected effect. The interactive response of a fruit display can be indirectly simulated through multiple lighting on/off controls based on the captured dynamic color [8]. In addition, Ennis et al. established a hyperspectral database at the University of Giessen, but these samples did not consider dynamic changes [9]. Jujubes are one of the most vitamin-C-rich fruits and are popular with the public. Based on spectral and colorimetric measurements, unpacked jujubes are stored under constant temperature and humidity, to provide a database reference for intelligent display and smart quality evaluation.
2 Methods and Materials

The experimental route is shown in Fig. 1. Four edible jujubes were randomly selected and stored in a constant temperature and humidity testing machine (Programmable Temperature and Humidity Chamber, HE-WS-800). The constant temperature was 18 °C, and the constant humidity was 80% RH. The CIE colorimetric values and weight of each jujube were captured initially and then every 24 h after storage began. The whole experiment lasted 11 days, by which time obvious deterioration of the jujube skin could be observed.
118
D. Yao et al.
Fig. 1 The experimental route and its devices: a fruit samples; b sampling points and shooting angles division; c storage conditions; d weight measurement; e colorimetric measurement
2.2 Colorimetric Measurement of Fruit Skin As each jujube is a 3D object, each color point was marked with a circular label on the jujube skin. The marked color points are randomly selected on each jujube. The colorimetric values (L*a*b*) of each color point of jujube samples were measured by the i1Pro2 (Parameters: Measurement condition = M1, Filter = D50, Illumination name = D50, Observer angle = 2°). At the same time, the spectral distribution of each color point at the range of 380–730 nm was measured, with a resolution as high as 10 nm. 2.3 Data Analysis In this study, chromatic aberration and weight loss are calculated and analyzed. CIEDE76 formula was used to calculate the relative chromatic aberration of five color points of jujube skin to be measured, as shown Eq. (1). The weight loss of each test jujube is calculated in Eq. (2). ∗ 2 2 2 ∗ E76 Lt − L∗1 + at∗ − a1∗ + b∗t − b∗1 = (1) where E * 76 is the relative color difference, the L * 1 , a* 1 , b* 1 are the color contributes of each jujube in the first day, and the L * t , a* t , b* t is the colorimetric data of each jujube in the t day, while t is a positive integer selected from 2 to 11. Weight loss = W1 − Wt
(2)
where W 1 is the mass of each jujube in the first day, W i is the mass of eachjujube in thet day while t is a positive integer from 2 to 11, and its unit is the g.
3 Results 3.1 Consistency Analysis for Spectral Distribution and Chromatic Aberration of Jujube Sampling Color Points In Fig. 2 the spectral reflectance distribution of each color point is compared between the first day and the last day, and the corresponding mean chromatic aberration is shown in Fig. 3. For all color points on each unpacked jujube skin are compared together, When the mean chromatic aberration of each color point changes greatly, it is marked with red circle to distinguish.
With the exception of jujube 2, the spectral reflectance values of the jujube samples were relatively similar throughout the visible waveband. At the same time, slight differences in the original spectral distributions of the five sample points on the skin of each jujube can be found, but the general trend is consistent. With respect to the first day's spectral distribution, the last day's spectrum is significantly shifted in fixed wavebands. For example, for jujube 1 in Fig. 2a, the spectral reflectance values of the color points become smaller in the 520–600 nm waveband but higher in the 630–690 nm waveband, except for color point 2. The fixed variable wavebands of jujube 3 were also smaller than those of jujube 4.
Fig. 2 Specific spectral distribution of jujube in the entire shelf life: a jujube 1; b jujube 2; c jujube 3; d jujube 4
In Fig. 3, the trend of the daily mean color difference is also divided into two parts: a sharp change in the beginning and a slight fluctuation in the middle and late stages. It can be found that the former stage took 4 days, 6 days, 7 days and 4 days to stabilize for jujube 1, jujube 2, jujube 3 and jujube 4, respectively. In addition, across all measured days, there were large fluctuations in the color difference of the five sample points for jujube 3, while the sample points of the other jujubes were very close to each other. Combined with the above spectral analysis, it can be concluded that the five color points are significantly representative of, and consistent for, each jujube, and the measured data can be used for subsequent analysis.

3.2 Correlation Analysis of Weight Loss and Color Difference During the Entire Shelf Life

In Fig. 4, the weight changes are recorded and analyzed per day. In Fig. 4a, the weight of the jujubes decreased daily at an almost constant rate.
Fig. 3 Mean chromatic aberration of each jujube in the entire shelf life: a jujube 1; b jujube 2; c jujube 3; d jujube 4
Fig. 4 Weight record and loss changes in the entire shelf life
can be easily found in any one of the four unpacked jujubes, as shown in Fig. 4b. In addition, this figure also shows the trend line corresponding to the linear equation and the R-squared values. The R-squared values of the four trend lines are very close to 1. Thus, it can be inferred that, under certain conditions, a predicted weight loss function for this jujube category can be calculated. In Fig. 5, the linear trends of the relative chromatic aberration of the five color points on the surface of each jujube are shown with the fitted R-squared values. In Fig. 5a, the relative chromatic aberration of certain color points fluctuated greatly, and the corresponding R-squared value ranged from 0.0613 to 0.8421. In Fig. 5b, the relative chromatic aberration of color point 1 on jujube 2 fluctuated slightly, while the other color points increased linearly. In Fig. 5c, color point 1 and color point 5 on jujube 3 showed large color differences, but color point 4 demonstrated an obvious linear increasing trend. The relative chromatic aberration of jujube 4 also displayed a significant linear trend in Fig. 5d. In Fig. 6, the further quantitative relationship between the average of the relative chromatic aberrations of the five color points and the weight loss is illustrated with a linear
Fig. 5 Chromatic aberration of each jujube in the entire shelf life: a jujube 1; b jujube 2; c jujube 3; d jujube 4
function and the corresponding R-squared values. It can be observed that the R-squared values of the four linear trend lines were all greater than 0.7700, indicating a good fitting degree. Moreover, a comparison of the form of each linear equation shows that the current trends of the four jujubes can be unified into a particular correlation model with specific weighted values. This provides a new way for an operator to predict color changes by weight monitoring.
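The linear trend lines and R-squared values reported above can be reproduced with an ordinary least-squares fit; a minimal MATLAB sketch with made-up data (the daily loss values are placeholders, not the study's measurements) is:

t = 1:11;                            % shelf-life days
loss = 0.45*t + 0.1*randn(1, 11);    % placeholder daily weight-loss data (g)
p = polyfit(t, loss, 1);             % linear trend line: loss ~ p(1)*t + p(2)
fitted = polyval(p, t);
R2 = 1 - sum((loss - fitted).^2) / sum((loss - mean(loss)).^2);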
Fig. 6 Trendlines of relative color difference and weight loss during the unpacked fruit shelf life: a jujube 1; b jujube 2; c jujube 3; d jujube 4
4 Conclusions

In this experiment, the spectral colorimetry and weight of four unpacked jujubes were recorded dynamically under constant shelving conditions simulating an organic fruit supermarket. The daily mean chromatic aberration showed a good correlation with the spectral distribution. Moreover, a significant quantifiable linear correlation between weight loss and relative chromatic aberration can be observed for the current jujubes. The limitation of this study is that the number of tested jujubes was small and their ripeness was high; jujubes with different maturity levels could provide more universal results. The current colors of fresh dates were measured under standard lighting conditions, which differ somewhat from actual lighting conditions. Evaluating color perception in multiple simulated lighting environments is scheduled for future work. In addition, the captured skin images of the selected jujubes can be further studied to provide more practical applications.

Acknowledgments. This work has been financially supported by the Natural Science Foundation of China (Grant No. 61973127), Guangdong Provincial Science and Technology Program (Grant No. 2017B090901064), and Chaozhou Science and Technology Program (Grant No. 2020ZX14).
Color Assessment of Paper-Based Color 3D Prints Using Layer-Specific Color 3D Test Charts Jiangping Yuan1,2 , Jieni Tian1 , Danyang Yao1 , and Guangxue Chen1(B) 1 State Key Laboratory of Pulp and Paper Engineering, South China University of Technology,
Guangzhou, China [email protected] 2 Institute for Visualization and Data Analysis, Karlsruhe Institute of Technology, Karlsruhe, Germany
Abstract. Color 3D printing is an important advancement in 3D printing for lifelike personalization, but color reproduction in paper-based color 3D printers suffers from accuracy issues. In this study, 5 color 3D test charts containing 6 thickness-increasing colored stairs with 13 specific colored bars were developed based on a thickness-tunable printed-layer design. At the same time, color differences and surface images of the tested charts with specific layers were captured under standard conditions. The results demonstrate that the chromatic aberrations and Mean-SSIM values of the color bars show a slight downward trend as stair thickness increases, while the linear correlations are not obvious. Combined with color SSIM map analysis, color samples with small differences in color quality can be clearly detected to analyze the color reproduction performance of paper-based color 3D printers. This research provides new ideas for developing a powerful color 3D test chart to check the color reproduction performance of paper-based 3D printers and to predict paper-based color 3D printing. Keywords: Color 3D printing · Color assessment · Paper-based · 3D test chart · Quality control
1 Introduction

3D printing is a notable digital fabrication technique with functional and structural properties; only with the development of color 3D printing techniques did the color customization of printed samples become possible [1]. Color 3D printing has a wide range of industrial applications, but the color reproduction accuracy of current printed parts is not yet sufficient [2, 3]. Color 3D printing began to be used by the general public after 2014, and paper-based color 3D printing became more popular because its printing material is coated paper [4, 5]. However, accurate color reproduction is still hard for paper-based color 3D printing, although it can print thousands of colors [6, 7]. Accurate color 3D printing has always been a challenging topic in the field of additive manufacturing [8]. Color 3D printed objects were easily measured and evaluated
for manufacturers and materials scientists by colorimetric methods using different spectrophotometers, but this alone cannot provide faithful color reproduction [9]. Although the color reproduction of polymer-based 3D printing processes was improved by uniform-thickness conventional test charts, these methods and tools are not suitable for paper-based color 3D printers with different color rendering principles. In addition, color 3D printed objects are not all flat, and thus color samples on the same plane are not sufficient to accurately differentiate the color reproduction capabilities of color 3D printing devices [10]. Meanwhile, the digital specific texture and shape are two complex factors affecting the final color reproduction of a paper-based color 3D printer. Therefore, there is an urgent need to develop and standardize a printable 3D test chart that can show the vertical variation of colors. Currently, color evaluation methods for 3D printed objects have been gradually developed, but not standardized. For example, the color reproduction accuracy of 3D printed cube surfaces was enhanced by the microscopic image analysis proposed by our team [11]. In addition, Fastowicz and Okarma explored the quality classification of specific flat surfaces printed by FDM 3D printers based on gray properties, including the Hough transform and histogram equalization, as well as the structural similarity (SSIM) of color regions [12, 13]. Using color difference evaluation and mean SSIM analysis, this study investigates the color reproduction features of the paper-based color 3D printing process based on layer-specific color 3D test charts with variable thickness. The primary colors were printed only on the top layer of each designed color stair of our proposed color 3D test chart. This paper provides a unique and efficient tool for accurate color 3D prints on current paper-based color 3D printers.
Fig. 1 The flow chart of our experiment
2 Experimental

The flow chart of our experiment is illustrated in Fig. 1. There are five parts: digital 3D file preparation, printing parameter options, sample post-printing, colorimetric measurement and image assessment. Firstly, the printing materials and parameter options used in the ArkePro 3D printer are the same for all layer-specific color 3D test charts. The printing materials mainly comprise CMYK inks, water-based adhesive and roll coated paper. The current printing parameters cover color texture mapping, slicing algorithms and printing material assignment. The printing accuracy of each slice is 0.1 mm. Sample post-printing is the manual removal of unformed paper surfaces of the 3D printed
entities. Each color texture of specific 3D test charts will be automatically mapped to the coloring material system utilizing the dedicated format conversion. Meanwhile, this paper-based 3D printer offers ICC color management options to optimize its color reproduction quality for different printing materials.
Fig. 2 Our proposed 3D color test chart: a top view with color attributes; b lateral view with colored staircases; c global view with naming remarks; d color and white layers
2.1 Designing the Layer-Specific 3D Color Test Charts

A special new 3D color test chart (Fig. 2) was designed with the 3DMax software. There are six color bars, seven neutral color bars and six white blocks in each 3D color test chart. The raw CIE L*, a*, b* values of each colored bar are given in Fig. 2a. Color bars and neutral color bars were assigned to identical color stairs, as shown in Fig. 2c. The specific stairs for all color bars are numbered stair 6, stair 5, stair 4, stair 3, stair 2 and stair 1, respectively. Stair 6 means that there is a total of six stairs over a certain base; similarly, stair 5 corresponds to five specific stairs. Each color stair can be designed as specific printing layers comprising one color layer and a certain number of colorless layers (white paper sheets). Each white block has the same thickness as the corresponding total colorless layers. Consequently, we designed 5 kinds of test charts, as shown in Table 1, which gives the print thickness of each color stair in all samples. Here, WS1 means white stair 1, and the other stairs are numbered similarly. It should be noted that an additional 0.2 mm base provides stronger support. Since all bases are printed with opaque white paper sheets, the color reproduction of the upper colored layers is not influenced by the overall thickness of these bases.
Table 1 The overall thickness of each colored stair in designed charts (unit: mm)

ID  Paper layers per stair   Ws1  Ws2  Ws3  Ws4  Ws5  Ws6   Cs1  Cs2  Cs3  Cs4  Cs5  Cs6
1   C × 1 + W × 1            0.1  0.3  0.5  0.7  0.9  1.1   0.2  0.4  0.6  0.8  1.0  1.2
2   C × 1 + W × 3            0.3  0.7  1.1  1.5  1.9  2.3   0.4  0.8  1.2  1.6  2.0  2.4
3   C × 1 + W × 5            0.5  1.1  1.7  2.3  2.9  3.5   0.6  1.2  1.8  2.4  3.0  3.6
4   C × 1 + W × 7            0.7  1.5  2.3  3.1  3.9  4.7   0.8  1.6  2.4  3.2  4.0  4.8
5   C × 1 + W × 9            0.9  1.9  2.9  3.9  4.9  5.9   1.0  2.0  3.0  4.0  5.0  6.0

(Ws1–Ws6: white blocks; Cs1–Cs6: color bars.)
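The thickness values in Table 1 are consistent with the 0.1 mm slice accuracy: colored stair k of chart ID n stacks k repetitions of one color layer plus (2n − 1) white layers, and the matching white block is 0.1 mm thinner than its colored stair. A small MATLAB sketch of this arithmetic (our reading of the table, not the authors' code):

slice = 0.1;                     % printing accuracy of each slice (mm)
for n = 1:5                      % chart ID
    w = 2*n - 1;                 % white (colorless) layers per stair
    k = 1:6;                     % stair number
    Cs = k .* (1 + w) * slice;   % colored-bar thickness, e.g. ID 3 -> 0.6 ... 3.6
    Ws = Cs - slice;             % white-block thickness, e.g. ID 3 -> 0.5 ... 3.5
    fprintf('ID %d  Ws: %s  Cs: %s\n', n, mat2str(Ws), mat2str(Cs));
end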
Fig. 3 Assessing image-based difference of 3D printed samples: a environmental configuration; b calibrating HD camera; c pretreating acquired image; d determining color sample region; e segmenting color chart; f generating a color SSIM map
2.2 Color Measurement and Image-Based Difference Assessment

The colorimetric values of the 13 bars on printed color 3D test charts were measured by the calibrated X-Rite i1Pro2 and recorded as CIE L*, a*, b* values, based on ISO
13655 with M1 and D50. Each bar value was the average of three measurements. The whiteness of each white block was measured with the Technidyne Color-Touch PC CTP-ISO using D65 and a 2° view field.

The entire assessment of image-based difference is illustrated in Fig. 3. In the standard imaging setup, the light source is D65 and the shooting distance is 0.752 m. A Canon EOS 500D was used as the HD camera, with a lens focusing range of 0.35 m (1.1 ft) to ∞. To provide accurate surface colors of the 3D printed charts, an original white reference and a 2D color chart were used to calibrate our HD camera to within an accepted color difference range (ΔE*ab ≤ 2). Each raw image was first renamed manually and then horizontally calibrated in Adobe Photoshop. Subsequently, a sampling window of the same size was used to extract the key color areas in each image, and all color samples were split one by one. Finally, each of the other color 3D test charts was compared with sample ID 1, and the relative similarity map and MSSIM index were obtained with our MATLAB code.

2.3 Data Analysis

The relative color difference of each color sample is calculated by the CIE 1976 formula for the color bars mentioned above, as shown in Eq. (1). The SSIM index and MSSIM index of the tested images are calculated by Eqs. (2) and (3).

\Delta E^*_{ab} = \sqrt{(L^*_t - L^*_0)^2 + (a^*_t - a^*_0)^2 + (b^*_t - b^*_0)^2}   (1)

where ΔE*ab is the color difference; L*0, a*0, b*0 are the measured color data of each color bar on sample ID 1; and L*t, a*t, b*t are the corresponding data for the target samples, wherein t is a positive integer from 2 to 5.

The SSIM index is an image quality metric considering the structural features of tested images, and it is extended to an advanced image quality metric using local window statistics, such as the Mean-SSIM (MSSIM) [14].

SSIM(i, j) = \frac{(2\mu_i \mu_j + C_1)(2\sigma_{ij} + C_2)}{(\mu_i^2 + \mu_j^2 + C_1)(\sigma_i^2 + \sigma_j^2 + C_2)}   (2)

where μi and μj are the mean luminance intensities of signal i and signal j, respectively; σi and σj are the corresponding standard deviations; σij is the covariance of i and j; and Cn = (Kn·L)^2, where L is the dynamic range of the pixel values (255 for 8-bit grayscale images). In this study, K1 in C1 and K2 in C2 are 0.01 and 0.03.

MSSIM(I, J) = \frac{1}{M}\sum_{y=1}^{M} SSIM(i_y, j_y)   (3)

where I and J are the reference and the varied image, respectively; iy and jy are the structural contents at the y-th local window; and M is the total number of local windows used in the image. Finally, we generated a color RGB map to illustrate the difference map vividly, based on our MATLAB code.
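A minimal MATLAB sketch of Eqs. (2) and (3) with non-overlapping local windows is given below; the 8 × 8 window size is our choice, since the paper does not specify the window used in its MATLAB code:

function mssim = mssim_index(I, J)
% I, J: grayscale images in [0, 255]; K1, K2 and L as defined in the text
K1 = 0.01; K2 = 0.03; L = 255;
C1 = (K1*L)^2; C2 = (K2*L)^2;
win = 8;                                     % assumed local window size
I = double(I); J = double(J);
vals = [];
for r = 1:win:size(I,1)-win+1
    for c = 1:win:size(I,2)-win+1
        i = I(r:r+win-1, c:c+win-1); j = J(r:r+win-1, c:c+win-1);
        mi = mean(i(:)); mj = mean(j(:));       % mean luminance intensities
        si2 = var(i(:)); sj2 = var(j(:));       % variances
        cij = cov(i(:), j(:)); sij = cij(1,2);  % covariance of i and j
        vals(end+1) = ((2*mi*mj + C1)*(2*sij + C2)) / ...
                      ((mi^2 + mj^2 + C1)*(si2 + sj2 + C2));  % Eq. (2)
    end
end
mssim = mean(vals);                          % Eq. (3): mean over all local windows
end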
3 Results and Analysis

3.1 Whiteness Analysis of Underlying White Blocks on Different Samples

The whiteness of the six printed white blocks showed slight fluctuations from 91 to 93 (ISO brightness), as shown in Table 2. The whiteness difference between the white blocks within each color 3D test chart was imperceptible to the human eye. Meanwhile, compared to Chart 1 (sample ID 1), the whiteness of the other charts increased slightly with the number of printed layers, except for Chart 4. There were no significant numerical differences among the different printed thickness comparisons. Therefore, the effect of the added base and the increased layers on the upper colored stairs can be ignored in terms of whiteness.

Table 2 Whiteness values of six white blocks in five printed charts

      Chart 1  Chart 2  Chart 3  Chart 4  Chart 5
WS1   92.407   92.723   92.717   91.953   92.770
WS2   92.347   92.693   92.703   91.903   92.760
WS3   92.437   92.720   92.730   91.937   92.787
WS4   92.387   92.707   92.743   91.933   92.783
WS5   92.343   92.703   92.727   91.910   92.797
WS6   92.373   92.713   92.700   91.913   92.767
Fig. 4 Color difference between samples with n printed layers and sample with a single layer (Sample 1): a 2 layers; b 3 layers; c 4 layers; d 5 layers
3.2 Correlation Analysis of Color Difference and Colored Stair Thickness for Each Colored Bar

Figure 4 clearly shows the color differences between sample 1 and the other four samples. The colored stair thickness was determined by the specific stair number and the corresponding printed layers, as shown in Table 1. For each colored stair with 2 printed layers, except for stair 1, the color difference fluctuated below 2 for all other stairs. Moreover, except for stair 5, all other colored stairs showed an overall decreasing trend in each color sample with 3 printed layers. The color differences decrease as the number of stairs increases on sample 4 and sample 5. When the number of colored stairs is fixed, the more printed layers per colored stair, the bigger the color difference, which is more obvious in the color bars. In addition, there is no notable quantitative trend in the performance of the color bars and neutral color bars. For this purpose, the average values of the chromatic aberrations for each colored stair containing different numbers of printed layers were compared, as presented in Table 3. Except for stair 1 and stair 2, which have relatively larger chromatic aberrations, it is difficult for the human eye to quickly detect the chromatic aberrations of the other stairs. The color change showed small but frequent fluctuations between stair 5 and stair 6. The overall trend of the color difference of the color bars is more obvious than that of the neutral bars among all colored stairs. In summary, the relationship between stair number and average color difference became obscure with increasing stair number, and no linear relationship was found. Moreover, since the colored bars on the same stair have the same initial color values, small color variations of the paper-based color 3D printer are also detected by our layer-specific color 3D test charts.

Table 3 Average color difference of six stairs with 13 colored bars (unit: NBS)

       Stair 1  Stair 2  Stair 3  Stair 4  Stair 5  Stair 6
r      4.42     3.19     1.44     1.07     0.58     0.76
g      4.95     3.64     1.54     1.20     1.20     0.99
b      4.29     2.69     1.38     1.19     0.82     0.85
y      3.95     3.22     1.48     2.32     0.86     1.42
m      3.89     2.97     2.40     1.25     0.64     0.57
c      4.87     3.94     2.20     1.29     0.72     0.64
k      2.82     2.96     1.74     0.40     0.76     0.80
0.8k   3.17     3.30     1.48     0.85     0.77     0.78
0.6k   3.57     3.11     1.56     0.69     0.41     0.83
0.4k   4.67     3.78     3.19     0.40     0.94     0.72
0.2k   2.94     3.63     2.44     0.74     0.77     1.02
0.1k   4.09     3.80     2.74     0.64     0.42     0.57
w      3.78     3.26     2.78     0.41     0.44     0.77
The lower right corner of Fig. 5 provides the mean similarity values for the raw images of all charts, and its upper left corner shows the corresponding color similarity maps. The calculated MSSIM value of each acquired image decreased as the printed layers in a specific stair increased, without showing a linear correlation. Meanwhile, when comparing two adjacently numbered samples in Table 1, the trends for different printed layers did not grow equally. Comparing all color similarity maps revealed large variation between the colored stairs and the colored bars. The significant changes of each colored stair occurred at the junction of stair 2 and stair 3, and at the junction of stair 4 and stair 5. The color bars with large differences are Y and M, while the neutral bars are K and 60% K.
Fig. 5 Color SSIM maps and corresponding MSSIM values of acquired images from 3D printed samples
4 Conclusions

Based on our proposed unique color 3D test chart, the color reproduction ability of the paper-based 3D printer could be easily distinguished, but no clear quantitative trend was found with an increasing number of colored stairs. This study verified the effect of colored stair thickness on the color reproduction quality of color 3D printed samples. This factor cannot be ignored, although it is difficult to establish a significant
linear relationship from the current experimental data. The suitability of our proposed color 3D test chart for paper-based color 3D printing is good, based on two objective assessment metrics. The key contribution of this study is to provide a comprehensive assessment strategy for the color reproduction quality of paper-based color 3D printers, and to extend objective metrics for their online monitoring.

Acknowledgements. This work has been financially supported by the Natural Science Foundation of China (Grant No. 61973127), Guangdong Provincial Science and Technology Program (Grant No. 2017B090901064), and Chaozhou Science and Technology Program (Grant No. 2020ZX14).
References
1. Brunton A, Arikan CA, Urban P (2015) Pushing the limits of 3D color printing: error diffusion with translucent materials. ACM Trans Graph 35:1–3
2. Yuan JP, Yu ZH, Chen GX, Zhu M, Gao YF (2017) Large-size color models visualization under 3D paper-based printing. Rapid Prototyping J 23:911–918. https://doi.org/10.1108/RPJ-08-2015-0099
3. Yuan J, Tian J, Chen C, Chen G (2020) Experimental investigation of color reproduction quality of color 3D printing based on colored layer features. Molecules 25:2909
4. Color management in displays and 3D printing. Available online: http://www.color.org/events/taipei/0-ICC_3D_print_meeting.pdf, Last accessed 20 June 2020
5. He LX, Chen GX (2016) Effect of aqueous adhesives on color of paper surface in paper-based 3D printing. Packag Eng 37:153–156
6. Yuan JP, Yan XY, Wang XC, Chen GX (2017) Paper-based 3D printing industrialization for customized wine packaging applications. In: NIP & digital fabrication conference, pp 118–121
7. Yuan JP, Cai L, Wang XC, Chen GX (2019) Visualization of biomedical products based on paper-based color 3D printing. In: NIP & digital fabrication conference, pp 128–131
8. Sun PL, Sie YP (2016) Color uniformity improvement for an inkjet color 3D printing system. Elect Imaging 20:1–6
9. Sohaib A, Amano K, Xiao KD, Yates JM, Whitford C, Wuerger S (2018) Colour quality of facial prostheses in additive manufacturing. Int J Adv Manuf Technol 96:881–894. https://doi.org/10.1007/s00170-017-1480-x
10. Yuan JP, Zhu M, Xu BH, Chen GX (2018) Review on processes and color quality evaluation of color 3D printing. Rapid Prototyping J 24:409–415. https://doi.org/10.1108/RPJ-11-2016-0182
11. Wang XC, Chen C, Yuan JP, Chen GX (2020) Color reproduction accuracy promotion of 3D-printed surfaces based on microscopic image analysis. Int J Pattern Recognit Artif Intell 34:5593–5600
12. Fastowicz J, Okarma K (2018) Fast quality assessment of 3D printed surfaces based on structural similarity of image regions. In: Proceedings of the 2018 international interdisciplinary PhD workshop. Swinoujscie, Poland, pp 401–406
13. Fastowicz J, Okarma K (2019) Quality assessment of photographed 3D printed flat surfaces using Hough transform and histogram equalization. J Univers Comput Sci 25:701–717
14. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13:600–612
Ink Color Matching Method Based on 3D Gamut Visualization Xu Zhang1,2 and Maohai Lin1,2(B) 1 Key Laboratory of Green Printing and Packaging Materials and Technology in Universities of
Shandong, Qilu University of Technology, Jinan, China [email protected] 2 School of Light Industry Science and Engineering, Qilu University of Technology, Jinan, China
Abstract. The traditional computer color matching algorithm cannot give people an intuitive feeling, and the color matching process is difficult to understand. Aiming at these problems, an ink color prediction method and process were established. Firstly, the relative position of the ink color gamut in the CIEL*a*b* color space was determined using color gamut visualization technology. Then the position of the target color in the CIEL*a*b* color space and in the ink color gamut was found, and the ink ratio of the target color was obtained by matrix calculation using the spectral data of the base color bars. The experimental results show that the proposed method can meet actual printing color requirements and provides a new approach to computer color matching of inks. Keywords: Computer color matching · Gamut visualization
1 Introduction

Spot color ink is widely used in securities printing, packaging printing, map printing, anti-counterfeiting packaging and other printing operations due to its accuracy, opacity and color range [1]. In the past, spot ink color matching mainly adopted manual color matching, but the manual process takes a long time, has low accuracy and low repeatability, and easily wastes raw material. Computer color matching technology emerged to address these defects and is widely used because of its fast calculation and convenient operation. Most current computer color matching systems are based on the Kubelka–Munk (K–M for short) theory, in which the optical properties (absorption coefficient and scattering coefficient) and component proportions of colorants are described by a linear-sum mathematical relationship [2–4]. A large number of researchers have proposed various accurate models and algorithms based on the K–M theory, such as the Saunderson correction, recursive quadratic approximation and optimization algorithms, and mixed shaping approximation algorithms [5–8], and a large amount of color matching software has been developed based on these algorithms, such as X-Rite Color Premier 8400, Datacolor 650,
Datacolor match-delegation and Ink Formulation System. Apart from algorithms based on K–M theory and its corrections, nonlinear color models based on complex spectrum theory [9], neural networks, particle swarm optimization [10] and so on have also been proposed and studied. Although the color matching effect of these models is relatively good, their calculation processes are very complex and contain many undetermined parameters. A variety of color matching algorithms emerge endlessly, but they cannot give people intuitive cognition and require sensitivity to a variety of numbers. In this paper, color matching is based on 3D gamut visualization technology and is carried out through the relative positions, in color space, of the spot ink color and the colors of different ratios of the base inks, so that the color matching process becomes intuitive and effective.
2 Algorithm and Experimental Process

Based on 3D gamut visualization technology, this algorithm finds the gamut boundary of the ink gamut in CIEL*a*b* space, determines the desired target color in the space, and obtains its relative position by comparison with the color data of the basic ink concentrations. The basic-concentration ink closest to the target color in the space is found and its color data processed; the CIEL*a*b* value of the target color is then obtained by matrix operation, and finally the target color ratio is obtained by conversion. The matrix operation mainly represents the concentrations by unit vectors, and the formula is deduced from the calculated results (a rough illustrative sketch is given at the end of this section). Prior to color matching, ink strips are first made to obtain the boundary points of the color gamut of the ink to be used, to generate an accurate ink gamut and locate its position in the CIEL*a*b* color space. Then, the production of ink proportion splines provides basic data for the subsequent color matching.

2.1 Experimental Materials and Instruments

128 g coated paper; inks: yellow ink, magenta ink, cyan ink, black ink, and reducer (produced by DIC Corporation). CB NB600 color developing machine, eXact spectrophotometer (produced by X-Rite, US), ink quantifier (accuracy 0.001 mL), ink knife, etc.

2.2 Experimental Procedure

Preparatory work. Clean the color developing machine, ink quantifier and other equipment to ensure their cleanliness, so as to facilitate the smooth conduct of the experiment.

Ink strip production. For the strips used in the determination of the ink gamut, yellow ink, magenta ink and cyan ink should be mixed in proportion on the color developing machine; the total amount is 10 mL, the ink distribution time is 60 s, the ink distribution speed is 600 r/min, the color development pressure is 0 mm, and the color development speed is 10. When making the ink proportion splines,
base printing inks (yellow ink, magenta ink, cyan ink, black ink) and reducer are mixed in proportion (that is, at concentrations of 2, 4, 6, 8, 12, 16, 20, 24, 28, 32, 40, 45, 48, 50, 55, 64, 80 and 99%) to make ink proportion strips of different concentrations. These proportional concentrations were chosen to obtain points distributed as uniformly as possible in one direction in the CIEL*a*b* color space, and spaced as evenly as possible between their spectral reflectance curves.

Determine the experimental strips. The spectral data of the target color sample were measured with an X-Rite spectrophotometer and imported into the algorithm, which matched the ink ratio obtained through the gamut visualization technology, and splines were produced (Fig. 1).
Fig. 1 Base spline and target spline picture
Color data measurement. The X-Rite spectrophotometer was connected to a computer, and the X-Rite DataCatcher software was used to import the spectral data of the produced ink splines into an EXCEL sheet. The parameters used in this experiment are a D50 light source, a 2° angle of view, and measurement condition M1.

2.3 Notes for Experiment

When making the ink strips, in order to avoid the influence of other factors, it is necessary to maintain the consistency of each operation. When measuring color, it is necessary to select a clean ink area and take the average of multiple measurements to avoid errors.
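The matrix calculation sketched at the start of this section is not spelled out in detail; one plausible realization of deducing an ink ratio from the spectral data of the base color bars is a non-negative least-squares fit. In the following minimal MATLAB sketch, R_base, r_target and the linear-mixing assumption are ours, and the data are placeholders:

% Columns of R_base: measured spectral reflectance of each base-ink strip
% (e.g. 380-730 nm at 10 nm); r_target: measured spectrum of the target color
R_base = rand(36, 4);               % placeholder for four base-ink spectra
r_target = rand(36, 1);             % placeholder for the target spectrum
c = lsqnonneg(R_base, r_target);    % non-negative contribution of each ink
ratio = c ./ sum(c);                % normalize to a mixing ratio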
3 Results and Data Analysis

3.1 Data Analysis

In this experiment, the most important step is the production of the ink proportion splines, because they are the basis of the correct operation of the algorithm. In order to avoid errors in the experimental process, the ink proportion splines need to be tested, and data with serious deviations need to be corrected. In this experiment, the reflection spectrum method was used: the reflectance curves of ink proportion splines with different concentrations should be regularly distributed, and the lower the base concentration, the higher the reflectance. If the curves intersect, the sample should be revised. Taking the spectral reflectance curves of magenta ink with different concentrations as an example (Fig. 2), the curves are regularly distributed and do not cross, so no revision is needed.
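The consistency rule described above (curves of different concentrations should be regularly ordered and must not intersect) can be checked automatically; a minimal MATLAB sketch, where R is a wavelength-by-concentration matrix of measured reflectances with columns sorted by increasing concentration (the data here are placeholders):

R = sort(rand(36, 18), 2, 'descend');   % placeholder: lower concentration reflects more
viol = diff(R, 1, 2) > 0;               % a higher concentration reflecting more at any
if any(viol(:))                         % wavelength indicates crossing/irregularity
    disp('Curves intersect: revise the deviating spline(s).');
end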
Fig. 2 The spectral reflectance curve of magenta ink with different concentrations
3.2 Interpretation of Results

The spectral data of the target colors used in this experiment are as follows (Table 1).

Table 1 Target color spectral data

Sample number   L*      a*       b*
A               54.58   14.96    −27.28
B               68.89   −22.34   −26.41
C               55.68   −46.86    22.06
The formulas obtained after color matching through gamut visualization technology are as follows (Table 2).

Table 2 Ink formulations

Sample number   Yellow   Magenta   Cyan    Black   Reducer
A               0        0.162     0.124   0.036   9.678
B               0.011    0.003     0.222   0.018   9.746
C               1.88     0         1.353   0       6.767
After measurement, the spectral data and chromatic aberration of the actual colors obtained are as follows (Table 3).

Table 3 The spectral data and chromatic aberration of the actual colors

Sample number   L*      a*       b*       ΔE2000
A               53.43   11.8     −23.99   4.7
B               66.36   −21.9    −24.76   3.05
C               53.7    −44.35    21.11   3.34
Comparison between Tables 1 and 3 shows that, although the color difference obtained by this algorithm is still relatively large, it can meet actual printing requirements. In general, the closer the color formula is to the true ratio, the smaller the color difference between the obtained sample and the target sample. The color matching method and process based on gamut visualization technology proposed in this paper can be summarized as in Fig. 3.
Fig. 3 Ink color matching method and process
4 Conclusions

Based on 3D gamut visualization technology, an ink color matching prediction method was proposed and a prediction process was established. The experimental results show that the proposed method can meet the requirements of actual printing, makes the process intuitive, and can be updated during use to achieve better results.

Acknowledgements. This work was supported by the Shaanxi Key Laboratory of Printing and Packaging Engineering (Project Number: 2017KFKT-02).
References
1. Xing J, Guichun HU (2011) Establishment and application of ink database for computer color matching system. J Nanjing For Univ Nat Sci Edn 5:99–102
2. Davidson HR, Hemmendinger H (1966) Color prediction using the two-constant turbid-media theory. J Opt Soc Am 56(8):1102–1109
3. Berns RS, Mohammadi M (2010) Single-constant simplification of Kubelka-Munk turbid-media theory for paint systems—a review. Color Res Appl 32(3):201–207
4. Bandpay MG, Ameri F, Ansari K et al (2018) Mathematical and empirical evaluation of accuracy of the Kubelka-Munk model for color match prediction of opaque and translucent surface coatings. J Coat Technol Res 15(5):1117–1131
5. Yang H (2009) Law of color mixing in textile opaque media. Donghua University
6. Kuehni R (2001) Computer color formulation. Lexington Book, Lexington, MA
7. Walowit E, Mccarthy CJ, Berns RS (2010) Spectrophotometric color matching based on two-constant Kubelka-Munk theory. Color Res Appl 13(6):358–362
8. Ma T, Johnston WM, Koran A (1987) The color accuracy of the Kubelka-Munk theory for various colorants in maxillofacial prosthetic material. J Dent Res 66(9):1438–1444
9. Yechi P, Duoyi P, Lei Z et al (2009) Study of ink formulation technology based on complex frequency spectrum color theory. In: China printing and packaging study
10. Jian L, Wu-Feng S, Jin-Quan Y et al (2013) Particle swarm optimization based spot color matching algorithm. J Hangzhou Dianzi Univ 437:674–677
Study on Color Gamut of UV Ink-Jet Printing Based on Different Wood Substrates Yongqing Liu1,2 and Maohai Lin1,2(B) 1 Key Laboratory of Green Printing & Packaging Materials and Technology in Universities of
Shandong, Qilu University of Technology, Jinan, China [email protected] 2 School of Light Industry Science and Engineering, Qilu University of Technology, Jinan, China
Abstract. In order to study the reproduction of UV ink-jet printing on different wood substrates, standard color charts were printed with a UV ink-jet printer on a bare wood substrate, a wood substrate with white ink, and a wood substrate with white ink and varnish. The printed color patches were then measured with a spectrophotometer, and MATLAB programs were used to construct the color gamuts. According to the experimental data and the program output, the largest gamut is obtained on the wood substrate with white ink and varnish. The gamut on the wood substrate with white ink is smaller than the former, but significantly larger than the gamut on the untreated wood substrate. This facilitates the flexible selection of different substrates when printing reproductions with different requirements on wood. Keywords: UV Ink-jet Printing · Photometer · Color Gamut
1 Introduction

In recent years, UV ink-jet printing has shown extraordinary potential in printing technology worldwide. In addition to its environmental protection characteristics, UV ink-jet printing can be applied to a wide variety of printing materials [1, 2]. Color reproduction plays an important role in the quality of printed matter. On this basis, the characteristics of the UV end dryer and UV interlayer dryer are introduced. From a source gamut to a destination gamut, colors are transmitted through many media with different gamuts [3, 4]. For different wood substrates, because the color of the substrate itself differs, the reproduction effect of the printed image differs, and so does the color gamut of the reproduced image; UV ink-jet printing is no exception. In order to reduce the difference between the printed matter and the original, the substrate can be purposefully treated. The benefits of UV coatings in terms of image quality may include more opacity, color stability, darker and more vivid colors, clearer graphics, higher gloss and a uniform surface, giving the label a more vivid appearance [5]. In the printing industry, it is often necessary to compare two or more gamuts to determine how similar they are [6]. There are many algorithms to describe the boundary
characteristics of a gamut, such as the region segmentation algorithm, convex hull algorithm, Alpha Shapes algorithm and Delaunay algorithm [7]. The convex hull algorithm is used for gamut generation in this paper. The accuracy of the gamut boundary constructed by a gamut boundary description algorithm has a great influence on the gamut mapping algorithm, which determines the accuracy of color transfer replication to a certain extent. Since a gamut is essentially a set of discrete color points and has concave parts, it is difficult to find an excellent technique for evaluating gamut boundary algorithms. Up to now, there has been little research on the evaluation of gamut boundary description algorithms, and there is no uniform standard [8]. In this paper, the control variable method is adopted to explore only the size of the printing gamut with different wood substrates, without considering the influence of equipment, ink and other factors on the printing gamut.
2 Experiment

2.1 Printing Standard Color Charts with a UV Ink-Jet Printer

First, prepare the color chart file to be printed. For this UV ink-jet printing we chose the IT8.7-3 CMYK color chart, which has 1120 well-organized color patches: 56 patches per row and 20 patches per column. This color chart is representative as a research object. The indoor temperature is 20 °C. The UV ink-jet printer (Fig. 1) is used on the bare wood substrate, the white ink base and the wood substrate with white ink plus varnish. After printing, the printed color charts are put into a black carton to prevent dust and other external factors from affecting the measurement results. The spectrophotometer is then prepared to measure the Lab values, RGB values and other parameters of each color patch.
Fig. 1. UV ink-jet printer
Fig. 2. Spectrophotometer
2.2 Measurement of Printing Standard Color Patches by Spectrophotometer D50 light source is a kind of light source with slightly warm color. According to ISO3664:2000, D50 is the real standard color temperature for observing color. Therefore, D50 standard light source is selected for measurement. Figure 2 shows the spectrophotometer. Its model number is 033979 and it is made in USA. First, connect the spectrophotometer to the computer, open the X-Rite datcher software, click the link, options, select CIEL*a*b*, and click OK. Then turn on the power supply of the spectrophotometer, put the spectrophotometer on the chassis, select Lab mode, D50 light source, and the measurement angle is 2. First correct the instrument. Then start the measurement and input the measured data into the excel file. In order to avoid the influence of accidental factors, continuous measurement of color charts printed on three different wood substrates, the hand does not touch the color patch, so that the measurement result is more accurate. Figures 3, 4 and 5 shows the color cards of three different wood substrates.
3 Analysis of Measurement Data by MATLAB Software

3.1 Using MATLAB Software to Form the Color Gamut

For image replication workflows, it is useful to analyze an image gamut against a device gamut. The CIEL*a*b* values of all pixels in an image can be used to deduce the color gamut of the image. As a future study, a collection of images will be compared with standard color-encoding gamuts, for example the sRGB gamut [6]. The measured data are input into the MATLAB program, and the color gamuts of the three substrates are obtained by running the program. The convex hull algorithm is used for gamut generation. For convenience of reading and writing, the bare wood substrate is named Wood, the substrate with white ink is named White ink, and the substrate with white ink and varnish is named Varnish. The program's main operating instructions are "xlsread", "convexvisual", "labforecast" and "reshape". After running the MATLAB program, the color gamut diagrams are as follows.
Fig. 3. The color chart of wood substrate
Fig. 4. The color chart of wood substrate with white ink
Fig. 5. The color chart of wood substrate with white ink and Varnish
Fig. 6. Color gamut of wood substrate
Fig. 7. Color gamut of white ink substrate
Fig. 8. Color gamut of varnish

3.2 Calculating Gamut Volume by MATLAB Software

The program's main operating instructions are "xlsread" and "labforecast". From the program output we can clearly see that the color gamut formed by reproducing the color chart on the bare wood substrate is the smallest; the color gamuts in Figs. 7 and 8 do not differ much, but both are obviously larger than that in Fig. 6. Next, we use MATLAB software to calculate the volumes of the three color gamuts. The procedure is as follows:

clc
clear all
% Read the CMYK values and the measured CIELab data of the three charts
cmyk = xlsread('Wood.xlsx','RGB','C15:F1134');
lab_jidi = xlsread('Wood.xlsx','SP','AP1:AR1120');          % bare wood substrate
lab_1 = xlsread('White ink.xlsx','SP','CQ1:CS168');
lab_2 = xlsread('White ink.xlsx','SP','BO169:BQ169');
lab_3 = xlsread('White ink.xlsx','SP','AG170:AI1120');
lab_baidi = [lab_1; lab_2; lab_3];                          % white ink substrate
lab_guangyou = xlsread('Varnish.xlsx','SP','AP1:AR1120');   % white ink + varnish
% Convert CMYK percentages to nominal RGB values for display
rgb(:,1) = 255.*(100-cmyk(:,1)).*(100-cmyk(:,4))./10000;
rgb(:,2) = 255.*(100-cmyk(:,2)).*(100-cmyk(:,4))./10000;
rgb(:,3) = 255.*(100-cmyk(:,3)).*(100-cmyk(:,4))./10000;
% Gamut volume of each substrate via the custom dela_volume function
labforecast1 = lab_jidi;
[district1,points_information1,dela_volume_1] = dela_volume(labforecast1);
dela_volume_1
labforecast2 = lab_baidi;
[district2,points_information2,dela_volume_2] = dela_volume(labforecast2);
dela_volume_2
labforecast3 = lab_guangyou;
[district3,points_information3,dela_volume_3] = dela_volume(labforecast3);
dela_volume_3

The output of the program is as follows:

dela_volume_1 = 1.7617e+05
dela_volume_2 = 3.1555e+05
dela_volume_3 = 3.7157e+05
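Equivalently, a convex-hull gamut volume can be obtained directly with MATLAB's built-in convhulln instead of the custom dela_volume function; a minimal sketch reusing the worksheet range from the listing above:

lab = xlsread('Wood.xlsx', 'SP', 'AP1:AR1120');  % 1120 measured L*, a*, b* rows
[K, vol] = convhulln(lab);                       % hull facets and enclosed volume
fprintf('Convex-hull gamut volume: %.4e\n', vol);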
4 Analysis and Conclusions

Through subjective evaluation, we can see that the color gamut of UV ink-jet printing directly on the wood substrate is the poorest; it is easy to see that the color of the wood itself has a big impact on lighter colored areas. From the program output, the gamut of Fig. 6 is intuitively the smallest. By calculating the gamut volumes, the printable gamuts of the three bases are 176,170, 315,550 and 371,570, respectively. The gamut of the chart printed on the bare wood substrate is the smallest, mainly because the color of the wood substrate itself has a great impact on the lighter color patches. When white ink is added to the wood substrate, the influence of the base on the printed color patches becomes smaller, so the color gamut appears larger. After adding white ink and varnish to the wood substrate, the color gamut is the largest, which may be because the gloss coating preserves the color of the patches as much as possible. The gamut volume of the substrate with white ink is 1.79 times that of the bare wood substrate, so a layer of white ink can be added to the substrate to make UV printing more colorful. After adding varnish, the gamut volume is 2.1 times that of bare wood, greatly increasing the color richness. Varnish can not only increase the color gamut but also make patterns more three-dimensional. Therefore, when printing photos, art prints, graphics and text with an ink-jet printer, varnish can be added on top of the white ink base for a better printing effect. Due to the characteristics of wood itself, the reproduction effect of printing on paper is obviously better than that on wood, and color differences exist among the three prints in this experiment. How to reduce the chromatic aberration between prints is the next step to be studied.

Acknowledgements. This work was supported by the Shaanxi Key Laboratory of Green Printing and Packaging Engineering (Project Number: 2017KFKT-02).
References
1. Ibrahima NA, Khalilb HM, Eida BM (2015) A cleaner production of ultra-violet shielding wool prints. J Cleaner Prod 92:187–195
2. Das BR (2010) UV-radiation protective clothing. Open Text J 3:14–21
3. Catt D. Warming to UV printing. AP Australian Printer Magazine
4. Chen GX, Li XZ (2010) Study on digital original gamut based on image-dependent color gamut mapping in digital printing. China
5. Jurica D, Jesenka P, Igor M (2014) Influence of UV varnish pattern effect on print quality. J Imaging Sci Technol
6. Deshpande K, Green P, Pointer MR (2015) Metrics for comparing and analyzing two colour gamuts. Color Res Appl 40(5):465–471
7. Chen HS, Yuan JP, Fu WT et al (2018) Applicability evaluation of proposed new gamut comparison metrics based on MATLAB functions. J Packag 10(6):81–88
8. Lin M, Guangyuan WU, Zheng Y, Li Z, Li W (2019) Color science and technology. China Light Industry Press, Beijing, pp 140–141
Preparation of Photonic Crystals by Rapid Coating Method Jie Pan, Bin Yang, Yonghui Xi, Min Huang, and Xiu Li(B) School of Printing and Packaging Engineering, Beijing Institute of Graphic Communication, Beijing, China [email protected]
Abstract. In order to realize the rapid preparation of large-area photonic crystals on the paper surface, this paper combines the coating method with the self-assembly technology of colloidal microspheres to prepare photonic crystals on paper. At the same time, assembly conditions such as the concentrations of colloidal microspheres and black dye are optimized, so that the colloidal microspheres are regularly arranged on the paper substrate and the color rendering performance of the photonic crystal is improved. Digital cameras and a scanning electron microscope were used to record the color appearance and micro-morphology of the samples, and an X-Rite MA68II multi-angle spectrophotometer was used to measure the reflection spectra. The rapid coating method overcomes the long preparation cycle of the traditional approach. Keywords: Structural color · Photonic crystal · Coating method · Self-assembly
1 Introduction

A photonic crystal is an optical material composed of materials with different refractive indexes arranged in an alternating order in three-dimensional space [1, 2]. Many bright colors in nature are actually presented in the form of photonic crystals [3–5]; for example, the opals produced in Australia show bright colors [6]. The general method of artificially preparing photonic crystals is to periodically arrange one material in another medium with a different dielectric constant [7, 8]. There are mainly top-down physical processing methods and bottom-up self-assembly methods. The cost of physical processing methods is high [9, 10]. The self-assembly method has the advantages of easy repeatability, low cost and large-area preparation [11]. However, the assembly growth of colloidal crystals inevitably produces polycrystalline regions and structural defects such as dislocations and structural collapse, and the preparation period is long [12, 13]. In order to make the photonic crystal structural color widely applicable in the printing field, coating technology and self-assembly technology are combined to prepare photonic crystals in this paper. To obtain ordered photonic crystals on the surface of the paper substrate with a simple and fast technique, the self-assembly conditions such as the concentration of the colloidal microsphere solution and the black dye were optimized in our experiment.
2 Experimental Section

In order to improve the adhesion and the brightness of the photonic crystal, a mixed dispersion of colloidal microspheres was prepared by adding aqueous varnish and black dye to the SiO2 solution. First, deionized water was added to the 280 nm diameter SiO2 microsphere solution to adjust the concentration. The black dye and the aqueous varnish with a volume fraction of 0.3% were added to the colloidal microsphere dispersion, followed by ultrasonic dispersion for 2 min to obtain a mixed dispersion. Then black laser marking paper was cut into 3.5 cm × 4.5 cm pieces, and the substrate was placed in a low-temperature plasma surface treatment device for 5 min with a gas flow rate of 80 SCCM and 200 W power. Using a 10# coating film rod, the mixed dispersion was evenly coated on the surface of the substrate, and then placed on a 55 °C heating table and dried to obtain photonic crystals. In this experiment, 5 different samples were prepared, numbered 1#, 2#, 3#, 4# and 5#. The parameters of the prepared samples are shown in Table 1.

Table 1 Specific parameters of the samples

Sample   Volume fraction of SiO2 (%)   Volume fraction of dye (%)
1#       10                            9
2#       15                            9
3#       20                            9
4#       20                            6
5#       20                            17
3 Results and Discussion

3.1 Surface Morphology of Photonic Crystals

Photos of the above five samples are shown in Fig. 1. The first row shows samples 1#–5# under indoor light illumination; their surfaces are purple-red. The second row was photographed under a D65 light source and 45/0 viewing conditions in a GretagMacbeth Judge II standard light source observation box, where an obvious red reflection can be observed on the samples. To further observe the microstructure of the samples, the surface morphology of the structural color film was characterized by SEM, as shown in Fig. 2. It can be seen that the photonic crystal surface is uniform. In some areas, the SiO2 microspheres are closely arranged and form a hexagonal arrangement corresponding to the FCC {111} plane. Some areas also present a body-centered cubic arrangement of SiO2 microspheres. At the same time, there are defects in some areas, such as vacancies and dislocations, which are arranged in a disorderly manner.
Fig. 1. Photograph under different lighting conditions
Fig. 2. SEM photos
3.2 Effect of SiO2 Microsphere Concentration on Structural Color

The 1#, 2# and 3# samples are photonic crystals assembled from 280 nm SiO2 microspheres. The concentration of black dye is 9%, and the concentrations of SiO2 microspheres are 10%, 15% and 20%, respectively. Comparing the photos of the samples in Fig. 1, it can be found that the three samples are purple under indoor ambient light, and a bright red can be observed under the D65 light source. The surface of sample 1# is uniform, but the color is dark with low brightness; the edge of sample 2# appears a little white, and the brightness is higher; the brightness of sample 3# is higher than that of
samples 1# and 2#, but the surface of the film is not uniform, and a lot of white can also be observed. Using the MA68II multi-angle spectrophotometer with the whiteboard as the reference, the reflection spectra of the samples were measured at 45° incidence in the 15° direction, as shown in Fig. 3. The main reflection peak positions of samples 1#, 2# and 3# are at 600, 590 and 570 nm, respectively, and the reflection peaks are 12.0%, 15.5% and 24.8%. The half-width of the spectrum of sample 3# is the narrowest. The measured main wavelengths of the samples differ from the theoretical value, but as the concentration of the microspheres increases from 10 to 20%, the main wavelength comes closer to 564 nm (the theoretical value). This is because the theoretical value calculated according to the Bragg diffraction equation strictly assumes the FCC structure, while the photonic crystal prepared by the coating method is not completely a face-centered cubic structure, resulting in a difference between the measured value and the theoretical value. It is also seen that the higher the concentration of microspheres, the higher the spectral reflectance and the narrower the half-height width of the spectral curve. The reason is that when the concentration of microspheres is low, the photonic crystals formed on the surface of the substrate are discontinuous and cannot completely cover the black substrate. When the concentration of microspheres is high, the film is continuous and can completely cover the black substrate. At the same time, the increase in concentration is conducive to the formation of a compact face-centered cubic structure, resulting in a higher degree of ordering of the periodic structure, so the spectral reflectance is higher and the half-height width is narrower.
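For reference, the theoretical reflection wavelength mentioned above is usually estimated from the Bragg–Snell relation for the FCC {111} planes; these are standard colloidal-crystal formulas, which the paper itself does not write out:

\lambda_{\max} = 2\, d_{111} \sqrt{n_{\mathrm{eff}}^{2} - \sin^{2}\theta}, \qquad
d_{111} = \sqrt{2/3}\; D, \qquad
n_{\mathrm{eff}}^{2} = n_{\mathrm{SiO_2}}^{2} f + n_{\mathrm{air}}^{2} (1 - f)

where D is the microsphere diameter, θ is the angle of incidence, and f is the volume fraction of spheres (0.74 for ideal close packing).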
Fig. 3. Reflectance spectra of 1#, 2#, and 3# samples
3.3 Effect of Black Dye Concentration on Structural Color

In this paper, an appropriate amount of black dye is added to the colloidal microsphere dispersion to absorb stray light, thereby improving the color rendering properties of the
photonic crystal. In order to investigate the effect of the concentration of black dye on the structural color, samples 3#, 4# and 5# were prepared. The concentration of SiO2 microspheres is 20%, and the concentrations of black dye in samples 3#, 4# and 5# are 9%, 6% and 17%, respectively. Using the same measurement conditions as in Sect. 3.2, the measured reflectance curves of the samples are shown in Fig. 4. The reflection peaks of samples 3#, 4# and 5# are at 570, 580 and 580 nm, respectively, and the reflection peaks are 24.8%, 19.5% and 22.1%. It can be seen that when the black dye concentration increases from 6 to 9%, the reflection peak increases by 5.3%, and when the black dye concentration increases from 9 to 17%, the reflection peak decreases by 2.7%. This is because when the added black dye is excessive, the dye molecules gather between the SiO2 microspheres, blocking the assembly of some SiO2 microspheres and causing defects, so the spectral reflectance decreases. It can be seen that the brightness of the structural color can be improved by adding a black dye; when the concentration of the black dye is 9%, the brightness of the photonic crystal is most effectively improved.

Fig. 4. Reflectance spectra of 3#, 4#, and 5# samples
4 Conclusions In this paper, the coating method was used to rapidly prepare SiO2 photonic crystals on a paper surface by controlling assembly conditions such as the SiO2 microsphere concentration and the black dye concentration. As the concentration of SiO2 microspheres increases, the reflection peak increases and the half-width of the spectral curve becomes narrower. When the concentration is greater than 10%, a white component appears on the surface of the structural color film. When the concentration of the black dye is 9%, the color rendering performance of the photonic crystal is improved most effectively. This method enables rapid preparation of photonic crystals on paper, which benefits the wide application of structural colors in the printing field.
Acknowledgements. This work was supported by the National Science Foundation of China (No. 61805018), BIGC Project (Ec202003), General program of science and technology development project of Beijing Municipal Commission (KM20190015008), and Practical Training Plan for High Level Talents in Beijing Universities (03150120001/062).
References
1. Yablonovitch E (1987) Inhibited spontaneous emission in solid-state physics and electronics. Phys Rev Lett 58(20):2059–2062
2. John S (1987) Strong localization of photons in certain disordered dielectric superlattices. Phys Rev Lett 58(23):2486–2489
3. Kinoshita S, Yoshioka S (2005) Structural colors in nature: the role of regularity and irregularity in the structure. ChemPhysChem 6(8):1442–1459
4. Xia Y (2001) Photonic crystals. Adv Mater 13(6):369
5. Haibo H, Wenjie G, Rui Z (2020) Direct growth of vertically orientated nanocavity arrays for plasmonic color generation. Adv Funct Mater 2002287. https://doi.org/10.1002/adfm.202002287
6. Xia GH, Jun FJ, Peng ZX (2003) Research progress on photonic crystals and the ways of preparation. J Funct Mater 34(1):5–8
7. Ximin C, Xiaolong Zh, Lei Sh et al (2020) Plasmonic color laser printing inside transparent gold nanodisk-embedded poly(dimethylsiloxane) matrices. Adv Opt Mater 8:1901605
8. Suli W, Baoting H, Yue W et al (2020) Reflection and transmission two-way structural colors. Nanoscale 12:11460–11467
9. Ghiradella H (1991) Light and color on the wing: structural colors in butterflies and moths. Appl Opt 30(24):3492–3500
10. Wu Z, Lee D, Rubner M et al (2007) Structural color in porous, superhydrophilic, and self-cleaning SiO2/TiO2 Bragg stacks. Small 3(9):1445–1451
11. Joannopoulos JD, Villeneuve PR, Fan S (1997) Photonic crystals: putting a new twist on light. Nature 386(6621):143–149
12. Campbell M, Sharp DN, Harrison MT et al (2000) Photonic crystals for the visible spectrum by holographic lithography. Nature 404(6773):53–56
13. Hu HB, Chen QW, Li R et al (2013) Exploration of the structures of the magnetically induced self-assembly photonic crystals in a solidified polymer matrix. Adv Mater Res 634–638:2324–2331
Image Processing Technology
Study on Image Enhancement Method for Low Illumination Target Recognition Hongli Liu1(B) , Jing Cao2 , Xueying Wang1 , Anning Yang1 , and Qiang Wang1 1 School of Media and Design, Hangzhou Dianzi University, Hangzhou, Zhejiang, China
[email protected] 2 National Taiwan University of Arts, Taiwan, China
Abstract. Low-illumination target recognition is a popular topic in computer vision. This study investigated the low accuracy of target recognition caused by image aliasing and noise in low-illumination environments. In this paper, we first analyzed the imaging mechanism. Then, a national-standard image data collection environment and a standard collection procedure were set up. Based on this, we extracted the characteristics of low-illumination images and determined the relationship between the imaging environment, sample collection and Mask R-CNN target recognition. Finally, we proposed an image enhancement algorithm for low-illumination target recognition which improved the robustness and recognition accuracy significantly. This algorithm also extended the application context of target recognition. Keywords: Low-illumination · Target recognition · Image enhancement · Mask R-CNN
1 Introduction With the widening application of machine vision in social and economic development, scholars specializing in image enhancement and target recognition at home and abroad are paying more and more attention to improved methods that use image enhancement and its algorithm models to deliver more accurate target recognition results, especially in complex light environments and low-illumination environments. In this regard, this topic is becoming a focus of computer vision study, and a full analysis of it will provide critical reference and worthy understanding for future study [1]. Currently, image enhancement for low-illumination target recognition at home and abroad is mainly aimed at images collected in special weather scenes such as rainy and foggy days and those collected at night. Nevertheless, the correlation between target recognition and image enhancement has not been clearly demonstrated. Based on an analysis of how the characteristics of the light source influence imaging, this paper established the correlation model of light source characteristics, target recognition and image enhancement, and the algorithm model of image enhancement and target recognition, especially the mechanism by which low illumination leads to poor target
recognition effects and low detection accuracy. Besides, the image quality was improved by the image enhancement algorithm, promising to improve the effectiveness and precision of target recognition.
2 Related Technical Principles 2.1 Low Illumination Image Enhancement Principle Compared with normal-illumination imaging, low-illumination imaging is often affected by the characteristics of the light source and the imaging environment, resulting in insufficient image contrast, low accuracy and other problems. Therefore, an image enhancement algorithm based on Retinex theory was employed in this paper to enhance low-illumination images. The enhancement principle can be expressed by Formula (1):

$$p' = g(p, k) \tag{1}$$

In Formula (1), $p'$ is the enhanced image, $p$ is the input image, $g$ is the brightness transform function, and $k$ is the expected exposure-ratio map for each pixel. The low-light image enhancement problem was then further reduced to the estimation of the brightness transform function $g$ and the matrix $k$. Once the BTF model estimation and CRF model estimation of the camera response model [2] were determined, the brightness transform function can be expressed as Formula (2):

$$g(p_0, k^*) = p^* = \beta^*\, p_0^{\gamma^*} = e^{\,b\left(1-(k^*)^a\right)}\, p_0^{(k^*)^a} \tag{2}$$

At the same time, constrained by the expected exposure ratio $k$, the final enhancement formula for each pixel is given by Formula (3):

$$p'_c(x) = e^{\,b\left(1-k(x)^a\right)}\, p_c(x)^{k(x)^a} \tag{3}$$
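As a concrete illustration of Formula (3), the following Python sketch applies the camera-response-model enhancement pixel by pixel. The parameter values a and b are the ones commonly fitted for this class of camera response model in the literature, not values reported by this paper, and the exposure-ratio map k is assumed to be given.

```python
import numpy as np

def enhance(p, k, a=-0.3293, b=1.1258):
    """Per-pixel low-light enhancement following Formula (3):
    p'_c(x) = e^{b(1 - k(x)^a)} * p_c(x)^{k(x)^a}.
    p : H x W x 3 image with values in [0, 1]
    k : H x W expected exposure-ratio map (k >= 1 brightens)
    a, b : camera response parameters (illustrative defaults)."""
    k = np.maximum(k, 1.0)[..., None]        # broadcast k over the color channels
    gamma = k ** a                           # per-pixel exponent k^a
    beta = np.exp(b * (1.0 - gamma))         # per-pixel gain e^{b(1 - k^a)}
    return np.clip(beta * np.power(p, gamma), 0.0, 1.0)
```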
3 Mask R-CNN Algorithm Mask R-CNN is a convolutional neural network proposed on the basis of the Faster R-CNN architecture [3]. The overall framework of the model is shown in Fig. 1. Mask R-CNN detects, recognizes or predicts image targets by adding a prediction branch. During training, Mask R-CNN defines a multi-task loss on each sampled RoI, namely:

$$L = L_{cls} + L_{box} + L_{mask} \tag{4}$$
where $L_{cls}$ and $L_{box}$ are the classification and bounding-box regression loss functions, respectively [4]. For each target recognition area, the mask branch produces one output per category, i.e., a mask for each of the $K$ categories. Mask R-CNN compares the masks pixel by pixel and uses the average binary cross-entropy to define the loss. In particular, only the mask of the correct category contributes to the loss; the other categories do not affect the loss function. $L_{mask}$ is the loss function of the segmentation task, in which the mask branch generates an $m \times m$ mask for each of the $K$ categories, so the final output has dimension $Km^2$.
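The per-category mask loss described above can be sketched as follows; this is a minimal illustration of the idea (only the ground-truth category's mask enters the binary cross-entropy), not the authors' training code, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def mask_loss(mask_logits, gt_masks, gt_classes):
    """L_mask sketch: average binary cross-entropy over the m x m mask
    predicted for each RoI's ground-truth class only.
    mask_logits : (N, K, m, m) raw per-class mask predictions
    gt_masks    : (N, m, m)    binary ground-truth masks
    gt_classes  : (N,)         ground-truth class index per RoI"""
    idx = torch.arange(mask_logits.size(0))
    selected = mask_logits[idx, gt_classes]   # (N, m, m): mask of the correct class
    return F.binary_cross_entropy_with_logits(selected, gt_masks.float())
```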
Fig. 1 Mask R-CNN model structure diagram
4 Image Enhancement for Low-Light Target Recognition 4.1 Construction of Low-Light Target Recognition Image Acquisition Scene In the present study, under IEEE-certified measurement conditions and the CPIQ standard, a low-illumination imaging environment and a digital acquisition platform for image samples were built (as shown in Fig. 2).
Fig. 2 Image sample digital acquisition platform
The IEC 62676-5:2018 standard was followed for the digital acquisition of the image samples. With the other imaging environment parameters held constant, the color temperature and brightness of the light source were adjusted [5]. The light source color temperatures selected in the experiment were D65, D50, CWF and INCA, and the illuminance levels were 550 lx, simulating a normal lighting environment, plus seven low-illumination levels of 100 lx, 80 lx, 60 lx, 40 lx, 30 lx, 20 lx and 10 lx. In the whole experiment, 4 groups of pictures with different color temperatures and 8 groups of pictures with different illuminances were used for target recognition to determine how color temperature and illuminance influence the accuracy of image recognition. The mainstream Mask R-CNN algorithm was
utilized as the image recognition algorithm. Furthermore, the latest MS COCO training parameters were applied as the initial network parameters [6, 7]. 4.2 Analysis of Influencing Factors of Low-Light Target Recognition Images The Effect of Color Temperature on Target Recognition Accuracy. Figure 3 compares image samples under D50, D65, CWF and INCA light sources, i.e., different color temperatures at the same light intensity (550 lx).
Fig. 3 Comparison of image samples at 550 lx illumination and different color temperatures: a D50, b D65, c CWF, d INCA
The Mask R-CNN algorithm was used to identify the above samples with the same light intensity but different color temperatures; the comparison shows the effect of color temperature on recognition accuracy at the same light intensity, as presented in Fig. 5. According to the data analysis of Fig. 5, when the light intensity fell below 30 lx, the recognition accuracy decreased suddenly, while with the light intensity held constant, the recognition rate differed little between color temperatures. The Effect of Light Intensity on Target Recognition Accuracy. Figure 4 shows the image comparison at the D65 color temperature under different illuminances (550 lx, 100 lx, 80 lx, 60 lx, 40 lx, 30 lx, 20 lx, 10 lx).
Fig. 4 Comparison of image samples with different illuminances at D65 color temperature
The experiment used the Mask R-CNN algorithm for image target recognition. The statistical results for image target recognition accuracy across the 4 color temperatures and the different illumination levels are shown in Fig. 5.
Fig. 5 Accuracy of target recognition with different color temperatures with the same light intensity
From the above experiments and data analysis, it can be seen that under different light intensities, the color temperature had little effect on the target recognition accuracy, but as the light intensity decreased, the recognition accuracy gradually became lower. When the illuminance was less than 30 lx, the target recognition accuracy showed its sharpest drop. Therefore, image enhancement is required when the ambient illuminance is less than 30 lx. 4.3 Design and Implementation of an Image Enhancement Algorithm for Low Illumination Target Recognition Low Illumination Image Enhancement Algorithm Based on Color Temperature Transformation. The experiments showed that color temperature had little effect on the accuracy of target recognition, but when the illuminance was lower than 30 lx, the target recognition accuracy was greatly reduced. The image enhancement algorithm based on Retinex theory was used to improve the images; the target recognition results after enhancement are shown in Fig. 6. The result graphs show that after image enhancement of the sample images taken under different color temperatures at illuminances below 30 lx, the samples were obviously improved in visual contrast. Low Illumination Image Enhancement Algorithm Based on Brightness Transformation. Figure 6 shows that after image enhancement of the low-illumination sample images (illuminance below 30 lx), the target recognition accuracy of the sample images was significantly improved; the proportion of objects recognized in each image is shown in Table 1. The table shows that, in environments with the same color temperature and different illuminances, the recognition accuracy of the samples improved after image enhancement.
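As one possible reading of the Retinex-based enhancement step, the sketch below implements single-scale Retinex; the paper does not specify which Retinex variant or scale it uses, so the sigma value and the channel-wise stretch are illustrative assumptions.

```python
import cv2
import numpy as np

def single_scale_retinex(img, sigma=80):
    """Single-scale Retinex: reflectance = log(image) - log(illumination),
    with the illumination estimated by a large Gaussian blur.
    img: uint8 BGR image; returns a uint8 enhanced image."""
    f = img.astype(np.float64) + 1.0                    # avoid log(0)
    illumination = cv2.GaussianBlur(f, (0, 0), sigma)   # smooth illumination estimate
    r = np.log(f) - np.log(illumination)                # reflectance estimate
    out = np.zeros_like(r)
    for c in range(r.shape[2]):                         # stretch each channel to 0..255
        ch = r[..., c]
        out[..., c] = 255.0 * (ch - ch.min()) / (ch.max() - ch.min() + 1e-8)
    return out.astype(np.uint8)
```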
Fig. 6 Comparison of recognition effect before and after image enhancement in low light. Rows A–D: D50, D65, CWF and INCA light sources at 20 lx; columns 1–4: original image, recognition result before enhancement, enhanced image, recognition result after enhancement
5 Conclusion This paper addressed the low image recognition accuracy, weak brightness adaptation and other constraints and impacts present in current low-illumination environments. By systematically studying the accuracy relationship between the imaging environment and the image recognition model, an image enhancement algorithm was established that addresses the low accuracy of the target recognition model. The improved algorithm effectively alleviates the low recognition accuracy of low-illumination images, optimizes the target recognition algorithm model, and provides a reasonable solution for low-illumination target recognition.
Table 1 Comparison of recognition effects of sample images with different illuminances under the D65 light source after enhancement

Type of light source | Exposure time | Aperture size | Sensitivity | Illumination (lx) | Proportion of sample recognition accuracy (%)
D65 | 1/4 s | F/5.0 | 100 | 550 | 94.73
D65 | 1/4 s | F/5.0 | 100 | 100 | 94.67
D65 | 1/4 s | F/5.0 | 100 | 80 | 92.24
D65 | 1/4 s | F/5.0 | 100 | 60 | 91.05
D65 | 1/4 s | F/5.0 | 100 | 40 | 88.96
D65 | 1/4 s | F/5.0 | 100 | 30 | 86.95
D65 | 1/4 s | F/5.0 | 100 | 20 | 85.46
D65 | 1/4 s | F/5.0 | 100 | 10 | 85.86
Acknowledgement. This work is funded by the Digital Imaging Theory project (GK188800299016-054).
References
1. Chi Z, Fei L, Guangqi H et al (2016) Light field imaging technology and its application in computer vision
2. Dong C, Loy CC, He K et al (2015) Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell 38(2):295–307
3. He K, Gkioxari G, Dollár P et al (2017) Mask R-CNN. In: Proceedings of the IEEE international conference on computer vision, pp 2961–2969
4. Sommer L, Schumann A, Schuchert T et al (2018) Multi feature deconvolutional faster R-CNN for precise vehicle detection in aerial imagery. In: 2018 IEEE winter conference on applications of computer vision (WACV), pp 635–642
5. Huang J, Rathod V, Sun C et al (2017) Speed/accuracy trade-offs for modern convolutional object detectors. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 7310–7311
6. Shrivastava A, Sukthankar R, Malik J et al (2016) Beyond skip connections: top-down modulation for target recognition. arXiv preprint arXiv:1612.06851
7. He K, Zhang X, Ren S et al (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770–778
Analysis of the Influence of Color Space Selection on Color Transfer Wenqian Yu, Liqin Cao(B) , Zhijiang Li, and Shengqing Xia School of Printing and Packaging, Wuhan University, Wuhan, China [email protected]
Abstract. With the development of deep learning technology, automated grayscale image colorization methods have been widely used. As the color space is very important in colorization, we analyzed three different color spaces, namely YUV, Lab, and HSV. We used deep learning methods to evaluate the effect of different color spaces on image colorization. The colorization method was based on VGG16 and the Residual Encoder model. The results demonstrate that the quality of color transfer in the YUV and Lab color spaces is better than that in the HSV color space. Keywords: Colorization · Color space · Grayscale image · Deep learning method
1 Introduction Research on human visual characteristics shows that the perception of color is significantly richer than that of grayscale. Grayscale image colorization is applied in many fields, such as image processing in the medical, industrial, and movie fields (e.g., colorization of medical imaging and X-ray scanning). Therefore, research on grayscale image colorization has important theoretical and practical application value. Existing image colorization methods can be divided into three categories: methods based on color graffiti materials, methods based on example images, and deep learning methods. The first type attempts to propagate colors from the color graffiti provided by artists [1, 2]. The second type is essentially an example-based algorithm, which attempts to transfer color information from a reference image to the target grayscale image [3]. The third is a coloring method based on neural networks. So far, almost all traditional colorization techniques need a corresponding source color image, and the similarity between the source image and the target image has a great influence on the quality of the results [4]. Therefore, obtaining reasonable coloring depends on whether an appropriate source color image can be selected for each given grayscale image. The colorization method of deep learning addresses this traditional limitation and reduces manual workload, making it a highly efficient, mature and stable colorization approach. Its disadvantage is that although the predicted colors are reasonable in most cases, the system is inclined to produce low-saturation or blurred images. Deep learning
methods for colorization have been used and verified on a large scale, but the use of multiple color spaces and their influence on the color transfer effect is rarely discussed. This paper uses a fully automated method based on a CNN to colorize grayscale images in the YUV, Lab, and HSV color spaces and analyzes the influence of the different color spaces on the colorization results. During colorization, the brightness information of the grayscale image is maintained while hue and saturation information is borrowed from color images. When coloring grayscale images with the CNN method, judging the colorization effect not only depends on the error given by the loss function but also considers the plausibility of the colorization result without comparison against the original image. The colorization goal is not necessarily to completely restore the actual color but to produce reasonable coloring that is acceptable to the observer. In our work, the model adopted in the experiment includes a pre-trained VGG16 CNN and a Residual Encoder. In the experiments, we chose three kinds of images from ImageNet, and the colorization results demonstrate that the algorithm is relatively mature and can be used successfully in all three color spaces. In order to make the comparison referential and scientific, the method of controlling variables is used: the same model for training, the same loss function to calculate the gradient, and the values after color space conversion normalized to the same range. The remainder of this paper is organized as follows. Section 2 describes the related color spaces and colorization techniques in detail. Experimental results and analysis are shown in Sect. 3. Conclusions are given in Sect. 4.
2 Color Spaces and Colorization Methods 2.1 Color Space Transformation The main color spaces used in this article are RGB, Lab, YUV, and HSV. Rather than reconstructing the entire RGB image, the neural network model generates two color channels and superimposes them on the original brightness channel to generate a color image. RGB. In the RGB color space, the required colors are obtained by mixing the three primary colors (red, green, and blue) in certain proportions. The value range of each of the three RGB channels is [0, 255]. Lab. In the Lab color space, L represents luminance, and the ab values carry the chrominance information. Here, a represents the position between magenta and green, and b represents the position between yellow and blue. YUV. In the YUV color space, Y represents brightness, and U and V represent hue and saturation. Generally speaking, the value ranges of the three channels are Y ∈ [0, 255], U ∈ [−112, +112], V ∈ [−157, +157]. HSV. The color channels in this model are H (Hue), S (Saturation), and V (Value). Hue is measured as an angle with a value range of 0°–360°: red is 0°, green is 120°, and blue is 240°. The value of S ranges from 0 to 100%; the larger the value, the more saturated the color. The range of brightness V is also between 0 and 100%.
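The luminance/chrominance split described above can be sketched as follows. OpenCV is used here only as one possible backend; note that its 8-bit conventions differ from the nominal ranges listed in the text (Lab a/b are offset by 128, and HSV hue is stored as 0-179), so the ranges in this sketch are OpenCV's, not the paper's.

```python
import cv2
import numpy as np

CODES = {"Lab": (cv2.COLOR_RGB2LAB, cv2.COLOR_LAB2RGB),
         "YUV": (cv2.COLOR_RGB2YUV, cv2.COLOR_YUV2RGB),
         "HSV": (cv2.COLOR_RGB2HSV, cv2.COLOR_HSV2RGB)}

def split_luma_chroma(rgb, space="Lab"):
    """Split an RGB uint8 image into its luminance channel and the two
    chroma channels of the chosen working space."""
    conv = cv2.cvtColor(rgb, CODES[space][0])
    if space == "HSV":                       # V (index 2) plays the luminance role
        return conv[..., 2], conv[..., :2]
    return conv[..., 0], conv[..., 1:]       # L or Y comes first

def merge_luma_chroma(luma, chroma, space="Lab"):
    """Recombine the kept luminance with (predicted) chroma, back to RGB."""
    if space == "HSV":
        img = np.dstack([chroma, luma])
    else:
        img = np.dstack([luma, chroma])
    return cv2.cvtColor(img.astype(np.uint8), CODES[space][1])
```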
2.2 Grayscale Image Colorization Method Based on CNN In this paper, the grayscale image colorization method is based on the pre-trained VGG16 model, and the data of the five convolutional layers of VGG16 are combined with the Residual Encoder model to obtain the colorization results. The specific process of the model is shown in Fig. 1. The input of the whole model is a 224 × 224 × 3 RGB image of fixed size. Since the original image is gray-scaled, the size of the single-channel grayscale image is 224 × 224 × 1. In order to meet the VGG16 input requirement, the grayscale channel is replicated three times during preprocessing, and the result is used as input to the model. The five convolutional layers of the VGG16 model and the Residual Encoder model are cross-trained to obtain the output color transfer results. In Fig. 1, Yxy represents the color space, where Y is the luminance channel and xy carries the chrominance information.
Fig. 1 VGG16-Residual Encoder colorization flowchart
Structure of the VGG16 Convolutional Neural Network. A complete convolutional neural network usually includes five types of layers: input layer, convolutional layer, rectified linear unit layer, pooling layer and fully connected layer. The general architecture of VGG16 is [input-conv-relu-pool-fc]. The size of the input layer is 224 × 224 × 3, which means the width and height of the image are 224 and it has three color channels (RGB). The convolutional layer is the core part of the entire neural network and is responsible for most of the computation.
For the specific convolution kernels of VGG16, the kernel size of the first layer is 3 × 3 × 3 and the number of kernels is 64. In the VGG16 structure, a pooling layer is inserted between consecutive convolutional layers. It performs a down-sampling operation on the image along the spatial dimensions (changing width and height but not depth), thereby reducing the size [5]. ReLU (Rectified Linear Unit) is the activation function, given by Formula (1):

$$f(x) = \max(0, x) \tag{1}$$
Colorization of Grayscale Images Based on the Residual Encoder Model. A self-learning model is built on the pre-trained VGG16 model. The Hypercolumns model occupies a large amount of memory, making it impossible to increase the number of images processed at a time [6]; the batch size affects the learning ability of the neural network and cannot deliver high-quality color transfer results. Therefore, the Residual Encoder model is adopted instead. In the Residual Encoder model, the grayscale image is input to the VGG16 model, and the highest convolutional layer is used to infer coarse color information. The predicted color information is then enlarged in size, and the color information from the next-highest VGG16 layer is added. This process is repeated until the lowest layer of VGG16 is reached and the output data size is 224 × 224 × 3 [7].
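A minimal sketch of the coarse-to-fine refinement loop described above is given below. The 1 x 1 projections, the bilinear upsampling and the tanh output are illustrative assumptions; only the overall upsample-and-add structure follows the description in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualEncoder(nn.Module):
    """Predict 2 chroma channels from a deepest-first list of VGG16 features
    by repeatedly upsampling the running prediction and adding a projection
    of the next-shallower feature map."""
    def __init__(self, vgg_channels=(512, 512, 256, 128, 64)):
        super().__init__()
        self.project = nn.ModuleList(
            [nn.Conv2d(c, 2, kernel_size=1) for c in vgg_channels])

    def forward(self, vgg_feats):
        chroma = self.project[0](vgg_feats[0])             # start at the deepest layer
        for proj, feat in zip(self.project[1:], vgg_feats[1:]):
            chroma = F.interpolate(chroma, size=feat.shape[2:],
                                   mode="bilinear", align_corners=False)
            chroma = chroma + proj(feat)                   # residual refinement
        return torch.tanh(chroma)                          # 2 chroma channels
```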
3 Experiments and Analysis The training of the neural network model used 8000 images selected from ImageNet, covering three categories: natural scenery and architecture; human skin color (further divided into light skin and dark skin); and animals (cats and dogs). There are 100 test set images, classified consistently with the training set. 3.1 Evaluation Methods Three female and two male observers aged 21 to 22 were invited to subjectively evaluate the colorization effect, with evaluation scores from 0 to 100. The analysis and comparison refer to the following criteria: 1. Does it conform to common-sense perception? Is there coherence between the coloring and general perception? 2. Do the colorization results have richness of coloring, including color saturation and color gamut? 3. How good is the color transition? Is the transition smooth? 4. How well are details shown? 3.2 Colorization Results and Analysis Since the HSV color space uses polar coordinates, it is quite different from the other two color spaces (Lab and YUV). If the HSV space is directly used for colorization,
the entire color gamut is limited to the blue-green area and lacks warm colors. Therefore, a new intermediate color space is introduced: the HSV color space expressed in rectangular coordinates. In the results (Figs. 2, 3 and 4), panels (a) are the grayscale images, panels (b), (c) and (d) are the color transfer results in the YUV, Lab and HSV color spaces respectively, and panels (e) show the original color images for comparison. Figure 2 shows the colorization of images of natural scenery and architecture. The results demonstrate that the colored images in all three color spaces are consistent with natural cognition. In terms of color richness, Lab is slightly better than YUV, while HSV performs worst. For gradient colors, YUV produces a smooth effect, Lab has a perceptible gradient boundary, and the HSV space shows stripes. In terms of details, both the YUV and Lab color spaces perform well, while the HSV space lacks detailed expression. Figure 3 shows human skin colorization. The result images suggest that the colorization results in the YUV color space fall within the acceptable range of natural perception even without comparison against the original images. The Lab and HSV spaces render people's skin yellowish and grayish; the HSV result in particular is completely different from the actual skin color and is not ideal. The three color spaces are similar in terms of color richness. Meanwhile, in terms of color transition, all three spaces share the disadvantage of unnatural transitions, visible at the boundary of the shadow on the main character's face. In details such as lip color, the Lab color space is more intense and fuller. In general, YUV and Lab perform excellently in coloring dark skin, but Lab is better in detail. Figure 4 shows the colorization of animal images. It can be seen that the Lab color space yields higher saturation for animal coats, and the results in all three color spaces accord with common sense. The HSV color space has lower color saturation and grayish colors, inferior to the YUV and Lab color spaces. Considering details such as the cat's tongue in the third row of Fig. 4, the colors in the Lab color space are more vivid, and the transitions in all three color spaces are natural (Table 1). The subjective evaluation in the three color spaces was carried out, with the results shown in Table 1. The Lab color space obtained the highest average score for animals (above 90.0) and performed comparably to YUV for natural scenery, while YUV had the best performance for human skin colorization. The HSV color space achieved unsatisfactory results in all three scenes, with average scores below 75.0.
4 Conclusions In this paper, three different color spaces (YUV, Lab, and HSV) are analyzed in image colorization. In order to prevent the colorization results from relying too much on the colors of reference images, a CNN model is adopted to achieve image colorization. During colorization, the pre-trained VGG16 CNN model and its extended Residual Encoder model are used to colorize images. We perform a subjective comparison and evaluation of the colorization results.
Table 1 Subjective evaluation results of the three color spaces

Color space | Natural scenery and architecture (1 / 2 / 3 / 4 / Average) | Human skin color (1 / 2 / 3 / 4 / Average) | Animals (1 / 2 / 3 / 4 / Average)
YUV | 88.8 / 84.4 / 90.2 / 89.6 / 88.3 | 79.8 / 79.6 / 80.0 / 82.4 / 80.5 | 88.8 / 81.6 / 87.2 / 83.2 / 85.2
Lab | 89.6 / 88.2 / 83.0 / 90.0 / 87.7 | 81.2 / 79.0 / 46.4 / 91.2 / 74.5 | 91.6 / 90.2 / 89.0 / 91.6 / 90.6
HSV | 80.8 / 59.6 / 41.2 / 58.8 / 60.1 | 30.4 / 49.4 / 30.6 / 28.6 / 34.8 | 83.6 / 49.4 / 86.2 / 79.6 / 74.7

Note 1, 2, 3, and 4 represent the four evaluation criteria shown in Sect. 3.1, including coherence, smoothness, richness, and the details of coloring. The evaluation scores are from 0 to 100. Five observers were invited, with a ratio of girls to boys of 3:2. The scores in the table are the final average scores, given to one decimal place
Fig. 2 colorization of images about natural scenery and architecture
Fig. 3 Colorization of images about human skin color
The results show that the performance of color transfer in the YUV and Lab color spaces is generally better than that in the HSV color space. In particular, the YUV color space is more suitable for people's skin and gradient colors, while the color expression in the Lab color space is more vivid and rich. In our future work, we will explore the
Fig. 4 Colorization of images about animals
use of a deep learning method for image colorization, and other scenery data will also be considered. Acknowledgements. This work was supported by Fundamental Research Funds for National Key Research and Development Program of China, Grant/Award Numbers: SQ2020YFC150101, 2017YFB0504202, 2018YFB10046; National Natural Science Foundation of China, Grant/Award Number: 41671441. Ethical Approval The experiments were carried out in accordance with the ethical guidelines, and the images used were selected from the ImageNet dataset.
References
1. Zhang R, Isola P, Efros AA (2016) Colorful image colorization. In: European conference on computer vision. Springer, Cham, pp 649–666
2. Gupta RK, Chia AYS, Rajan D et al (2012) Image colorization using similar images. In: Proceedings of the 20th ACM international conference on multimedia. ACM, pp 369–378
3. Levin A, Lischinski D, Weiss Y (2004) Colorization using optimization. ACM Trans Graph (TOG) 23(3):689–694
4. Liu Y, Shao C (2009) Improvement of Welsh colorization algorithm of gray image. Mod Electron Tech 32(24):141–143
5. He K, Zhang X, Ren S et al (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE international conference on computer vision, pp 1026–1034
6. Chia AYS, Zhuo S, Gupta RK et al (2011) Semantic colorization with internet images. ACM Trans Graph (TOG) 30(6):156
7. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097–1105
A Detection Method for Definition Quality of Printed Matter Yanfang Xu(B) , Haiping Wang, and Biqian Zhang Beijing Institute of Graphic Communication, Beijing, China [email protected]
Abstract. A method to evaluate the definition quality of printed matter is developed. It uses a detection pattern containing 12 × 12 close-spaced ring cells, where the bright-dark rings of each cell have a certain spatial frequency and a certain digital contrast. Using digital imaging and its calibration techniques, the optical reflectivity gray image of a print of the detection pattern can be obtained. Using digital image processing techniques, the optical reflectivity contrast of every ring cell, weighted by the sensitivity of human vision to brightness contrast and denoted the visual brightness contrast Cv, can be calculated as a function of spatial frequency. Based on this, two quantitative indexes, the cut-off frequency Hth and the visual definition quality factor Q, are established. The Hth value shows the spatial resolution limit under a certain digital contrast, and the visual definition quality factor Q represents the overall clarity quality of the printed image. Keywords: Printed matter · Cut-off frequency · Visual definition quality factor
1 Introduction The definition of a hard-copy output image is an important quality factor, determined by the ink, paper and process. At present, the output quality of patterns such as equidistant black-and-white lines of different densities is often used to characterize output quality [1, 2]. But these are all visual assessments; there is no quantitative method. Practice shows that visual assessment varies from person to person, making the assessment results unstable. In addition, black-and-white lines correspond only to the maximum contrast between the 0 and 100% levels; other contrast conditions are lacking, so such patterns cannot fully characterize the clarity of a printed image. Therefore, this paper presents a detection method for printing clarity quality, in which the definition quality of the printed image is comprehensively characterized by quantifying various contrast levels in a specific detection pattern.
2 Detection Pattern A detection pattern is made, as shown in Fig. 1, containing 12 × 12 square units in a two-dimensional close-spaced arrangement. Each square unit, with a side length of 6 mm, is made up of alternating bright and dark circular rings of equal radial width, whose center is located at the center of the square unit. The square unit is hereafter called a ring cell.
Fig. 1 The detection pattern
The reciprocal of the radial width of a pair of bright-dark rings corresponds to the spatial frequency of the ring density (number of ring pairs per millimeter). The ratio of the difference between the bright and dark grayscale values to their sum is called the digital contrast of the bright-dark rings. In the two-dimensional close-spaced pattern, all the ring cells in one row have the same digital contrast and different spatial frequencies, while all the ring cells in one column have the same spatial frequency and different digital contrasts. The 12 spatial frequencies are 0.630, 0.776, 0.956, 1.178, 1.451, 1.788, 2.203, 2.714, 3.343, 4.118, 5.073 and 6.25 c/mm (cycles/millimeter), and the 12 pairs of bright and dark digital values of the ring cells are 161 & 158, 162 & 157, 163 & 156, 165 & 154, 167 & 150, 170 & 144, 172 & 134, 177 & 120, 184 & 101, 197 & 76, 220 & 45 and 255 & 0, corresponding to digital contrasts of approximately 0.009, 0.016, 0.022, 0.035, 0.054, 0.083, 0.124, 0.192, 0.291, 0.443, 0.660 and 1.000, respectively. Further, a cell-sized white background is added around the pattern, in which black dots are formed as positioning dots, each corresponding to the center of a ring cell. The final digital image is called the detection pattern; its resolution is the device's output resolution, and it is stored in *.tif or *.bmp format. The detection pattern can be set to the four ink colors C (cyan), M (magenta), Y (yellow), and K (black) respectively.
3 Digital Imaging of Detection Prints 3.1 Output of the Detection Pattern The detection pattern is output at the device's precision; the outputs are called detection prints. The output devices, papers used and print numbers are shown in Table 1.
Table 1 Output conditions and numbers of the detection prints
Sample no | Output device | Paper
1# | HP Indigo 5500 Digital Printing Machine | Coated paper
2# | HP 1020 Printer | Office printing paper
3# | Epson 7908 Inkjet Printer | Office printing paper
4# | Epson 7908 Inkjet Printer | Photographic paper
3.2 Calibration of the Digital Imaging System A professional color scanner, the Epson GT-X970, is chosen as the digital imaging system for the detection prints. To convert the RGB image values of the prints into an optical feature quantity, a mathematical relationship between the optical feature quantity and the RGB image value must be established. A gray target with 21 levels of patches is printed out. Each patch's visual optical density D is measured with a spectrophotometer, and its optical reflectivity ρ is calculated as ρ = 10^(−D). Then, the target is imaged by the scanner. The scanning parameters are 300 dpi resolution and 24-bit color, with no image enhancement, optimization or other processing, saved as a *.tif image. The mean value Ḡ of the G channel is obtained for each patch. Finally, a mathematical relationship is established between ρ and d = Ḡ/255, as shown in Formula (1):

$$\rho = -1.151d^5 + 2.954d^4 - 3.221d^3 + 2.410d^2 - 0.061d + 0.001 \tag{1}$$
The average error of the reflectance ρ predicted by (1) is 0.001, which meets the prediction accuracy requirement.
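The calibration fit itself is a routine least-squares polynomial regression; a sketch with NumPy follows. The 21 patch values here are placeholders for the measured density and scanner data, not the actual measurements.

```python
import numpy as np

# Placeholder measurements for the 21-step gray target (illustrative only)
G_mean = np.linspace(20, 250, 21)       # mean scanner G value per patch
D = np.linspace(2.0, 0.05, 21)          # measured visual optical density per patch

d = G_mean / 255.0                      # normalized scanner response
rho = 10.0 ** (-D)                      # reflectivity: rho = 10^-D
coeffs = np.polyfit(d, rho, deg=5)      # 5th-order least-squares fit, as in Formula (1)
predict_rho = np.poly1d(coeffs)

mean_abs_err = np.mean(np.abs(predict_rho(d) - rho))
print(f"mean prediction error: {mean_abs_err:.4f}")
```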
4 Detection Index of Definition Quality 4.1 Reflectivity Contrast of a Ring Cell The pattern prints are scanned at 1200 dpi using the Epson GT-X970 scanner, with the other conditions the same as for the color target scanning. The G data of the scanned images are normalized and converted into reflectivity data by Formula (1). Then, the contrast of all the bright-dark ring cells can be extracted. The process is as follows: • Determination of the center coordinates of the locating dots: using the background information between the locating dots and the two-dimensional close-spaced pattern, each single dot image in the upper, lower, left and right margins is extracted, and its center coordinates are calculated. • Determination of the center coordinates of each ring cell: according to the results above, the centers of the two outer locating dots of a row of ring cells are connected into a straight line. The line is then divided into 13 equal parts, and the interior equipartition points (excluding the two endpoints) are taken as the central coordinates of the ring
cells in that row. The central coordinates can likewise be obtained by doing the same for each column of ring cells. Finally, the central coordinates of each ring cell are taken as the mean of the two results. • Determination of the reflectivity contrast of each ring cell: according to the cell size and the resolution of its digital imaging, each square cell image is cut out of the whole pattern image, using 90% of the designed side length as the side length. Then, according to the radius and width of the bright-dark rings determined by the spatial frequency of the ring cell, 0.4 times the designed ring width is taken as the extraction width of the ring elements. An image of a ring cell and the schematic diagram of the extraction of bright and dark ring elements are shown in Fig. 2a, b respectively.
Fig. 2 Location pixels of reflectivity of a ring cell
The mean values of the optical reflectivity of each bright ring and each dark ring are calculated, denoted ρBright and ρDark respectively. The reflectivity contrast C of the ring cell is defined by Formula (2):

$$C = (\rho_{Bright} - \rho_{Dark})/(\rho_{Bright} + \rho_{Dark}) \tag{2}$$
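A sketch of this measurement for a single ring cell is given below; the exact masking geometry (which ring is bright first, sub-pixel centering) is simplified, so treat it as an illustration of Formula (2) rather than the authors' extraction code.

```python
import numpy as np

def ring_contrast(cell, pairs_per_mm, dpi=1200):
    """Compute the reflectivity contrast C of one ring cell (Formula (2)).
    cell: 2-D reflectivity image of one square cell, ring center at the
    image center; pairs_per_mm: spatial frequency of the cell."""
    px_per_mm = dpi / 25.4
    ring_w = px_per_mm / (2.0 * pairs_per_mm)        # radial width of one ring, px
    h, w = cell.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
    phase = (r // ring_w).astype(int) % 2            # alternating ring index 0/1
    mid_dist = np.abs((r % ring_w) - ring_w / 2.0)   # distance to ring mid-line
    keep = mid_dist < 0.2 * ring_w                   # central 0.4*ring_w band only
    m0 = cell[keep & (phase == 0)].mean()
    m1 = cell[keep & (phase == 1)].mean()
    rho_b, rho_d = max(m0, m1), min(m0, m1)          # bright vs dark ring means
    return (rho_b - rho_d) / (rho_b + rho_d)
```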
4.2 Relationship Between Visual Brightness Contrast and Frequency The ring cells in any row of the detection pattern correspond to the same digital contrast, so the variation of the reflectivity contrast across the ring cells of one row reflects the variation of the reflectivity contrast C with the spatial frequency H. The sensitivity of human vision to brightness contrast is related to spatial frequency and is described by the Contrast Sensitivity Function (CSF). The CSF differs for different colors. For achromatic color, the commonly used CSF is the one presented by Mannos and Sakrison [3], in the following form:

$$S(H) = 2.6\,(0.0192 + 0.114\alpha H)\, e^{-(0.114\alpha H)^{1.1}} \tag{3}$$
In Formula (3), α is a constant parameter, and H is the spatial frequency in cycles/degree. Originally the α value was taken as 1 [3], and later research shows that it can
take values between 1 and 2 [4]. In this paper, α is taken as 1.5, considering several studies [4–6], and the unit of H is converted from cycles/degree to cycles/millimeter (c/mm for short). The reflectivity contrast weighted by the CSF is called the visual brightness contrast, denoted Cv. Figure 3a, b shows the results for the tested samples under digital contrasts of 1 and 0.083 (rows M and F in Fig. 1), respectively (the corresponding observation distance is 30 cm).
Fig. 3 Curves of Cv against frequency H
Fig. 4 Curves of QV against digital contrast
4.3 Cut-Off Frequency Hth and Visual Definition Quality Factor Q Under a given lighting condition, there is a limit to the perception of visual brightness contrast, called the threshold and denoted Cvth. In the curves shown in Fig. 3, the spatial frequency at which Cv falls to Cvth is called the cut-off frequency, marked Hth. In this paper, Cvth = 0.02 was found suitable by visual experiments, yielding the Hth values for all 12 digital contrasts. The integral area under the curves in Fig. 3, over the portion of the horizontal axis below Hth, is defined as QV, representing the clarity quality of image detail under a given digital contrast. The resulting relationships between QV and digital contrast for the samples are shown in Fig. 4.
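Putting Sects. 4.2 and 4.3 together, the sketch below weights measured contrasts by the CSF of Formula (3), finds Hth by the Cvth = 0.02 threshold, and integrates the curve to obtain QV. The cycles/mm to cycles/degree conversion for the 30 cm viewing distance is an assumption about how the unit translation was done.

```python
import numpy as np

ALPHA = 1.5      # CSF parameter alpha used in this paper
CV_TH = 0.02     # visual brightness contrast threshold Cvth

def csf(H_mm, view_dist_mm=300.0):
    """Mannos-Sakrison CSF of Formula (3), with print frequency H (c/mm)
    converted to cycles/degree at the stated 30 cm viewing distance."""
    H_deg = H_mm * view_dist_mm * np.tan(np.radians(1.0))  # c/mm -> c/deg
    u = 0.114 * ALPHA * H_deg
    return 2.6 * (0.0192 + u) * np.exp(-u ** 1.1)

def hth_and_qv(H, C, view_dist_mm=300.0):
    """Cut-off frequency Hth (last frequency with Cv >= CV_TH) and QV,
    the area under the Cv curve up to Hth. H: ascending frequencies in
    c/mm; C: measured reflectivity contrasts of one pattern row."""
    H, C = np.asarray(H, float), np.asarray(C, float)
    Cv = csf(H, view_dist_mm) * C            # visual brightness contrast
    above = np.nonzero(Cv >= CV_TH)[0]
    if above.size == 0:
        return H[0], 0.0
    Hth = H[above[-1]]
    keep = H <= Hth
    return Hth, np.trapz(Cv[keep], H[keep])  # QV = integral of Cv below Hth
```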
From Fig. 4, in the case of digital contrast 1, sample 4# has the highest QV, meaning that its image detail is the best, but its QV becomes lower than that of sample 1# for digital contrasts less than 0.291. This is exactly related to the difference between samples 4# and 1# shown in Fig. 3a, b. Further, the mean value of QV over the 12 digital contrasts is defined as the visual definition quality factor, named Q, representing the overall clarity quality of the printed image. In numerical order, the Q values of the samples are 37.73, 30.91, 18.99 and 37.79. These numerical results agree well with the visual clarity comparison of the samples. As listed in Table 1, samples 1# and 4# are the standard outputs of digital printing and photo-level inkjet printing respectively, and both deliver high quality. Sample 3# is the worst because of its inappropriate combination of inkjet printer and office printing paper. Sample 2# is from an office printer using toner electrostatic imaging technology and its supporting paper, intended mainly for text output. The obtained Q values also match this feature comparison.
5 Conclusion With the imaging and processing technology used in the experiment, the variation of visual brightness contrast with spatial frequency can be measured for printed matter, showing the visual definition characteristics of the printed image's spatial detail under a certain digital contrast. From this, the cut-off frequency and the visual definition quality factor can be derived, representing the spatial resolution limit under a certain digital contrast and the overall clarity quality of the printed image respectively. The results show that these quantitative factors can represent the visual clarity of printed matter.
References
1. Ma T, Zhan Q, Lu X (2011) A study on the design and use of test plate for digital printing. Image Technol 3:44–49
2. Yang Q (2017) Study on the influence of digital paper properties on printing quality control. Hunan University of Technology, pp 24–26
3. Mannos JL, Sakrison DJ (1974) The effects of a visual fidelity criterion on the encoding of images. IEEE Trans Inf Theor 20(4):525–536
4. Zhu J (2011) Important visual feature-based image quality assessment and image compression. Huazhong University of Science & Technology, pp 14–15
5. Fan Y, Shen X, Xiang Y (2011) No reference image sharpness assessment based on contrast sensitivity. Opt Prec Eng 19(10):2485–2493
6. Cha K, Chen S (2012) Image definition assessment based on wavelet frequency band partition and HVS property. Electron Des Eng 20(9):183–186
Neutral Color Correction Algorithm for Color Transfer Between Multicolor Images Ane Zou, Xuming Shen, Xiandou Zhang(B) , and Zizhao Wu School of Media and Design, Hangzhou Dianzi University, Hangzhou, China [email protected]
Abstract. This paper designs a post-processing algorithm of neutral color correction to deal with the neutral color deviation caused by color transfer between multicolor images. The algorithm, based on the L*a*b* color space, uses the saturation of each pixel in the original image as the weight of the color transfer degree and reweights the image transfer results accordingly. Meanwhile, as saturation weighting may cause insufficient transfer of some near-neutral colors and leave their hues untransferred, a global hue correction strategy is designed in the proposed algorithm to solve this problem. The method used in this paper preserves the neutral color of the original image while keeping the effect of color transfer between multicolor images, avoids neutral color deviation caused by color transfer, and ensures the quality and natural authenticity of the new image. Keywords: Color transfer · Color deviation · Saturation weighting · Neutral color correction · Hue correction
1 Introduction Image color transfer refers to the technology of transferring the colors of a reference image to an original image while retaining the shape of the latter. At present, color transfer technology can be divided into two categories: transfer between multicolor images and transfer between gray and multicolor images. This paper focuses on the former. In 2001, Reinhard [1] first realized global color transfer between color images through probability statistics. Pitié [2] used an iterative one-dimensional probability density function to realize hierarchical transmission of colors. In 2011, Pouli [3] proposed an update method based on multi-scale histogram reconstruction. In 2005, Tai [4] used a Gaussian mixture model to segment color regions before color transmission. In 2017, Arbelot [5] put forward color transfer based on local texture. In 2017, He [6] proposed matching semantic information between images with a convolutional neural network before transfer. This paper finds that, due to imperfections in the algorithms, some of these methods give rise to neutral color deviation in the original image. However, in the evaluation of image quality, the sensitivity of human vision to neutral colors is always higher than that to non-neutral ones. Therefore, this paper proposes a universal post-processing algorithm based on saturation weighting to reduce neutral color deviation in the color transfer of multicolor images.
2 Design of Neutral Color Correction Algorithm The post-processing method of neutral color correction is based on the L*a*b* color space, so color space conversion between RGB and L*a*b* is needed before and after image correction. In the L*a*b* color space, L* is the lightness channel and a*b* are the chroma channels. The saturation of each pixel in the image can be calculated by the following formula:

$$S = \sqrt{a^{*2} + b^{*2}} \tag{1}$$

Neutral color is the colorless black-white-gray system, i.e., colors whose a*b* channels are both 0 in the L*a*b* color space. However, colors with extremely low saturation also appear neutral to human vision. Therefore, this paper does not draw a hard boundary between the chromatic and achromatic parts, but achieves neutral color retention through saturation-designed weights. The structure of the color correction algorithm is shown in Fig. 1.
Fig. 1 The overall framework of the color correction algorithm
2.1 Saturation Weight Calculation In order to keep most of the color transfer effect for low-saturation images after neutral color correction, the image saturation is first calculated and the saturation map S is then normalized to S′:

$$S' = \frac{S - S_{min}}{S_{max} - S_{min}} \tag{2}$$
In it, Smin is the minimum value over all pixels of the saturation map S, and Smax the maximum value. To guarantee the effect of color transfer, the normalized saturation map S′ is divided by the saturation value Sn at a specific percentile of the saturation map for enhancement. Sn denotes the normalized saturation value at the n-th percentile when the normalized saturation map is sorted from small to large; here n is 30, but it can be adjusted according to the content of the image. Values with resulting saturation higher than 1 are treated as abnormal and clipped to obtain the enhanced saturation map S̈:

$$\ddot{S} = \begin{cases} 1 & \text{if } S'/S_n > 1 \\ S'/S_n & \text{otherwise} \end{cases} \tag{3}$$

In order to smooth the saturation map S̈, a Gaussian filter with a convolution kernel of size × size is applied. The size is determined by how large S̈ is; m and n are the numbers of pixels along the length and width of S̈, and round() denotes the rounding function:

$$size = round\left(\frac{\min(m, n)}{10}\right) \tag{4}$$
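The whole weight-map construction of this subsection can be sketched in a few lines; OpenCV's Gaussian blur is used here as a stand-in for the paper's Gaussian filter, and forcing the kernel size to be odd is an implementation detail added for OpenCV, not part of Formula (4).

```python
import cv2
import numpy as np

def saturation_weight(lab, n=30):
    """Build the enhanced, smoothed saturation weight map from a float
    L*a*b* image: saturation (Formula (1)), min-max normalization (2),
    percentile enhancement with clipping (3), Gaussian smoothing (4)."""
    a, b = lab[..., 1], lab[..., 2]
    S = np.sqrt(a ** 2 + b ** 2)                          # Formula (1)
    S1 = (S - S.min()) / (S.max() - S.min() + 1e-8)       # Formula (2)
    Sn = max(np.percentile(S1, n), 1e-8)                  # n-th percentile value
    S2 = np.clip(S1 / Sn, 0.0, 1.0)                       # Formula (3)
    rows, cols = S2.shape
    k = int(round(min(rows, cols) / 10.0))                # Formula (4)
    k += 1 - k % 2                                        # GaussianBlur needs an odd size
    return cv2.GaussianBlur(S2, (k, k), 0)
```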
2.2 Neutral Color Correction When correcting the neutral color of the transferred image, the three-channel L*a*b* difference between the original and the transferred image is first calculated to obtain the difference image:

$$\Delta L^* = L^*_r - L^*_s, \qquad \Delta a^* = a^*_r - a^*_s, \qquad \Delta b^* = b^*_r - b^*_s \tag{5}$$

In the formula, $L^*_r\,a^*_r\,b^*_r$ are the three L*a*b* channels of the transferred image, and $L^*_s\,a^*_s\,b^*_s$ those of the original one. The positive and negative signs of the differences are preserved here. The color transfer image after neutral color correction is calculated by multiplying the difference image by the saturation weight map designed in Sect. 2.1 and then adding the original image. If the intention of neutral color correction is to retain the neutral color completely, the correction is calculated as:

$$L^*_{new} = L^*_s + \ddot{S}\,\Delta L^*, \qquad a^*_{new} = a^*_s + \ddot{S}\,\Delta a^*, \qquad b^*_{new} = b^*_s + \ddot{S}\,\Delta b^* \tag{6}$$

If the intention of neutral color correction is to retain only its chroma, the first equation in Formula (6) is modified to $L^*_{new} = L^*_s$.
2.3 Hue Correction For some pixels with low saturation, some hues may not be transferred after correction because the transfer degree under the above neutral color method is insufficient. As shown in Fig. 2, the blue box marks the trunk, which shows color deviation due to color transfer, and the yellow box shows the image after saturation weighting. Despite successful correction of the main part, the edge area of the trunk still retains the original red color because of its low saturation and small transfer weight, although the actual transfer result is green. Therefore, further rectification of image hues is needed to ensure hue consistency before and after neutral color correction.
Fig. 2 Possible problems in correction with mere saturation weighting: a original image, b migrated image, c saturation-weighted result, d abnormal local area comparison
The method of hue correction in this paper is shown in Fig. 3. An original pixel c_origin is transferred to c_r by the transfer algorithm and becomes c_new through neutral color saturation weighting. To keep the saturation constant, a circle is drawn with the origin O as the center and the distance from O to c_new as the radius; the point on the circumference nearest to c_r is the final result c̈_new. Applying this hue correction to all pixels in the image changes any hue that failed to transfer into the target hue without altering the saturation (Fig. 3, left); when the hue is normal after transfer, little change is made (Fig. 3, right). Hue correction is written as a goal-solving problem:

$$X^* = \arg\min_X \|X - T\|^2 \quad \text{s.t.} \quad \|X\|_2 = |S_c| \tag{7}$$
In Formula (7), $|S_c| = \sqrt{a^{*2}_{new} + b^{*2}_{new}}$ and $T = (a^*_r, b^*_r)$. The solution is as follows:

$$\ddot{a}^*_{new} = \delta\, a^*_r, \qquad \ddot{b}^*_{new} = \delta\, b^*_r \tag{8}$$
Fig. 3 Schematic diagram of hue correction
In Formula (8),

$$\delta = \frac{S_n}{\sqrt{a^{*2}_r + b^{*2}_r}} \tag{9}$$

In Formula (9),

$$S_n = \sqrt{a^{*2}_{new} + b^{*2}_{new}} \tag{10}$$
At last, $L^*_{new}\,\ddot{a}^*_{new}\,\ddot{b}^*_{new}$ is taken as the final corrected image and converted back from the L*a*b* to the RGB color space.
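Formulas (8)-(10) amount to rescaling the transferred chroma vector to the corrected saturation; a vectorized sketch follows, with a small epsilon added to the denominator as a guard that the paper does not mention.

```python
import numpy as np

def hue_correct(a_new, b_new, a_r, b_r):
    """Apply Formulas (8)-(10): keep the hue of the transferred chroma
    (a_r, b_r) while adopting the saturation of the weighted result
    (a_new, b_new). All inputs are float arrays of the a*/b* channels."""
    Sn = np.sqrt(a_new ** 2 + b_new ** 2)                 # Formula (10): target saturation
    delta = Sn / (np.sqrt(a_r ** 2 + b_r ** 2) + 1e-8)   # Formula (9)
    return delta * a_r, delta * b_r                       # Formula (8)
```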
3 Experimental Effect and Comparison This paper takes the Reinhard color transfer algorithm as an example and applies the post-processing to images whose color transfer results show neutral color deviation. The experimental effects and comparisons are shown in Fig. 4. The figures suggest that the method in this paper transitions the neutral colors of the image naturally and preserves them after color transfer. Weighting only the saturation of the ab channels retains the brightness information of the transferred style, while weighting the saturation of all three L*a*b* channels preserves the main structural features of the original image. The two methods for handling color deviation can be selected according to the demands.
4 Conclusion For the problem of color deviation after image color transfer, this paper sets the weight of the color transfer effect according to the saturation of the original image, so that neutral colors with low saturation receive a weak transfer effect, while the multicolor parts with high saturation retain an obvious transfer effect.
Fig. 4 The results of neutral color correction in transferred images. a Reference drawing. b Original image. c Transferred image. d Saturation weight chart. e The correction result 1 of this paper. f The correction result 2 of this paper
The method of this paper helps retain both the fine transfer style of the image and its neutral colors, thus improving the visual quality of the image. The proposed algorithm delivers good visual effects with high efficiency and is widely applicable after any color transfer algorithm between multicolor images: it automatically corrects images with color deviation, while images without such a problem are not affected.
References
1. Reinhard E, Ashikhmin M, Gooch B, Shirley P (2001) Color transfer between images. IEEE Comput Gr Appl 21(5):34–41
2. Pitié F, Kokaram AC, Dahyot R (2007) Automated colour grading using colour distribution transfer. Comput Vis Image Underst 107(1–2):23–37
3. Pouli T, Reinhard E (2011) Progressive color transfer for images of arbitrary dynamic range. Comput Gr 35(1):67–80
4. Tai YW, Jia J, Tang CK (2005) Local color transfer via probabilistic segmentation by expectation-maximization. In: IEEE computer society conference on computer vision & pattern recognition. IEEE
5. Arbelot B, Vergne R, Hurtut T, Thollot J (2016) Local texture-based color transfer and colorization. Comput Gr 62:15–27
6. He M, Liao J, Yuan L, Sander PV (2017) Neural color transfer between images
Application of Image Processing in Improving the Precision of a 3D Model

Wei Zi1, Xiaoyang Fang2, and Runqing Su2,3(B)

1 School of History, Zhengzhou University, Zhengzhou, China
2 School of Humanities, University of Chinese Academy of Sciences, Beijing, China
3 School of Civil Engineering, Zhengzhou University, Zhengzhou, China
[email protected]
Abstract. To meet the needs of different applications, original image data usually need to be processed. The Jiahu bone flute is a precious musical relic in China. In 2012, researchers used medical computed tomography (CT) to scan the bone flute three-dimensionally, but the accuracy of the data obtained at that time was not high, and under the cultural relic protection policy it is impossible to obtain new image data of the bone flute directly today. In this paper, a Lagrange interpolation algorithm performing grey interpolation is used to process the original CT data of the bone flute, yielding additional sectional images. The mean squared error, peak signal-to-noise ratio and image information entropy are used to evaluate the algorithm. The filtering effects of mean filtering, median filtering and Wiener filtering on the bone flute CT data are then compared. The results show that median filtering performs best, and the processed bone flute CT images are more delicate and realistic. The image processing methods used in this article also apply to general image data and are of referential significance for image processing in both traditional printing and emerging 3D printing.

Keywords: Image processing · Interpolation · Filtering · 3D reconstruction
1 Introduction

At present, the computed tomography (CT) technologies suitable for nondestructive scanning of large samples are industrial CT and medical CT. Industrial CT scans usually have higher accuracy, but the scanning time is longer and different types of stages have to be used for different samples. The accuracy of medical CT scanning is slightly lower, but scanning is more convenient and the requirements on the stages are not strict. For a special type of sample such as cultural relics, the limited technology available in the early days meant that industrial CT scanning was not possible and only medical CT could be used. Because the accuracy of those scans cannot satisfy current requirements, image processing technology must be applied as post-processing to improve the 3D modelling accuracy.

The Jiahu site is located on the alluvial plain in the east of Wuyang County, Henan Province, with a total area of approximately 55,000 m2 [1]. It is an important site in the
early Neolithic period in China. The unearthed Jiahu bone flute has a history of 8000–9000 years [2]. The discovery of the Jiahu bone flute has important academic value in archaeological and musical studies, specifically in rewriting the beginning of the history of Chinese music, and has attracted widespread international attention [3, 4]. However, since its discovery, scholarly research on the bone flute has not reached the depth it deserves [5]. The main reason is that the Jiahu bone flute was seriously damaged and deteriorated after being buried for thousands of years. Because of the bone flute's huge cultural relic value, direct research on it is not supported by the State Department of Cultural Relics. Therefore, the latest breakthrough research seeks to accurately replicate the Jiahu bone flute under nondestructive conditions and make the replica accessible for repair, sound measurement and temperament research. In 2012, for the first time, Xiaoyang et al. used CT scanning, 3D model reconstruction and other techniques to produce a restored product with a geometric shape and physical size almost the same as the Jiahu bone flute [6]. On this basis, taking the data of the Jiahu bone flute as an example, image processing technology is used here to improve the accuracy of the bone flute images.
2 Sample

Bone flute M494:2 (Fig. 1) was unearthed in 2001. It is 24.53 cm long and remained basically intact. The mouthpiece and the first sound hole are slightly damaged, the body is brown and smooth, and there are bone joints at both ends. When unearthed, it was crushed and fractured approximately 6.5 cm from the mouth end, and the sound-hole section was covered with strip-shaped wraps; there are obvious wrapping marks between the first tone hole and the fifth and sixth holes. A row of seven holes of different sizes is engraved on the front of the body, with scratches on the sides of the holes. The alignment of the holes varies slightly: taking the line through the middle of the first and seventh holes as reference, only the fourth hole lies on this line; the second and third holes are biased to the left, and the fifth and sixth holes to the right [7].
Fig. 1 Jiahu bone flute M494:2
A Philips 64-slice medical CT scanner was used to scan the bone flute under the following conditions. The bone tissue window was selected, the scan interval was 0.625 mm, the probe working voltage and current were 120 kV and 246 mA, and the long axis of the bone flute was perpendicular to the CT scan line. The scanned files were stored in the DICOM format. A total of 823 tomographic images were obtained, each covering 125 mm × 125 mm at a resolution of 540 × 540, giving a voxel size of 0.23 mm × 0.23 mm × 0.625 mm [8].
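For readers reproducing this pipeline, the stored series can be read back into a volume with, for example, pydicom; the directory name is hypothetical, and the printed values should simply restate the acquisition parameters above.

```python
import numpy as np
import pydicom
from pathlib import Path

# Hypothetical folder containing the 823 exported DICOM slices.
paths = sorted(Path("bone_flute_ct").glob("*.dcm"))
slices = sorted((pydicom.dcmread(p) for p in paths),
                key=lambda s: float(s.ImagePositionPatient[2]))  # order along the long axis

volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
row_mm, col_mm = (float(v) for v in slices[0].PixelSpacing)  # ~0.23 mm in plane
slice_mm = float(slices[0].SliceThickness)                   # 0.625 mm between slices
print(volume.shape, (slice_mm, row_mm, col_mm))              # e.g. (823, 540, 540)
```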
3 Image Processing

Tomographic medical image interpolation technology is used to regenerate the original CT images. The CT images are then filtered to remove noise, and finally the Mimics software developed by Materialise (founded in 1990 and headquartered in Leuven, Belgium) is used for 3D reconstruction.

3.1 Interpolation of Tomographic Images

In the slice data sequence generated by medical CT, the data density within each layer is much greater than the data density between layers. If these slice data are used directly to reconstruct the surface or three-dimensional structure of the object, the reconstructed image will be distorted because there are too few slices [9]. Therefore, it is necessary to insert additional slices. The data of the newly inserted slices are not taken directly from actual slices but are calculated from the existing slices with an interpolation algorithm. Interpolation is the calculation process used to determine values between the sampled values of a function, and image interpolation is a process of regenerating image data to make the image more refined [10]. Tomographic image interpolation methods can be divided into greyscale-based, shape-based and wavelet-based methods. Owing to the simple shape of the bone flute CT images, and considering interpolation complexity, calculation time and other factors, grey interpolation is used to interpolate the CT images of the bone flute.

3.2 Grey Interpolation

Grey interpolation is one of the most common interpolation algorithms [11]. It constructs the interpolated image directly from the grey information of the known tomographic images. Considering the single material density of the sample, the interpolation quality and the computational complexity, Lagrange interpolation, one of the grey interpolation algorithms, is used in this paper. Lagrange interpolation is a type of polynomial interpolation in which a polynomial of degree N − 1 passes through N given points. The Lagrange interpolation function of order N − 1 is defined piecewise as:

$$h(x) = \begin{cases} \prod\limits_{i=0}^{N-1} \dfrac{n-i-x}{\cdots}, & n-1 \le x < n \\ \cdots \end{cases}$$
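Since the kernel definition above is cut off in our copy, the sketch below uses the classical 4-point (degree-3) Lagrange polynomial as one concrete member of this family to insert a slice between tomograms, followed by the PSNR check and the median filtering mentioned in the abstract; the leave-one-out test, the function names and the synthetic stand-in volume are our own illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import median_filter

def lagrange_insert(volume, k, t):
    """Estimate a slice at fractional depth k + t (0 < t < 1) from the four
    neighbouring tomograms k-1..k+2 with the degree-3 Lagrange polynomial."""
    f = volume[k - 1:k + 3].astype(np.float64)
    w = np.array([-t * (t - 1) * (t - 2) / 6,       # weight of slice k-1
                  (t + 1) * (t - 1) * (t - 2) / 2,  # weight of slice k
                  -(t + 1) * t * (t - 2) / 2,       # weight of slice k+1
                  (t + 1) * t * (t - 1) / 6])       # weight of slice k+2
    return np.tensordot(w, f, axes=1)

def psnr(ref, est):
    """Peak signal-to-noise ratio of an estimated slice against its reference."""
    mse = np.mean((ref.astype(np.float64) - est) ** 2)
    return 10 * np.log10(float(ref.max()) ** 2 / mse)

# `volume` would be the CT stack loaded in the earlier sketch; a smooth
# synthetic stand-in keeps this snippet self-contained.
volume = np.cumsum(np.random.rand(60, 64, 64), axis=0).astype(np.float32)

# Leave-one-out check: keep every other slice, re-estimate a dropped one.
coarse = volume[::2]
estimate = lagrange_insert(coarse, k=10, t=0.5)  # approximates volume[21]
print("PSNR:", psnr(volume[21], estimate))

# Median filtering, reported in the abstract as the best of the three filters.
denoised = median_filter(volume[21], size=3)
```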