Behaviormetrics: Quantitative Approaches to Human Behavior 6
Minoru Nakayama Yasutaka Shimizu Editors
Pupil Reactions in Response to Human Mental Activity
Behaviormetrics: Quantitative Approaches to Human Behavior Volume 6
Series Editor Akinori Okada, Professor Emeritus, Rikkyo University, Tokyo, Japan
This series covers in their entirety the elements of behaviormetrics, a term that encompasses all quantitative approaches of research to disclose and understand human behavior in the broadest sense. The term includes the concept, theory, model, algorithm, method, and application of quantitative approaches from theoretical or conceptual studies to empirical or practical application studies to comprehend human behavior. The Behaviormetrics series deals with a wide range of topics of data analysis and of developing new models, algorithms, and methods to analyze these data. The characteristics featured in the series have four aspects. The first is the variety of the methods utilized in data analysis and a newly developed method that includes not only standard or general statistical methods or psychometric methods traditionally used in data analysis, but also includes cluster analysis, multidimensional scaling, machine learning, corresponding analysis, biplot, network analysis and graph theory, conjoint measurement, biclustering, visualization, and data and web mining. The second aspect is the variety of types of data including ranking, categorical, preference, functional, angle, contextual, nominal, multi-mode multi-way, contextual, continuous, discrete, high-dimensional, and sparse data. The third comprises the varied procedures by which the data are collected: by survey, experiment, sensor devices, and purchase records, and other means. The fourth aspect of the Behaviormetrics series is the diversity of fields from which the data are derived, including marketing and consumer behavior, sociology, psychology, education, archaeology, medicine, economics, political and policy science, cognitive science, public administration, pharmacy, engineering, urban planning, agriculture and forestry science, and brain science. In essence, the purpose of this series is to describe the new horizons opening up in behaviormetrics—approaches to understanding and disclosing human behaviors both in the analyses of diverse data by a wide range of methods and in the development of new methods to analyze these data. Editor in Chief Akinori Okada (Rikkyo University) Managing Editors Daniel Baier (University of Bayreuth) Giuseppe Bove (Roma Tre University) Takahiro Hoshino (Keio University)
More information about this series at http://www.springer.com/series/16001
Minoru Nakayama · Yasutaka Shimizu Editors
Pupil Reactions in Response to Human Mental Activity
Editors Minoru Nakayama Department of Information and Communications Engineering Tokyo Institute of Technology Tokyo, Japan
Yasutaka Shimizu Professor Emeritus Tokyo Institute of Technology Tokyo, Japan
ISSN 2524-4027 ISSN 2524-4035 (electronic) Behaviormetrics: Quantitative Approaches to Human Behavior ISBN 978-981-16-1721-8 ISBN 978-981-16-1722-5 (eBook) https://doi.org/10.1007/978-981-16-1722-5 © Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Preface
The study of the psychological change of pupil size began after Eckhard Hess and James Polt first reported on the phenomenon in the 1960s [3, 4]. This stimulated various types of research about pupillary response, which included topics concerning cognitive science [1, 2, 7] and clinical aspects [5], and the volume of this research continues to increase, thus developing a wider awareness of the phenomenon. Together with these studies, the development of equipment to measure pupil size has played a major role in the promotion of this research. Taking pictures of pupil images, the visual recording of changes in pupil size, and the measurement of pupil sizes using these images are examples of these advances.

This monograph summarizes our early-stage work. One of the editors has developed a system for measuring changes in pupillary size [6], and his work has contributed to this area of research. Our scientific interest is to establish a means of educational assessment by observing pupillary change evoked by learning activities. As with Hess and Polt [3], we began measuring psychological pupil responses to visual stimuli. However, as the brightness of stimuli influenced pupil responses, a means of compensation was required and developed. When visual images became moving images, another type of compensation was required and developed. Also, a suitable processing procedure was needed to reduce the various artifacts associated with temporal pupillary observation. In addition to these improvements, another type of evaluation of pupillary change was developed. Each author proposed various new ideas, experimental procedures, models, and analytical techniques that were refined through repeated trial and error. The results of some experiments provided motivation to continue the research. These topics are addressed in the following chapters.

Historically, the editors published their research in domestic journals. Fortunately, some researchers asked us to publish our work in English language publications, and their requests encouraged us to produce this monograph. The editors would like to thank Dr. Akinori Okada, the editorial supervisor of the monograph series "Behaviormetrics: Quantitative Approaches to Human Behavior", who invited us to prepare this monograph. Also, we wish to thank Mr. Yutaka Hirachi of the editorial management division of Springer Japan.
Once again, the editors would like to thank the co-authors Mr. Ikki Yasuike, Mr. Shigeyoshi Asano, and Mr. Maki Murai for the chapters they have contributed. And finally, as the original publishers have agreed to the republication of the manuscripts, the editors wish to express their gratitude to the Japan Society for Educational Technology (JSET), the Institute of Electronics, Information and Communication Engineers (IEICE), the Institute of Image Information and Television Engineers (ITE), and the Association for Computing Machinery (ACM).

Tokyo, Japan
July 2020
Minoru Nakayama Yasutaka Shimizu
References

1. Beatty J (1982) Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol Bull 91(2):276–292
2. Granholm E, Steinhauer SR (2004) Pupillometric measures of cognitive and emotional processes. Int J Psychophysiol 52:1–6
3. Hess EH, Polt JM (1960) Pupil size as related to interest value of visual stimuli. Science 132:349–350
4. Hess EH, Polt JM (1964) Pupil size in relation to mental activity during simple problem-solving. Science 143:1190–1192
5. Kuhlmann J, Böttcher M (eds) (1999) Pupillography: principles, methods and applications. W. Zuckschwerdt Verlag, München, Germany
6. Shimizu Y, Kondo S, Maesako T, Kumagai R (1987) Measuring pupil size in humans – reactions to emotional states. Jpn J Educ Technol 11:25–33
7. Steinhauer SR, Hakerem G (1992) The pupillary response in cognitive psychophysiology and schizophrenia. In: Friedman D, Bruder G (eds) Psychophysiology and experimental psychopathology: a tribute to Samuel Sutton, Annals of the New York Academy of Sciences, vol 658, pp 182–204. New York Academy of Sciences, New York, USA
Contents

Controlling the Effects of Brightness on the Measurement of Pupil Size as a Means of Evaluating Mental Activity
Minoru Nakayama, Ikki Yasuike, and Yasutaka Shimizu

Pupil Reaction Model Using a Neural Network for Brightness Change
Shigeyoshi Asano, Ikki Yasuike, Minoru Nakayama, and Yasutaka Shimizu

A Neural-Network-Based Eye Pupil Reaction Model for Use with Television Programs
Shigeyoshi Asano, Minoru Nakayama, and Yasutaka Shimizu

The Relationship Between Pupillary Changes and Subjective Indices to the Content of Television Programs
Maki Murai, Minoru Nakayama, and Yasutaka Shimizu

An Estimation Model for Pupil Size of Blink Artifacts While Viewing TV Programs
Minoru Nakayama and Yasutaka Shimizu

Estimation of Eye-Pupil Size During Blink by Support Vector Regression
Minoru Nakayama

Frequency Analysis of Task Evoked Pupillary Response and Eye Movement
Minoru Nakayama and Yasutaka Shimizu

Epilogue: Last But Not Least
Controlling the Effects of Brightness on the Measurement of Pupil Size as a Means of Evaluating Mental Activity Minoru Nakayama, Ikki Yasuike, and Yasutaka Shimizu
Abstract It is well known that pupil size responds to both brightness and mental activity. However, the relationship between these two phenomena is not clear. This study examines changes in pupil size in response to mental activity while the effects of brightness are controlled. As a first step, pupillary changes were measured at various levels of brightness, with verbal instructions designed to stimulate mental activity either given or not given. No interaction between pupillary change due to the effects of brightness and pupillary change due to mental activity was found. As a second step, white, gray, and black patterns were presented to the subjects, and pupil sizes were measured at varying levels of brightness. From these measurements, it was possible to develop an experimental formula that expresses the relationship between pupil size and brightness. Next, the results of the first experiment were compensated using the extracted function, and analysis of variance (ANOVA) showed that brightness did not have an effect on pupil size. Therefore, a method for removing the effects of brightness upon pupillary changes was developed. Finally, the extracted function was applied to the evaluation of pupil size as a function of mental activity for the patterns presented at several levels of brightness. Corrected pupil sizes correlated with pupil sizes when patterns of pictures were presented at the same levels of brightness.
Originally published in the Japanese Journal of Educational Technology, Vol. 15, No. 1, pp. 15–23, 1991.
M. Nakayama (B) · Y. Shimizu
Information and Communications Engineering, Tokyo Institute of Technology, Tokyo, Japan
e-mail: [email protected]
I. Yasuike
Tokyo, Japan
© Springer Nature Singapore Pte Ltd. 2021
M. Nakayama and Y. Shimizu (eds.), Pupil Reactions in Response to Human Mental Activity, Behaviormetrics: Quantitative Approaches to Human Behavior 6, https://doi.org/10.1007/978-981-16-1722-5_1
1 Introduction

In education, the means of presentation of information is key. In particular, visual information plays a major role in human perception. Therefore, visual teaching materials such as written texts, slides, and videos promote the understanding of content. These types of media stimulate the learner's interest and advance their learning activity. Though visual images and their content are most often evaluated subjectively, such as by using an image test, an objective assessment is required in order to compare the materials. Also, temporal assessment is needed to evaluate the contents of educational TV programs and the like. For these educational purposes, biological information may be used in these assessments. For example, EDA (Electro-Dermal Activity), also known as GSR (Galvanic Skin Response), was employed to measure the activation of the sympathetic nervous system during the viewing of TV programs [1].

In this section, pupil size is used as one type of biological information, since the pupil responds to mental activity [2]. The pupil also reacts to light, a reaction well known as the pupil light reflex (PLR) [3]. As an example of pupil reaction during mental activity, such as the reading of graph values, pupil size increases, but restores itself to a baseline once reading is completed. This reaction occurs when a viewer is presented with a graph image and asked to read values from the graph [4]. This phenomenon suggests the possibility of measuring viewer activity, as the pupil responds to mental activity during the reading of graph values. Pupil response is very sensitive, and this reaction behavior is highly useful bio-information. Therefore, pupil size can be an objective index for the measurement of mental activity. However, as the pupil reacts to changes in brightness, such as on TV programs and during movies due to the PLR phenomenon, a procedure to extract pupillary changes in order to evaluate mental activity is required [5].

Hess et al. [6] reported the results of an experiment where pupil size was not influenced by differences in brightness of small areas of the images displayed. However, the report notes that some differences in pupil sizes were observed, and the detailed relationship has not yet been summarized. Though various factors may influence the relationship between pupil size and brightness levels, pupil size constantly changes within a certain range of brightness [7]. In this study, pupillary changes are observed using a limited range of brightness, such as on a video monitor showing images. During the experiments, pupillary changes due to mental activity and brightness levels are analyzed, and the contribution of the level of brightness is evaluated using analysis of variance (ANOVA). An extraction procedure for mental activity using visual stimuli is prepared.

The following topics are addressed in this chapter.
1. Examination of the relationship between mental activity and the level of brightness of pictures during pupillary changes.
2. Measurement of pupillary changes in response to the level of brightness, and extraction of the relationship using a mathematical formula. A procedure for evaluating psychological changes in pupil size is considered.
3. Pupillary changes in response to mental activity are extracted and evaluated when visual stimuli are presented at several levels of brightness.

In order to examine the above issues, pupillary changes of viewers were analyzed using equipment for measuring pupil size [8] while images were presented at various levels of brightness. This work aims to evaluate mental responses during the viewing of educational TV programs while brightness levels are controlled.
2 Pupillary Change Due to Picture Brightness

Pupil sizes in response to slide shows of patterns projected at different levels of brightness were observed. Also, oral instructions were given or questions were asked during the experiment in order to measure pupillary change during mental activity.
2.1 Picture Brightness Control

Visual stimuli in the form of video clips were prepared by capturing images in color using a video camera, as shown in Fig. 1. In order to evaluate pupil size, which is affected by the overall level of brightness of the presented images, the brightness level was controlled using the following procedure. The brightness level of a 9-inch video monitor which displayed the pictures was measured using a color luminance meter. The area measured (a visual angle of 1°) focused on the center of the monitor, and the brightness values of the images displayed were defined as the overall brightness of the images. While the pictures were captured from the analog videos used at that time, the level of brightness was controlled by adjusting the iris of the camera in order to maintain the color temperature, since the color temperature of objects changes when the level of lighting is controlled. Two sets of visual stimuli, consisting of high and low levels of brightness, were prepared, with 30 pictures in each set.
Fig. 1 Procedure for capturing photos at several levels of brightness
Audio instructions were added to 14 pictures in each set, for a total of 28 pictures, in order to evoke mental activity. The instructions asked, for example, the number of people in a picture or the name of a flower.
2.2 Experimental Method of Observing the Influence of Brightness

As the experimental setup in Fig. 2 shows, the video clips were presented on a 37-inch display positioned 1.5 m away from the subject. Viewers wore pupil measuring equipment that captured images of pupil sizes and eye movement on two synchronized VTRs. Viewing conditions such as viewing angles were considered in order to observe eye movement. Subjects were instructed in advance to answer questions they were asked while viewing the images, and their responses were recorded simultaneously with the images of their pupils. The duration of image presentation when instructions were not given was 10 s; when instructions were given (5 s after images were shown), the total time was 15 s. A blue screen was inserted between pictures to prevent any influence on subsequent images. Mean pupil sizes were evaluated over the entire 10 s for pictures viewed without instructions, and over the period after the oral instructions for pictures with instructions. Each video clip was also presented at two levels of brightness, so that pupil sizes were measured at four levels of brightness: 12.1, 37.6, 71.9, and 107.5 cd/m².
Fig. 2 Experimental setup for observing pupil size during the viewing of pictures
In addition, the brightness of the blue septum image varied between two levels (20.9 and 42.6 cd/m²) as a reference. The subjects were 6 male university students.
2.3 Pupillary Change with Levels of Picture Brightness

2.3.1 Relationship Between Pupil Size and Brightness Levels
Mean pupil sizes are summarized in Fig. 3, which shows pupil sizes at the 4 brightness levels and under the two instructional conditions. Observed pupil sizes for each subject are standardized using the overall mean of pupil size, and the result is called the relative pupil size. In the figure, the solid line represents the condition with oral instructions, and the dotted line represents the condition of simply observing the pictures. The relationship between the level of brightness and pupil size suggests that pupil size gradually decreases with brightness level, though with oral instructions pupil sizes increase at every brightness level. The increase in pupil size when audio instruction is given may reflect the viewer's level of mental activity, so that the dilation rate indicates the evoked level of the viewer's mental activity. The dilation rates are almost the same across the 4 levels of brightness. A statistical test shows that there are significant differences in pupil sizes when instructions are given or not given (Welch procedure, 12.1 cd/m²: t = 5.60, df = 123.04, p < 0.01; 37.6 cd/m²: t = 4.65, df = 150.88, p < 0.01; 71.9 cd/m²: t = 6.18, df = 130.39, p < 0.01; 107.5 cd/m²: t = 4.05, df = 150.13, p < 0.01).
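For readers who wish to reproduce this kind of comparison, a Welch test between the instruction and no-instruction conditions at one brightness level can be computed as in the sketch below. The array names and placeholder data are hypothetical; this only illustrates the procedure and is not the authors' original analysis code.

```python
# Welch's t-test (unequal variances) between relative pupil sizes observed
# with and without oral instructions at a single brightness level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
with_instr = 1.05 + 0.08 * rng.standard_normal(80)     # placeholder samples
without_instr = 0.98 + 0.08 * rng.standard_normal(80)  # placeholder samples

t, p = stats.ttest_ind(with_instr, without_instr, equal_var=False)  # Welch procedure
print(f"t = {t:.2f}, p = {p:.4f}")
```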
2.3.2 Results of ANOVA
The contribution of the two factors to pupillary changes is examined using two-way ANOVA (analysis of variance). The results are summarized in Table 1.
Fig. 3 Pupillary changes due to changes in brightness and the addition of oral instruction
Table 1 Two-way ANOVA of mental activity and levels of brightness using the experimental data

Source            df    SS      V      F
Mental activity    1    0.30    0.30   10.5**
Brightness         3    1.24    0.41   14.3**
Interaction        3    0.03    0.01   0.3
Residual          40    1.15    0.03
Total             47    2.72    0.06
**: p < 0.01
The factor for mental activity consists of two levels (instructions given and not given), and the factor for brightness consists of the four mean levels of brightness of the displayed pictures. The results of the F-test show that both factors are significant (p < 0.01), while their interaction is not significant, since its F-value is 0.3. Therefore, the two factors influence pupillary changes independently. The results suggest the possibility that pupil size changes due to mental activity may be extracted when the influence of the brightness of the image being viewed is taken into account.
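As an illustration of this analysis, a two-way ANOVA of the two factors can be run with standard statistical software. The sketch below uses statsmodels on a hypothetical long-format table; the column names and synthetic data are assumptions, not the authors' original data or code.

```python
# Two-way ANOVA: relative pupil size ~ mental activity x brightness level.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
mental = np.repeat(["instruction", "no_instruction"], 24)
brightness = np.tile(np.repeat([12.1, 37.6, 71.9, 107.5], 6), 2)
pupil = (1.0 + 0.1 * (mental == "instruction") - 0.002 * brightness
         + 0.05 * rng.standard_normal(48))            # synthetic relative pupil sizes

df = pd.DataFrame({"pupil": pupil, "mental": mental, "brightness": brightness})
model = ols("pupil ~ C(mental) * C(brightness)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # SS, df, F and p for both factors and their interaction
```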
3 Pupillary Change in Response to Brightness Levels

Section 2 above showed that the factors of mental activity and level of brightness influence pupillary changes practically independently. In this section, the behavior of pupillary change in response to brightness is examined using the following experiment.
3.1 Experimental Procedure for Pupillary Change Due to Brightness

As the level of brightness of the TV display could not be controlled step-wise during the previous experiment, another experiment was prepared, as shown in Fig. 4, in order to control the brightness of the display [9]. A dark background environment was produced using a black velvet curtain which prevented outside light from entering. Three 37-inch panels (in the same shape as the video monitor), colored black, gray, and white, were arranged on a black pedestal 50 cm in front of the background. These panels were illuminated by two halogen lamps with adjustable levels of brightness. Since the panel colors were black, gray, and white, only the brightness level changed, so that neither chroma (saturation) nor color phase (hue) was influenced. The brightness level was varied between 10 and 200 cd/m², which is within the range of brightness of the video monitor. Each pattern was presented to subjects for 30 s, and mean pupil size was measured for 10 s in the middle of the presentation. The subjects were 7 male university students.
Fig. 4 Experimental setup for the observation of pupil size during changes in brightness
3.2 Results of Observation of Pupillary Change According to Brightness Level

The relationship between the level of brightness and relative pupil size is summarized in Fig. 5. In the figure, open circles represent experimental observations. As with Fig. 3, pupil size decreases with the level of brightness. If the function describing the relationship is defined, standardized pupil sizes for any level of brightness can be calculated. An appropriate function was derived using the results of the experiment. Four types of functions were hypothesized as monotonically decreasing functions, as shown in Fig. 5. Optimization of each function was conducted using the differential errors (δ), and the summations (Σδ²) of the experimental observations
Fig. 5 Pupillary change using levels of brightness and an estimating formula
Table 2 Extracted function formulae and the sums of their square errors

No.   Extracted formula                                SSE (Σδ²)
(1)   f(x) = 219.04/(x + 175.34) + 0.12                0.000069
(2)   f(x) = 1.1 × 10⁻⁵·x² + 5.4 × 10⁻³·x + 1.09       0.00042
(3)   f(x) = exp(1239.92·x)^0.75 + 0.75                0.44
(4)   f(x) = √(0.0031·x + 2.9 × 10⁻¹⁰) + 1.23          0.00013

x: Brightness (cd/m²)
and the calculated values were computed for each function. The hypothesized functions are the following four formulas:

(1) f(x) = a/(x + b) + c
(2) f(x) = a·x² + b·x + c
(3) f(x) = exp(b·x)^a + c
(4) f(x) = √(a·x + b) + c
The formula parameters and the summation of the differential errors are shown in Table 2. The smallest error is obtained with Formula (1), and the derived function is as follows:

f(x) = 219.04/(x + 175.34) + 0.12
The derived formula is consistent with the results of a previous study, in which pupil size change was inversely proportional to the level of brightness [7]. The formula is illustrated as a solid line in Fig. 5, and it matches the experimental observations. In this chapter, the pupil size calculated using the formula is called the standardized pupil size.
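The fitting of the candidate functions can be reproduced with an ordinary least-squares routine. The sketch below fits the hyperbolic form of Formula (1) with SciPy and reports the sum of squared errors; the observation values are hypothetical stand-ins for the measured (brightness, relative pupil size) pairs, not the authors' data.

```python
# Least-squares fit of formula (1), f(x) = a/(x + b) + c, and its SSE.
import numpy as np
from scipy.optimize import curve_fit

def formula1(x, a, b, c):
    return a / (x + b) + c

# Hypothetical observations over the 10-200 cd/m^2 range used in the experiment.
luminance = np.array([10.0, 25.0, 50.0, 75.0, 100.0, 150.0, 200.0])
pupil = np.array([1.30, 1.22, 1.10, 1.00, 0.92, 0.80, 0.72])

params, _ = curve_fit(formula1, luminance, pupil, p0=(200.0, 150.0, 0.1))
sse = np.sum((pupil - formula1(luminance, *params)) ** 2)
print("a, b, c =", params, "  SSE =", sse)
```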
Fig. 6 Relationship between pupil size and brightness of presented pictures
3.3 Procedure for Correcting Pupil Size

An appropriate procedure was developed in order to apply the formula from the section above to the experimental observations in Sect. 2. Using the formula, standardized pupil sizes can be calculated for any level of brightness, including the four levels of brightness at which pupil sizes were measured during the experiment. There are several possible procedures to compensate for the influence of brightness on measured pupil sizes. In this section, the ratio of measured pupil size to standardized pupil size is employed as the compensation procedure, since the amount of compensation depends on the level of brightness. The procedure is summarized in the following equation:

Compensated pupil size = Measured pupil size / Standardized pupil size

Here, the standardized pupil size is the pupil size in response to the level of brightness, calculated using the extracted formula. The results of compensating the data in Fig. 3 are summarized in Fig. 6. As Fig. 6 shows, mean pupil sizes across the levels of brightness are almost flat. As in the previous analysis, two-way ANOVA was applied to the compensated sizes, and the results are summarized in Table 3. In the results of the F-test, the factor for mental activity is significant (p < 0.01), while the factor for brightness and the interaction between the factors are not significant. These results provide evidence that the influence of brightness on pupillary change was reduced by the compensation procedure. Therefore, compensated pupil size can be an index of mental activity during the viewing of images.
Table 3 Two-way ANOVA of mental activity and levels of brightness for corrected data

Source            df    SS      V      F
Mental activity    1    0.29    0.29   10.5**
Brightness         3    0.06    0.02   0.8
Interaction        3    0.01    0.00   0.1
Residual          40    1.12    0.03
Total             47    1.48    0.03
**: p < 0.01
4 Pupillary Changes in Response to Picture Content

The results of Sect. 3 suggest the possibility of evaluating mental activity during the viewing of pictures using compensated pupil sizes. In this section, the feasibility of doing so is confirmed in another experiment. During the experiment, a set of pictures was presented to subjects twice. The first set was presented at the same level of brightness, and the second set was presented at five varying levels of brightness. Pupil responses to the two sets are compared using the compensation procedure.
4.1 Assessment Procedure

In order to measure mental activity during the viewing of pictures, 25 pictures were scanned and their brightness was set at a mean of 110 cd/m². They were classified randomly into 5 groups, and their brightness was controlled at 5 levels: 60, 85, 110, 135, and 160 cd/m². The pictures were presented to subjects in two sets using different sequences. The sets of pictures contain the following three types of images.

(1) Snapshots: a family meal, etc. (11 photos)
(2) Posters: portraits of actresses (5 photos)
(3) Landscape photographs: natural scenes (9 photos)

The classification of pictures into the five groups was carefully conducted to ensure that each group contained the same types of content as the other groups. The presentation procedure is shown in Fig. 7. The first set, which consists of images with the same level of brightness, was shown to 5 male subjects using a 37-inch video monitor, and the second set, with 5 levels of brightness, was presented after a 1-hour break. All pictures were presented for 10 s, with no septum between images. Pupil size during the last 7 s was evaluated, as pupil size might have been influenced by the brightness of the picture during the first 3 s of viewing. Relative pupil sizes for each subject were evaluated. In order to prevent a primacy effect on pupillary reaction to each set of pictures, dummy content was used for the first two pictures of each set, which were not included in the evaluation. The dummy images contained content similar to the target pictures.
Fig. 7 Presentation procedure for two sets of pictures
4.2 Results of Assessment

Pupil sizes for each photo in the two sets are summarized in Table 4, which consists of the constant brightness condition of the first set and the 5-brightness-level condition of the second set, both before and after compensation. In Table 4, picture numbers are arranged by pupil size. The larger pupil sizes may reflect the level of brightness or the level of the viewer's mental activity. The picture numbers indicate the category of the picture. Using the procedure described in Sect. 3 above, pupil sizes were compensated, and the results are also summarized in Table 4. In a comparison of the rankings of pupil sizes across the three conditions in the table, the ranked order of compensated pupil sizes fits most closely with the rank order of pupil sizes when images were presented at a constant level of brightness. In order to evaluate the relationship of ranked order between the three conditions, both Pearson and Spearman correlation coefficients were calculated. If the brightness compensation is appropriate, the correlation coefficient approaches 1. These correlation coefficients and their levels of significance are summarized in Table 5. The number of photos (N) is indicated after each subject number (Subjects 1–5), as there are numerous photos for each subject. Mean pupil sizes across subjects are summarized in Fig. 8. The horizontal axis indicates pupil sizes at the same level of brightness, and the vertical axis indicates compensated pupil sizes. The correlation coefficient is r = 0.68, as is also shown in Table 5.
Table 4 Comparison of pupil sizes between experimental conditions

Constant brightness      5 brightness levels (exp.)   5 brightness levels (comp.)
Pic No.   Pupil size     Pic No.   Pupil size         Pic No.   Pupil size
L 6       1.196          P 9       1.195              L 6       1.189
P 13      1.141          L 23      1.134              S 21      1.155
S 10      1.107          S 8       1.124              P 13      1.151
P 12      1.105          L 17      1.122              S 19      1.086
S 8       1.103          S 5       1.105              S 8       1.062
S 14      1.099          P 3       1.096              S 4       1.044
S 16      1.061          L 15      1.095              P 20      1.043
S 1       1.048          S 1       1.089              P 3       1.036
P 9       1.044          L 24      1.082              P 9       1.034
P 3       1.029          L 6       1.081              S 1       1.029
S 21      1.025          S 19      1.061              P 12      1.027
L 25      1.021          P 13      1.046              S 11      1.022
S 11      1.016          S 4       1.020              S 10      1.012
S 19      1.005          P 20      1.019              L 18      1.007
P 20      0.991          S 16      0.988              S 14      1.001
S 5       0.984          S 21      0.983              L 23      0.981
S 4       0.983          L 22      0.941              L 17      0.970
L 23      0.983          S 11      0.929              L 22      0.964
L 15      0.972          S 14      0.910              S 5       0.956
L 18      0.941          P 12      0.874              L 15      0.947
L 17      0.892          S 10      0.862              L 24      0.936
L 22      0.833          L 7       0.861              S 16      0.934
L 7       0.831          L 18      0.857              L 25      0.858
L 2       0.816          L 2       0.797              L 2       0.816
L 24      0.774          L 25      0.730              L 7       0.814

S: (1) Snapshot, P: (2) Portrait, L: (3) Landscape

Fig. 8 Relationship between relative pupil sizes at a constant brightness level and corrected pupil sizes at 5 brightness levels
Table 5 Correlation coefficients for before and after compensation

                          Correlation coefficient (r)     Rank correlation coefficient (r)
                          (significance p)                (significance p)
                          Before         After            Before         After
Overall (N = 125)         0.19 (0.02)    0.51 (0.00)      0.17 (0.03)    0.51 (0.00)
Subject 1 (N = 25)        0.28 (0.09)    0.75 (0.00)      0.28 (0.09)    0.75 (0.00)
Subject 2 (N = 25)        0.22 (0.14)    0.56 (0.00)      0.22 (0.15)    0.58 (0.00)
Subject 3 (N = 25)        0.28 (0.09)    0.58 (0.00)      0.14 (0.25)    0.45 (0.01)
Subject 4 (N = 25)        −0.12 (0.28)   0.26 (0.11)      −0.07 (0.38)   0.31 (0.06)
Subject 5 (N = 25)        0.27 (0.24)    0.49 (0.01)      0.27 (0.10)    0.44 (0.00)
Averaged size (N = 25)    0.15 (0.24)    0.68 (0.01)      0.07 (0.10)    0.60 (0.00)
Compared with the correlation coefficient of r = 0.15 obtained using the uncompensated experimental pupil sizes, the compensated index appears much more appropriate. In addition, the probability of significance is improved. Furthermore, the rank correlation coefficient improves from r = 0.07 for the experimental data to r = 0.60 for the compensated data. This suggests that the rank order of compensated pupil sizes approaches the rank order of pupil sizes obtained when photos were presented under the same brightness condition. Coefficients for most subjects, and for all subjects overall, show the same tendency. As a result, there is no correlation between pupil sizes measured at the same level of brightness and those measured at 5 varying levels of brightness, though a significant relationship appears when pupil sizes are compensated using pictures with a controlled level of brightness. The same tendency can be observed with the ranked order of pupil sizes. Though there are some deviations between subjects, correlation coefficients for all subjects overall, and for the mean data of subjects, improved. Given this evidence, compensation for the brightness of images provides a procedure that facilitates the evaluation of the content of pictures using measurements of pupillary change.
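Both coefficients reported in Table 5 can be computed directly with SciPy, as in the sketch below; the two arrays stand in for the constant-brightness and compensated pupil sizes of the 25 pictures and are placeholders only.

```python
# Pearson and Spearman (rank) correlations between pupil sizes at a constant
# brightness level and compensated pupil sizes at 5 brightness levels.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
constant = 1.0 + 0.1 * rng.standard_normal(25)            # placeholder: constant-brightness sizes
compensated = constant + 0.05 * rng.standard_normal(25)   # placeholder: compensated sizes

r, p_r = stats.pearsonr(constant, compensated)
rho, p_rho = stats.spearmanr(constant, compensated)
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}), Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```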
5 Summary

In order to evaluate viewers' impressions of pictures intended for use as educational materials, pupil size due to mental activity was examined by compensating for the influence of the level of brightness on pupillary changes. As a result, the following points were confirmed.
1. Though both the viewer's mental activity and the level of brightness of a video display affect pupil size, pupil dilation during mental activity is almost constant across all levels of brightness.
2. The two factors of mental activity and brightness level affected pupillary change independently, according to the results of ANOVA.
3. An optimized functional formula was derived from the measurement of pupil size in response to several levels of brightness. The observed pupil sizes can be compensated using the derived function and the procedure that was developed. In an ANOVA of the compensated results, the factor of the level of brightness was not significant.
4. Using the procedures above, it was confirmed that viewers' mental activity for photos displayed on a video monitor could be evaluated using pupillary responses.

The feasibility of evaluating viewers' impressions of pictures, movies, and TV programs using pupillary changes will be the subject of our further study.
References

1. Kouzuki S (1986) A study of the psychophysiological reactions of handicapped children to television. Jpn J Educ Technol 10:31–42
2. Hess EH (1965) Attitude and pupil size. Sci Am 212:46–54
3. Otsuka R (1966) Doukou. In: Hagiwara A (ed) Me no Seirigaku. Igaku Syoin, Tokyo, pp 257–296
4. Nakayama M, Maesako T, Shimizu Y (1988) Psychological changes of pupil size in response to visual patterns and audio instructions. J Sci Educ Jpn 12:90–97
5. Nakayama M, Yasuike I, Shimizu Y (1989) Pupil size changing by pattern brightness and human activities. Technical Report ET89-35, IEICE
6. Hess EH, Beaver PW, Shrout PE (1975) Brightness contrast effects in a pupillometric experiment. Percept Psychophys 18:125–127
7. Toida N, Uchizono K (eds) (1965) Shin Seirigaku, vol 1. Igaku Shoin, Tokyo
8. Shimizu Y, Kondo S, Maesako T, Kumagai R (1987) Measuring pupil size in humans – reactions to emotional states. Jpn J Educ Technol 11:25–33
9. Tasaki K, Oyama T, Hiwatashi K (eds) (1979) Shikaku Jouhou Syori-Seirigaku, Shinrigaku, Seitai Kougaku. Asakura Shoten, Tokyo
Pupil Reaction Model Using a Neural Network for Brightness Change Shigeyoshi Asano, Ikki Yasuike, Minoru Nakayama, and Yasutaka Shimizu
Abstract Pupil reaction models for brightness change are developed in order to introduce emotional pupillary changes into the evaluation of videos. The models are fitted to experimentally observed pupil reactions to temporal changes in brightness, using a linear model and a layered neural network model. Their performance in reproducing the training data and the possibility of applying these models to short video clips are evaluated.
Originally published in IEICE Trans A, Vol. J77-A, No. 5, pp. 794–801, 1994, May.
S. Asano · I. Yasuike
Tokyo, Japan
M. Nakayama (B)
Information and Communications Engineering, Tokyo Institute of Technology, Tokyo, Japan
e-mail: [email protected]
Y. Shimizu
Tokyo Institute of Technology, Tokyo, Japan
© Springer Nature Singapore Pte Ltd. 2021
M. Nakayama and Y. Shimizu (eds.), Pupil Reactions in Response to Human Mental Activity, Behaviormetrics: Quantitative Approaches to Human Behavior 6, https://doi.org/10.1007/978-981-16-1722-5_2

1 Introduction

The eye pupil is influenced by changes in brightness, which is a well-known phenomenon called the pupil light reflex. In addition, the human pupil responds according to the level of interest in things observed, as well as to mental activity such as the use of short-term memory. In most daily observations, the factor of mental activity may be of greater influence than pupil dilation due to temporal changes in brightness, but the factor should not be ignored. By using this phenomenon, pupil size can be used as an index of the level of interest in videos or educational TV programs. Therefore, a technique to reduce the factors of pupillary change due to brightness should be considered for use in evaluating educational videos and images using pupillary change. The authors have already developed a procedure to reduce the influence of brightness that occurs when pictures are presented for viewing, the effectiveness of which has been evaluated [1, 2].
Since the brightness of a display changes temporally while viewing a video or similar programs, the technique used with still pictures cannot be applied directly. The development of a new reaction model which can represent pupil behavior during temporal brightness changes is required. In a related work, pupil light reflex behavior was analyzed using a dynamic system approach involving responses to light pulses from a flashlight. The size of the pupil reaction was approximated using a third-order transfer function [3, 4]. However, this impulse reaction model cannot be applied to temporal changes in brightness; a model that can respond to continuous changes in brightness is required.

In order to resolve the issues mentioned above, pupil reactions were first observed using videos that consisted of temporal changes in the brightness of irradiated square gray panels, and then possible models for pupil reaction to changes in brightness were created. As a result, a model based on the linear reaction of the pupil [5] and a nonlinear model based on a layered neural network [6, 7] were developed. The performance of the models was evaluated using actual videos which were presented to some of the viewers. The compensation performance for viewers' pupil responses was examined in order to measure viewers' interest. The internal representation of the neural network, which was trained using viewers' observed pupillary responses, suggested the possibility of representing latent behavioral activity.
2 Experimental Method

2.1 Visual Stimuli

The visual images presented consisted of plain gray images at two levels of brightness: the high level was 80 cd/m² and the low level was 10 cd/m². These values correspond to the range of brightness of actual TV programs on a TV set. The images were created using gray paper and a video camera with a controllable iris level. The visual stimuli were generated by editing these images into square waveforms, as shown in Fig. 1. The waveforms consisted of 15 cycles of switching between the two levels of brightness, with the duration of each cycle controlled in four steps, where the periodical duration T was set to T = 4, 3, 2, and 1 s. Another set of stimuli, consisting of triangular waveforms generated from images stepped in 10 cd/m² increments from low to high or from high to low levels of brightness, was presented in four steps, where the periodical duration T was set to T = 10.7, 5.3, 2.2, and 1.1 s. The videos produced were presented to subjects in order to observe pupil responses. The experimental setup is shown in Fig. 2. The visual stimuli were presented on a 37-inch video monitor, and subjects sat 1.5 m from the monitor. The ability of the video monitor to reproduce the changes in brightness necessary was confirmed in advance. Subjects were 7 male university students who were around 22 years old, and all visual stimuli were presented to each of them twice.
Fig. 1 Brightness stimulus patterns

Fig. 2 Experimental setup
Fig. 3 Pupillary changes in response to brightness change (T = 2.0 s)
Fig. 4 Pupillary changes in response to brightness change (T = 1.0 s)
2.2 Observed Records

An example of the pupil response for the square waveform at T = 2.0 s is displayed in Fig. 3. The horizontal axis represents time in seconds, and the vertical axis represents the level of brightness and the relative pupil size. Here, the relative pupil size is normalized using the overall average of all pupil sizes recorded at 30 Hz for each participant. In the figure, the fine line represents changes in brightness, and the bold blue line represents pupil size. As shown in the figure, the pupil responds periodically, dilating when the level of brightness is low and constricting when the level of brightness is high. However, as the smallest and largest sizes appear after a change in brightness, some response delay is observed. When the cycle of brightness changes at T = 1.0 s, the pupil did not respond to the change in brightness immediately (Fig. 4). These results confirm the response delay of pupillary size change.
Fig. 5 Pupil reaction model
3 Pupil Reaction Model

When pupil size is used to measure viewers' responses to videos, the influence of pupillary reactions to changes in the brightness of the videos should be considered, as the previous section shows. Therefore, the influence of changes in brightness during videos should be removed from the pupil response measurements as much as possible. In the previous study, the two factors of pupillary change, mental activity and the level of brightness of the visual stimuli, affected pupil size independently [1]. This hypothesis for the assessment of mental activity during video viewing is illustrated in Fig. 5. Here, the argument f refers to the frequency content of the brightness input. The model is based on a transfer function H(f) for pupil responses to changes in brightness, where the observed pupil size may consist of a linear summation of mental activity and the output of the transfer function. When the function H(f) is derived from the experimental observation of pupil responses, the level of mental activity can be estimated, since the brightness input of the visual stimuli is observed and measured. Possible models of H(f) are discussed in the following sections.
3.1 Linear Model

A linear analysis technique was applied to the 8 patterns of pupillary changes in response to the level of brightness [5], as shown in Fig. 3. From the results, a linear response model can be created in the form of a transfer function using the level of brightness as input and the pupil size as output. The linear model, as a transfer function H(f), is defined using the characteristics of the amplitude and phase responses for the frequencies of pupillary change due to changes in brightness. The result can be written using the following equations:

|H(f)| = 5.75 × 10⁻³ · exp(−1.02 · f)
∠H(f) = −1.39π · f + π        (1)
Fig. 6 Amplitude characteristics of pupil size in response to frequency of brightness
The amplitude characteristic, which was extracted from the experimental responses of the pupils, is shown in Fig. 6. In this figure, the horizontal axis represents the frequency of the input, or change in brightness, and the vertical axis represents the relative pupil size. The points represent the values calculated from the observed pupil responses in the experiment, and the solid line represents the characteristics given by Eq. (1) above. Figure 6 suggests that the pupil response acts as a low-pass filter for frequencies less than 3 Hz, and that the response to the change in brightness is delayed by around 0.5 s.
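One way to apply a transfer function of this form is in the frequency domain: transform the brightness record, multiply each frequency bin by the complex gain |H(f)|·exp(j∠H(f)), and transform back. The sketch below does this with NumPy for a 30 Hz brightness signal; it only illustrates Eq. (1) and is not the authors' original implementation.

```python
# Apply the linear model H(f) of Eq. (1) to a brightness time series (30 Hz)
# to estimate the brightness-driven component of pupil size.
import numpy as np

fs = 30.0                                             # sampling rate (video frames per second)
t = np.arange(0, 10, 1 / fs)
brightness = np.where((t % 2.0) < 1.0, 80.0, 10.0)    # square wave, T = 2 s, 10-80 cd/m^2

freqs = np.fft.rfftfreq(brightness.size, d=1 / fs)
gain = 5.75e-3 * np.exp(-1.02 * freqs)                # |H(f)|
phase = -1.39 * np.pi * freqs + np.pi                 # angle of H(f)
H = gain * np.exp(1j * phase)

pupil_from_brightness = np.fft.irfft(np.fft.rfft(brightness) * H, n=brightness.size)
# Under the additive model of Fig. 5, subtracting this component from an observed
# pupil record would leave an estimate of the psychologically driven change.
```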
3.2 Neural Network Model

Pupillary change due to brightness suggests a nonlinear reaction [8]. In order to create a nonlinear model using the sets of experimental data, a neural network (NN) model was employed. The NN model consists of a 3-layer network, as shown in Fig. 7. The model feeds the brightness values to the input layer as a time series, and their weighted summations are converted into pupil sizes at the output layer via a hidden layer. To account for the delayed response of the pupil to changes in brightness, a weight balance across the time series is added to the input layer in units known as "Delay Neurons", which contain discrete time constants [9]. As in Fig. 3, brightness and pupil size are synchronized at the input and output layers as a time series, and the training procedure assigns brightness values and pupil sizes to individual 1/30 s video frames. The procedure is based on the backward propagation of errors, which is known as backpropagation [6]. The training data, as in Fig. 3, consist of 8 sets of pupillary reactions to changes in brightness: 4 square and 4 triangular waveforms, as shown in Sect. 2. Though pupillary size changes contain some noise factors, such as pupillary noise, these are independent of the factors of mental activity.
Fig. 7 Neural network model of brightness
In order to suppress these noises, smoothing was applied in advance using a moving average of 5 data points. Each set of training data is 10 s in length. One condition (T = 10.7 s) could not contain the entire cycle, but another result shows that this scarcely influences performance. The number of units in the NN model is optimized using a calculation experiment that will be described later.
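A rough modern equivalent of this delay-line network can be sketched with a sliding window of the most recent 40 brightness frames (about 1.3 s at 30 Hz) fed to a small multilayer perceptron with 2 hidden units. The sketch below uses scikit-learn instead of the authors' original backpropagation code, and the brightness and pupil records are synthetic placeholders.

```python
# Delay-line neural network sketch: the last 40 brightness frames (30 Hz)
# predict the current relative pupil size through a 2-unit hidden layer.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 40  # number of input units (frames of brightness history)

def sliding_windows(brightness, pupil, window=WINDOW):
    """Build (brightness window -> current pupil size) training pairs."""
    X = np.stack([brightness[i - window:i] for i in range(window, len(brightness))])
    y = pupil[window:]
    return X, y

# Synthetic placeholder records: a square-wave brightness signal and a smoothed,
# delayed pupil trace standing in for the measured response.
fs, dur = 30, 40
t = np.arange(0, dur, 1 / fs)
brightness = np.where((t % 2.0) < 1.0, 80.0, 10.0)
pupil = 1.1 - 0.004 * np.roll(np.convolve(brightness, np.ones(9) / 9, mode="same"), 10)

X, y = sliding_windows(brightness / 100.0, pupil)     # scale inputs roughly to [0, 1]
model = MLPRegressor(hidden_layer_sizes=(2,), activation="logistic",
                     max_iter=5000, random_state=0).fit(X, y)
print("training RMS error:", np.sqrt(np.mean((model.predict(X) - y) ** 2)))
```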
4 Evaluation of Model Performance

In this section, the accuracy of the prediction of pupil sizes is evaluated using the trained NN model. The model is designed to generate pupillary changes from temporal changes in brightness. Performance on novel data will be discussed in Sect. 6.
4.1 Temporal Changes in Pupil Size

As an example, pupil sizes generated from changes in brightness are illustrated for the square waveform (T = 2.0 s) in Fig. 8 and the triangular waveform (T = 5.3 s) in Fig. 9. The horizontal axis represents time (seconds), and the vertical axis represents relative pupil size. Three types of pupil sizes are illustrated: the experimental data, the pupil size estimated using the linear model, and the pupil size estimated using the trained NN model. As described in the following section, this model was optimized so that the number of input layer units is 40 and the number of hidden layer units is 2. The output of the NN model appears to be more accurate than the values estimated using the linear model.
Fig. 8 Comparison of pupil responses between a linear model and a neural network model for a square wave

Fig. 9 Comparison of pupil responses between a linear model and a neural network model for a triangular wave
4.2 Mean Square Errors

The estimation errors, which are the differences between the reproduced and observed sizes, are compared between the linear model and the NN model, as shown in Fig. 10. In the figure, the horizontal axis represents errors of the linear model, and the vertical axis represents errors of the NN model. Errors for the 8 sets are plotted using a log scale. All plots are located under the diagonal line, so that the errors of the linear model are larger than the errors of the NN model. The reproductions using the NN model therefore exceed those using the linear model. In order to evaluate reproduction performance, correlation coefficients of pupil sizes between the reproduction and the experiment were calculated, as shown in Fig. 11. All coefficients using the NN model are larger than the coefficients using the linear model. Therefore, the overall performance for the reproduction of pupil sizes using the trained NN model is better than with the linear model.
Fig. 10 Comparison in mean square errors

Fig. 11 Comparison of correlation coefficients between the two methods
5 Characteristics of the Model Developed

While layered neural networks have been applied to various data processing issues in order to obtain better performance, both the network architecture and its connecting structure are also examined in order to estimate appropriate processing techniques [10]. In this section, the optimized model is extracted by controlling the numbers of input units and hidden layer units, and the internal representation of the model during data processing is examined.
Fig. 12 Change in RMS for network structures
5.1 Optimization of the Number of Units

Evaluation of the performance of the trained model was conducted using the root mean square (RMS) error while the numbers of input layer units and hidden layer units were controlled. In order to optimize the network architecture, a sufficient number of iterations was provided for each network condition, controlling the number of input layer units between 10 and 50 in steps of 5 units, and the number of hidden layer units between 2 and 30 in steps of 2 units, in addition to 1. The simulation was conducted 100 times for each condition, and performance was evaluated as the mean error across these runs. The results are summarized in Fig. 12. The horizontal axis represents the number of input units, the depth axis represents the number of hidden units, and the vertical axis represents the sum of the RMS errors for all training patterns. For the number of input units, the errors are high when the number of units is less than 20, and the errors are lowest between 25 and 40 units. In addition, the errors increase when the number of input units is over 40. A blown-up view in Fig. 13 shows the optimized model, consisting of 40 input units and 2 hidden units. In this model, a series of brightness values is fed to the input layer, and the number of input units corresponds to the length of time sampled. The pupil response has a time delay of 0.5 s, so that the model may not respond to changes in brightness when the number of input units is less than the number of input layer units corresponding to the time delay (equivalent to 15 units). Model performance decreases when the number of input units is insufficient for the time delay. Also, pupil reaction to changes in brightness requires around 0.94 s [11] (equivalent to 28 units). Therefore, performance is generally better when the number of input units is between 25 and 40, because of the physiological phenomena mentioned above.
Fig. 13 RMS changes in network structures (blown-up)
However, training the NN model is not easy with over 40 input units due to the difficulty of calculations involving many local minima. In general, the training procedure often becomes trapped in a valley around a local minimum, and the training of the NN model is strongly influenced by the number of input units. As a result, the number of input units for the NN model should be set between 25 and 40, and the discussion which follows employs 40 units for the input layer. As for the hidden units, 2 units show the best performance, which decreases gradually as the number of hidden units increases. However, the influence of the number of hidden units is smaller than that of the number of input units. Even when the hidden layer was removed, performance did not improve.
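The unit-count optimization described here amounts to a grid search over window lengths and hidden-layer sizes. A compact sketch is shown below; it reuses the sliding_windows helper and the synthetic brightness and pupil records from the previous sketch and is illustrative only.

```python
# Grid search over the number of input units (window length) and hidden units,
# scoring each architecture by its RMS error on the training patterns.
import numpy as np
from sklearn.neural_network import MLPRegressor

best = None
for n_input in range(10, 51, 5):
    for n_hidden in (1, 2, 4, 8, 16, 30):
        X, y = sliding_windows(brightness / 100.0, pupil, window=n_input)
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="logistic",
                           max_iter=2000, random_state=0).fit(X, y)
        rms = np.sqrt(np.mean((net.predict(X) - y) ** 2))
        if best is None or rms < best[0]:
            best = (rms, n_input, n_hidden)
print("best (RMS, input units, hidden units):", best)
```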
5.2 Weight Distribution of the Input Layer

The input layer of the NN model employs a set of "Delay Neuron" units, and each of the 40 units has weighted connections to the hidden layer units, corresponding to the change in brightness over the time series. These weight distributions, calculated using the best model (40 input and 2 hidden units), are illustrated in Fig. 14. As the figure suggests, the weights are larger for units 5 through 15, namely 0.2–0.5 s before the current pupil size. The change in brightness during this time delay may affect the current pupil size. The time delay of the pupil reaction to a change in brightness is 0.2–0.5 s [4], which coincides with both the physiological evidence and the weight pattern of the model. When the number of input units was changed, the unit weights during the delay remained larger than the other unit weights.
Fig. 14 Weights distribution for input layer
5.3 Representation of Hidden Units

In order to illustrate the activities of the two hidden units, their outputs during reproduction of the training data are summarized in Fig. 15. The top panel shows an example of a square waveform of changes in brightness (T = 2 s), and the bottom panel shows an example of a triangular waveform of changes in brightness (T = 5.3 s). Their outputs correlate negatively, and their minimum and maximum peak values correspond. These characteristics were reproduced using various sets of training conditions. The mechanism of the pupil light reflex is summarized in Fig. 16. The retinal input of the brightness is transferred to both the "dilator" and "sphincter" muscles, while additional stimuli are combined with the brightness signal as it is conveyed through the central nervous system (CNS). The functions of the two hidden units may be to simulate these muscles or their neural transfer activities. In Fig. 16, hidden unit 1 may act as the "sphincter" muscle, and hidden unit 2 may act as the "dilator". If these metaphors are applied, the "sphincter" in this model outputs activity that is 8–9 times the level of the "dilator", which may reflect the phenomenon that pupil constriction depends mostly on the activity of the "sphincter". This discussion suggests the possibility that the internal representation of the NN model simulates pupillary change due to brightness, and may help in the understanding of the physiological behavior of the pupil.
6 Evaluation of Brightness Compensation Performance To evaluate whether the trained model can remove pupillary responses due to changes in brightness, two types of visual stimuli were applied.
[Fig. 15 Internal representation of the hidden layer: unit values of Hidden Unit 1, Hidden Unit 2, and Output Unit 1 over 0–7 s, for a square wave (T = 2 s) and a triangular wave (T = 5.3 s) of brightness change]
[Fig. 16 A model of pupil reaction to brightness change: brightness enters at the retina, passes through the CNS where other factors (including psychological factors) are added, and drives the sphincter and dilator muscles that determine pupil size, with feedback]
6.1 Compensation Due to Random Changes in Brightness During a Video A 3-minute experimental video consisting of a gray image was created, in which the brightness of the image changed randomly between 10 and 70 cd/m² in 10 cd/m² steps every second. Pupil response was measured under these conditions, and the measured pupil sizes were compensated using the pupil sizes estimated by the linear and NN pupil reaction models. If the compensation is appropriate, the influence of the brightness level should be removed.
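One plausible way to express this compensation step, assuming a ratio-based correction (dividing the measured size by the size the model attributes to brightness alone), is sketched below; the exact compensation procedure is the one defined earlier in the chapter, and the arrays and the 0.004 slope here are purely illustrative.

```python
import numpy as np

def compensate(measured, estimated):
    """Relative pupil size: measured size divided by the size the model
    attributes to brightness alone (assumed ratio-based compensation)."""
    return measured / estimated

# Placeholder arrays, one value per video frame (30 Hz, 3-minute clip).
frames = 30 * 180
rng = np.random.default_rng(0)
luminance = rng.choice(np.arange(10, 80, 10), size=frames // 30).repeat(30)  # cd/m^2, new level every second
measured = 1.2 - 0.004 * luminance + rng.normal(0, 0.02, frames)             # synthetic measurements
estimated = 1.2 - 0.004 * luminance                                          # stand-in for the model output

relative = compensate(measured, estimated)

# Mean relative pupil size per brightness level (cf. Fig. 17).
for level in np.unique(luminance):
    print(level, relative[luminance == level].mean().round(3))
```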
[Fig. 17 Mean relative pupil size across brightness levels (20–70 cd/m²) using a movie of a gray image: experimental measurement, linear model, and neural network model]
The results for pupil sizes across levels of brightness are summarized in Fig. 17. The horizontal axis represents the level of brightness, and the vertical axis represents relative pupil size. In this figure, three pupil size conditions are compared: experimentally measured size (cross marks), size compensated using the linear model (open blocks), and size compensated using the NN model (solid blocks). Means of the experimental measurements decrease as the level of brightness increases, since brightness influences the size of the pupil. However, the means of the pupil sizes compensated using both models are almost flat, and the effectiveness of these compensation procedures was confirmed using a one-way ANOVA test. Since compensation using the NN model produces a flatter result, as the figure shows, the NN model may provide the better solution.
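The effectiveness check itself is a one-way ANOVA with brightness level as the factor; a minimal, self-contained sketch of that test is shown below, using synthetic per-level samples in place of the experimental measurements. A non-significant brightness factor for the compensated data indicates successful compensation.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
levels = np.arange(20, 80, 10)  # cd/m^2

# Synthetic per-level samples of relative pupil size (stand-ins for Fig. 17):
# compensated sizes are flat across levels, uncompensated sizes decrease with brightness.
compensated = [1.0 + rng.normal(0, 0.03, 20) for _ in levels]
measured = [1.2 - 0.004 * lv + rng.normal(0, 0.03, 20) for lv in levels]

for label, groups in [("measured", measured), ("compensated", compensated)]:
    f_value, p_value = f_oneway(*groups)
    print(f"{label:>12}: F = {f_value:.2f}, p = {p_value:.4f}")
```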
6.2 Compensation Performance During a Video The results of the previous subsection show the possibility of compensating for brightness levels in a video. The technique was then applied to a 3-min "SUMO" wrestling video clip. In this experiment, the sampling duration for pupil size and image brightness is controlled by t, from 1 video frame (1/30 s) to 60 frames (1 s); both pupil sizes and brightness levels are averaged over consecutive durations of t. When t is set to a high value, compensation may be easier, since the smoothing effect suppresses pulsing changes. However, t should be small in order to evaluate changes in mental activity during the viewing of a video. The compensation effectiveness is evaluated using a one-way ANOVA with the level of brightness as the factor. If the brightness-level factor is not significant, the compensation can be confirmed. The results of the one-way ANOVA are summarized in Fig. 18. The horizontal axis represents the compensation duration t, and the vertical axis represents the F value. A dotted line indicates the level of significance ( p < 0.05). In the figure, "open block" indicates results using the linear model, and "solid block" indicates results using the NN model.
[Fig. 18 Change in F value using re-sampled duration: F-value versus re-sampled duration for the linear model and the neural network model]
… p > 0.05, and with model processed data: F(4, 15) = 0.82, p > 0.05. A similar tendency was confirmed using the two remaining video clips. The results confirm that both the experimental data and the processed data can be compensated for the influence of brightness using the procedure mentioned in Sect. 3 [3]. This result shows that the processed data has features equivalent to the observed data, and that the estimation of blink-artifact data may aid compensation for the influence of brightness. In addition, the factor of viewers' individual differences was examined using a two-way ANOVA with factors for brightness levels and for participating subjects.
[Fig. 12 The F-value changes for the source of display brightness in video program 1: F value versus sampling frame (1–10) for the experiment and model data, with the 5% level of significance marked]
However, the factor for these subjects is not significant in the results either before or after compensation. The effect of the sampling rate for pupil size was examined using the data measured for the TV commercials. With the sampling rate set to 5 Hz (averaging one sample over 6 frames, 6 × 1/30 s), compensation performance was evaluated. The results are mostly the same as those in Fig. 11: mean pupil sizes are influenced by the level of brightness before compensation, and this influence is reduced by compensation. However, the brightness-level factor for the experimental data is still significant after compensation (F(4, 15) = 3.09, p < 0.05). The data processed using the model is also influenced by the level of brightness before compensation (F(4, 15) = 5.28, p < 0.05), but the compensated data produced results that were not significant (F(4, 15) = 0.81, p > 0.05), even when the sampling rate was 5 Hz. The changing F-values of the one-way ANOVA for the clip of TV commercials are summarized in Fig. 12. The horizontal axis represents the sampling frame duration for averaged pupil sizes, and the vertical axis represents the F-value of the one-way ANOVA examining the contribution of brightness levels in the video clips. The dotted line shows the level of significance ( p = 0.05). As noted for Fig. 11, the results of experimental observation (solid circles) indicate that the brightness-level factor is not significant at the 10th frame, since the F-value is below the dotted line. The factor is significant, however, when the frame sampling duration is less than 6 frames (5 Hz). Also, the pupil sizes estimated using the processing model are almost always under the dotted line (not significant), even at Frame 1 (30 Hz) in the case of the clip of TV commercials. Furthermore, the brightness factor is not significant for the two other video clips when estimated using the processing model. The processing model for blinks
contributes to brightness compensation processing of pupil size during the viewing of video clips. In general, blink artifacts are removed from the pupil observations, but some factors of the artifact do in fact influence pupil size. The trained model can reduce the influence of blink artifacts on pupil size, because proper pupil sizes are estimated by the model for the periods of blinks. In summary, with regard to the results of brightness compensation using the three video clips, compensation can be conducted for every frame (sampling rate: 30 Hz) when the trained model is used. In comparison with the previous procedure, which was based on 10-frame periods (sampling rate: 3 Hz), this is an obvious improvement for the temporal observation of pupillary changes. The results show the effectiveness of the model in estimating pupil size during blinks and in reducing pupillary change noise.
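As a sketch of how curves such as those in Fig. 12 (and Fig. 18 earlier) can be produced, the fragment below averages pupil size and brightness over windows of 1 to 10 frames and runs a one-way ANOVA on brightness level for each window length. The binning of luminance into five levels and the synthetic inputs are illustrative assumptions, not the experimental data.

```python
import numpy as np
from scipy.stats import f_oneway

def block_average(x, frames):
    """Average consecutive samples over windows of `frames` frames (30 Hz video)."""
    usable = len(x) - len(x) % frames
    return x[:usable].reshape(-1, frames).mean(axis=1)

rng = np.random.default_rng(0)
n = 30 * 180                                                 # 3-minute clip at 30 Hz
luminance = rng.uniform(10, 70, n)                           # per-frame brightness (synthetic)
pupil = 1.2 - 0.004 * luminance + rng.normal(0, 0.05, n)     # compensated data would be flatter

for frames in range(1, 11):
    lum = block_average(luminance, frames)
    pup = block_average(pupil, frames)
    bins = np.digitize(lum, np.linspace(lum.min(), lum.max(), 6)[1:-1])  # five brightness levels
    groups = [pup[bins == b] for b in np.unique(bins)]
    f_value, _ = f_oneway(*groups)
    print(f"{frames:2d} frames: F = {f_value:.2f}")
```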
5 Summary In this chapter, a model for estimating pupil size during blinks has been developed using a three-layer neural network. The model was applied to pupil size compensation for brightness during the viewing of video clips, and overall performance was evaluated. During the assessment, the following results were obtained.
1. When a three-layered neural network is trained using pupillary reactions to changes in brightness, the possibility of reproducing pupillary changes and suppressing pupillary noise with the model was confirmed.
2. The trained model can predict pupil sizes when it is trained with a modified training data set consisting of artificial blink artifacts inserted into measurements of pupillary changes without blinks. In the trained model, some hidden units transfer the blink detection signal to the output unit.
3. When the trained model is employed, the number of missing pupil size measurements caused by blinks can be reduced. The processing model can contribute to the processing of pupillary changes for brightness compensation. As a result, the pre-processing model can extend the sampling rate of pupillary changes from 3 to 30 Hz.
The development of a more appropriate model, which precisely estimates and reduces pupillary noise and can also compensate for blink bursts, will be a subject of our further study.
References
1. Nakayama M, Yasuike I, Shimizu Y (1990) Pupil size changing by pattern brightness and pattern content. J Inst Telev Eng Jpn 44:288–293
2. Asano S, Yasuike I, Nakayama M, Shimizu Y (1994) Pupil reaction model with neural network for brightness change. IEICE Trans J77-A:794–801
3. Asano S, Nakayama M, Shimizu Y (1995) A neural-network-based eye pupil reaction model for use with television programs. Jpn J Educ Technol 18:61–70
4. Takahashi K, Tsukahara N, Toyama K, Hisada M, Tamura H (1976) Shinkei Kairo Sisutemu. Asakura Shoten, Tokyo
5. Tada H, Yamada F, Fukuda K (1991) Mabataki no Shinrigaku. Kitaouji Shobo, Kyoto
6. Yoshida H, Yana K, Okuyama F, Tokoro T (1993) Respiratory fluctuations in pupil diameter of the human eye. IEICE Trans J76-D-II:776–781
7. Asoh H (1988) Nyuuraru Nettowa-Ku Zyouhou Syori. Sangyo Tosho, Tokyo
8. Kosugi Y (1995) Shinkei Kairo Sisutemu. Korona Sha, Tokyo
9. Luo FL, Unbehauen R (1997) Applied neural networks for signal processing. Cambridge University Press, Cambridge, UK
10. Uto K, Kosugi Y (1998) Discussion on the ill-posed problems arising in the image-guided navigation system and the network realization based on the spline interpolation. IEICE Trans J81-D-II:361–369
11. Kurita T, Asoh H, Umeyama S, Akaho S, Hosomi A (1996) Influence of noises added to hidden units on learning of multi-layer perceptrons and structurization of networks. IEICE Trans J79-D-II:257–266
12. Hoshimiya N (1990) Seitai Kougaku. Shokodo, Tokyo
Estimation of Eye-Pupil Size During Blink by Support Vector Regression Minoru Nakayama
Abstract Pupillography can be an index of mental activity and sleepiness; however, blinks act as an artifact that prevents its measurement. A method of estimating pupil size from pupillary changes during blinks was developed using a support vector regression technique. Pupil responses to periodic changes in brightness were prepared, and appropriate pupil sizes during blinks were given as a set of training data. The performance of the trained estimation models was compared and an optimized model was obtained. An examination of this model revealed that its estimation performance was better than that of the estimation method using an MLP. This development helps in understanding the behavior of pupillary change and blink action.
1 Introduction Pupillography can be used as an index of mental activity and sleepiness [1, 2]. In particular, the eye sleepiness test, which is based on reading the frequency power spectrum, can often be applied to measure the degree of tiredness in clinical observations or in industrial engineering situations. Mental activity and sleepiness are based on high-level information processing, and pupil response alone cannot provide sufficient evidence of the process; pupillography can nevertheless be used as a measurable index to understand human behavior. Most methods of measuring pupil size are based upon image processing of the eye. Therefore, the "eye-blink" problem can affect measurements, as the eye is obscured by the eyelid during "blink periods". Blinks are usually treated as an artifact in temporal observations such as mean pupil sizes or results of frequency analysis [3, 4]. To extract the change in mental activity, temporal pupil size should be measured accurately, without blink artifacts. Originally published in "Modelling Natural Action Selection: Proceedings of an International Workshop", Editors, Joanna J. Bryson, Tony J. Prescott, and Anil K. Seth, pp. 121–125, 2005.
To reduce the influence of blinks on pupillary change, a model for predicting pupil size has been developed. This model may aid in understanding pupillary behavior or implicit action. An estimation method was developed using a three-layer perceptron, a kind of multi-layer perceptron (MLP) [3, 5]. The training data was prepared as a pair of temporal pupillary changes: one was the original measurement without blinks, and the other was modified by replacing some periods with artificial blinks. Here, artificial blinks are typical patterns of pupillary change during blinks. The MLP was trained to reproduce the original pupil size without blinks from pupillary changes with artificial blinks [3, 5]. This estimation method can be applied to various experimental pupil sizes; however, accuracy is often an issue. One possible reason might be the estimation process. The MLP with a sigmoidal activation function was applied to the estimation of the pupil response, based on the nonlinear model [6]. The activation function might not represent pupillary change sufficiently. An alternative to the MLP is the radial basis function (RBF) network [7, 8], which provides a smooth interpolating function by using basis functions such as the Gaussian function. This estimation issue suggests a kind of regression. Currently, support vector regression (SVR) offers a more robust representation through regularization and the extraction of a feature space, and it has been suggested that Gaussian kernels tend to yield good performance [9]. Another reason might be that the training data consists of artificial blinks. Due to the method of measuring pupil size, the correct size during a blink is never obtained, because the pupil is covered by the eyelid; therefore, it is not easy to prepare appropriate training data for making estimations. In this paper, a new estimation method, which combines an SVR technique with experimental pupillary changes, was developed to improve estimation accuracy and to observe the pupillary response. The purposes of this paper are as follows:
1. To prepare a training data set which consists of pupillary change with blink artifacts for developing the estimation method.
2. To develop an estimation model by use of a support vector regression technique, and to evaluate its estimation performance in comparison with other methods.
2 Method 2.1 Periodic Pupillary Response Estimating pupil size during a blink means providing a plausible pupil size from the surrounding temporal sizes. Therefore, the training data, as a prototype of the pupil response, consists of experimental pupil sizes during blinks together with plausible sizes. As already noted, it is not easy to measure the plausible size during blinks, so the size has to be obtained by estimation. In this paper, periodic pupillary responses were observed in order to extract
[Fig. 1 Light reflex pupillary change and training data (bright stimulus, T = 4 s): pupil size and brightness (cd/m²) over time, comparing the experimental measurement, the auto-correction, and the reference]
typical response patterns, because the pupil reacts accurately to light stimuli through the pupillary light reflex, despite blink artifacts and pupillary noise, and the overall change in response to the brightness change is easy to determine. The light reflex was applied to control pupillary change and to extract regularized responses; in a sense, observing the reflex reaction was not the main purpose of this experiment. The bright stimuli consisted of four square waves (T = 4, 3, 2, 1 s) and three triangular waves (T = 2.2, 5.3, 10.7 s) in the range of 10–80 cd/m². The duration of each stimulus was 40 s. These visual stimuli were presented on a 17-inch computer monitor. Three subjects (Subjects no. 1–3) who had normal visual acuity took part in this experiment. They were seated with their heads on a chin-rest positioned 50 cm from the monitor. Figure 1 illustrates the pupillary change in response to a bright square-wave stimulus (T = 4 s) over 3 s. The horizontal axis shows time, and the vertical axis shows pupil size and brightness change. The bold line shows brightness, and the series of "•" shows experimental measurements of pupil size. Pupil responses indicate light reflex reactions with time delays of approximately 0.2–0.5 s [10]. The figure shows that pupil size decreases gradually, with these time delays, after the change in brightness. There are two drops that are caused by blinks. The average blink rate in this experiment was 12.3 blinks per minute; usually, a subject blinks about 20 times per minute [11], so it seems that subjects suppressed blinks during the experiment. The pupillary responses to the stimuli were observed using an eye tracker (nac: EMR-8) with pupil size measuring capability. The pupil image is captured by a small camera placed between the display and the chin-rest. The camera does not prevent the subject from seeing the display, because it is located lower than the viewing level. The captured pupil image is analyzed as an ellipse with a longer and a shorter diameter. The analyzing equipment measures the longer diameter of the pupil ellipse at 60 Hz, and produces the raw data and the status code of the measurement. The equipment also measures the shorter diameter simultaneously, to monitor the aspect ratio of the longer and shorter diameters. If the eyelid covers a part of the pupil and the value of the diameter is affected, the aspect ratio will
be smaller than 1.0, because of the round shape of the pupil. During blinks, the longer diameter is the horizontal length of the ellipse and the shorter diameter is the vertical length. Both the diameter and the aspect ratio decrease with the degree of coverage of the pupil by the eyelid. When the aspect ratio of the pupil is under 0.7–0.8, a measuring error code is given as the output status [12]. This means that pupil sizes during a blink are detected by the equipment. The easiest way to estimate pupil size during a blink is to replace the drop in pupil size with the last valid measurement taken before the status registers an error code. This replacement algorithm and process are very simple: the estimated value can be substituted automatically according to the status code, and this is defined as the "Auto-correction". This estimation is also illustrated in Fig. 1. There is no drop in pupil size, but the accuracy is still not sufficient, because the pupil size has already begun to decrease by the time the output status produces an error code. The pupil size as a circle was calculated from the longer diameter of the ellipse. The pupil size is zero when the whole pupil is covered by the eyelid, such as during a blink. Pupil size differs significantly among individuals, so the size is standardized by each individual's average size. Pupil size was originally observed at 60 Hz; however, the data was resampled at 30 Hz to allow comparison with the estimation performance of the previous method [3].
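A minimal sketch of this "Auto-correction" rule is given below: whenever the tracker's status flags an error (as it does when the aspect ratio falls below about 0.7 to 0.8 during a blink), the last valid pupil sample is held. The array names and the boolean status convention are assumptions for illustration, not the EMR-8's actual output format.

```python
import numpy as np

def auto_correction(pupil, error_flag):
    """Hold the last valid pupil size wherever the status flags an error.

    pupil      : 1-D array of measured pupil sizes
    error_flag : boolean array, True where the tracker reported an error
                 (e.g., during blinks); assumed format for illustration.
    """
    corrected = pupil.astype(float).copy()
    last_valid = np.nan
    for i, bad in enumerate(error_flag):
        if bad:
            if not np.isnan(last_valid):
                corrected[i] = last_valid   # replace the drop with the last valid value
        else:
            last_valid = corrected[i]
    return corrected

# Example: a blink produces a drop to zero that the rule fills in.
pupil = np.array([1.00, 0.98, 0.0, 0.0, 0.97, 0.96])
flags = np.array([False, False, True, True, False, False])
print(auto_correction(pupil, flags))   # [1.0, 0.98, 0.98, 0.98, 0.97, 0.96]
```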
2.2 Training Data To obtain pupillary changes without blinks, the pupil sizes during blinks were interpolated manually for the three participants, by referring to the periodic pupillary change. Because the pupil's light reflex to changes in the brightness of the stimuli is mostly consistent, pupil size during a blink drop-off in one cycle can be interpolated from the regular responses in other cycles. The interpolated periods are longer than the periods during which the measuring status registers an error code. This corrected pupil size is also illustrated in Fig. 1 as the "Reference". It appears to be a more plausible account of pupillary change during blinks than the "Auto-correction" estimate. As a result, a pair of temporal pupillary changes, with and without blinks, was prepared. Figure 2 illustrates the estimation process, which is a mapping: a target pupil size is generated from the pupillary change during the "drop-off period" of the blink. Two of the three sets of participant data were assigned as training data, and the remaining set was assigned as test data.
2.3 Pupil Size Estimation by Use of SVR The estimation function was derived from the training data by use of the support vector regression technique [9]. The estimation procedure is similar to the one using
[Fig. 2 Relationship between experimental data (x) and estimation target (y): a window of n samples x_{k-(n/2-1)}, ..., x_k, ..., x_{k+(n/2-1)} is mapped to the estimate ŷ_k of y_k]
MLP [3]. As displayed in Fig. 2, a sub-string of data x consisting of n components is taken step-wise from the time sequence data. Here, the k-th x, $\mathbf{x}_k$, is noted as follows:

$$\mathbf{x}_k = (x_{k-(n/2-1)}, \ldots, x_k, \ldots, x_{k+(n/2-1)})$$

The estimated pupil size $\hat{y}_k$ for the empirical size $y_k$ at time position k is reproduced from $\mathbf{x}_k$. This requires deriving the mapping from the experimental pupil size with blinks to a pupil size without blinks, which is termed the "Reference". Here, the mapping function is defined as f, and Fig. 2 illustrates the mapping from x to f(x). The required mapping function f can provide an interpolated pupil size from x even where x includes zero-valued components $x_i$ caused by blinks. The number of training data pairs $(\mathbf{x}_i, y_i)$ is l, and the mapping function f is defined by linear regression as follows [13]:

$$f(\mathbf{x}) = (\mathbf{w} \cdot \mathbf{x}) + b$$

To estimate the function f with a precision of $\varepsilon$, the minimization problem is defined by introducing the geometric margin $\tfrac{1}{2}\|\mathbf{w}\|^2$, as follows:

$$\frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{l} |y_i - f(\mathbf{x}_i)|_{\varepsilon}$$

where $\|\cdot\|$ is the Euclidean norm, $\tfrac{1}{2}\|\mathbf{w}\|^2$ is a regularization factor, C is a fixed constant, and $|\cdot|_{\varepsilon}$ is the $\varepsilon$-insensitive loss function:

$$|z|_{\varepsilon} = \max\{0, |z| - \varepsilon\}$$

Here, one can introduce slack variables $\xi, \xi^*$ for the support vector "soft-margin" loss function, and the problem is then written as the minimization of $\tau$ [13]:

$$\tau(\mathbf{w}, \xi, \xi^*) = \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{i=1}^{l} (\xi_i + \xi_i^*)$$

subject to

$$((\mathbf{w} \cdot \mathbf{x}_i) + b) - y_i \le \varepsilon + \xi_i, \qquad y_i - ((\mathbf{w} \cdot \mathbf{x}_i) + b) \le \varepsilon + \xi_i^*, \qquad \xi_i, \xi_i^* \ge 0.$$

To generalize to nonlinear regression, a kernel $k(\cdot,\cdot)$ denotes the nonlinear transform $\Phi(\mathbf{x})$ of the feature vector x; this procedure is the so-called kernel trick. Introducing Lagrange multipliers $\alpha_i$ $(i = 1, \ldots, l)$, the problem becomes the minimization of the objective function

$$\frac{1}{2} \sum_{i,j=1}^{l} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*)\, k(\mathbf{x}_i, \mathbf{x}_j) - \sum_{i=1}^{l} y_i (\alpha_i - \alpha_i^*) + \varepsilon \sum_{i=1}^{l} (\alpha_i + \alpha_i^*)$$

subject to

$$\sum_{i=1}^{l} (\alpha_i - \alpha_i^*) = 0, \qquad 0 \le \alpha_i, \alpha_i^* \le C.$$

The function f is then written as follows:

$$\mathbf{w} = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*)\,\Phi(\mathbf{x}_i), \qquad f(\mathbf{x}) = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*)\, k(\mathbf{x}_i, \mathbf{x}) + b$$

In this paper, the Gaussian kernel mentioned in the introduction is used:

$$k(\mathbf{x}, \mathbf{x}') = \exp\!\left(-\|\mathbf{x} - \mathbf{x}'\|^2 / 2\sigma^2\right)$$

The number of examples l is given by the amount of training data. To obtain the optimized model, the dimension n of x, the precision $\varepsilon$ (eps), and the $\sigma$ (STD) of the Gaussian kernel must be derived. The practical calculation was conducted using the SVMTorch package [14], and the parameters were optimized.
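The practical calculations in this chapter used SVMTorch; purely as an illustrative stand-in, the sketch below reproduces the same scheme with scikit-learn's SVR and an RBF (Gaussian) kernel, mapping sliding windows of the blink-affected signal to the "Reference" size at the window centre. The window length, C, ε, and the synthetic signals are assumptions, and gamma is set to 1/(2σ²) so that the kernel matches the Gaussian kernel above.

```python
import numpy as np
from sklearn.svm import SVR

def make_windows(signal, n):
    """Sliding windows x_k = (x_{k-(n/2-1)}, ..., x_k, ..., x_{k+(n/2-1)})."""
    half = n // 2 - 1
    return np.array([signal[k - half:k + half + 1] for k in range(half, len(signal) - half)])

# Synthetic stand-ins: a smooth "Reference" pupil trace and a blink-affected copy.
t = np.linspace(0, 40, 1200)                       # 30 Hz, 40 s
reference = 1.0 + 0.2 * np.sin(2 * np.pi * t / 4)  # periodic light-reflex-like response
with_blinks = reference.copy()
with_blinks[300:310] = 0.0                         # a blink drops the measured size to zero
with_blinks[700:712] = 0.0

n = 34                                             # window length (illustrative)
X = make_windows(with_blinks, n)
y = reference[n // 2 - 1:len(reference) - (n // 2 - 1)]

# RBF kernel exp(-gamma * ||x - x'||^2) with gamma = 1 / (2 * sigma^2).
model = SVR(kernel="rbf", C=1.0, epsilon=0.01, gamma=1.0 / (2 * 2.4 ** 2))
model.fit(X, y)
estimated = model.predict(X)                       # blink periods are replaced by plausible sizes
print("MSE vs. reference:", np.mean((estimated - y) ** 2))
```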
[Fig. 3 Mean and SD of square errors: least mean square error versus SD of errors for each input dimension (i30–i75)]
3 Result 3.1 Reproducing Performance To derive the optimal condition, the reproduction performance was examined across the parameter settings. The dimension n in Fig. 2 was examined under eight conditions, n = 30, 35, 40, 45, 50, 55, 60, 75, with the precision set to ε (eps) = 0.01. Under this condition, σ (STD) of the Gaussian kernel function was varied from 0.4 to 8.0 in steps of 0.4. For example, the amount of training data was l = 15,416 under the condition n = 35. After training the support vector regression with SVMTorch II, the performance in reproducing the training data was examined. The mean square error (MSE) and the standard deviation (SD) of the errors were compared across the training conditions. The least mean square error and the corresponding standard deviation of errors for each dimensional condition are summarized in Fig. 3, labeled by input dimension (i-n). The vertical axis shows the mean square error, and the horizontal axis shows the SD of the errors. As shown in Fig. 3, the least mean square error was obtained at n = 35; at n = 35, the least MSE was obtained at σ (STD) = 2.4, and the number of support vectors was 1,380.
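A compact sketch of this kind of parameter sweep, again using the scikit-learn stand-in rather than SVMTorch II, could look as follows; the candidate values echo the ranges described in the text (with a coarser σ grid), and the synthetic signal is a placeholder for the training data.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Short synthetic periodic pupil-like trace (stand-in for the training signal).
signal = 1.0 + 0.2 * np.sin(np.linspace(0, 20 * np.pi, 600)) + rng.normal(0, 0.01, 600)

def windows_and_targets(x, n):
    """Sliding windows of roughly n samples and the value at each window centre."""
    half = n // 2 - 1
    X = np.array([x[k - half:k + half + 1] for k in range(half, len(x) - half)])
    return X, x[half:len(x) - half]

results = {}
for n in (30, 35, 40, 45, 50, 55, 60, 75):              # input dimensions examined in the text
    X, y = windows_and_targets(signal, n)
    for sigma in (0.8, 1.6, 2.4, 3.2, 4.0, 4.8):         # coarser grid than the 0.4-8.0 sweep
        err = (SVR(kernel="rbf", epsilon=0.01, gamma=1 / (2 * sigma ** 2))
               .fit(X, y).predict(X) - y) ** 2
        results[(n, sigma)] = (err.mean(), err.std())    # reproduction MSE and SD of errors

best = min(results, key=lambda key: results[key][0])
print("best (n, sigma):", best, "-> MSE, SD:", results[best])
```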
3.2 Estimation Results The trained model was applied to the experimental data for Subject 3 in Fig. 1. The output of the model was illustrated as SVR in Fig. 4. According to the temporal
[Fig. 4 Estimation results using SVR and MLP: pupil size over time for the experiment, the MLP estimate, and the SVR estimate]
pupillary change reproduced by SVR, all pupil sizes during blinks were replaced with plausible estimates. Another estimation, using the MLP model [3] developed previously with empirical pupil sizes and artificial blink changes, is also shown in the same format in Fig. 4. Some of its estimated sizes during blinks were higher than the plausible sizes. Comparing the two estimations in this figure, the estimation with SVR seems more appropriate.
3.3 Estimation Performance To evaluate the estimation performance for pupil size during blinks, MSEs for the test data set were compared across the following four estimation methods (a small sketch of this error comparison follows the list). Here, the error was defined as the difference between the estimated size and the "Reference", which was determined in the same way as for the training data.
1. Experimental measured size including blink influence (Exp),
2. Maintaining the previous valid size during blinks (Auto),
3. Size estimation using MLP trained with artificial blinks (MLP), and
4. Size estimation using SVR trained with the above training data (SVR).
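As a small illustration of how these error sums can be computed, the fragment below evaluates the total squared error against the "Reference" and the part accumulated during blink periods for two hypothetical estimates; the arrays are placeholders rather than the experimental data.

```python
import numpy as np

def squared_errors(estimate, reference, blink_mask):
    """Total sum of squared error and the part accumulated during blinks."""
    err = (estimate - reference) ** 2
    return err.sum(), err[blink_mask].sum()

reference = np.array([1.00, 0.98, 0.96, 0.95, 0.96, 0.98])
blink     = np.array([False, False, True, True, False, False])
exp_meas  = np.where(blink, 0.0, reference)                       # raw measurement drops to zero in blinks
auto      = np.array([1.00, 0.98, 0.98, 0.98, 0.96, 0.98])        # hold-last-valid estimate

for name, est in [("Exp", exp_meas), ("Auto", auto)]:
    total, during_blinks = squared_errors(est, reference, blink)
    print(f"{name:>4}: total = {total:.4f}, during blinks = {during_blinks:.4f}")
```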
For the estimation using SVR, the performance was compared across precision (eps) = 0.001, 0.005, 0.01, 0.05. The parameter σ (ST D) was given for each precision condition according to the least square error for the training data. In general, the square error of the reproduction decreases as the precision (eps) becomes smaller. Total square error and square error during blink periods for the test data set were summed up for each condition. Those errors were summarized in Fig. 5. The vertical axis indicates the total sum of square error, and the horizontal axis indicates the sum of the square error during blinks. Both axes are shown in a logarithmic scale. For
[Fig. 5 Square error change with ε (eps): total sum of square error versus sum of square error during blinks (logarithmic scales) for Exp., Auto, MLP, and SVR with eps = 0.001, 0.005, 0.01, and 0.05]
the experimental data including blinks, the total sum of square errors arises from the drops during blinks; therefore, both errors coincide and have the largest values. The estimation performance of MLP was comparable to that of the "Auto" condition. When the performance of SVR was compared across the precision parameters, the total square error under the condition ε (eps) = 0.01 was the smallest. According to the test results, the parameters of the optimized condition are an input dimension n = 35, a Gaussian kernel parameter σ (STD) = 2.4, and a precision ε (eps) = 0.01. As a result, the total sum of square error decreased to 25% of that of MLP and 47% of that of "Auto". Also, the sum of square error during blinks decreased to 43% of that of MLP and 28% of that of "Auto". It is interesting that the sum of square error during blinks decreases with larger ε (eps).
3.4 Application to Another Data Set To examine the trained model's ability to estimate pupil size more generally, another experimental data set was used as test data. The pupillary change was measured in an experiment in which oral calculation tasks were given to a subject while a visual stimulus for a subsequent ocular task was displayed [15]. Experimental pupil size, measured at 30 Hz for 10 s, is illustrated as Exp. in Fig. 6. The horizontal axis shows time, the vertical axis shows pupil size, and the drops show blinks. The estimation result using the above-trained model is overlaid in Fig. 6 as SVR. Outside blink periods the SVR output reproduces the measured pupil size, and during blinks it gives plausible sizes. However, blinks affect the estimated sizes just before and after the blink periods. As a blink often widely influences pupil size before or after the
[Fig. 6 An application result: relative pupil size over 10 s for the experimental measurement (Exp.) and the SVR estimate]
blink, it is not easy to select the target period for estimation. Another reason is the difficulty of discriminating between correct and incorrect pupil sizes: some irregular pupil sizes are displayed in Fig. 6, but their values lie within the valid range. These will be the subjects of further study.
4 Summary An estimation method for pupil size during blinks was developed using a support vector regression technique, with the training data prepared from pupillary responses to seven periodic brightness changes. According to the periodic pupillary changes, pupil sizes during blinks were given manually, in order to prepare pairs of plausible pupil sizes and empirical data. The parameters of the support vector regression technique were optimized in the training and test processes. Estimation performance was the highest among the methods compared. This model can also be applied to other pupillary observations conducted in experiments with different subjects and for different purposes. The model can simulate the human eye pupil and blink, so there is a possibility of determining the behavior of pupillary change and blink action. In particular, it may be possible to extract some features of pupil action as support vectors; therefore, analysis of the support vectors and of the relationship between pupil action and these support vectors should be conducted. The examination of these points will be the subject of further study.
Acknowledgements The author would like to thank the editors of Proceedings, Joanna J. Bryson, Tony J. Prescott, and Anil K. Seth. They agreed to the republication of the paper in this chapter. Nakayama (2005) Estimation of Eye-pupil Size during Blink by Support Vector Regression, Editors, Joanna J. Bryson, Tony J. Prescott, and Anil K. Seth, “Modelling Natural Action Selection: Proceedings of an International workshop”, pp. 121–125.
References
1. Kuhlmann J, Böttcher M (eds) (1999) Pupillography: principles, methods and applications. W. Zuckschwerdt Verlag, München, Germany
2. Beatty J (1982) Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychol Bull 91(2):276–292
3. Nakayama M, Shimizu Y (2001) An estimation model of pupil size for blink artifact in viewing TV program. IEICE Trans J84-A(7):969–977
4. Nakayama M, Shimizu Y (2004) Frequency analysis of task evoked pupillary response and eye-movement. In: Spencer SN (ed) Eye tracking research and applications symposium 2004. ACM Press, New York, USA, pp 71–76
5. Nakayama M, Shimizu Y (2002) An estimation model of pupil size for 'blink artifact' and its applications. In: Verleysen M (ed) Proceedings of 10th European symposium on artificial neural networks. d-side, Evere, Belgium, pp 251–256
6. Takahashi K, Tsukahara N, Toyama K, Hisada M, Tamura H (1976) Shinkei Kairo to Seitai Seigyo. Asakura Shoten, Tokyo, Japan
7. Luo FL, Unbehauen R (1997) Applied neural networks for signal processing. Cambridge University Press, Cambridge, UK
8. Bishop CM (1995) Neural networks for pattern recognition. Oxford University Press, Oxford, UK
9. Smola AJ, Schölkopf B (1998) A tutorial on support vector regression. In: NeuroCOLT2 Technical Report Series, NC2-TR-1998-030. http://www.nerurocolt.com
10. Utsunomiya T (1978) Seitai no Seigyo Zyohou Sisutemu. Asakura Shoten, Tokyo
11. Tada H, Yamada F, Fukuda K (1991) Mabataki no Shinrigaku. Kitaouji Shobo, Kyoto
12. nac Corp (1999) EMR-8 manual, Tokyo, Japan
13. Collobert R, Bengio S (1998) SVMTorch: support vector machines for large-scale regression problems. J Mach Learn Res 1:143–160
14. Collobert R (2000) SVMTorch II package. http://www.idiap.ch/learning/SVMTorch.html
15. Nakayama M, Takahashi K, Shimizu Y (2002) The act of task difficulty and eye-movement frequency for the 'oculo-motor indices'. In: Spencer SN (ed) Eye tracking research and applications symposium 2002. ACM Press, New York, USA, pp 37–42
Frequency Analysis of Task Evoked Pupillary Response and Eye Movement Minoru Nakayama and Yasutaka Shimizu
Abstract This paper describes the influence of eye blinks on frequency analysis, and the power spectrum differences, for task-evoked pupillography and eye movement during an experiment which consisted of target-following tasks and oral calculation tasks with three levels of task difficulty: control, 1×1, and 1×2 digit oral calculation. The compensation model for temporal pupil size based on an MLP (multi-layer perceptron) was trained to detect a blink and to estimate pupil size by using blink-free pupillary changes and artificial blink patterns. The PSD (power spectrum density) measurements from the estimated pupillography during oral calculation tasks show significant differences, and the PSD increased with task difficulty in the ranges of 0.1–0.5 and 1.6–3.5 Hz, as did the average pupil size. The eye movement during blinks was corrected manually, to remove irregular eye movements such as saccades. The CSD (cross-spectrum density) was computed from the horizontal and vertical eye movement coordinates. Significant differences in CSDs among experimental conditions were found in the range of 0.6–1.5 Hz. These differences suggest that task difficulty affects the relationship between horizontal and vertical eye movement coordinates in the frequency domain.
Originally published in: Frequency analysis of task evoked pupillary response and eye-movement. In: Proceedings of the 2004 symposium on Eye tracking research & applications (ETRA '04). Association for Computing Machinery, New York, NY, USA, pp 71–76. https://doi.org/10.1145/968363.968381
1 Introduction The oculo-motor systems are driven by a viewer's mental activities, such as problem-solving. It has been found that eye pupil size, blink, and eye movement respond to task difficulty [1–3]. These oculo-motor indices can be used as an index to estimate
an individual's 'Mental Workload (MWL)' [4–7]. This indicates that the temporal change of these indices is often measured to observe the viewer's behavior. Signal processing studies suggest that both time-domain analysis and frequency-domain analysis are very useful for understanding the features of random signals. In terms of these two categories, temporal observations such as ordinary measurements of pupil size and eye movement are based on time-domain analysis. Frequency-domain analysis is also sometimes conducted on oculo-motor signals. For example, the temporal change of the eye pupil is called pupillography [8]. By extracting the frequency power spectrum from pupillography, the power can be used as an index of the degree of activity. Specifically, the power in the area of lower frequency (f