Information Physics
Physics-Information and Quantum Analogies for Complex Systems Modeling

Miroslav Svítek
Czech Technical University, Prague, Czech Republic
Matej Bel University, Banska Bystrica, Slovakia
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

Copyright © 2021 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress

ISBN: 978-0-323-91011-8

For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals
Publisher: Mara Conner
Acquisitions Editor: Chris Katsaropoulos
Editorial Project Manager: Andrea Gallego Ortiz
Production Project Manager: Maria Bernard
Cover Designer: Victoria Pearson

Typeset by MPS Limited, Chennai, India
Contents

About the author
Preface
Acknowledgment

1. Introduction to information physics
   1.1 Dynamical system
   1.2 Information representation
   1.3 Information source and recipient
   1.4 Information gate
   1.5 Information perception
   1.6 Information scenarios
   1.7 Information channel

2. Classical physics-information analogies
   2.1 Electrics-information analogies
   2.2 Magnetic-information analogies
   2.3 Information elements
   2.4 Extended information elements
   2.5 Information mem-elements

3. Information circuits
   3.1 Telematics
   3.2 Brain adaptive resonance
   3.3 Knowledge cycle

4. Quantum physics-information analogies
   4.1 Quantum events
   4.2 Quantum objects
   4.3 Two (non-)exclusive observers
   4.4 Composition of quantum objects
   4.5 Mixture of partial quantum information
   4.6 Time-varying quantum objects
   4.7 Quantum information coding and decoding
   4.8 Quantum data flow rate
   4.9 Holographic approach to phase parameters
   4.10 Two (non-)distinguished quantum subsystems
   4.11 Quantum information gate
   4.12 Quantum learning

5. Features of quantum information
   5.1 Quantization
   5.2 Quantum entanglement
   5.3 Quantum environment
   5.4 Quantum identity
   5.5 Quantum self-organization
   5.6 Quantum interference
   5.7 Distance between wave components
   5.8 Interaction's speed between wave components
   5.9 Component strength
   5.10 Quantum node

6. Composition rules of quantum subsystems
   6.1 Connected subsystems
   6.2 Disconnected subsystems
   6.3 Coexisted subsystems
   6.4 Symmetrically disconnected subsystems
   6.5 Symmetrically competing subsystems
   6.6 Interactions with an environment
   6.7 Illustrative examples

7. Applicability of quantum models
   7.1 Quantum processes
   7.2 Quantum model of hierarchical networks
   7.3 Time-varying quantum systems
   7.4 Quantum information gyrator
   7.5 Quantum transfer functions

8. Extended quantum models
   8.1 Ordering models
   8.2 Incremental models
   8.3 Inserted models
   8.4 Intersectional extended models

9. Complex adaptive systems
   9.1 Basic agent of smart services
   9.2 Smart resilient cities
   9.3 Intelligent transport systems
   9.4 Ontology and multiagent technologies

10. Conclusion

Appendix A: Mathematical supplement
Bibliography
Index
About the author

Prof. Dr. Ing. Miroslav Svítek, dr.h.c., FEng., EUR ING was born in Rakovník, Czech Republic, in 1969. He graduated in radioelectronics from the Czech Technical University in Prague in 1992. In 1996 he received his Ph.D. degree in radioelectronics at the Faculty of Electrical Engineering, Czech Technical University in Prague (www.fel.cvut.cz). Since 2005, he has been nominated as extraordinary professor in applied informatics at the Faculty of Natural Sciences, Matej Bel University in Banska Bystrica, Slovak Republic (www.umb.sk). Since 2008, he has been a full professor of engineering informatics at the Faculty of Transportation Sciences, Czech Technical University in Prague (www.fd.cvut.cz). In 2010-18 he was the Dean of the Faculty of Transportation Sciences, Czech Technical University in Prague. Since 2018, he has been a visiting professor in smart cities at the University of Texas at El Paso, Texas, United States (www.utep.edu). In Russia, he cooperates, for example, with the Institute of Mathematical Problems of Biology in Pushchino (www.impb.ru) and the Moscow Automobile and Road Construction State Technical University (www.madi.ru). He lectures at various universities overseas, for example, TU Berlin in Germany (www.tu-berlin.de), Pavlodar State University in Kazakhstan (www.psu.kz), and Universidad Autónoma de Bucaramanga in Colombia (www.unab.edu.co).

The focus of his research includes quantum system theory, complex systems, and their possible applications to Intelligent Transport Systems, Smart Cities, and Smart Regions. He is the author or coauthor of more than 200 scientific papers and 10 books. Miroslav Svítek was historically the first president of the Czech Smart City Cluster (www.czechsmartcitycluster.cz), is a member of the Engineering Academy of the Czech Republic (www.eacr.cz), and in 2006-18 he was the president of the Association of Transport Telematics (www.sdt.cz). He received the following awards:

• 2019—The Personality of Smart City, Ministry of Regional Development, Czech Republic
• 2017—Medal of CTU Prague on the 310th anniversary of CTU Prague
• 2015—Silver medal of the Department of Flight Transport, CTU Prague
• 2013—Medal of the Institute of Technical and Experimental Physics, CTU Prague
• 2013—Gold medal of the Flight Faculty, Technical University in Kosice, Slovakia
• 2010—First-grade Rector's Award for the best publication in 2009
• 2010—Silver medal of Matej Bel University, Banska Bystrica, Slovakia
• 2008—Gold Felber medal for CTU advancement
Parallel to his studies at the Czech Technical University in Prague, he attended the Prague Conservatory and graduated in the art of playing the accordion. On the occasion of the 20th anniversary of the Faculty of Transportation Sciences of the Czech Technical University in Prague, he recorded a solo album, Acordeón Encantador. On the occasion of the 25th anniversary of the Faculty of Transportation Sciences, he formed a band called the Duo Profesores with Professor Ondřej Přibyl (cello) and recorded several pieces by the Argentinian composer Astor Piazzolla.
Preface

This monograph covers the work done by the author throughout the past 15 years. This work has been inspired by practical projects in which large and complex systems have been modeled and dimensionality problems were first identified. The presented work should not be treated as a finished project but as the beginning of a journey: many of the theoretical approaches mentioned should be continued and tested in practical applications. From my point of view, the wave probabilistic functions used for large-scale system representation seem to be a very promising area for further research, because the mysteries of quantum mechanics, such as entanglement, quantization, and massive parallelism, could be used in probability theory, could enlarge current approaches to information physics, and could yield a better understanding of self-organization and the modeling of emotion or wisdom.

The author wishes all of his followers much success along the way with all of their exciting thoughts and practical experiences.
Acknowledgment

This work was partly supported by the Czech Project AI&Reasoning CZ.02.1.01/0.0/0.0/15_003/0000466 and the European Regional Development Fund.

I thank my colleagues and close friends from the Faculty of Transportation Sciences of the Czech Technical University in Prague—Prof. Miroslav Vlček, Prof. Petr Moos, Prof. Zdeněk Votruba, Prof. Mirko Novák, and Prof. Vladimír Mařík. They acted as conveners of my first steps into the world of system science and revealed new methods in this field to me. It was their example that taught me to aspire to a research career. Let me also thank the Faculty of Natural Sciences of the Matej Bel University in Banska Bystrica, Slovak Republic, for their support and the friendly, creative environment.

This work was partly written during my sabbatical leave at the College of Engineering, The University of Texas at El Paso (UTEP). I take this opportunity to thank Dr. Carlos Ferregut and Dr. Kelvin Cheu for their support during my stay at UTEP. My thanks also belong to Dr. Tomas Horak and Donald Griffin for the perfect language correction of the text.

Finally, I thank my wife, my whole family, and my friends for their patience and tolerance during the process of my work. Last but not least, I dedicate my special thanks to my daughter Kamilka and son Martinek for their everlasting inspiration.
1 Introduction to information physics

Let us imagine information through the example of building a house. We need material (or mass), as well as plenty of workers (or energy), but without the knowledge of the plans for when and how to build, we cannot erect the house. Information and knowledge are, therefore, the things that enrich complex system theory and, subsequently, the natural sciences, enabling them to describe the world around us more faithfully.

Information was interestingly described by George Bernard Shaw: "If you have an apple and I have an apple, and we exchange apples, we both still only have one apple. But if you have an idea (a piece of information) and I have an idea, and exchange ideas (this information), we each now have two ideas (two pieces of information)."
1.1 Dynamical system

A dynamical system can be defined as a specific information model of a part of the world (an object) that interacts with its environment, for example, through several inputs and outputs. This model is isomorphic with the object with respect to a selected set of criteria. The state-space system description is understood as the transformation of an input vector into a state-space vector, whereas an output vector is found through the transformation of the input and state vectors into an output vector. By an input-output system description, the relation between the input and output vectors is understood. From this point of view, the dynamical system is treated as a black box whose characteristics can be identified through the system response to the input vector.

Let $\tau[\cdot]$ describe the discrete dynamical system response to an input signal. The Dirac impulse $\delta(n)$ is defined as the sequence $\{1,0,0,0,\ldots\}$, $\delta(n-1)$ as the shifted sequence $\{0,1,0,0,\ldots\}$, and $\delta(n-m)$ as the $m$-shifted Dirac impulse. The Dirac impulse $\delta(n)$ can be used as an input signal, and the system response to it can be given as:

$$h(n) = \tau[\delta(n)], \qquad h(n,m) = \tau[\delta(n-m)] \tag{1.1}$$

where $h(n,m)$ is the impulse response to the shifted Dirac impulse $\delta(n-m)$.
In system theory, the input series $u(n)$ can be expressed by a sequence of Dirac impulses:

$$u(n) = \sum_{m=-\infty}^{\infty} u(m)\,\delta(n-m) \tag{1.2}$$

We can easily insert the input series (1.2) into the system description $\tau[\cdot]$ and determine the output series as follows:

$$y(n) = \tau[u(n)] = \tau\!\left[\sum_{m=-\infty}^{\infty} u(m)\,\delta(n-m)\right] = \sum_{m=-\infty}^{\infty} u(m)\,\tau[\delta(n-m)] = \sum_{m=-\infty}^{\infty} u(m)\,h(n,m) \tag{1.3}$$
A discrete dynamical system is defined as linear if the following equation holds:

$$a_1\,y_1(n) + a_2\,y_2(n) = \tau[a_1\,u_1(n) + a_2\,u_2(n)] \tag{1.4}$$

where $y_1(n)$, $y_2(n)$ are the output responses of the system to the inputs $u_1(n)$, $u_2(n)$:

$$y_1(n) = \tau[u_1(n)], \qquad y_2(n) = \tau[u_2(n)] \tag{1.5}$$
Speaking of time-invariant systems means that all events depend only on the event time difference $(n-m)$ and not on the event times $n$ or $m$ themselves. A time-invariant dynamical system can be described through the convolution sum:

$$h(n,m) \to h(n-m) = \tau[\delta(n-m)]$$

$$y(n) = \sum_{m=-\infty}^{\infty} h(n-m)\,u(m) = \sum_{k=-\infty}^{\infty} h(k)\,u(n-k) = u(n) * h(n) \tag{1.6}$$
The system is causal if the output signal $y(n)$ depends only on the current and past input values $u(n), u(n-1), \ldots$, and thus the convolution sum can be rewritten:

$$y(n) = \sum_{k=0}^{\infty} h(k)\,u(n-k) \tag{1.7}$$
In practical applications, the system output depends only on a limited past time interval, and so discrete dynamical systems can be represented as follows:

$$y(n) = \sum_{k=0}^{N} h(k)\,u(n-k) \tag{1.8}$$
For linear time-invariant systems, well-known mathematical instruments such as the Laplace, Fourier, or z-transforms [4] can be used.
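As a small illustration (a sketch, not from the book; the impulse response and input values below are made up), Eq. (1.8) can be evaluated directly as a finite convolution sum:

```python
import numpy as np

# A minimal sketch of the causal FIR representation in Eq. (1.8).
def system_response(h, u):
    """y(n) = sum_k h(k) * u(n - k) for a causal system, Eq. (1.8)."""
    y = np.zeros(len(u))
    for n in range(len(u)):
        for k in range(len(h)):
            if n - k >= 0:
                y[n] += h[k] * u[n - k]
    return y

h = np.array([0.5, 0.3, 0.2])                # impulse response h(n) = tau[delta(n)]
delta = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # Dirac impulse {1,0,0,0,...}
print(system_response(h, delta))             # recovers h(n), exactly as Eq. (1.1) states
```

The same values are produced by `np.convolve(u, h)[:len(u)]`, which is the idiomatic form of the convolution sum (1.6).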
1.2 Information representation

The concept of data means a change of state, for example, from 0 to 1 or from 1 to 0, where the state vector is not necessarily only digital or one-dimensional. Every such change can be described with the use of a quantity of information in bits. Information theory was founded by Claude Shannon [12] and his colleagues in the 1940s and was associated with coding and data transmission, especially in the newly emerging field of radar systems, which became a component of defensive systems during the Second World War. Syntactic (Shannon) information was defined as the degree of probability of a given event and answered the question of how often a message appears. For example, by telling you that the solar system will cease to exist tomorrow, I am giving you the maximum information possible, because the probability of this phenomenon occurring is nearly equal to zero. The probability model of information defined in this way has been used for the design of self-repairing codes, digital modulations, and other technical applications. Telecommunication specialists and radio engineers concentrated on the problems of describing encoded data and minimizing error probabilities during data transmission.

There are many approaches to extracting information, or eliminating entropy. The Bayesian method [11] interprets the probability density not as a description of a random quantity but rather as a description of the uncertainty of the system, that is, of how much information is available about the monitored system. The system itself might be completely deterministic (describable without probability theory), but very little information may be available about it. When performing continuous measurement, we obtain more and more data, and therefore more information about our system, and thus the system begins to appear more definite to us. The elimination of uncertainty therefore increases the quantity of information we have about the monitored system. Once uncertainty has been eliminated, one may proceed to the interpretation of the information, or in other words, to the determination of how to reconstruct the described system, or how to build a more or less perfect model of it using the information [36]. This task already belongs to the theory of systems, where it is necessary to identify the state parameters, the individual processes of the system, etc. As a result, a knowledge system emerges, which is able to describe the given object appropriately.

The model-theoretical work on semantic information was explained by Bar-Hillel and Carnap [1]. Semantic information asks how often a message is true. Zadeh [39] introduced the theory of fuzzy sets, a specific tool that maps the degree to which an element is or is not a member of a set to a number between 0 and 1.

Information flow refers to the frequency of state/signal changes in bits per second, or how often a change is carried out (the quantity of information). Information content, on the other hand, characterizes the quality of information, or how valuable the content is, measured in Joules per bit.
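To make the syntactic notion concrete (an illustrative sketch, not from the book): the Shannon information of an event with probability p is −log₂ p bits, so the nearly impossible message about the solar system carries an enormous amount of information.

```python
import math

# Illustrative only: Shannon information content (surprisal) of an event.
def surprisal_bits(p):
    """-log2(p): the rarer the event, the more bits of information."""
    return -math.log2(p)

print(surprisal_bits(0.5))    # 1.0 bit: a fair coin flip
print(surprisal_bits(1e-12))  # ~39.9 bits: an almost impossible message
```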
A concept of information power [37] has been constructed as the product of information flow and information content, to study the problem of a system's response to certain information or to an information flow. The definition of information power may be obtained from generally known formulas: power is equal to the amount of work per unit of time. The problem is what methodology to choose for specifying the amount of (information) work. If we consider work as the amount of information transferred through a data network, there is no problem with such a measurement process: on an active element, the amount of transferred data is measured, and we can simply calculate the power of this data transmission.

Models of complex systems are based on knowledge from information science that has been gathered over the years in classical physics, a specialized part of which is called information physics [13]. At present, this discipline is still in its infancy, but many discoveries have already been made, for example, by Vedral [35], and some scientists have realized that without basic theories in this area, further development of complex system theory will not be possible.
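Returning to information power for a moment, a trivial numeric sketch (hypothetical values only): multiplying a flow in bits per second by a content in Joules per bit gives a power in Joules per second, that is, Watts.

```python
# Illustrative only: information power as flow [bits/s] times content [J/bit].
def information_power(flow_bits_per_s, content_joules_per_bit):
    """Returns power in Joules per second (Watts)."""
    return flow_bits_per_s * content_joules_per_bit

# Hypothetical numbers: a 1 Mbit/s stream whose content is worth 2e-3 J/bit.
print(information_power(1e6, 2e-3))  # 2000.0 W of "information power"
```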
1.3 Information source and recipient

Let us suppose that information is clustered into a higher semantic structure, like our language. For simplicity, we can imagine a set of different events $u_i$, each of which is generally described by a syntactic chain, for example, $\{0, 1, 1, 0\}$, of a different length. In practical terms, one event can mean, for example, a recommended scenario of future behavior. In this case, the scenario also covers the control actions necessary for a complex system optimization. In such a case, we have an information signal of a syntactic string of events $\{u_1, u_2, u_3, \ldots\}$ encoded into an information flow of $\{0, 1\}$. It can be assumed that each event is represented by a string of the same length. The value of the events is hardly the same, however, so we must account for the different qualities assigned to each event: the scenario for a critical situation is more valuable than the scenario for a normal situation. Some events carry a low content, some a bit higher. Mathematically, we can describe this using two graphs, with the information flow and the information content on the y-axes plotted against the events given on the x-axes. The information power is different for each event. The value of the information content (quality) and the value of the information flow (quantity) can be normalized, in which case we can speak about probability functions (we have the available power probability spectrum). If it makes sense, we can extend this model to wave probabilities on a deeper distinguishing level, if the phases are included in the model, as will be described in Chapter 4. Phase parameters represent inner dependencies between events (scenarios).

In the theory of information physics [61], the link between the information source and the recipient must be better analyzed and generalized for the purposes of complex systems. On the side of the source, each event u can be described by the information source content and flow $I_S(u)$, $\Phi_S(u)$. The recipient tries to process the event u in its environment and provides its registration and its understanding. This process results in the representation of the received information content and flow $I_R(u)$, $\Phi_R(u)$ on the side of the recipient.
For many events u, we can suppose that there is no difference between the source information and the received information. We rightfully suppose that:

$$I_S(u) = I_R(u), \qquad \Phi_S(u) = \Phi_R(u) \tag{1.9}$$
For more complex events, this assumption is not valid. In the social sciences, for example, we must have a lot of data available (information flow) to identify some social event (information content). The link between the event u, originated in society and registered by observers, has a dynamic time evolution. Problems at the interfaces (distorted information content) should be identified. The above source-recipient theory will later be extended into quantum dimensions, where we can suppose a superposition of different variants of information flows and contents assigned to the event u. This means that we do not know how important the event u is (an uncertain source) and how many bits are necessary for its processing. On the side of the recipient, we can use the superposition of different scenarios and work in parallel with all mutually dependent information variants. Such an approach is known as the many-worlds interpretation (MWI) of quantum mechanics: there are many worlds, which exist in parallel in the same space and time.
1.4 Information gate

For the sake of simplicity, let us imagine an information subsystem as an input-output information gate, as shown in Fig. 1.1, which issues from a matrix representation in the following form:

$$\begin{pmatrix} I_2 \\ \varphi_2 \end{pmatrix} = \begin{pmatrix} t_a & t_b \\ t_c & t_d \end{pmatrix} \begin{pmatrix} I_1 \\ \varphi_1 \end{pmatrix} = T \begin{pmatrix} I_1 \\ \varphi_1 \end{pmatrix} \tag{1.10}$$

where the matrix T is called the transmission matrix.
FIGURE 1.1 Information gate (Φ—information flow of data measured in bits per second, I—information content measured in Joules per bit).
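As a small sketch of Eq. (1.10) (all matrix entries and input values are hypothetical, not from the book), gates compose naturally: a serial connection of two gates corresponds to the product of their transmission matrices.

```python
import numpy as np

# Transmission matrix of an information gate, Eq. (1.10):
# (I2, phi2)^T = T (I1, phi1)^T. All entries are illustrative.
T1 = np.array([[1.2, 0.1],
               [0.0, 0.8]])
T2 = np.array([[0.9, 0.0],
               [0.2, 1.1]])

x_in = np.array([2.0, 5.0])   # input (information content I1, information flow phi1)

x_out = T2 @ (T1 @ x_in)      # two gates in series: apply T1, then T2
print(x_out)                  # output (I2, phi2) of the cascade, i.e., (T2 T1) x_in
```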
Between the input ports, the input information content is available, and the input information flow enters the system. Between the output ports, it is possible to obtain the output information content, and the output information flow leaves the system.

Let us now examine the input-output information gate we have created. The input quantities can describe purely intellectual operations. The input information content includes our existing knowledge, and the input information flow describes the change to the environment in which our gate operates and the tasks that we want to be carried out (target behavior). Long-term information gained in this way can be used for the targeted release of energy, whereas at the output of the input-output gate, there may be information content in the order of millions of Joules per bit (or profits in millions of dollars). The output information flow serves as a model for the provision of such services or knowledge.

The basis of information systems is the ability to interconnect individual information subsystems, or in our case, input-output information gates. It is very easy to imagine the serial or parallel ordering of these subsystems into higher units. A very interesting model is the feedback of information subsystems, because this leads to nonlinear characteristics, information systems defined at the limit of stability, and other interesting properties. Using this method, one may define information filters, which are able to select, remove, or strengthen a component of information.

In the context of the information model, it is appropriate to deal with the problem of teaching, because the information subsystem called a teacher may be regarded as a source of information content. The teacher has prepared this information content for years, with respect to both the content as such (optimizing the information content) and its didactic presentation (optimizing the information flow), so that the knowledge can be passed on to a subsystem known as a student. If we assume that the teacher subsystem has greater information content than the student subsystem, then after their interconnection, the information flow will pass from the teacher to the student, so that the information content of the two subsystems will gradually balance out. The students receive the information flow and increase their information content. If the students are not in a good mood, or if the information flow from the teacher is confused, the students are unable to understand the information received and to process it to increase their information content.

The individual components and subsystems of a complex system can behave in different ways, and their behavior can be compared to everyday situations in our lives. A characteristic of politicians is their ability to use even a small input of information content to create a large output information flow. They have the ability to take a small amount of superficially understood content and to interpret and explain it to the broadest masses of people. On the other hand, a typical professor might spend years receiving input information flow and input information content, and within his/her field, he/she may serve as a medium for transmitting a large quantity of output information content. The professor, however, might not spread the content very far, sharing it perhaps only with a handful of enthusiastic students.
It is hard to find a single system that combines the characteristics of the different information subsystems described above, but it is possible to create a group of subsystems—a system alliance [22]—where these characteristics can be combined appropriately. In this way, one can model a company or a society of people who together create an information output that is very effective and varied, leading to improved chances for the survival and subsequent evolution of the given group. Through an appropriate combination of its internal properties, the alliance can react and adapt to the changing conditions of its surroundings. Survival in alliances thus defined seems more logical and natural than attempting to combine all the necessary processes within the framework of one universal complex system.
1.5 Information perception

In Ref. [22], the extended Frege's concept of information perception was presented, based on the results of Refs. [9,27,37]. In Fig. 1.2, the basic information quantities are given:

• $O_i(t)$—a set of rated quantities on an object,
• $P_i(t)$—a set of states,
• $\Phi_i(t)$—a set of syntactic strings (data flow),
• $I_i(t)$—a set of information images of state quantities (information content).

In physics, the state is a complete description of a system in terms of parameters at a particular moment in time. Other parameters represent links between the information quantities:

• $a_{OP}$—identification,
• $a_{PO}$—invasivity,
• $a_{P\Phi}$—projection into a set of symbols and syntactic strings,
• $a_{\Phi P}$—uncertainty correction and identification,
• $a_{\Phi I}$—interpretation, information origin,
• $a_{I\Phi}$—reflection of language constructs,
• $a_{IO}$—relation of functions and structural regularity,
• $a_{OI}$—integrity verification.

FIGURE 1.2 Frege's functional concept of information image origin and action.

Frege's diagram shows the way in which knowledge can be obtained through observing an unknown system. It is also used to describe the deep perception of information by humans. Philosophers accept that it is possible to study the world around us from four points of view: physical, emotional, rational, and spiritual. The physical representation corresponds to changes of a real system (events $O_i$). The human mind categorizes a perception into different clusters called states $P_i$, which cover a mixture of intellectual and emotional stimuli assigned to a specific situation that occurred in the real system. The packet of such stimuli specifies the state of our perception, which we try to characterize by words through thinking (generally by symbols according to a grammar). Within this process, the string of symbols $\Phi_i$ is produced to best capture the perception of the state $P_i$. Thinking has limited (often sequential) instruments for expressing the complexity of the world, but it tries to provide an answer to the question "how?" Thinking is a conscious activity that can be, unlike other conscious activities, shared, written, or published, and it is a good background for further understanding of the whole.

The higher (spiritual) level means the link to the whole (higher order) and to other available knowledge, such as comparisons with similar situations in history and knowledge from other specializations. This part of cognition looks for an answer to the question of "why" the system behaves like that. It can be called wisdom, which is a higher level of knowledge, stated generally as the spiritual level of perception.

Let us give one easy example from music: an orchestra playing some composition. In the physical world, we can see a lot of instruments that create sounds of different frequencies ($O_i$). If the sound is coordinated, we can extract some feeling and emotions as a state of our mood ($P_i$) that the music placed us in. Intellectual analysis can describe our mood by words ($\Phi_i$) and think about links to a higher order ($I_i$), that is, what the author wanted to say when he composed this song. Understanding all of the details yields a better perception of the details of the song if we listen to it once again. A better perception leads to a better understanding, more details, and so on. The perception process can be repeated until stable knowledge is accumulated and the song is understood to a given distinguishing level.

We can go deeper and suppose that, thanks to our model, we can expect new features of reality, and our perception can aim in this direction. Such a function could be called the insight sensor, which can direct us to the correct details based on accumulated information. The more information accumulated, the finer the details we can register.

From the system point of view, we can not only go into deeper layers, but we can also extract an archetypal knowledge that is given as the stable state of continual information observation, perception, processing, and creation of the appropriate model of reality on a
given distinguishing level. Archetypal knowledge represents a clearly defined piece of knowledge that can be understood as the base component from which complexity can be composed. It is a new way of describing reality, rather than creating an ever more complex, high-dimensional model. The goal behind this discussion is to have available a set of simple archetypal models carrying extracted bases of knowledge. For example, in psychology we can go deeper and deeper in describing a person's behavior. But there is another possible way: to specify archetypal psychological personalities and suppose that the psychological profile of everyone is a weighted mixture of these archetypal models. Each specialization can be tackled in this way. Instead of building a complex model, the archetypal knowledge can be extracted, and the diversity can be studied through an archetypal knowledge composition. Archetypal knowledge in music, for example, means the extraction of different moods represented by a set of features of a song. All other songs can be a mixture of archetypal songs together with the different moods they produce. Similarly, this applies to architecture, religion, etc.

We can continue with our description of epistemology and suppose that there is only one reality $O_i$, studied by many specialists. Each of them has their own insight sensor registering different details. If an expert has their insight sensor working on a given level of resolution, they have a chance to understand and to share their information $\Phi_i$. The string of symbols $\Phi_i$ is the only instrument we have at our disposal to communicate and to express our accumulated knowledge. It is necessary to understand both the syntactic part (the easier task) and the semantic content, which requires the insight sensor to be working on the given distinguishing level. The sharing of information among experts extends their horizons and yields combinations of knowledge from different areas (sharing archetypal knowledge). The characteristic called exaptation could be achieved, which means making use of the knowledge from different specializations to make progress in our own area of interest.
1.6 Information scenarios

Let us imagine that we study, for example, a city as a complex system. An architect understands the city from the urbanist point of view and looks at the urbanist details (in his perception, a set of urban archetypal models exists), a sociologist uses their insight sensor and observes a lot of details about the population, a business specialist looks at the market potential, etc. Every expert can create his own Frege's diagram of the reality (a knowledge component), which means identifying different states or practical scenarios $P_i$ in his specialization as the optimized simplification of the observed (cut-out) reality $O_i$. With respect to the identified scenarios $P_i$, the clear understanding (rational description) $\Phi_i$ is available, and each scenario should be included in the context of higher knowledge $I_i$. A multidimensional Frege's diagram assigned to the different scenarios can be constructed. All Frege's diagrams are connected through the reality $O_i$, which is of course unique for all scenarios.
The question is how to create and harmonize scenarios prepared in different specializations. The proposed methodology relies on the extracted archetypal knowledge $I_i$. In our discussion, we suppose that the system is time independent; for time-varying systems, the concept can easily be extended.

Let us observe in parallel M information variables ($O_1, O_2, \ldots, O_M$) of a real system. With respect to the system complexity, it is possible to identify the jth specialization, which typically represents one of the following areas: transportation, energy, business, etc. Within the jth specialization, the experts can find a set of L different scenarios ($S_{j,1}, S_{j,2}, \ldots, S_{j,L}$). The scenarios must be prepared beforehand. They provide syntactic strings $\Phi_{j,i}$ for the jth specialization in accordance with the current situation ($O_1, O_2, \ldots, O_M$). With the help of the selected scenario $S_{j,i}$, the control signals for each scenario are computed to optimize the real system behavior.

We suppose that there exists a base of N archetypal pieces of knowledge ($K_1, K_2, \ldots, K_N$), common for all specializations, that best characterizes the strong features of the real system. All scenarios $S_{j,i}(K_1, K_2, \ldots, K_N)$ are supposed to be dependent on this knowledge. At the same time, the quality assessment functions $Q_{j,i}(S_{j,i}(K_1, K_2, \ldots, K_N))$ assigned to each scenario $S_{j,i}(K_1, K_2, \ldots, K_N)$ are also dependent on this vector. The accumulated knowledge $I_j$ evaluates the criterial functions for cross-disciplinary assessment of all possible scenarios and proposes their changes to guarantee better system performance.

The methodology of scenario coordination can include the following five steps:

1. Identification of the knowledge base vector ($K_1, K_2, \ldots, K_N$) from the available data ($O_1, O_2, \ldots, O_M$): In transportation, we can use strategic traffic detectors; in environmental areas, selected climatic sensors; etc. The base vector must be low-dimensional, and all available data must be aggregated into it.
2. Selection of the scenario $S_{j,i}(K_1, K_2, \ldots, K_N)$ in accordance with the knowledge base ($K_1, K_2, \ldots, K_N$): In transportation, for example, the best-suited scenario is selected based on, for example, the set of strategic detector readings.
3. System management according to the selected scenario using real data $\Phi_{j,i}$: In transportation, for example, the selected scenario provides the urban traffic control strategy based on measured traffic data, and appropriate control signals are distributed to the traffic lights.
4. The quality assessment function $Q_{j,i}(S_{j,i}(K_1, K_2, \ldots, K_N))$ depends on the selected ith scenario in the jth specialization $S_{j,i}(K_1, K_2, \ldots, K_N)$: In transportation, for example, the normalized average travel time or the congestion length can be used as the control assessment function.
5. Cross-disciplinary selection of the ith scenario for each jth specialization to achieve the most appropriate (weighted) sum of quality assessment functions $\sum_{j,(i)} Q_{j,i}(S_{j,i}(K_1, K_2, \ldots, K_N))$: The transportation scenario is selected, for example, to be as ecofriendly as possible to take care of both the transport and environment specializations.

The above methodology describes the most general example, parametrized by the vector ($K_1, K_2, \ldots, K_N$). In real practice, we are not concerned with the ( j, i)th scenario definition, the
design of scenario selection, or the data collection, transmission, and processing (the control system). The long-term knowledge of each specialization has solved these problems, and we need only register how many scenarios $S_{j,i}$ exist in the different areas and how to combine them with each other. Further, we suppose that it is possible to determine the (normalized) assessment function $Q_{j,i}(S_{j,i})$, where the ( j, i)th pair means the selected ith scenario for the jth specialization, $j \in \{1, 2, 3, \ldots\}$. If the problem were linear, we would select the ( j, i)th combination that simply maximizes the sum $\sum_j Q_{j,i}(S_{j,i})$. Because there exist interdependencies between the scenarios' assessment functions, and because it is difficult to find experts who can provide a holistic assessment of all specializations, it is necessary to solve this task more rigorously.

As an illustrative example, we suppose three specializations: transportation (j = 1), energy (j = 2), and environment (j = 3), each of which has two scenarios: transportation $S_{1,1}, S_{1,2}$; energy $S_{2,1}, S_{2,2}$; and environment $S_{3,1}, S_{3,2}$. We have at our disposal experts who can each evaluate only two specializations: transport-energy, transport-environment, and energy-environment. One way to combine the pieces of knowledge to find the best combination of scenarios is the fuzzy-linguistic approach, as in Ref. [50].
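If the assessment functions were independent, step 5 above would reduce to a brute-force search over all scenario combinations. A minimal sketch (with hypothetical quality scores; the interdependencies discussed above are deliberately ignored here):

```python
from itertools import product

# Hypothetical normalized quality scores Q[j][i] for specialization j,
# scenario i (three specializations, two scenarios each).
Q = [[0.6, 0.8],   # transportation S11, S12
     [0.7, 0.5],   # energy         S21, S22
     [0.4, 0.9]]   # environment    S31, S32

weights = [1.0, 1.0, 1.0]  # cross-disciplinary weights (assumed equal)

# Enumerate all (i1, i2, i3) combinations and maximize the weighted sum.
best = max(product(range(2), repeat=3),
           key=lambda c: sum(w * Q[j][i]
                             for j, (w, i) in enumerate(zip(weights, c))))
print(best)  # indices of the selected scenario in each specialization
```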
1.7 Information channel

Until now, we have assumed an unlimited information channel. Let us propose that our channel can transfer at most $\Phi_{MAX}$ bits per second. We can use the well-known Lorentz transformation:

$$x_2 = \frac{x_1 - v\,t}{\sqrt{1 - \frac{v^2}{c^2}}} \tag{1.11}$$
and rewrite it for the information channel:

$$I_{a2} = \frac{I_{a1} - \Phi\,t}{\sqrt{1 - \frac{\Phi^2}{\Phi_{MAX}^2}}} \tag{1.12}$$

where $I_{a1}$, $I_{a2}$ denote the information amounts in bits of two subsystems 1 and 2 commonly connected through the information channel. The changes of subsystem 1 are transmitted to subsystem 2 by the information flow $\Phi$ in bits per second. If the information flow (the changes of subsystem 1) is relatively slow, then the information amount $I_{a2}$ on the side of subsystem 2 corresponds to the information amount $I_{a1}$ of subsystem 1, and vice versa. In case the changes are more frequent than they can flow through the information channel $\Phi_{MAX}$, then different information can be observed on the two subsystems. Similarly, we can study the time difference:

$$t_2 = \frac{t_1 - \frac{v}{c^2}\,x_1}{\sqrt{1 - \frac{v^2}{c^2}}} \tag{1.13}$$
Transforming it into the information channel, we have:

$$t_2 = \frac{t_1 - \frac{\Phi}{\Phi_{MAX}^2}\,I_{a1}}{\sqrt{1 - \frac{\Phi^2}{\Phi_{MAX}^2}}} \tag{1.14}$$
Due to the limited channel, the time interval $t_1$ in subsystem 1 is longer than the same time interval $t_2$ in subsystem 2. If an event takes time $t_1$ in the first subsystem, then the second subsystem measures this event's length as $t_2$ given by Eq. (1.14). In physics, we have the unique limit of the speed of light. In informatics, we can have many different limits of the information channels between subsystems. Such constraints create a rich information environment where the limits must be taken into account in the design of complex information systems.
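A numeric sketch of Eq. (1.12) (illustrative values only): the information flow Φ plays the role of the velocity v and the channel limit Φ_MAX the role of the speed of light c, so as Φ approaches Φ_MAX the two subsystems' views diverge.

```python
import math

def received_information(ia1, flow, t, flow_max):
    """Information amount in subsystem 2, Eq. (1.12), by analogy with the
    Lorentz transformation (flow ~ velocity v, flow_max ~ speed of light c)."""
    gamma = math.sqrt(1.0 - (flow / flow_max) ** 2)
    return (ia1 - flow * t) / gamma

# Hypothetical values: 1000 bits in subsystem 1, channel limit 100 bits/s.
print(received_information(1000, 10, 1.0, 100))  # slow flow: mild correction
print(received_information(1000, 99, 1.0, 100))  # near the limit: large distortion
```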
2 Classical physics-information analogies

2.1 Electrics-information analogies

In the current state of information analogies, an electrical circuit with its electric current [coulombs per second] and voltage [Joules per coulomb] represents the analogy of an information model with the quantities:

• Information content I [Joules per bit]: defines the model of a real system together with its appropriate features and suitable control strategies (extracted uncertainty of data flow, data interpretation, model structure identification, model parameter estimation, mixtures of multimodels, model verification and validation, and model-based control).
• Information flow Φ [bits per second]: describes the syntactic strings of the data flow assigned to a real system (data collection and signal transmission to/from the real system).

The information model works only with available knowledge and creates a suitable representation of a real system. In the electric analogy, Joules per bit means the energy required for finding the most suitable model of a real system.
2.2 Magnetic-information analogies

The magnetic circuit with the magnetic flux ϕ [Joule-seconds per coulomb] and with the magnetomotive force (mmf) F [Joule-seconds per coulomb] represents the action assigned to one coulomb. Action is an attribute of the dynamics of a physical system from which the equation of real motion can be derived. In this text, we will rather use the rate of magnetic flux dϕ/dt in [Joules per coulomb], which is related to the energy flow (energy transmission) carried by a coulomb. In information analogies, this means the changes of a real system to achieve more energy. Such an approach is like Kauffman's principle of self-organized agents [6] continuously looking for more and more sophisticated ways of obtaining energy.

The electric-magnetic transformation can be modeled by a gyrator, as shown in Fig. 2.1. On the left side there are the electrical parameters: v—electric voltage and i—electric current; on the right side there are the magnetic parameters: F—magnetomotive force (mmf) and dΦ/dt—rate of magnetic flux.

FIGURE 2.1 Gyrator model of electric/magnetic transformer [54].

According to Wikipedia, a gyrator is a passive, linear, lossless, two-port electrical network element proposed in 1948 by Bernard D.H. Tellegen as a hypothetical fifth linear element after the resistor, capacitor, inductor, and ideal transformer. An important property of a gyrator is that it inverts the current-voltage characteristic of an electrical component or network. In the case of linear elements, the impedance is also inverted. In other words, a gyrator can make a capacitive circuit behave inductively. For a winding of N turns, we can write:

$$v = N\,\frac{d\Phi}{dt} \tag{2.1}$$

$$i = \frac{F}{N} \tag{2.2}$$
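A toy numeric check of Eqs. (2.1) and (2.2) (all values are made up):

```python
# Winding relations of the gyrator model, Eqs. (2.1)-(2.2).
N = 100            # number of turns
dphi_dt = 0.02     # rate of magnetic flux [J/C]
F = 5.0            # magnetomotive force

v = N * dphi_dt    # Eq. (2.1): induced electric voltage
i = F / N          # Eq. (2.2): corresponding electric current
print(v, i)        # 2.0 0.05
```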
The magnetic circuit (right side of Fig. 2.1) represents, in the information analogy, a modification of a physical system based on the available information (left side of Fig. 2.1). With respect to this, it is possible to define two new information parameters assigned to a real physical system (to distinguish the right side of Fig. 2.1, we use the term knowledge-based):

• Knowledge-based action A [Joule-seconds per bit]: The magnetomotive force (mmf) can be interpreted in the magnetic-information analogy as an action that describes the amount of energy that could be obtained from a real system based on one bit of information during 1 second. The principle of least "knowledge-based action" can be applied to obtain the optimized behavior of a complex system; the solution requires finding the path that has the least value. The information flow Φ from an information model identifies actions in the real world. On the other hand, new actions (in the real system) can generate new information flows or, in other words, a new modification of the information model.
• Knowledge-based energy flow E: The rate of magnetic flux dΦ/dt can be interpreted in the magnetic-information analogy as knowledge-based resource extraction. The information content I, represented by the information model, enables us to realize changes of the real physical system (changes in structure, organization, processes, etc.) in such a way that new resources of identified energy can be extracted. The parameter E describes the speed of flow of the extracted energy in [Joules per second].

The presented approach of electrical-information and magnetic-information analogies allows for the connection of the real physical world with its information model and describes the benefit of the available information.
2.3 Information elements

In information systems, we do not use the value [Joules per bit] for the information content but rather [success events per bit], or shortly [events per bit]. A success event means an event or process carried out in the information system due to received bits of information. It is evident that the relation between the information flow $\Phi(t)$ and the information content $I(t)$ can take many time-dependent forms. In accordance with the electrical analogies, we can define the information impedance $Z(t)$ expressing the acceptance of the information flow $\Phi(t)$ in the studied system:

$$I(t) = Z(t) * \Phi(t) \tag{2.3}$$
Considering the time dependence, we can expect three types of impedance. The first of them, the information resistance R, yields a linear dependency between $I(t)$ and $\Phi(t)$:

$$I(t) = R\,\Phi(t) \tag{2.4}$$

It indicates that the transmitted information flow $\Phi(t)$ has a direct impact on the studied system—the number of bits per second is linearly related to the number of success events in the studied system. Resistance in our analogy means some barrier that must be overcome to obtain the information content. Without resistance, no information content can be measured. For information flow acceptance, some time is required to study the whole message. The duality between an information flow and an information content is hereby demonstrated.

The information inductance L has the form:

$$I(t) = L\,\frac{d\Phi(t)}{dt} \tag{2.5}$$

It says that the time change of the information flow $\Phi(t)$ (acceleration/deceleration of the transmitted bits per second) is linearly proportional to the number of success events in the system. And the information capacitance C can be given as:

$$\Phi(t) = C\,\frac{dI(t)}{dt} \tag{2.6}$$
It means that the time change of the information content $I(t)$ (increase/decrease of success events per bit in the system) is proportional to the information flow $\Phi(t)$.

Due to the time dependence of all of the above-mentioned quantities $I(t)$, $Z(t)$, $\Phi(t)$, we can use mathematical instruments known from the theory of electrical circuits—the Laplace, Fourier, or z-transform—and rewrite these quantities, for example, in the jω-domain as follows [22]:

$$\tilde{I}(j\omega) = \mathcal{F}\{I(t)\}, \qquad \tilde{Z}(j\omega) = \mathcal{F}\{Z(t)\}, \qquad \tilde{\Phi}(j\omega) = \mathcal{F}\{\Phi(t)\} \tag{2.7}$$

where $\mathcal{F}\{\cdot\}$ denotes the Fourier transform and $\tilde{I}(j\omega)$, $\tilde{Z}(j\omega)$, $\tilde{\Phi}(j\omega)$ are the Fourier functions assigned to the information content, information impedance, and information flow, respectively. Then Eqs. (2.3)-(2.6) can be expressed in the jω-domain:

$$\tilde{I}(j\omega) = \tilde{Z}(j\omega)\,\tilde{\Phi}(j\omega), \qquad \tilde{I}(j\omega) = R\,\tilde{\Phi}(j\omega), \qquad \tilde{I}(j\omega) = j\omega L\,\tilde{\Phi}(j\omega), \qquad \tilde{\Phi}(j\omega) = j\omega C\,\tilde{I}(j\omega) \tag{2.8}$$
All of the mathematical instruments developed in the past for dynamic electric circuits can be applied with success to the modeling of dynamic information circuits.
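A minimal numeric sketch of Eq. (2.8) (the element values R, L, C are hypothetical): the magnitude of each information impedance as a function of the angular frequency ω.

```python
import numpy as np

# Eq. (2.8) evaluated numerically: the content/flow ratio of each basic
# information element in the j*omega domain. Values are illustrative.
R, L, C = 2.0, 0.5, 0.1
omega = np.array([0.1, 1.0, 10.0])   # angular frequencies [rad/s]

Z_R = R * np.ones_like(omega)        # resistor:  I~ = R * Phi~
Z_L = 1j * omega * L                 # inductor:  I~ = j*omega*L * Phi~
Z_C = 1.0 / (1j * omega * C)         # capacitor: Phi~ = j*omega*C * I~

for name, Z in [("R", Z_R), ("L", Z_L), ("C", Z_C)]:
    print(name, np.abs(Z))           # impedance magnitude vs. frequency
```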
2.4 Extended information elements

There exist events with nontypical information characteristics. An information source can have information content assigned to the event u, but it cannot be presented because the event is very personal, secret, etc. Such a situation can be characterized as:

$$I(u) \neq 0, \qquad \Phi(u) = 0 \tag{2.9}$$

There may exist ways to make such information partially visible, for example, through nonlinear information processing that can cause the modulation of this event into other events. If you would like to know about this kind of information, you cannot ask for it directly; it is necessary to discuss the story in which this information appears. On the other hand, an information source may not have any relevant information content but presents the event u through an information flow (fake news). In such a case, we use the following description:

$$I(u) = 0, \qquad \Phi(u) \neq 0 \tag{2.10}$$
There are other situations where no reasonable function between information flow and content can be found:

$$I(u) \neq 0, \qquad \Phi(u) \neq 0, \qquad I(u) \neq f(\Phi(u)) \tag{2.11}$$

The information component with this feature is called an information norator [44].
FIGURE 2.2 New information components: (A) information transmittance, (B) information conductance, (C) information nullator, and (D) information norator.
On the other side, the information nullator guarantees zero values in every situation:

$$I(u) = 0, \qquad \Phi(u) = 0 \tag{2.12}$$

The transmittance $R_i$ or conductance $G_i$ assigned to the ith component represents a linear dependence between information flow and content (an analogy to the well-known Ohm's law) (Fig. 2.2):

$$\Phi(u) = \frac{I(u)}{R_i} = I(u)\,G_i \tag{2.13}$$
Other information components can include different information sources. A content-oriented information source means that the information content is a constant, independent of any requested information flow. A flow-oriented information source can be defined as a constant information flow even if the information content is varying.
2.5 Information mem-elements

The circuit theoretician Chua [2] introduced the basic concept of electrical components together with the links between them, as shown in Fig. 2.3. There are six different mathematical relations connecting the pairs of four fundamental electrical circuit variables:

1. $q(t)$—charge,
2. $\phi(t)$—magnetic flux,
3. $i(t)$—electric current,
4. $v(t)$—voltage.
From the definition of the electrical variables, we know that the charge is the time integral of the current. Faraday's law tells us that the flux is the time integral of the electromotive force, or voltage. There should thus be four basic circuit elements described by relationships between these variables: the resistor, inductor, capacitor, and memristor.
FIGURE 2.3 Chua’s concept of electrical quantities.
Chua's concept is famous due to an envisioned new electrical component named the "memristor," which provides a functional relation between the charge $q(t)$ and the flux $\phi(t)$. The equations for a mem-element (memristor, memcapacitor, meminductor) can be generalized [51-53]:

$$y(t) = g(w(t))\,x(t) \tag{2.14}$$

$$\frac{dw(t)}{dt} = x(t) \tag{2.15}$$
where $y(t)$ and $x(t)$ are the terminal mem-element variables, and $w(t)$ is the internal state variable. We can describe the basic information mem-elements as follows:

• Information memristor: $y(t)$ represents the information content $I_i(t)$, $x(t)$ the information flow $\Phi_i(t)$, and $w(t)$ the information in [bits].
• Information meminductor: $y(t)$ represents the information flow $\Phi_i(t)$, $x(t)$ the information content $I_i(t)$, and $w(t)$ the information action in [Joule-seconds per bit].
• Information memcapacitor: $y(t)$ represents the information content $I_i(t)$, $x(t)$ the information flow $\Phi_i(t)$, and $w(t)$ the time-domain integral of information in [bits].

A butterfly-shaped hysteresis loop is expected to occur at a sufficiently large input of $q(t)$ or $\phi(t)$ [18]. Information mem-elements, as parts of an information circuit, can cause the hysteresis that was introduced, for example, in the theory of catastrophes [40].
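As a simulation sketch of Eqs. (2.14) and (2.15) (the linear state dependence g(w) = g0 + g1·w and all constants are assumptions for illustration, not the book's model): driving a mem-element with a sinusoidal flow and integrating the internal state produces the characteristic hysteresis when y is plotted against x.

```python
import numpy as np

# Euler simulation of a generic mem-element, Eqs. (2.14)-(2.15):
#   y(t) = g(w(t)) * x(t),   dw/dt = x(t)
# x is the information flow, y the information content; g(w) = g0 + g1*w
# is an assumed, illustrative state dependence.
g0, g1, dt = 1.0, 0.5, 0.001
t = np.arange(0.0, 2 * np.pi, dt)
x = np.sin(t)                        # sinusoidal information flow

w = np.zeros_like(t)                 # internal state: running integral of x
for n in range(1, len(t)):
    w[n] = w[n - 1] + x[n - 1] * dt

y = (g0 + g1 * w) * x                # information content response
# Plotting y against x (e.g., with matplotlib) traces a hysteresis loop:
# rising and falling flow give different content values for the same flow.
```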
3 Information circuits

In this chapter, different application areas will be discussed. I believe that capturing the processes in the world around us with the help of information and knowledge subsystems, organized into various interconnections, especially with feedbacks, can lead to the controlled dissemination of macroscopic work (as described by Stuart Kauffman [6]) and, after overcoming certain difficulties, to the description of the behavior of living organisms or our brain [31].

From the following applications, we can see the possibility of linking the physical world with the world of informatics, because every information flow must have its transmission medium, which is typically a physical object (e.g., physical particles) or a certain property of such an object [17]. The same applies to information content, which also must be encoded through a real, physical system. The operations defined as information systems can then be depicted in a concrete physical environment. Such an approach leads to discovering improved knowledge in information physics [28].

In the following sections of the book, we will use representations of the wave probabilistic flow $\psi_\varphi$ and wave probabilistic content $\psi_I$, as shown in Fig. 3.1. In this chapter, we will limit ourselves only to their real values, simply marked as $\varphi$ and $I$.
FIGURE 3.1 Representation of wave probabilistic flow $\psi_\varphi$ and wave probabilistic content $\psi_I$.

3.1 Telematics

Telematics can be found in a wide spectrum of user areas [38], from individual multimedia communication through to the intelligent use and management of large-scale networks (e.g., transport, telecommunications, and public services). Advanced telematics provides an intelligent environment for the establishment of a knowledge society and provides an expert knowledge description of complex systems. It also includes legal, organizational, implementation, and human aspects.

Telematics can be extended into more complex areas, for example, smart cities or smart regions [30]. Interoperability and cooperation are essential characteristics that many heterogeneous subsystems must possess to be integrated [33]. It is understandable that the concept of smart cities/regions leads to an integration of different networks (energy, water, transport, waste, etc.), where the integrated networks must achieve a synergy among the different network sectors to fulfill predefined performance parameters [32]. Designing a control system across several sectors forms integrated smart networks as a by-product of this approach. The smart approach to complex systems is an example of a multidisciplinary problem, which must—in addition to the technical and technological elements—include the areas of economics, law, sociology, psychology, and other humanistic soft disciplines. Only a systemic combination of these elements can achieve the goal that is generally expected from smart systems.

Within the field of telematics, transport telematics connects information and telecommunication technologies with transport engineering to achieve better management of transport, travel, and forwarding processes using the existing transport infrastructure. As a transport telematics example, we can imagine a transportation system in a closed urban area that must be controlled by coordinated strategies of traffic lights. First, it is necessary to build a model of the urban area using historical traffic data. The model can explain the traffic behavior on a given resolution level. If we collect online traffic data from traffic detectors, the model must decide which is the most appropriate scenario (traffic control strategy) to use. In our terminology, this means that we have an information gate with input information $\Phi_1, I_1$, given by the collected online traffic data from the urban area $\Phi_1$ and the historical knowledge $I_1$. The information gate processes the input data $\Phi_1, I_1$ and, based on the traffic model, decides which control scenario $I_2$ is the best one for the current situation. According to the scenario, the appropriate control signals $\Phi_2$ are distributed to the traffic lights. Such an information gate is a typical example of the electrical analogy, where we work with the input/output data flows and the information contents, as shown in Fig. 3.2.

FIGURE 3.2 Information gate—the control based on the selected scenario.

The values $\Phi_2, I_2$ are transformed into the real traffic system (an analogy to the electric/magnetic transformer), where they cause the knowledge-based actions $A_1$ (new car flows). There are many possibilities, but we expect that the model $\Phi_2, I_2$ has recommended the best variant of how to optimize (Fig. 3.3) the saved energy flow $E_1$ (minimum air pollution, minimal car stops, maximal speed of traffic flow, etc.) of the urban area.
FIGURE 3.3 The information transforms into a knowledge-based action and energy flow.
FIGURE 3.4 The model of changes on the side of the real system.
FIGURE 3.5 The information extraction from the real system.
We note that some changes to the real system can be made without the information model. Local knowledge can be applied to react to unexpected events. On the other hand, we can also expect a reduction of the dimensionality of the studied system because of only partial observation (cutoff). Such changes can also be modeled by the gate shown in Fig. 3.4. The model transforms the original values (E1, A1) into the new ones (E2, A2).

Next, we observe the behavior of the traffic subsystem (E2, A2) and identify new pieces of information that can be measured on the real physical system. For example, we may have at our disposal online GPS (global positioning system) positions assigned to some vehicles. By processing them as shown in Fig. 3.5, we obtain a more detailed model of the physical system (Φ3, I3). The information extracted from the real traffic system (an analogy to the magnetic/electric transformer), (Φ3, I3), can be used as input information feedback to the original traffic model (Φ1, I1). The better the input information (Φ1, I1), the better the control strategy (Φ2, I2), and so on.

This illustrative example presents the links among information gates and the behavior of a real physical system. The advantage of the electric/magnetic analogy is that it can easily combine both the real and the information components of the studied system. We can order these components into more complex structures, including feedbacks. Using this approach, we can model complex systems covering both their real and virtual parts.
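The closed loop just described can be sketched in a few lines of code. The following Python fragment is purely illustrative: the scenario table, the gate rule, and all numeric constants are hypothetical placeholders, not taken from any real telematics system.

```python
# Illustrative sketch of the telematics control loop built from information
# gates (Figs. 3.2-3.5). All functions and numbers are hypothetical.

SCENARIOS = {"low": 0.2, "medium": 0.5, "high": 0.9}   # I2: control strategies

def information_gate(phi1, i1):
    """Select scenario I2 and control signals Phi2 from online traffic
    data Phi1 and historical knowledge I1 (Fig. 3.2)."""
    load = phi1 * i1                      # crude estimate of current demand
    i2 = "high" if load > 0.6 else "medium" if load > 0.3 else "low"
    return SCENARIOS[i2], i2

def real_system(phi2):
    """Transform control signals into knowledge-based actions A1 (new car
    flows) and saved energy flow E1 (fewer stops, less pollution), Fig. 3.3."""
    return 1.0 - 0.5 * phi2, 0.8 * phi2

def extract_information(a1, e1):
    """Extract feedback information (Phi3, I3) from the real system,
    e.g., from GPS positions of probe vehicles (Fig. 3.5)."""
    return 0.9 * a1, 0.5 + 0.5 * e1

phi1, i1 = 0.7, 0.8          # initial online data and historical knowledge
for step in range(3):        # closed loop: better input -> better strategy
    phi2, i2 = information_gate(phi1, i1)
    a1, e1 = real_system(phi2)
    phi1, i1 = extract_information(a1, e1)
    print(f"step {step}: scenario={i2}, Phi2={phi2:.2f}, E1={e1:.2f}")
```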
3.2 Brain adaptive resonance

In Ref. [41], the information gyrator shown in Fig. 3.6 was first introduced. The gyrator is composed of the information components discussed in Fig. 2.2. In brain functions, the rhythms and quasiperiodicity of processes in neural networks play an outstanding role. This is the reason why adaptive resonance theory (ART), including resonant effects, has been studied by many authors for a long time [45]. The periodicity in the transfers of signals between the long-term memory (LTM) and short-term memory (STM) creates the possibility of a resonant system structure. LTM, with information content representing expectations, and STM, covering sensory information, offer effective learning in resonance processes. Nonlinear adaptive resonance creates conditions for new knowledge or inventive observation.

The information model of the processes between the LTM and STM is given in Fig. 3.7, where a top-down expectation I2 flows to the STM as the flow Φ1, and the bottom-up flow Φ2 brings the more relevant information content I1 to the LTM. The set of equations can be expressed in the matrix form (1.10), where T is the cascade matrix defining the linear information connection between LTM and STM. The information capacitance is a natural model of the LTM or STM memories. Symbols Ii represent information contents decreasing the entropy in the approximation of linear dependence on the relevant data (signal) flow. Then:

• I1 is, in the ART model, the input content from STM,
• I2 is, in the ART model, the output content installed in LTM,
• Φ1 is the input information flow from STM,
• Φ2 is the output information flow to LTM.
FIGURE 3.6 “Information gyrator” composed of “information nullors,” “information norators,” and “information conductances” [44].
FIGURE 3.7 Schematic expression of ART [41].
FIGURE 3.8 Resonant connection with information gyrator [41].
The information gyrator was used to represent linear and nonlinear resonance phenomena in the brain. If the output of the information gyrator is terminated by the information capacitance of the LTM, then the input of this gyrator behaves as an information inductor. If the input signal flow Φ is connected to the STM represented by the capacitance C1t (Fig. 3.8), then a resonant connection with the following resonant frequency is obtained:

$$f_0 = \frac{1}{2\pi D \sqrt{C_{1t}\,C_{2t}}} \tag{3.1}$$
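As a quick numeric check of Eq. (3.1), the following fragment evaluates the resonant frequency for assumed values of the gyration constant D and the two information capacitances; the numbers are arbitrary placeholders.

```python
import math

def resonant_frequency(d, c1t, c2t):
    """Resonant frequency of a gyrator terminated by LTM capacitance C2t
    and driven through STM capacitance C1t, Eq. (3.1)."""
    return 1.0 / (2.0 * math.pi * d * math.sqrt(c1t * c2t))

print(resonant_frequency(d=2.0, c1t=0.5, c2t=0.1))  # assumed unitless values
```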
Information models of the processes between LTM and STM in brain neural networks enable a description of learning under the resonance of top-down expectations matched against bottom-up data. This yields the development of resonant states at the frequency f0 between bottom-up and top-down processes. In the case of resonance, they reach an attentive consensus between what is expected and what is there in the outside world. Grossberg [45] and others believe that all conscious states in the brain are resonant states. These resonant states move the learning of sensory and cognitive representations to a higher quality level.
3.3 Knowledge cycle

The inspiration for the derivation of the knowledge cycle came from the well-known Otto thermodynamic cycle [5]. The first case describes using the information flow φ and the information content I to model the process of product development and its placement on the market.

Let us start at point A in Fig. 3.9, where we have some initial conditions: input information flow φA and input information content IA. Starting from the initial state A, we need to transform the potential IA into the measurable information flow φB, often referred to as "know-how." This transformation is done at the cost of losses of information content, because it costs some effort (money). Point B, with its (φB, IB), is the starting point for presenting the new product to an investor and asking for additional financial support E1 that increases the possibilities of realizing the business plan. The financial support E1 transforms the system into point C, (φC, IC), with the same knowledge but with a higher potential for construction.

The most important part of the knowledge cycle is the realization of the business plan, represented by the transition from point C to point D. During this phase, the knowledge φC is transformed into a higher value of information content ID. It is understandable that during the transition from point C to point D, the original knowledge φC is gradually exhausted. At point D, we can withdraw money or use the energy obtained. Then the system returns to point A and waits for new business ideas.

We can easily imagine that a company works in parallel with many other products located at different positions of the knowledge cycle. This yields the dynamic information stability introduced for the explanation of living organisms. The emergence and extinction of various products, or living organisms, leads to a constant balanced movement on the knowledge curve, but always with other products or individuals. It is an analogy to uniform motion in Newton's equations. The static information stability known from the second law of thermodynamics is characterized by maximum entropy that leads to random motion. In the wave diagram it means that the sum of all random components is close to zero.

The area under points A, B denotes the input information power, the total area under points C, D the output information power, and the area enclosed by points A, B, C, and D represents the gained information power (benefit).
FIGURE 3.9 The knowledge cycle as an analogy of the heat pump.
Fig. 3.9 is in principle an analogy of the heat pump, because the movement along the knowledge curve is counterclockwise. In this case, it is assumed that some energy is inserted inside and more energy is obtained outside. This corresponds to the product development description. The clockwise movement in Otto's, or more generally Carnot's, cycle [5] describes how a thermal engine transforms thermal energy into movement. For us it means the use of the invested money E2 at point D and its transformation into the information flow φC at point C. The loss of energy E1 is caused by unusable information. The path from point B to point A means practical construction with respect to the obtained information flow. This is how scientific and research institutions work.

Looking at the closed knowledge cycle, an analogy with the magnetic field offers itself. Such a field would affect other similar cycles. In Fig. 3.9 only one cycle is shown, but there may be more than one in parallel. The success of one cycle (a larger area among points A, B, C, and D) will be reflected in the imitation of this success. It reminds one a little of the globalization of companies and the establishment of links to a number of subcontractors. In science, the emergence of a successful school around a prominent scientific personality could be explained by this theory. Such an evolutionary field could interpret the gradual increase in complexity in the search for new energy sources. The evolutionary field can be analogous to the magnetic field because it has no sources and is a by-product of the knowledge cycle. The circulation around the knowledge cycle (profit rate) introduces a frequency and, together with the area of the cycle (total profit per cycle), defines the strength of the evolutionary field.

Let us take two other illustrative examples of soft systems: the first one is the description of a living organism and the second one the creation of a new company. Fig. 3.10 shows the simplified knowledge circle with four areas. Area A is characterized by (initial) investment (negative I) and the necessity of uploading work (negative φ). For the living organism, this phase is typical of raising children, where parents must invest a lot of money and effort to take care of them. For a new company it is the initial investment plus a lot of working hours of the company founders. The efforts begin to pay off in area B, where the company is already beginning to make more and more money (positive I); it still needs some effort, but the effort is
FIGURE 3.10 The circular knowledge cycle.
shrinking. In the case of children, they are already becoming self-sufficient, but still require some effort. Phase C is the most pleasant, because the company earns money (positive I) and gives space for the extension of other activities (positive φ). In the case of people, it is the active age, when they are financially without problems and have the opportunity to manage all their activities. The final phase D of the cycle is characterized by a reduction in freedom and the need for additional investment (negative I). This is typically the senior stage of life, where opportunities diminish (smaller and smaller φ) and one requires some help (negative I). In the case of the company, it still produces, but the products are already subsidized. However, it is already clear that product renewal and new ideas must come, so that the whole knowledge cycle starts again in area A.

In accordance with this principle, the basic feature of life is collective behavior. Around the described knowledge cycle, at any one moment, there are a large number of people at different stages of life (from children to the elderly). If we look at the whole population, we still see the same picture of the circle over time, even with other actors. It is like watching a river that seems the same to us, although different drops of water flow there at any given moment.

Another question is what is inside the circle (gray fill) in Fig. 3.10. There may be subelements that follow each other or cancel each other out, so that the resulting circle can be created. It is equivalent to the application of an electric field to a material, where positive and negative charges are created only on the surface, and inside the forces cancel out. It is the same with the magnetic field and partial currents inside the material [10].
4 Quantum physics-information analogies

As we stated in the previous section, a complex information network with both real (analogy to the magnetic part) and virtual (analogy to the electric part) components, including their serial, parallel, or feedback ordering, can be created. Such an approach can be extended to quantum complex networks. In this case, the model can be enriched with features such as phase interference and entanglement among all real and virtual components. Quantum information science was introduced in Ref. [35] and extended to information circuits and networks in Ref. [22]. In Ref. [55], a wave information flow φ and a wave information content I were introduced.

The x-axis of the complex domain is the real-world part. A positive value on the x-axis is an information gain (benefit) achieved by the information values (information flow, information content). Alternatively, a negative x-value shows a loss. Positive values signify that the system distributes an information flow or information content, whereas negative values mean that it consumes them. It is important for us to be able to measure these values easily because they are real (endogenous) parameters.

The imaginary part is determined by the y-axis, and it is connected to the information environment or the system's surroundings. A positive value on the y-axis signifies a beneficial impact on the surroundings, giving a cleaner environment or a better mood. Negative y-values signify a loss for the environment, such as pollution or ill humor. These (exogenous) parameters are not part of the studied system because they are given by the reaction or acceptance of its surroundings. Within a group of components, some of them can organize the environment (y-values) to obtain the real-world results projected on the x-axis.

In the noetic field theory [80], the ordering principle of the unified field is not a fifth fundamental force of physics but rather a force of coherence (topological switching), modeled in quantum system theory by phase parameters. Just as quantum mechanics was invisible to the tools of Newtonian mechanics, so until now has the regime of the unified field (phase synchronicity) been invisible to the tools of quantum mechanics.
4.1 Quantum events

An occurrence of the event u = 1 can be represented by a unique wave function as:

$$\psi(u=1) = 1 \cdot e^{j\phi_{1,1}} \tag{4.1}$$
and the nonoccurrence of the event u = 0 by a different function:

$$\psi(u=0) = 1 \cdot e^{j\phi_{1,0}} \tag{4.2}$$
where the number 1 or 0 means that the event u has or has not happened, and the phase ϕ describes how the environment accepts the event u (the link between the event and its environment). The phase difference can be caused by the measurement equipment, as in quantum mechanics, by an error of the human observer (he/she wears glasses and registers the trials badly), or by a natural resistance against the event (the receiving system does not freely accept the event u). These situations cause basic discrepancies between the reality (the event u has happened) and the environment (the event u is or is not registered by its environment). Using this way of thinking, we can add a phase parameter to each realization of the unique event u. Generally, the phase can be different for each registered event, because the environment can be time varying, and the conditions need not be the same in different time intervals.

If we conduct more trials in which the event u happened (and did not happen), we can compute the statistics as an average of these complex numbers. If all events are accepted in the same way, we obtain the frequency of trials that are correct (probabilistic description). On the other hand, if each trial has different phase parameters, this situation is reflected in the modulus and phase of the sum of the wave representations, and the final wave probabilities can be computed as:

$$\psi_1 = \sqrt{p_1}\; e^{j\phi_1} \tag{4.3}$$

$$\psi_0 = \sqrt{p_0}\; e^{j\phi_0} \tag{4.4}$$
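A minimal numeric sketch of this aggregation may help; the per-trial phases and the convention of a normalized phasor sum are our own illustrative assumptions, not a prescription taken from the references.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
happened = rng.random(N) < 0.5        # trials where the event u occurred
phases = rng.normal(0.0, 0.3, N)      # per-trial acceptance phases (assumed)

# Normalized phasor sum over the trials where u happened
s1 = np.sum(np.exp(1j * phases[happened])) / N

p1 = abs(s1)          # effective probability: shrinks if phases disperse
phi1 = np.angle(s1)   # aggregate phase phi_1 of Eq. (4.3)
psi1 = np.sqrt(p1) * np.exp(1j * phi1)   # wave probability of Eq. (4.3)

# If all phases were equal, p1 would reduce to the classical frequency m/N.
print(p1, np.mean(happened), phi1)
```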
Bra-ket or Dirac notation [35], composed of angle brackets and vertical bars, is the standard notation for describing quantum states in quantum mechanics. The quantum system representation in this notation can be written as:

$$\psi = \psi_0\,|0\rangle + \psi_1\,|1\rangle \tag{4.5}$$
The interpretation of Eq. (4.5) can seem a bit strange, because it tells us that even though we separately observe m occurrences of the event u in N trials, if we look at the N trials as a whole (not separately), we will observe fewer or more than m occurrences due to the phase interferences among the trials. The holistic view expresses a slightly different reality than the sum of the individual observations. Monitoring the unrealized possibilities and opportunities in our environment means exploring its major part. What has not happened is a necessary complement to what has happened, and it is important to realize that it is sometimes more important than what has happened.
It is also possible to apply the same principle to the correlation of two time intervals tΔ,1, tΔ,2 of two events 1 and 2. The wave probabilistic functions assigned to these time intervals can be expressed as:

$$\psi(t_{\Delta,1}) \propto \sqrt{p(t_{\Delta,1})}\; e^{j\varphi_{\Delta,1}} \tag{4.6}$$

$$\psi(t_{\Delta,2}) \propto \sqrt{p(t_{\Delta,2})}\; e^{j\varphi_{\Delta,2}} \tag{4.7}$$

Let us compute the probability of the time interval covering both sequential events. These values correspond to the AND intersection [21] logic (the event 1 has length tΔ,1 and the event 2 has length tΔ,2):

$$P(t_\Delta = t_{\Delta,1} + t_{\Delta,2}) = P(t_{\Delta,1} \cap t_{\Delta,2}) = \psi(t_{\Delta,1})\,\psi^*(t_{\Delta,2}) + \psi^*(t_{\Delta,1})\,\psi(t_{\Delta,2}) \propto \sqrt{P(t_{\Delta,1})\,P(t_{\Delta,2})}\,\cos(\varphi_{\Delta,1} - \varphi_{\Delta,2}) \tag{4.8}$$
Due to synchronicity, the composition of time intervals (4.8) need not be strictly linear: it can be more or less probable with respect to the phase difference of the wave functions (4.6) and (4.7).
4.2 Quantum objects

As mentioned previously, if we observe the events separately, we will find different statistics than if we look at them as a whole. This also means that the world is very complex, and its reduction causes an inaccuracy in its description. The composition of partial pieces of information must be done very carefully, and all possible positive/negative dependencies represented by phase parameters must be included in the composition process.

Let us define N discrete events Ai, i ∈ {1, 2, …, N}, of a sample space S, with defined time-dependent probabilities P(Ai, t), i ∈ {1, 2, …, N}. The quantum state |ψ, t⟩η represents a description of the quantum object, given by the superposition of the N discrete events at location η and time instant t:

$$|\psi, t\rangle_\eta = \psi(A_1, t)\,|A_1\rangle_\eta + \psi(A_2, t)\,|A_2\rangle_\eta + \cdots + \psi(A_N, t)\,|A_N\rangle_\eta \tag{4.9}$$

with N wave probabilistic functions defined as [21]:

$$\psi(A_i, t) = \alpha_i(t)\; e^{j\upsilon_i(t)}, \quad i \in \{1, 2, \ldots, N\} \tag{4.10}$$

where $\alpha_i(t) = \sqrt{P(A_i, t)}$ is the modulus and $\upsilon_i(t)$ is the phase of a wave probabilistic function. The reference phase, assigned to the event A1 at time t = 0, is typically chosen as υ1(0) = 0.
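A minimal sketch of Eqs. (4.9)-(4.10), representing a quantum object as a complex vector; the probabilities and phases of the three events are assumed for illustration.

```python
import numpy as np

# Assumed probabilities P(A_i, t) and phases v_i(t) at a fixed time t
P = np.array([0.5, 0.3, 0.2])
v = np.array([0.0, 0.7, -1.2])     # v_1 = 0 is the reference phase

psi = np.sqrt(P) * np.exp(1j * v)  # Eq. (4.10): alpha_i * e^{j v_i}

# Squared moduli recover the classical probabilities; phases carry the links
print(np.abs(psi) ** 2)            # -> [0.5, 0.3, 0.2]
print(np.isclose(np.sum(np.abs(psi) ** 2), 1.0))
```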
If we take k quantum objects into consideration, the corresponding wave function $|\tilde\psi, t\rangle_\eta$ is given by the Kronecker product defined as [35]:

$$|\tilde\psi, t\rangle_\eta = |\psi, t\rangle_{\eta_1} \otimes |\psi, t\rangle_{\eta_2} \otimes \cdots \otimes |\psi, t\rangle_{\eta_k} \tag{4.11}$$

where the moduli of the complex weighting parameters give the probabilities that measurements on the set of k quantum objects at time t will yield a sequence of predefined events. Their phases represent possible correlations between all events [4]. The quantity of information in bits can be measured, for example, by the von Neumann entropy [35]:

$$s(\rho) = -\,\mathrm{tr}\big(\rho \log_2(\rho)\big) \tag{4.12}$$

where tr(.) means the trace operator and ρ the density operator:

$$\rho(x, t) = |\psi(x, t)|^2 \tag{4.13}$$
This measures the amount of uncertainty contained within the density operator, taking into account wave probabilistic features like entanglement.

Quantum system theory extends the basic rules of classical system theory in a similar way as quantum mechanics extends Newtonian physics. In the special case where the phase parameters are limited to zero, the quantum system can easily be transformed into the classical system. The instrument of quantum modeling is appropriate for practical use if the phases of all quantum objects are referred to one reference. The same assumption holds for the phase parameters in, for example, the Fourier transform: the phase reference value is equal to zero, and the phase of each wave is given in relation to this origin. For example, if we study quantum events at different times, it is important to suppose time invariance: the phases depend on the time difference between quantum events and not on the absolute time. A similar situation can occur for quantized values of a measured parameter: the phases do not depend on a specific value but only on the distance between quantized samples.
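The composition rule (4.11) and the entropy (4.12) can be checked numerically. In the sketch below, the two two-event states are assumed values; note that a pure state always yields zero von Neumann entropy, so a nonzero value appears only for mixed density operators.

```python
import numpy as np

def von_neumann_entropy(psi):
    """Eq. (4.12) for a pure state: S = -tr(rho log2 rho), rho = |psi><psi|."""
    rho = np.outer(psi, psi.conj())
    eig = np.linalg.eigvalsh(rho)
    eig = eig[eig > 1e-12]
    return float(-np.sum(eig * np.log2(eig)))

psi_a = np.sqrt([0.5, 0.5]) * np.exp(1j * np.array([0.0, 0.4]))
psi_b = np.sqrt([0.3, 0.7]) * np.exp(1j * np.array([0.0, -0.9]))

psi_ab = np.kron(psi_a, psi_b)      # Eq. (4.11) for k = 2 objects
print(np.abs(psi_ab) ** 2)          # joint probabilities of event pairs
print(von_neumann_entropy(psi_ab))  # -> 0: a pure state carries no uncertainty
```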
4.3 Two (non-)exclusive observers

Let us try to apply the principles of quantum informatics in the macro world. Imagine that we are flipping a coin, so that every toss comes out as heads or tails. Someone else, who is assigned the role of an observer, counts the frequency of the individual coin tosses and estimates the probability of the coin landing heads or tails in a simple manner: by counting the number of times it has fallen as heads or tails in the past, and by dividing that number by the number of observed or registered tosses. If the observer performs this activity for a sufficient length of time, the resulting probability will be 50% for tossing heads and 50% for tossing tails, provided all of the tosses are done in a proper manner and the coin has a perfect shape (disk) and uniform density.
Now let us extend this simple example to possible variants involving errors by the observer, and let us imagine what would happen if our observer were imperfect and made errors when observing. The observer, for example, might wear thick glasses and have difficulty telling heads from tails, with the result that from time to time he/she would incorrectly register a toss as heads or tails, and this would then show up in the resulting probability as a certain error.

Because there is only one observer, we automatically, and often even unconsciously, assume that his/her observations are exclusive. Exclusivity means that when our observer registers a toss of heads, he/she automatically does not register a toss of tails, and conversely, when registering a toss of tails, he/she does not at the same time register a toss of heads. Thanks to this property, the sum of the resulting probabilities of heads and tails always equals 100%, regardless of the size of the observer's error. The error of the observer shows up only by increasing the probability of one side of the coin, while at the same time lowering the probability of the opposite side by the same value.

Now let us assume that we are observing the same phenomenon of coin tossing, but with two observers who are not consulting each other about their observations. There might be two persons, one of whom watches for and registers only tosses of heads and the other only tails. Each of our two observers counts the frequency of tosses of his or her own side of the coin, meaning that they each divide the number of their respective sides of the coin by the total number of tosses. The results are the probabilities of tossing heads or tails, and if both observers work without any errors, the result will be the same as in the case of one observer, except that more people will be participating in getting the result.

Now let us expand our case with two observers so that it reflects errors on their parts. Just as in the last case, both observers might be wearing thick glasses and might have difficulty telling heads from tails. In the case of two observers, we can no longer assume that their observations are exclusive, because, as we said, they are not consulting their observations with each other. What might happen in this situation? At a given moment, one observer could see a toss of heads and register that phenomenon, while the second observer might independently evaluate the same toss as tails and register tails. Or the other way round: the first observer sees that the toss was not heads, and the other that the toss was not tails. In that situation, a coin toss is registered, but it is registered neither as heads nor as tails.

Logically, as an outcome of these two situations, the sum of the resulting probabilities will not equal the desired 100%, but will be either greater than 100% in the first case or less than 100% in the second case. From a mathematical perspective, this would mean a violation of the fundamental law of probability that the sum of the probabilities of all possible phenomena in a complete system must equal 100%.

How can we get around this problem? We can help ourselves by imagining the geometry of a triangle and its use in the theory of probability. Let us first assume, in accordance with Fig. 4.1, that the triangle is a right-angled triangle, with the square root of the probability of tossing heads depicted on the y-axis and the square root of the probability of tossing tails shown on the x-axis.
FIGURE 4.1 A right triangle: in this case of tossing coins, it must be true that "C" = 1 (i.e., 100%). p(H) is the probability of tossing heads and p(T) is the probability of tossing tails.
If we use the Pythagorean theorem, which states that the sum of the squares of the legs equals the square of the hypotenuse, we can say that the length of the hypotenuse of the right triangle in this case must be equal to 1 (i.e., 100%). This corresponds to a geometrical interpretation of the required property that the sum of the probabilities of tossing heads and of tossing tails must be equal to 1. At the same time, this geometric analogy characterizes probabilities as the squares of the lengths of the sides of a triangle. The right angle of the triangle is then an indication of the exclusivity of the observations.

Now let us deal with the geometric interpretation of the errors of our two observers. Under the condition that the length of the hypotenuse of the triangle must always be equal to 1, we can model the error rates of our observers by the angle between the triangle's legs, so that the square root of the probability determined by the first observer (including his or her errors) is depicted on the x-axis, and the square root of the probability found by the second observer (including that observer's errors) is depicted on the y-axis. Mathematically, we can apply the law of cosines to whatever kind of triangle this produces, as shown in Fig. 4.2, instead of the Pythagorean theorem, which applies only to exclusive observations resulting in a right-angled triangle.

What does this situation mean, and how can it be interpreted generally? The two observers are independent of each other, without being aware of the fact and without sharing any information with each other. Their (virtual) interconnection is represented geometrically by
FIGURE 4.2 In this non-right-angled triangle, in our case of coin tosses, "C" still must be equal to 1. p(H) represents the probability of tossing heads as registered by the first observer, and p(T) is the probability of tossing tails as registered by the second observer. The angle β models the errors of the observers.
the angle between the x- and y-coordinates, representing the mutual imperfection of their observing. The more perfect their observations are, the less dependent they are. In the case of perfect observers, this dependence disappears completely, corresponding geometrically to a right triangle.

Now let us examine the parallel between the breakup of a signal into harmonic components and probability theory. Probability values are analogous to energies and can be modeled as the squares of the values assigned to the individual phenomena (concrete values). The square roots of the probabilities of the phenomena may be interpreted as how dominant a given phenomenon is in a random process, or how often the phenomenon occurs. In this conception, phase indicates the degree of structural links between the individual phenomena, that is, by analogy, the shift with respect to a defined beginning. This beginning may be a phenomenon with a zero phase, to which we relate all of the structural links of the other phenomena.

Unlike classical information science, where the state of a system, or more precisely the information about its state, is described with the use of a probability function, in quantum information science the information about the state of the system is described using a wave probabilistic function.
Let us define discrete events A and B of a sample space S, with defined probabilities P(A), P(B). The quantum state |ψ⟩ represents the description of the quantum object given by the superposition of these events [3]:

$$|\psi\rangle = \psi(A)\,|A\rangle + \psi(B)\,|B\rangle \tag{4.14}$$

with the wave probabilistic functions defined as follows:

$$\psi(A) = \alpha_A\; e^{j\upsilon_A}, \qquad \psi(B) = \alpha_B\; e^{j\upsilon_B} \tag{4.15}$$
where $\alpha_A = \sqrt{P(A)}$, $\alpha_B = \sqrt{P(B)}$ are the moduli, and υA, υB are the phases of the wave probabilistic functions. In accordance with this general principle, we can see that we obtain the classical theory of probability by raising the complex wave function to the second power, whereby we automatically lose the phase characteristic of our model.

What do these ideas have to do with quantum physics? In the case of quantum physics, there is a definite model of behavior of a studied system, which we affect by our method of measurement. This means that the result of the measurement is not a description of the original system, but of the new behavior of the system influenced by our measuring. We get something that can be compared with our observer with thick glasses, that is, a model that is dependent on the observer. To find a model of the behavior of the original system (without the intervention of measuring), we must eliminate the error of the observer, that is to say, we must introduce into our model phase parameters that correct the intervention of the measurement method.
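A small numeric sketch of this correction: the registered frequencies of the two imperfect observers are assumed values, and the angle β is recovered from the law of cosines so that the total probability is restored to 100%.

```python
import numpy as np

# Assumed imperfect observers: moduli from registered frequencies, and a
# phase difference modelling their (virtual) mutual dependence (Fig. 4.2)
p_heads, p_tails = 0.55, 0.52      # each observer over-counts a little
beta = np.arccos((1.0 - p_heads - p_tails) /
                 (2.0 * np.sqrt(p_heads * p_tails)))   # law-of-cosines angle

psi_h = np.sqrt(p_heads)
psi_t = np.sqrt(p_tails) * np.exp(1j * beta)

total = abs(psi_h + psi_t) ** 2    # = p_H + p_T + 2 sqrt(p_H p_T) cos(beta)
print(total)                        # -> 1.0: the phase restores consistency
```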
4.4 Composition of quantum objects

Let us study quantum compositions on illustrative examples. For simplicity, we can imagine a quantum system composed of three subsystems A, B, and C, each of which has an output of 1 or 0. The subsystems are characterized by the following wave probabilistic functions:

$$\psi(A) = \alpha_A\,|0\rangle_A + \beta_A\,|1\rangle_A \tag{4.16}$$

$$\psi(B) = \alpha_B\,|0\rangle_B + \beta_B\,|1\rangle_B \tag{4.17}$$

$$\psi(C) = \alpha_C\,|0\rangle_C + \beta_C\,|1\rangle_C \tag{4.18}$$
Practically, these subsystems could be the danger sensors in our body. The inner correlation between an alarm (falling of 1) and no alarm (falling of 0) is sometimes a bit complicated: in real situations, the sensor cannot resolve the problem, and the required output does not exist (no 0, no 1). On the other hand, both outputs 0 and 1 can fall simultaneously (we cannot decide whether 1 or 0). Such cases are typical for neural networks with high redundancy and can be modeled by quantum information.
Now we can study the situation of observing three couples of subsystems simultaneously: (A and B), (A and C), and (B and C). The wave probabilities assigned to each combination are computed using the Kronecker operation [35]:

$$\psi(A,B) = \alpha_A\alpha_B\,|0,0\rangle_{A,B} + \alpha_A\beta_B\,|0,1\rangle_{A,B} + \beta_A\alpha_B\,|1,0\rangle_{A,B} + \beta_A\beta_B\,|1,1\rangle_{A,B} \tag{4.19}$$

$$\psi(A,C) = \alpha_A\alpha_C\,|0,0\rangle_{A,C} + \alpha_A\beta_C\,|0,1\rangle_{A,C} + \beta_A\alpha_C\,|1,0\rangle_{A,C} + \beta_A\beta_C\,|1,1\rangle_{A,C} \tag{4.20}$$

$$\psi(B,C) = \alpha_B\alpha_C\,|0,0\rangle_{B,C} + \alpha_B\beta_C\,|0,1\rangle_{B,C} + \beta_B\alpha_C\,|1,0\rangle_{B,C} + \beta_B\beta_C\,|1,1\rangle_{B,C} \tag{4.21}$$
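Numerically, Eq. (4.19) is just the Kronecker product of the two amplitude vectors. The amplitudes below are assumed for illustration.

```python
import numpy as np

# Assumed subsystem amplitudes (alpha: output 0, beta: output 1)
alpha_a, beta_a = np.sqrt(0.7), np.sqrt(0.3) * np.exp(1j * 0.5)
alpha_b, beta_b = np.sqrt(0.4), np.sqrt(0.6) * np.exp(-1j * 0.2)

psi_a = np.array([alpha_a, beta_a])
psi_b = np.array([alpha_b, beta_b])

# Eq. (4.19): basis order |0,0>, |0,1>, |1,0>, |1,1>
psi_ab = np.kron(psi_a, psi_b)
for label, amp in zip(["|0,0>", "|0,1>", "|1,0>", "|1,1>"], psi_ab):
    print(label, abs(amp) ** 2)
```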
Let us present how the composition of wave probabilities works if we do logical AND operations on the level of real outputs. The situation that no pair signals an alarm is given by:

$$\psi\big(|00\rangle_{A,B} \cap |00\rangle_{A,C} \cap |00\rangle_{B,C}\big) = \alpha_A\alpha_B \cdot \alpha_A\alpha_C \cdot \alpha_B\alpha_C \tag{4.22}$$

Suppose that the alarm is signaled if at least one falling of 1 is detected on one of the pairs of subsystems (A, B) or (A, C). Such a situation corresponds to the wave representation:

$$\psi\big(|01\rangle_{A,B} \cup |10\rangle_{A,B} \cup |01\rangle_{A,C} \cup |10\rangle_{A,C}\big) = \alpha_A\beta_B + \beta_A\alpha_B + \alpha_A\beta_C + \beta_A\alpha_C \tag{4.23}$$

In our brain it may signal, for example, the first level of warning. On the other hand, a less strong reaction of our body can happen in the case of the falling of at least one 1 on the pairs (A, B) or (B, C):

$$\psi\big(|01\rangle_{A,B} \cup |10\rangle_{A,B} \cup |01\rangle_{B,C} \cup |10\rangle_{B,C}\big) = \alpha_A\beta_B + \beta_A\alpha_B + \alpha_B\beta_C + \beta_B\alpha_C \tag{4.24}$$

Finally, the system of our body starts to struggle for its existence in case both of the abovementioned logical outputs (4.23) and (4.24) are activated simultaneously. The wave function that represents this state is given by:

$$\psi\Big(\big(|01\rangle_{A,B} \cup |10\rangle_{A,B} \cup |01\rangle_{A,C} \cup |10\rangle_{A,C}\big) \cap \big(|01\rangle_{A,B} \cup |10\rangle_{A,B} \cup |01\rangle_{B,C} \cup |10\rangle_{B,C}\big)\Big) = (\alpha_A\beta_B + \beta_A\alpha_B + \alpha_A\beta_C + \beta_A\alpha_C) \cdot (\alpha_A\beta_B + \beta_A\alpha_B + \alpha_B\beta_C + \beta_B\alpha_C) \tag{4.25}$$
Another example shows the XOR logical operation between the values (A, B) in (4.19):

$$\psi(A \oplus B) = (\alpha_A\alpha_B + \beta_A\beta_B)\,|0\rangle_{XOR} + (\alpha_A\beta_B + \beta_A\alpha_B)\,|1\rangle_{XOR} \tag{4.26}$$

where ⊕ is the symbol for the XOR logical operation. If A, B have the same values, (0,0) or (1,1), the XOR output is 0, and if they have different values, (0,1) or (1,0), the XOR output is 1.

It is also good to include in our discussion an example of logical feedback. For illustration, we replace the original wave function ψ(A) given in (4.16) by a new one, ψnew(A), created as the OR logical operation between |0⟩XOR, |1⟩XOR of (4.26) and |0⟩A, |1⟩A of (4.16):

$$\psi_{new}(A) \propto \alpha_A(\alpha_A\alpha_B + \beta_A\beta_B)\,|0\rangle_A + \big[\alpha_A(\alpha_A\beta_B + \beta_A\alpha_B) + \beta_A(\alpha_A\alpha_B + \beta_A\beta_B) + \beta_A(\alpha_A\beta_B + \beta_A\alpha_B)\big]\,|1\rangle_A \tag{4.27}$$
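A short sketch of the XOR composition (4.26) with assumed amplitudes; the squared modulus of the XOR amplitude differs from the classical XOR probability precisely because of the phase interference.

```python
import numpy as np

alpha_a, beta_a = np.sqrt(0.7), np.sqrt(0.3) * np.exp(1j * 0.5)
alpha_b, beta_b = np.sqrt(0.4), np.sqrt(0.6) * np.exp(-1j * 0.2)

# Eq. (4.26): amplitudes of the XOR output
psi_xor0 = alpha_a * alpha_b + beta_a * beta_b  # inputs equal: (0,0) or (1,1)
psi_xor1 = alpha_a * beta_b + beta_a * alpha_b  # inputs differ: (0,1) or (1,0)

# Interference: |psi|^2 differs from the classical sum of probabilities
classical0 = 0.7 * 0.4 + 0.3 * 0.6
print(abs(psi_xor0) ** 2, classical0)
```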
These illustrative examples demonstrate logical feedback and the role of phase parameters in complex system modeling. We can imagine many stable long-term feedbacks mutually linked through their phase parameters, in a similar way as different quantum events are correlated.
4.5 Mixture of partial quantum information

A question that still remains open and needs further explanation is the right interpretation of the phase parameters of wave probabilities. Let us present an illustrative example of five unique observations of the states |0⟩, |1⟩ (with phase information):

$$\psi \propto e^{j\phi_{0,1}}|0\rangle_1 + e^{j\phi_{0,2}}|0\rangle_2 + e^{j\phi_{1,1}}|1\rangle_3 + e^{j\phi_{1,2}}|1\rangle_4 + e^{j\phi_{1,3}}|1\rangle_5 = \big(e^{j\phi_{0,1}} + e^{j\phi_{0,2}}\big)|0\rangle + \big(e^{j\phi_{1,1}} + e^{j\phi_{1,2}} + e^{j\phi_{1,3}}\big)|1\rangle = \sqrt{\tfrac{2}{5}}\; e^{j\phi_0}|0\rangle + \sqrt{\tfrac{3}{5}}\; e^{j\phi_1}|1\rangle \tag{4.28}$$
We can look at the combination of the wave properties (wave probabilistic functions assigned to the different states |0⟩, |1⟩) and at the corpuscular behavior (the falling of the different states |0⟩, |1⟩). This interpretation, given for macroscopic systems, is very close to the ghost waves studied in quantum mechanics [43]. According to this theory, ghost waves continuously create the space of possibilities, according to the quantum wave principles, in which the particle behaves (falls on one of the states). Because we can register only the falling on 0 or 1 (the modulus of the wave parameters), the phases can only be measured through statistics (an aggregation of the set of many realizations). The final phase parameters ϕ0, ϕ1 describe the dependence between the statistics of falling on 0 or 1. Unfortunately, they do not carry phase information about the unique phases ϕ0,1, ϕ0,2, ϕ1,1, ϕ1,2, ϕ1,3. Such information is lost during the aggregation.

In real complex systems like our brain, there is enormous redundancy encoded into the structure. We can suppose that the resulting model puts together a mixture of partial pieces of incomplete information through OR logical operations. Let us suppose two additional partial information models of the pair (A, B):

$$\psi_1(A,B) = \gamma_1\,|0,0\rangle_{A,B} + \gamma_2\,|1,1\rangle_{A,B} \tag{4.29}$$

$$\psi_2(A,B) = \delta_1\,|0,1\rangle_{A,B} + \delta_2\,|1,1\rangle_{A,B} \tag{4.30}$$
The parallel paths in the brain's network correspond to partially incomplete information, which should be included in the original model (4.19). Because it is only partial information, it must be added to the original model very carefully, so as to enrich the model's mixture and not weaken it with incomplete information.
For this process, the wave functions η1, η2 must be determined. The mixture of the partial information given by (4.29), (4.30) with the original model (4.19) defines the new, actualized wave probabilistic function:

$$\tilde\psi(A,B) = \psi(A,B) + \eta_1\,\psi_1(A,B) + \eta_2\,\psi_2(A,B) = (\eta_1\gamma_1 + \alpha_A\alpha_B)\,|0,0\rangle_{A,B} + (\eta_2\delta_1 + \alpha_A\beta_B)\,|0,1\rangle_{A,B} + \beta_A\alpha_B\,|1,0\rangle_{A,B} + (\eta_1\gamma_2 + \eta_2\delta_2 + \beta_A\beta_B)\,|1,1\rangle_{A,B} \tag{4.31}$$
Theoretically, we can have a lot of partial information, and all pieces can be incorporated into the resulting mixture model. This process can be explained, for example, as a learning process: a step-by-step incorporation of new experiences into the quantum model. A similar principle has already been applied in the theory of multimodels [18], where partial information was represented by a set of differential equations (time-varying models) and the final model was created as a mixture of these components.
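A sketch of the mixture update (4.31); the partial models, the weights η1, η2, and the final renormalization step are our own illustrative assumptions.

```python
import numpy as np

# Original model (4.19) and partial models (4.29)-(4.30) on the basis
# |0,0>, |0,1>, |1,0>, |1,1>; all amplitudes assumed for illustration
psi = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)
psi1 = np.array([0.8, 0.0, 0.0, 0.6], dtype=complex)   # gamma_1, gamma_2
psi2 = np.array([0.0, 0.6, 0.0, 0.8], dtype=complex)   # delta_1, delta_2

eta1 = 0.3 * np.exp(1j * 0.2)     # assumed mixture weights
eta2 = 0.2 * np.exp(-1j * 0.4)

psi_new = psi + eta1 * psi1 + eta2 * psi2   # Eq. (4.31)
psi_new /= np.linalg.norm(psi_new)          # renormalize (our assumption)
print(np.abs(psi_new) ** 2)
```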
4.6 Time-varying quantum objects

Let us start the discussion of time-varying quantum objects with an easy example of two separate events A and B, both falling with the same probability of 0.25, and with a time-varying phase parameter e^{jωt} assigned to the event B. The time-varying wave probabilistic function of the pair (A, B) is then given by:

$$\Psi(t) = \psi_A(t)\,|A\rangle + \psi_B(t)\,|B\rangle = \frac{1}{2}\,|A\rangle + \frac{1}{2}\,e^{j\omega t}\,|B\rangle \tag{4.32}$$

The union probability of the events A and B under the normalization condition is equal to:

$$P(A \cup B) = \big|\psi_A(t) + \psi_B(t)\big|^2 = \Big|\frac{1}{2} + \frac{1}{2}\,e^{j\omega t}\Big|^2 \in \langle 0, 1\rangle \tag{4.33}$$
The phase rotation ω·t represents the change of attraction between the events A and B in time t (the wave functions are time continuous). One extreme occurs at times t1 when cos(ω·t1) = 1, and the union probability is P(A ∪ B) = 1. It means that at times t1 the event (A or B) surely occurs; there is zero probability that neither A nor B is found. The second extreme happens at times t2 when cos(ω·t2) = −1, and the union probability is P(A ∪ B) = 0. At times t2, the event (A or B) never occurs; there is zero probability that any event is found. In this manner, the events A, B are synchronized and desynchronized in periodic time waves with angular speed (frequency) ω = 2·π·f, despite the fact that the separate observations of A or B behave randomly with the given probability.
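The oscillating union probability (4.33) can be tabulated directly; the frequency is an assumed value.

```python
import numpy as np

omega = 2 * np.pi * 0.5           # assumed angular speed, f = 0.5 Hz
t = np.linspace(0.0, 4.0, 9)

psi_a = 0.5 * np.ones_like(t)     # sqrt(0.25), reference phase 0
psi_b = 0.5 * np.exp(1j * omega * t)

p_union = np.abs(psi_a + psi_b) ** 2   # Eq. (4.33), oscillates in [0, 1]
print(np.round(p_union, 3))            # 1 at cos(wt)=1, 0 at cos(wt)=-1
```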
4.7 Quantum information coding and decoding

In analogy to the well-known rule of quantum physics, we come to the similar result that energy is proportional to the angular speed ω. Synchronized/desynchronized waves can be interpreted as the energy necessary for switching the events (A or B) on/off. We can say nothing about whether the phases are stable or quickly time varying; we cannot recognize such quick phase changes due to the aggregation process. A better comprehension of this principle can open the door to a new way of quantum information coding. The frequency ω = 2·π·f could represent the structure of changes: the information is carried not by the probabilities of A or B, but through their link. Unfortunately, the changes cannot be measured by statistics, because averaging destroys the phase information. We can imagine the right- or left-phase modulation represented by the time-varying wave functions:

$$\text{Left:}\quad \psi \propto e^{-j(\Delta + \omega t)} \tag{4.34}$$

$$\text{Right:}\quad \psi \propto e^{+j(\Delta + \omega t)} \tag{4.35}$$
Switching between right and left polarization can transmit encoded information that can be decoded easily. The parameter Δ means the quasi-spin that represents an initial value. Suppose two quantum events |A⟩, |B⟩ at time zero; the phase of |A⟩ is supposed to be zero, and the phase Δ of |B⟩ is either 0, +π/2, −π/2, +π, or −π. In the case of Δ = 0, the intersection of the events |A⟩, |B⟩ is maximized. For Δ = +π/2 and Δ = −π/2, the intersection is zero, but due to the time evolution, they aim to have a positive or negative dependency, respectively. For Δ = π, the quantum events |A⟩, |B⟩ are negatively dependent. Time changes can be interpreted as the transmission of virtual pieces of energy between events, as known in physics.

Another question related to quantum information coding is how to use the inner quantum states. Due to quantum features, we have not only the two states 0, 1 as in classical informatics; due to the phase parameter, we have four possible variants of a q-bit: registered 0, registered 1, registered neither 0 nor 1 (empty set), and registered either 0 or 1 (both variants). Even if we cannot register the pure values 0 or 1, it is still possible to distinguish between the other two variants: the empty set (neither 0 nor 1) and both variants (either 0 or 1). It means that the inner states could also be used for quantum information coding.

In our brain, there exist a lot of paths with similar characteristics. Due to the high redundancy of the neural networks, we can find many parallel paths with the same measurable probabilistic characteristics (the same moduli of input/output probabilities). The observer cannot distinguish among them; however, each path can carry a different phase. Switching among parallel paths leads to information coding known in radioelectronics as phase modulation.
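A toy illustration of this coding scheme, under the assumption that the receiver can observe the complex wave itself: bits are encoded as right/left polarization, Eqs. (4.34)-(4.35), and decoded from the sign of the estimated phase slope.

```python
import numpy as np

omega, delta = 2.0, 0.3           # assumed rate and quasi-spin offset
t = np.linspace(0.0, 1.0, 50)

def modulate(bit):
    """Right polarization (+) encodes 1, left (-) encodes 0, Eqs. (4.34)-(4.35)."""
    sign = +1 if bit else -1
    return np.exp(sign * 1j * (delta + omega * t))

def demodulate(wave):
    """Estimate the sign of the phase slope to recover the bit."""
    slope = np.diff(np.unwrap(np.angle(wave))).mean()
    return int(slope > 0)

bits = [1, 0, 1, 1, 0]
print([demodulate(modulate(b)) for b in bits])   # -> [1, 0, 1, 1, 0]
```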
4.8 Quantum data flow rate

Let us define a set of L basic quantum subsystems with the wave representations:

$$\psi_{Si}(t) = M_{Si}(t)\; e^{j\phi_{Si}(t)}, \quad i \in \{1, 2, \ldots, L\} \tag{4.36}$$

We can define the time-series representation of each ψSi(t) by constant moduli Mi,k and phase components νi,k in the following way:

$$\psi_{Si}(t) = \sum_{k=1}^{N} M_{i,k}\; e^{j\nu_{i,k} t} \tag{4.37}$$

In signal processing, the ν components represent harmonic frequencies obtained by the Fourier transform. In quantum system theory, we can interpret νi,k as the data flow rate of the ith subsystem in [bits/s]. If νi,k is higher, the kth component of the ith subsystem is able to exchange more data per second with its environment.

A quantum circuit is characterized by a wave information flow Φ and a wave information content I as wave probabilistic functions. In the following parts, we will not use the full complex description (4.37) but only the simple phase dynamics of the data flow rates νΦ and νI:

$$\psi_\Phi(t) = |\psi_\Phi(t)|\; e^{j(\nu_\Phi t + \phi_\Phi)}, \qquad \psi_I(t) = |\psi_I(t)|\; e^{j(\nu_I t + \phi_I)} \tag{4.38}$$
The phases ϕΦ and ϕI represent static interactions, and νΦ and νI the dynamic data exchange among circuit components. Assume further that we can distinguish between positive and negative data flow rates ν, which means a clockwise or anticlockwise rotation in phase space. Let us agree, within quantum system theory, on the assumption that positive values of ν mean ordering. In contrast, negative values represent disordering (chaos). Connecting the same amount of order and disorder yields a zero phase. If the data flow rate is zero, the subsystem interacts only randomly, without any sophisticated structure of dependencies. The maximal value of the data flow rate νmax specifies a threshold of the communication possibilities of the subsystem.
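Eq. (4.37) can be synthesized directly as a sum of harmonic components; the moduli and data flow rates below are assumed values.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 400)
M = [0.6, 0.3, 0.1]       # assumed constant moduli M_{i,k}
nu = [1.0, 4.0, -2.0]     # assumed data flow rates nu_{i,k} in rad/s
                          # (negative rate = disordering component)

# Eq. (4.37): psi_Si(t) = sum_k M_{i,k} e^{j nu_{i,k} t}
psi = sum(m * np.exp(1j * n * t) for m, n in zip(M, nu))
print(np.abs(psi[:5]))    # time-varying modulus from component beating
```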
4.9 Holographic approach to phase parameters

A new class of physics was introduced in Ref. [78], where it was proposed that the physical entropy is a combination of two magnitudes that compensate each other: the observer's ignorance, measured by Shannon's statistical entropy, and the algorithmic entropy, which measures the disorder of the observed system (the smallest number of bits needed to register it in memory). Atlan [79] defined the system's order as a compromise between the maximum information content (possible variety) and maximum redundancy. The ambiguity can be described as a noise function that can be manifested in a negative way (destructive ambiguity), with the classical meaning of a disorganizing effect, or in a positive way (autonomy-producing ambiguity) that acts by increasing the relative autonomy of a part of the system, reducing the system's natural redundancy and increasing its informational content.
We can extend Zurek's approach [78] to a complex domain, where the physical entropy is a variable that can be decomposed onto both the x- and y-axes. The x-axis represents the maximum variety (algorithmic entropy), and the y-axis can be interpreted as the maximum redundancy (statistical entropy). This implies a certain ambiguity: the bit capacity of a physical system in Shannon's sense versus the semantic content (meaning). An alternative inspiration for the interpretation of the complex domain comes from the definition of identity [36], which can be broken down into endogenous (regularity, goals, species) and exogenous (openness, acceptance) components. The x-axis can project the endogenous and the y-axis the exogenous components, respectively.

For the physical interpretation of phase parameters, we find inspiration in the holographic approach described in Refs. [55,56]. Assume the real observation relief (an objective reality) is specified as:

$$s(x) = A + B\,\cos(\omega_x x) \tag{4.39}$$
The value A is a constant signal, and B is the amplitude of the cosine changes along the x-axis with frequency ωx. Now suppose the observer uses a cosine signal cos(ωt·t) with wavelength λt. If the observer is located directly at a point x, it watches the complex signal where the objective reality is modulated onto light waves:

$$\psi(x,t) = \big[A + B\,\cos(\omega_x x)\big]\; e^{j\omega_t t} \tag{4.40}$$
In case the observer watches the scene from a distance z, we must take the diffraction rules into account. The image of reality seen from the distance z (the observed signal) can be calculated using the Fresnel-Kirchhoff diffraction integral [57]. For our simplified relief (4.39), the observer watches:

$$\psi_z(x,t) = A\; e^{j\omega_t t} + \frac{1}{2}\,B\,\big[e^{j(\omega_t t + \omega_x x + \Phi_z)} + e^{j(\omega_t t - \omega_x x + \Phi_z)}\big] \tag{4.41}$$
The new phase parameter can be computed from the geometry [58]:

$$\Phi_z = \frac{1}{2\pi}\,\omega_x^2\,\lambda_t\,z \tag{4.42}$$
In a real observation process, the phase parameters are not available to us. We can measure only the probability or energy spectrum:

$$P_z = \psi_z(x,t)\,\psi_z^*(x,t) = A^2 + 2\,A\,B\,\cos(\Phi_z)\,\cos(\omega_x x) + \frac{1}{2}\,B^2 \cos(2\,\omega_x x) + \frac{1}{2}\,B^2 \tag{4.43}$$
By omitting the distortion components (e.g., the higher frequencies), we can extract the useful information from the observed reality:

$$P_z \approx A^2 + 2\,A\,B\,\cos(\Phi_z)\,\cos(\omega_x x) \tag{4.44}$$
From the above equation, it is evident how the phase parameter Φz can bias an observation process. For better clarification, we can rewrite Eq. (4.41) in "bracket" notation:

$$\psi_z(x,t) \propto A\,|1\rangle + B\; e^{j\Phi_z}\,|\cos(\omega_x x)\rangle \tag{4.45}$$
The intersection operation represents the coexistence of the constant and cosine signals together. The phase parameter Φz in our approach defines the quality of the observation process and how the image of reality is distorted due to imperfect observation. The wave nature disappears if the phase is zero. This can occur if the observer is located directly at the place of measurement (z → 0), or if the measurement is done with infinite resolution (λt → 0). Unfortunately, measurements in quantum physics are limited by the Planck distance and the wavelength of light, so the phase cannot be zero.
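The effect of the phase parameter Φz can be verified numerically: the sketch below builds ψz from Eqs. (4.41)-(4.42) with assumed relief and observer parameters and compares the exact spectrum (4.43) with the approximation (4.44).

```python
import numpy as np

A, B, omega_x = 1.0, 0.4, 3.0          # relief parameters of Eq. (4.39)
lam_t, z = 0.05, 2.0                    # observer wavelength and distance
phi_z = omega_x ** 2 * lam_t * z / (2 * np.pi)   # Eq. (4.42)

x = np.linspace(0.0, 2.0, 200)
# Eq. (4.41) at t = 0 (the common factor e^{j w_t t} drops out of |.|^2)
psi_z = A + 0.5 * B * (np.exp(1j * (omega_x * x + phi_z)) +
                       np.exp(1j * (-omega_x * x + phi_z)))

p_exact = np.abs(psi_z) ** 2                                   # Eq. (4.43)
p_approx = A ** 2 + 2 * A * B * np.cos(phi_z) * np.cos(omega_x * x)  # (4.44)
print(np.max(np.abs(p_exact - p_approx)))   # residue ~ B^2 distortion terms
```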
4.10 Two (non-)distinguished quantum subsystems

Let us have two quantum subsystems A, B described by the wave probabilistic functions ψA(.), ψB(.), and let us assign features (e.g., a set of parameters) p1 or p2 to the subsystems A and B. Firstly, we suppose that we are able to distinguish between the quantum subsystems A and B. The quantum system in this case is represented by the following wave probabilistic function with the given parameters p1 and p2:

$$\psi_{A,B}(p_1, p_2) = \psi_A(p_1)\cdot\psi_B(p_2) \tag{4.46}$$
In case we are not able to assign the right features p1 or p2 to the subsystems A or B, we must apply the principle of quantum indistinguishability [4]. This means that we have to take into account all variants of the possible arrangements:

$$\psi_{A,B}(p_1, p_2) = \psi_A(p_1)\,\psi_B(p_2) \pm \psi_A(p_2)\,\psi_B(p_1) \tag{4.47}$$
where ± characterizes the symmetry or antisymmetry of the two variants. Let us suppose that we have a generalized "gravitation energy" between our two subsystems, UA,B(p1, p2). How much energy will be used to connect A and B under the condition of quantum indistinguishability? From (4.47), we can compute the probability density:

$$\rho(p_1, p_2) = |\psi_A(p_1)|^2\,|\psi_B(p_2)|^2 \pm 2\,\mathrm{Re}\big[\psi_A(p_1)\,\psi_B(p_2)\,\psi_A^*(p_2)\,\psi_B^*(p_1)\big] + |\psi_A(p_2)|^2\,|\psi_B(p_1)|^2 \tag{4.48}$$

The mean value of the connection energy is given by:

$$\bar U_{A,B} \approx C_{A,B} \pm X_{A,B} \tag{4.49}$$
where CA,B is the classical energy integral, and XA,B is the exchange integral, a consequence of quantum indistinguishability. CA,B and XA,B can be computed using (4.48) under the symmetry condition [4]:

$$C_{A,B} = \int_{V_1}\!\!\int_{V_2} |\psi_A(p_1)|^2\,|\psi_B(p_2)|^2\; U_{A,B}(p_1, p_2)\; dp_1\, dp_2 \tag{4.50}$$

$$X_{A,B} = \int_{V_1}\!\!\int_{V_2} \psi_A^*(p_1)\,\psi_B^*(p_2)\,\psi_A(p_2)\,\psi_B(p_1)\; U_{A,B}(p_1, p_2)\; dp_1\, dp_2 \tag{4.51}$$
We can denote the distance between the subsystems A and B as R = p1 − p2. Then Eq. (4.48) with a minus sign represents the binding of the subsystems A and B, in analogy with the hydrogen atom in physics.
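A discretized sketch of Eqs. (4.50)-(4.51); the Gaussian wave functions and the toy interaction U are assumptions made only to render the integrals computable.

```python
import numpy as np

p = np.linspace(-5.0, 5.0, 200)
dp = p[1] - p[0]

def gauss(p, mu):
    """Assumed real Gaussian wave function centred at mu, L2-normalized."""
    g = np.exp(-0.5 * (p - mu) ** 2)
    return g / np.sqrt(np.sum(g ** 2) * dp)

psi_a, psi_b = gauss(p, -1.0), gauss(p, 1.0)
P1, P2 = np.meshgrid(p, p, indexing="ij")
U = 1.0 / (np.abs(P1 - P2) + 0.5)          # toy interaction U_{A,B}(p1, p2)

# Eq. (4.50): classical energy integral
C = np.sum(psi_a[:, None] ** 2 * psi_b[None, :] ** 2 * U) * dp * dp
# Eq. (4.51): exchange integral from quantum indistinguishability
X = np.sum(psi_a[:, None] * psi_b[None, :] *
           psi_a[None, :] * psi_b[:, None] * U) * dp * dp

print(C, X, C + X, C - X)   # mean connection energy, Eq. (4.49)
```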
4.11 Quantum information gate

With respect to the electrical-information analogies, we can define the wave information flow and the wave information content as wave probabilistic functions dependent on parameters p (for the sake of simplicity, we suppose that all quantities are time independent):

$$\psi_\varphi(p) = |\psi_\varphi(p)|\; e^{j\nu_\varphi(p)} \tag{4.52}$$

$$\psi_I(p) = |\psi_I(p)|\; e^{j\nu_I(p)} \tag{4.53}$$

We can define the active information power on the level of wave probabilistic functions in the following way [4]:

$$P_I(p) = \psi_\varphi(p)\,\psi_I^*(p) + \psi_\varphi^*(p)\,\psi_I(p) = 2\,|\psi_\varphi(p)|\,|\psi_I(p)|\,\cos\big(\nu_I(p) - \nu_\varphi(p)\big) \tag{4.54}$$
The reactive information power replaces cos(.) by the function sin(.) in (4.54). Let us suppose the quantities as follows:

$$\psi_\Phi = \alpha_{\Phi,1}\,|\Phi_1\rangle + \alpha_{\Phi,2}\,|\Phi_2\rangle + \cdots + \alpha_{\Phi,N}\,|\Phi_N\rangle \tag{4.55}$$

$$\psi_I = \alpha_{I,1}\,|I_1\rangle + \alpha_{I,2}\,|I_2\rangle + \cdots + \alpha_{I,N}\,|I_N\rangle \tag{4.56}$$

where Φ1, …, ΦN and I1, …, IN are the possible values of the information flow and the information content, respectively. The complex parameters αΦ,1, …, αΦ,N and αI,1, …, αI,N represent wave probabilities, taking into account both the probability of the falling of the relevant flow/content value and their mutual dependences [15].
The information power [23] can be expressed through wave probabilistic functions as follows (under the assumption of distinguishability):

$$\psi_{P_I} = \psi_\Phi \otimes \psi_I = \alpha_{\Phi,1}\alpha_{I,1}\,|\Phi_1, I_1\rangle + \cdots + \alpha_{\Phi,1}\alpha_{I,N}\,|\Phi_1, I_N\rangle + \cdots + \alpha_{\Phi,N}\alpha_{I,1}\,|\Phi_N, I_1\rangle + \cdots + \alpha_{\Phi,N}\alpha_{I,N}\,|\Phi_N, I_N\rangle \tag{4.57}$$

where the symbol ⊗ means the Kronecker operation [15,16], transformed into multiplication for vectors; each (i, j)th component |Φi, Ij⟩ represents a particular value of the information power that characterizes the falling/measuring of the information flow Φi and the information content Ij. Multiplication of different combinations of the information flows and contents |Φi, Ij⟩, |Φk, Il⟩ can achieve the same (or similar) information power Kr:

$$\Phi_i \cdot I_j \approx \Phi_k \cdot I_l \approx K_r \tag{4.58}$$

It can be seen that interferences of wave probabilities can emerge, and wave resonances among the wave parameters are also possible. Finally, the information power in renormalized form can be expressed as:

$$\psi_{P_I} = \beta_1\,|K_1\rangle + \beta_2\,|K_2\rangle + \cdots + \beta_r\,|K_r\rangle + \cdots \tag{4.59}$$

This approach leads to the resonance principle between the received/transmitted information flow and information content. It is supposed that each quantum information gate has its wave input/output information flow Φi and content Ij. With respect to this statement, we can therefore define the wave input/output information powers PIin, PIout assigned to such a gate. This enables modeling a deep perception and new soft-system categories for both the input and output parameters of each quantum information gate.

Let us have an illustrative example with N variants of the available information flows φ̃i and the information contents Ĩi assigned to an event or process (in this example we do not distinguish between the source and the recipient of information):

$$\tilde\varphi_1, \tilde\varphi_2, \ldots, \tilde\varphi_N; \qquad \tilde I_1, \tilde I_2, \ldots, \tilde I_N \tag{4.60}$$
The model of the different combinations of information flows and information contents can be created. Due to the assumption of linearity, we can sum them up to achieve the final values of the information flows and the information contents as follows:

$$|\Phi_1\rangle = |0, 0, \ldots, 0\rangle = |0\rangle, \qquad |I_1\rangle = |0, 0, \ldots, 0\rangle = |0\rangle \tag{4.61}$$

$$|\Phi_2\rangle = |1, 0, \ldots, 0\rangle = \tilde\varphi_1, \qquad |I_2\rangle = |1, 0, \ldots, 0\rangle = \tilde I_1 \tag{4.62}$$

$$|\Phi_3\rangle = |1, 1, \ldots, 0\rangle = \tilde\varphi_1 + \tilde\varphi_2, \qquad |I_3\rangle = |1, 1, \ldots, 0\rangle = \tilde I_1 + \tilde I_2 \tag{4.63}$$

$$\cdots$$

$$|\Phi_N\rangle = |1, 1, \ldots, 1\rangle = \tilde\varphi_1 + \tilde\varphi_2 + \cdots + \tilde\varphi_N, \qquad |I_N\rangle = |1, 1, \ldots, 1\rangle = \tilde I_1 + \tilde I_2 + \cdots + \tilde I_N \tag{4.64}$$
The quantum information flow and the information content can be represented as a superposition of the above-defined N variants:

$$\psi_\Phi = \alpha_{\Phi,1}\,|\Phi_1\rangle + \alpha_{\Phi,2}\,|\Phi_2\rangle + \cdots + \alpha_{\Phi,N}\,|\Phi_N\rangle \tag{4.65}$$

$$\psi_I = \alpha_{I,1}\,|I_1\rangle + \alpha_{I,2}\,|I_2\rangle + \cdots + \alpha_{I,N}\,|I_N\rangle \tag{4.66}$$

A widely accepted criterion is the wave information power, which requires optimization of both the information flow and the information content. For example, the (i, j)th combination yields the following information power:

$$P_{i,j} = \Phi_i \cdot I_j = \big(\tilde\varphi_0 + 0 + \cdots + \tilde\varphi_i + \cdots\big)\cdot\big(\tilde I_0 + 0 + \cdots + \tilde I_j + \cdots\big) \tag{4.67}$$
The values |Φ1⟩ and |I1⟩ need not be strictly zero, as given in Eq. (4.61). We can suppose that there exists some positive or negative information background (|Φ1⟩ = φ̃0, |I1⟩ = Ĩ0), given, for example, by common culture, education, experience, and evolution. Observers do not perceive the information background behind the events, but it has an impact on the events' energy. If we imagine a negative information background, it takes more energy to achieve the positive information content that orders the system. Maybe we can define a term, information ecology, that guarantees a positive information background.
4.12 Quantum learning

The quantum information approach is appropriate for quantum logic that combines the parallel existence of different events or processes. For example, in our brain we have a lot of incomplete information obtained in different time intervals under specific conditions. The question is whether we can put all the relationships together, connect parallel information, and create a knowledge picture of reality by incorporating all available information. It is evident that we work with random values, and it is necessary to use tuned feedbacks to maximize the wave probabilistic functions that represent the synergy between reality and our model. On the other hand, some non-actual events or processes should be forgotten as soon as possible to clear the conscious space and not use the bad information. For this process, we need to continuously compare our cognition model with the real world and delete some pieces of information (events, processes, and links). We can suppose the interconnection of events A and B:

$$\psi_{A,B} = \alpha_A\alpha_B\,|00\rangle_{A,B} + \alpha_A\beta_B\,|01\rangle_{A,B} + \beta_A\alpha_B\,|10\rangle_{A,B} + \beta_A\beta_B\,|11\rangle_{A,B} \tag{4.68}$$
This equation can be interpreted as a set of wave probabilistic functions αAαB, αAβB, βAαB, βAβB assigned to all of the variants of the connected A, B subsystems: neither A nor B occurs, A does not occur and B occurs, A occurs and B does not occur, and both A and B occur. With respect to the observed reality, feedback links can be applied to optimize the wave function of the best variant. If the synergy between A and B is advantageous, then the quantum learning system should maximize the value of βAβB. If it is appropriate in further processing to forget event A and to use only event B, then the value of αAβB should be maximized. We can use this instrument for modeling the synergies among different parts of reality. Maybe this methodology moves towards achieving wisdom through a continual quantum learning process.
5 Features of quantum information

5.1 Quantization

We assume two quantum subsystems that can reach two values: A = 0, A = 1, B = 0, and B = 1, with linear phases that take the form ψ ∝ e^{jm(Δ+2πk)}, where the symbol ∝ means equality up to a normalization factor, Δ is the phase, and j represents the imaginary unit. The wave probabilistic function must achieve single-valuedness [14] also for the phases (Δ + 2·π·k), where k is an integer [20]. Mathematically, we arrive at the following wave probabilistic functions:

$$\psi(A=0) = \sqrt{P(A=0)}, \qquad \psi(B=0) = \sqrt{P(B=0)} \tag{5.1}$$

$$\psi(A=1) = \sqrt{P(A=1)}\; e^{jm(\Delta + 2k\pi)} \tag{5.2}$$

$$\psi(B=1) = \sqrt{P(B=1)}\; e^{jm(\Delta + 2k\pi)} \tag{5.3}$$
We assume that observer No. 1 monitors the state A = 0 and observer No. 2 monitors the state B = 1. The probability union that A = 0 or B = 1 is given as:

$$P\big((A=0) \cup (B=1)\big) = \Big|\sqrt{P(A=0)} + \sqrt{P(B=1)}\; e^{jm(\Delta + 2k\pi)}\Big|^2 = P(A=0) + P(B=1) + 2\,\sqrt{P(A=0)\,P(B=1)}\,\cos\big(m\,(\Delta + 2\,k\,\pi)\big) \tag{5.4}$$
which is the quantum equivalent of the classical well-known probabilistic rule: P ððA 5 0Þ , ðB 5 1ÞÞ 5 P ðA 5 0Þ 1 P ðB 5 1Þ 2 P ððA 5 0Þ - ðB 5 1ÞÞ:
(5.5)
The quantum rule (5.4) enables both a negative and a positive sign due to the phase parameter. It is evident that the intersection of probabilities in the quantum world can also be negative, $P((A=0) \cap (B=1)) < 0$, despite the fact that the probabilities $P(A=0) \ge 0$, $P(B=1) \ge 0$ are positive. The quasi-spin was first introduced by Svitek in Ref. [16]. If it is an integer, $m \in \{0, \pm 1, \pm 2, \pm 3, \ldots\}$, we can guarantee a positive sign in (5.4). Such quantum subsystems are called information bosons. Information fermions are characterized by a half-integer quasi-spin $m \in \{\pm\frac{1}{2}, \pm\frac{3}{2}, \pm\frac{5}{2}, \ldots\}$, and we must admit the negative sign in (5.4). In a similar way, we can also deduce information quarks analogous to quantum physics with its special properties [24].
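The sign behavior of rule (5.4) is easy to check numerically. The following sketch (with illustrative probabilities, not values from the text) evaluates the union probability for an integer and a half-integer quasi-spin over two equivalent phase branches:

```python
import numpy as np

# Sketch of rule (5.4): union probability with a quasi-spin phase term.
# Integer m ("information boson") keeps cos(m*(Delta + 2*pi*k)) consistent
# across branch choices k; half-integer m ("information fermion") flips sign.

def union_probability(pA, pB, m, delta, k=0):
    phase = m * (delta + 2 * np.pi * k)
    return pA + pB + 2 * np.sqrt(pA * pB) * np.cos(phase)

pA = pB = 0.2
delta = 0.0
for m in (1, 0.5):                       # boson vs. fermion quasi-spin
    for k in (0, 1):                     # two equivalent phase branches
        print(f"m={m}, k={k}: P(union) = {union_probability(pA, pB, m, delta, k):.3f}")
# For m=1 both branches agree (positive sign); for m=0.5 the k=1 branch
# yields the negative sign, i.e. a negative quantum "intersection".
```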
5.2 Quantum entanglement

One of the most remarkable phenomena of quantum physics is quantum entanglement. This phenomenon has no parallel in classical physics, and it cannot even be generated using classical methods.
We can take a closer look at this phenomenon using our simple example with coin tossing. Suppose we have two parallel systems, and the outcome of each system is either heads or tails. Suppose the measurement of the first system gives a 50% chance of heads and a 50% chance of tails, but the output of the measurement of the second system is determined entirely by the value measured on the first system. In other words, the output of the second system is 100% entangled with the output of the first: if the output of the first system is heads, the second system will with certainty read tails, and vice versa; if the output of the first system is tails, the output of the second system will definitely be heads. This conclusion applies regardless of the distance between the two systems. Let us define the abovementioned example through the joint probability:

$$P((A=0) \cup (B=1)) = P(A=0) + P(B=1) + 2\sqrt{P(A=0)\,P(B=1)}\,\cos(\phi), \qquad (5.6)$$
where $\phi$ is the phase difference between the wave functions $\psi(A=0)$ and $\psi(B=1)$. Let us suppose now that [19]:

$$P((A=0) \cup (B=1)) = 0. \qquad (5.7)$$
This case can occur for the following values of $\phi$:

$$\phi = \arccos\left(-\,\frac{P(A=0) + P(B=1)}{2\sqrt{P(A=0)\,P(B=1)}}\right). \qquad (5.8)$$
If, for example, $P(A=0) = P(B=1) = 1/2$, then $\phi = \pi$ represents 100% entanglement. As a result of the entanglement (5.7), we can write that the following event will surely happen:

$$P((A=1) \cap (B=0)) = 1. \qquad (5.9)$$
We can also start with the following probability, instead of (5.7):

$$P((A=1) \cup (B=0)) = 0. \qquad (5.10)$$

Then the entanglement yields:

$$P((A=0) \cap (B=1)) = 1. \qquad (5.11)$$
Measuring the first quantum object (the probability of measuring the event A = 0 is 50%, and the probability of measuring A = 1 is also 50%) fully determines the value that will be measured on the second object. Eqs. (5.9) and (5.10) yield the well-known Bell state introduced in Ref. [35], which is used in many applications, for example, in quantum cryptography.
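A short numerical check of Eqs. (5.6)-(5.9), using the symmetric case quoted above:

```python
import numpy as np

# Sketch of Eqs. (5.6)-(5.9): find the phase phi that makes the union
# probability vanish, which signals 100% entanglement of the two outcomes.

def entanglement_phase(pA0, pB1):
    """Phase from Eq. (5.8); defined only when the arccos argument is in [-1, 1]."""
    return np.arccos(-(pA0 + pB1) / (2 * np.sqrt(pA0 * pB1)))

pA0 = pB1 = 0.5
phi = entanglement_phase(pA0, pB1)
union = pA0 + pB1 + 2 * np.sqrt(pA0 * pB1) * np.cos(phi)   # Eq. (5.6)
print(f"phi = {phi:.4f} rad (pi = {np.pi:.4f}), union probability = {union:.2e}")
# With P(A=0) = P(B=1) = 1/2 we get phi = pi, so P((A=0) u (B=1)) = 0 and
# consequently P((A=1) n (B=0)) = 1, as in Eq. (5.9).
```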
The quantum entanglement could also be described through environmental attributes. Such an interpretation can explain the strange situation in quantum mechanics where a unique photon (electron) is correlated with itself and fulfills the quantum rules. In our interpretation, we can say that a unique photon (electron) is correlated with its environment. Time-space synchronicity based on entanglement can be shown on the following time-position (t - x) diagram:

Time/position   x1   x2   x3   x4   x5   x6
t1              X    X    X    X    u1   X
t2              X    X    X    X    X    X
t3              X    u2   X    X    X    X
Events u1 and u2 are entangled. Such entanglement can be very complex and can connect events realized at very different positions (x5, x2) and different times (t1, t3). If there is a time-space entanglement of 100%, there is 100% synchronicity between events u1 and u2. We can imagine other variants of quantum entanglement, such as entanglement among elements located in different distinguishing levels. Such an approach leads to the definition of the holographic universe or fractal dimensions. Other variants can consider an entanglement between an observer and an observed reality.
5.3 Quantum environment

Deep thinking about quantum system science brought us once again to the idea of the system environment (in the past, it was called the ether). One of the interpretations of phase probabilistic functions says that the ether could be responsible for events' correlations. In the simplest example, the ether can be represented by events' observers, each of whom registers a different part of reality. The environment causes the inner link among observers. It can be supposed that the environment has other important features, such as information forgetting. With the help of the idea of the event's environment, the phase parameters can be better explained. Let us assume an event $u_a$ that exists under the condition of its a-environment, $\psi_a(u_a)$, and an event $u_b$ that exists in its (different) b-environment, $\psi_b(u_b)$. If both environments share the same features, a composition of the events $u_a$, $u_b$ can be modeled as the sum of wave probabilistic functions in the common environment:

$$\psi(u_a, u_b) = \psi_a(u_a)\,|u_a\rangle + \psi_b(u_b)\,|u_b\rangle \qquad (5.12)$$

But what happens if the environments assigned to the events $u_a$, $u_b$ are not the same? The joining of the events then leads to new wave probabilities $\tilde{\psi}_a(u_a)$, $\tilde{\psi}_b(u_b)$ characterizing the features of the new merged environment:

$$\tilde{\psi}(u_a, u_b) = \tilde{\psi}_a(u_a)\,|u_a\rangle + \tilde{\psi}_b(u_b)\,|u_b\rangle \qquad (5.13)$$
We can imagine that the wave functions $\psi_a(u_a)$, $\psi_b(u_b)$ are perpendicular (the events are completely independent) in their original environment. But after merging both events $u_a$, $u_b$, the new environment can cause different wave probabilistic functions $\tilde{\psi}_a(u_a)$ and $\tilde{\psi}_b(u_b)$. This means that in the new environment these events are dependent, even if they were previously independent. The discussion can be naturally extended to a single event $u_a(t)$ measured in different time intervals. Though the events $u_a(t_1)$ and $u_a(t_2)$ are generated from the same source $\psi_a(t)$, their realization takes place under different conditions represented by the wave functions $\tilde{\psi}_a(t_1)$ and $\tilde{\psi}_a(t_2)$:

$$\tilde{\psi}_a(t_1, t_2) = \tilde{\psi}_a(t_1)\,|u_a(t_1)\rangle + \tilde{\psi}_a(t_2)\,|u_a(t_2)\rangle \qquad (5.14)$$
A deep understanding of wave probabilistic functions’ transformation due to an environment can bring new knowledge into quantum information science.
5.4 Quantum identity

With respect to this discussion, we can imagine that information about the environment's reaction can be further used for the definition of quantum identity. Let us define two binary subsystems A and B characterized by the wave probabilities $\psi(A=0)$, $\psi(A=1)$, $\psi(B=0)$, and $\psi(B=1)$. In Refs. [20,21], the product probabilistic rule for two wave functions was presented:

$$P((A=1) \cap (B=1)) = \frac{1}{2}\left[\psi^{*}(A=1)\,\psi(B=1) + \psi(A=1)\,\psi^{*}(B=1)\right]. \qquad (5.15)$$
The symbol $\psi^{*}$ expresses the complex conjugate of $\psi$. This equation complies with the nondistinguishability principle because interchanging A and B has no influence on the probability $P((A=1) \cap (B=1))$. We can imagine that A represents the real subsystem and B its image, or in other words, how this subsystem or event is accepted by its environment. This feature was first introduced in Ref. [36] as subsystem identity. It is reasonable to suppose that the environment spends no energy on making changes to the original subsystem A, which means:

$$|\psi(A=1)| = |\psi(B=1)|. \qquad (5.16)$$
In the event that the environment fully accepts the subsystem A, both subsystems A and B are identical (they have the same phases), and we can rewrite (5.15) as the standard Copenhagen result [4]:

$$P(A=1) = |\psi(A=1)|^2. \qquad (5.17)$$
The acceptance of the subsystem A by its environment (modeled by its image B) can be differentiated by phase parameters. We denote the phase difference between $\psi(A=1)$ and $\psi(B=1)$ as $\Delta\varphi$. Then (5.15) can be given as:

$$P((A=1) \cap (B=1)) = |\psi(A=1)|^2\,\cos(\Delta\varphi). \qquad (5.18)$$
There are many variants of $\Delta\varphi$ for subsystem identity modeling. Full acceptance is modeled by $\Delta\varphi = 0$. The phase difference $\Delta\varphi = \pi$ represents negative acceptance (the environment is blind to the subsystem and rejects it), which yields the negative sign of $P((A=1) \cap (B=1))$.
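The acceptance model of Eqs. (5.15)-(5.18) can be verified in a few lines of Python; the probability value 0.6 below is an arbitrary illustration:

```python
import numpy as np

# Sketch of Eqs. (5.15)-(5.18): acceptance of subsystem A by its environment,
# modeled as an image B with equal modulus and a phase shift delta_phi.

def acceptance(p_A, delta_phi):
    """P((A=1) n (B=1)) for |psi(B=1)| = |psi(A=1)| and phase shift delta_phi."""
    psi_A = np.sqrt(p_A)                          # phase of A taken as reference
    psi_B = np.sqrt(p_A) * np.exp(1j * delta_phi)
    # Product rule (5.15); the result is real by construction.
    return 0.5 * (np.conj(psi_A) * psi_B + psi_A * np.conj(psi_B)).real

for dphi in (0.0, np.pi / 2, np.pi):
    print(f"delta_phi = {dphi:.2f}: P = {acceptance(0.6, dphi):+.3f}")
# dphi = 0 reproduces the Copenhagen result P(A=1) = |psi(A=1)|^2 = 0.6;
# dphi = pi gives the negative acceptance -0.6 discussed above.
```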
5.5 Quantum self-organization

Let us start with the description of a characteristic called fragility, meaning that in the case of system destruction, the system has an immune reaction with the ability to act against the destruction, for example, by managing a new ordering of its components. Taleb [34] spoke about antifragility: the ability of a system even to improve its characteristics in case of destruction. Such a characteristic is close to the terms graceful degradation and system resiliency, which should also be part of complex system theory. We would like to propose an example of simple self-organization that can be applied, for example, to a human-machine interface (HMI). The HMI offers only a few functionalities for beginners. After these are mastered, the offer of functionalities can be enlarged to include more and more functions. In the case of loss of ability to use the advanced functions, the HMI steps down to a lower level and the game is repeated. Such an approach enables tailor-made services based on the user's ability to use them. The theoretical model of fragility seems applicable to the notion of subsystems' self-organization. Let us assume we have a sum of wave functions of the whole system (the maximized normalized probability):

$$P = |\psi_1 + \psi_2 + \ldots|^2 = 1 \qquad (5.19)$$
In the case of any change, represented by the loss of a system component or by some broken link, the whole energy (5.19) is reduced. The system then searches for a new structure that is as close as possible to the maximized energy. This kind of optimization is coded in the behavior of living organisms: they try to survive under bad conditions and adapt themselves to the situation. In artificial intelligence, this feature can be organized by an "agent" responsible for system adaptation to achieve the best energy optimization under specific conditions. Let us deal with an illustrative example of three subsystems A, B, C, not admitting any negative probabilities P(A), P(B), or P(C), which means that the subsystems can only store or carry energy (they do not exhaust it). Further on, we suppose that all three subsystems fulfill the normalization condition:

$$P(A \cup B \cup C) = 1. \qquad (5.20)$$
As there is no link between the subsystems A and C or between the subsystems B and C, we must admit both positive and negative joint probability $P(A \cap B)$:

$$P(A \cup B \cup C) = P(A) + P(B) \pm P(A \cap B) + P(C) = 1. \qquad (5.21)$$

We can write Eq. (5.21) in a more universal wave probabilistic form:

$$P(A \cup B \cup C) = P(A) + P(B) + 2\sqrt{P(A)\,P(B)}\,\cos(\phi) + P(C) = 1, \qquad (5.22)$$

where $\phi$ is the phase difference between the wave functions $\psi(A)$ and $\psi(B)$. Generally, we can also define one-directional links [29] (the phase $\phi$ characterizes the source of the link):

$$P(A \cup B \cup C) = P(A) + P(B) + \sqrt{P(A)}\,\cos(\phi) + P(C) = 1, \qquad (5.23)$$

$$P(A \cup B \cup C) = P(A) + P(B) + \sqrt{P(B)}\,\cos(\phi) + P(C) = 1. \qquad (5.24)$$
In case there is no link between A and B, the energy assigned to the probability $(1 - P(C))$ is distributed between A and B:

$$P(A) + P(B) = 1 - P(C). \qquad (5.25)$$
If we start to model a positive link between A and B (a cooperation model characterized by the classical probabilistic rule with the negative sign of $P(A \cap B)$ in (5.21)), we can write:

$$P(A) + P(B) = 1 - P(C) + P(A \cap B) \qquad (5.26)$$
It is evident that the right side of this form is increased. This means that both A and B can gain additional energy due to "the common cooperation principle" at the expense of P(C). For the available maximum P(A) = 1 and P(B) = 1, the link must achieve $P(A \cap B) = 1$ and P(C) = 0. A negative link between A and B can also bring a negative influence, modeled by the positive sign of $P(A \cap B)$ in (5.21). This means that the negative link leads to the weakening of both subsystems A and B and to the strengthening of the subsystem C. The minimum P(A) = 0 and P(B) = 0 is fulfilled for $P(A \cap B) = P(C) - 1$. If subsystem C is able to use the lost energy from A and B, then such a situation is characterized by $P(A \cap B) = 0$ and P(C) = 1. If not, then the negative value of $P(A \cap B)$ means energy dissipation. In case P(C) = 0, all energy assigned to the subsystems A and B is dissipated into the environment, and so $P(A \cap B) = -1$. The positive and negative links among different subsystems create the emergent behavior identified in complex systems. The more subsystems there are, the more links among them, and thus the more emergent effects that can have significant influences on the modeled system as a whole.
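A minimal sketch of the cooperation bookkeeping in Eq. (5.26), with illustrative values for P(C) and the link strength:

```python
# Sketch of the energy-redistribution reading of Eqs. (5.21)-(5.26):
# a positive link P(A n B) lets A and B gain probability (energy)
# at the expense of C, while the normalization P(A u B u C) = 1 holds.

def redistribute(p_C, p_AB):
    """Energy left for A and B together, Eq. (5.26): cooperation raises it."""
    return 1.0 - p_C + p_AB

p_C = 0.4
for p_AB in (0.0, 0.2, 0.4):            # strength of the cooperative link
    print(f"P(A n B) = {p_AB:.1f}: P(A) + P(B) = {redistribute(p_C, p_AB):.1f}")
# With no link, A and B share 0.6; a link of 0.4 raises their share to 1.0,
# which is only consistent if C simultaneously drops toward P(C) = 0.
```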
Self-organization rules can be explained through the probability (energy) maximization principle. We can search for (positive or negative) links among different subsystems to simultaneously maximize each subsystem (the egoistic behavior) P(A), P(B), P(C) and also the system as a whole (the alliance behavior [38]) P(A) + P(B) + P(C). Various criteria for optimization can be studied, for example, tuning the parameters of links for an optimal distribution of energies within the complex system.
5.6 Quantum interference

Many complex systems are typically characterized by high levels of redundancy. The surrounding complex reality can be modeled either by one very complicated model or approximated by a set of many different and often overlapping simpler models, which represent different pieces of knowledge. Wave probabilistic models can be used to set up the final behavior of a complex system. Phase parameters can compensate for overlapping information among models, as first presented in Ref. [18]. The Feynman rule [4,5] says that all paths (in our case, each of the models) contribute to a final amplitude (in our case, a final model) by their amplitudes with different phases. In classical examples, the more models there are, the more possible trajectories of future complex system behavior; this problem is known in the literature as "the curse of dimensionality." But for wave probabilistic models, some trajectories can be mutually canceled due to the phase parameters, and others, by contrast, strengthened. If we take the sum over all trajectories assigned to all wave models, this sum can converge to the "right" trajectory of the complex system. With respect to the Feynman path diagram, adding more models therefore need not increase the complexity. Let us show this principle on the following illustrative example of three binary subsystems A, B, C characterized by the wave probabilities $\psi(A=0)$, $\psi(A=1)$, $\psi(B=0)$, $\psi(B=1)$, $\psi(C=0)$, and $\psi(C=1)$. The whole quantum system can be described (under the distinguishability assumption) as:

$$\psi = \psi(A)\,\psi(B)\,\psi(C) = \gamma_{0,0,0}\,|000\rangle + \gamma_{0,0,1}\,|001\rangle + \ldots + \gamma_{1,1,1}\,|111\rangle, \qquad (5.27)$$
where $\gamma_{i,j,k} = \psi(A=i)\,\psi(B=j)\,\psi(C=k)$ is the wave probability function assigned for $i, j, k \in \{0, 1\}$. It is evident that eight quantum states are possible. We can imagine that, due to the interferences of the wave probabilistic functions $\gamma_{i,j,k}$, only the two final processes $|000\rangle$ and $|111\rangle$ can take place, as written below:

$$\gamma_{0,0,0} + \gamma_{1,1,1} = 1, \qquad \gamma_{0,0,1} + \gamma_{0,1,0} + \gamma_{1,0,0} + \gamma_{0,1,1} + \gamma_{1,0,1} + \gamma_{1,1,0} = 0, \qquad (5.28)$$

even though we can measure all eight variants separately with probabilities $|\gamma_{i,j,k}|^2$.
This illustrative example can be extended to more complex time-varying systems, but the basic principles are the same. The whole is more than the sum of its different pieces because it can possess new emergent features caused by the interferences of its parts.
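Because a pure product state (5.27) cannot cancel all six mixed variants exactly, the following sketch assigns illustrative (entangled) amplitudes directly, using the coherent-sum normalization $|\sum\gamma|^2 = 1$, to show how Eq. (5.28) can hold while every variant remains individually measurable:

```python
# Sketch of Eq. (5.28): an (entangled) amplitude assignment in which the six
# "mixed" variants cancel coherently while |000> and |111> survive. The
# concrete numbers below are illustrative, not derived from a product state.

states = ['000', '001', '010', '100', '011', '101', '110', '111']
eps = 0.25
gamma = {
    '000': 0.5, '111': 0.5,
    # mixed variants in opposite-phase pairs, so their coherent sum is zero
    '001': eps, '010': -eps,
    '100': eps * 1j, '011': -eps * 1j,
    '101': eps, '110': -eps,
}
print("gamma_000 + gamma_111 =", gamma['000'] + gamma['111'])          # 1.0
print("sum of mixed variants =", sum(gamma[s] for s in states[1:-1]))  # 0.0
# Each variant is still individually measurable with probability |gamma|^2:
for s in states:
    print(f"P({s}) = {abs(gamma[s])**2:.4f}")
```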
5.7 Distance between wave components

Let us define a quantum component c (event, function, process, etc.) in a complex domain (wave probabilistic function) with its real part $x = \mathrm{Re}(\psi_c)$ and imaginary part $y = \mathrm{Im}(\psi_c)$ as follows:

$$\psi_c(p_c, \phi_c) = |\psi_c|\,e^{j\phi_c} = \sqrt{p_c}\,e^{j\phi_c} = \mathrm{Re}(\psi_c) + j\,\mathrm{Im}(\psi_c) \qquad (5.29)$$
The value $|\psi_c| = \sqrt{p_c}$ is the c-component amplitude, representing how often the component occurs, and the c-component phase $\phi_c$ represents how it is linked to other components within the system as a whole (the set of all components). The function $\psi_c$ defines the ordering of all components and the relations among them. The wave distance between two wave probabilistic functions $\psi_a$ and $\psi_b$ can be defined as follows:

$$d = |\psi_b - \psi_a| \qquad (5.30)$$
The parameter d expresses the square root of the probability that ("a is active" AND "b is inactive") OR ("b is active" AND "a is inactive"). It corresponds to the XOR logical function. If d = 0, there is a simultaneous 100% entanglement: ("a is active" AND "b is active") OR ("a is inactive" AND "b is inactive"). The closer the wave probabilistic functions $\psi_a$, $\psi_b$ are, the more similar the observed behavior patterns of the a and b components. Synchronicity is the simultaneous occurrence of events that appear significantly related. The maximal distance is given by d = 1, which means a simultaneous 100% entanglement of the opposite kind: ("a is active" AND "b is inactive") OR ("a is inactive" AND "b is active"). The greater the distance, the more different the observed behavior of the a and b components (asynchronicity). The distance can also be understood as a measure analogous to the physical distance between two points in space.

The wave distance can be interpreted, for example, through common communication. In the case of no communication, the behavior will be random. If the distance goes close to zero, both components are well connected and can behave synchronously. If the distance approaches 1, the components also communicate with each other but behave as asynchronously as possible. The disappearance of information can be arranged by thresholding the resolution of states. If the distance between states is smaller than a selected delta, we consider the states indistinguishable (they merge). The information can alternatively be taken as conservative (similar to energy), but gradually entangled with other states of the system or its environment. Of course, the number of states then grows, leading to a gradual "thickening of spacetime."
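A small sketch of the wave distance (5.30) for equal-occurrence components with varying relative phase (amplitudes chosen so that d stays within [0, 1]):

```python
import numpy as np

# Sketch of Eq. (5.30): wave distance between two components as the modulus
# of the amplitude difference. d -> 0 signals synchronous behavior, d -> 1
# maximal asynchronicity for the amplitudes chosen here.

def wave_distance(p_a, phi_a, p_b, phi_b):
    psi_a = np.sqrt(p_a) * np.exp(1j * phi_a)
    psi_b = np.sqrt(p_b) * np.exp(1j * phi_b)
    return abs(psi_b - psi_a)

# Equal occurrence probabilities, varying relative phase:
for dphi in (0.0, np.pi / 2, np.pi):
    d = wave_distance(0.25, 0.0, 0.25, dphi)
    print(f"phase difference {dphi:.2f}: d = {d:.3f}")
# In-phase components give d = 0 (synchronicity); opposite phases give the
# maximal distance d = 1 (asynchronicity) for these amplitudes.
```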
5.8 Interaction speed between wave components

Let us consider two components a and b with their interconnections:

$$\psi_{a,b} = \alpha_a \alpha_b\,|00\rangle_{a,b} + \alpha_a \beta_b\,|01\rangle_{a,b} + \beta_a \alpha_b\,|10\rangle_{a,b} + \beta_a \beta_b\,|11\rangle_{a,b} \qquad (5.31)$$
This relation can be interpreted as a set of wave probabilistic functions $\alpha_a\alpha_b$, $\alpha_a\beta_b$, $\beta_a\alpha_b$, $\beta_a\beta_b$ assigned to all variants of the connected a, b components. The well-known Copenhagen interpretation of quantum mechanics is based on the probabilities $|\alpha_a\alpha_b|^2$, $|\alpha_a\beta_b|^2$, $|\beta_a\alpha_b|^2$, and $|\beta_a\beta_b|^2$. The process of measurement (decoherence) means that the set of feasible states randomly falls into one combination with respect to its probability. We can introduce a different point of view on the interpretation of wave probabilistic functions and use the distance probabilities $|\alpha_a - \alpha_b|^2$, $|\alpha_a - \beta_b|^2$, $|\beta_a - \alpha_b|^2$, and $|\beta_a - \beta_b|^2$ for further analyses. The closer the wave functions are, the higher the synergy expected between them. It is analogous to a key and a lock: the closer they are, the sooner we can unlock the door. If the distance is equal to 1, the connection can hardly happen in a short period of time, because there is a low chance of the two components interacting with each other. A lower distance means a higher probability that both states happen simultaneously. The distance probabilities are different from the probabilities of falling into one of the states $|00\rangle_{a,b}$, $|01\rangle_{a,b}$, $|10\rangle_{a,b}$, and $|11\rangle_{a,b}$. Based on these findings, we can hypothesize that the interaction takes place faster between wave functions that are closer to each other. This may mean that the proximity is due to historical experience, because this combination has occurred in the past. The speed of response can therefore play a significant role in quantum decoherence. The learning process may adapt the distances among wave probabilistic functions according to the frequencies of their occurrences. The more often they simultaneously occur, the closer the wave functions are, and the faster (not more often) this couple of states will be selected. Historical experience could be responsible for prior phase settings among the system's components (events, functions, processes, etc.). In the case of the two hemispheres of the brain, the right one is responsible for collecting data and the left one for interpreting it (creating stories). Intuitive interconnection leads to a scenario that is unlikely but has been realized in the past. After long thought and rational comparison of different combinations, we are able to find the most probable variant. A similar dependence can be observed between different sensory perceptions. For example, there may be a short distance between the wave functions assigned to vision and smell. When we remember a smell, we can automatically recall a visual impression.
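The contrast between the Copenhagen probabilities and the distance probabilities can be made concrete; the amplitudes below are illustrative only:

```python
import numpy as np

# Sketch contrasting the Copenhagen probabilities |alpha_a*alpha_b|^2, ...
# with the "distance probabilities" |alpha_a - alpha_b|^2, ... proposed above.

a0, a1 = np.sqrt(0.3), np.sqrt(0.7) * np.exp(1j * 1.0)   # component a: alpha, beta
b0, b1 = np.sqrt(0.7) * np.exp(1j * 2.0), np.sqrt(0.3)   # component b: alpha, beta

copenhagen = {'00': abs(a0*b0)**2, '01': abs(a0*b1)**2,
              '10': abs(a1*b0)**2, '11': abs(a1*b1)**2}
distance = {'00': abs(a0-b0)**2, '01': abs(a0-b1)**2,
            '10': abs(a1-b0)**2, '11': abs(a1-b1)**2}

for state in ('00', '01', '10', '11'):
    print(f"|{state}>: P_measure = {copenhagen[state]:.3f}, "
          f"distance^2 = {distance[state]:.3f}")
# The |01> pair has zero distance here (equal amplitudes and phases), so by
# the hypothesis above its interaction is selected fastest, even though its
# measurement probability (0.09) is far smaller than that of |10> (0.49).
```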
5.9 Component strength

In the previous text, we always assumed that the studied system is closed, without the possibility of adding or subtracting energy to or from any component. Unfortunately, for closed systems, the wave function $\psi_c$ does not cover all possibilities; a c-component could become more powerful after adding new energy. Let us extend the quantum model to open systems. Based on this fact, we must introduce a new parameter, the component strength $S_c$, showing how the component is subjected to common interactions. In analogy to physics, a mass is used for gravitational interaction [18], and an electrical charge is defined as an essential part of the electrical field. In the Standard Model [10], the quark color is introduced for the definition of interactions. A positive value of component strength, $S_c > 0$, means the ability to organize; a negative value, $S_c < 0$, leads to disorganization (making chaos). The component strength can be managed by many instruments, such as political influence, financial power, innovation ability, or a new energy source. The strength moment $SM_c$ of a c-component represents its efficiency (it is a vector because $\psi_c$ is a vector):

$$SM_c \propto \psi_c\,S_c \qquad (5.32)$$
The symbol $\propto$ means equality except for a normalization constant that will be the same for all components; it calibrates the entire quantum model. If the modulus of $\psi_c$ is small, even a large strength will not be reflected in the system. The strength potential energy $SW_c$ assigned to a c-component can be expressed by the squared strength moment (potential energy is a scalar):

$$SW_c \propto p_c\,S_c^2 \qquad (5.33)$$
From the physical analogy, we can define the interaction between the a and b components:

$$S_{a,b}(d) \propto \frac{S_a\,S_b}{d} = \frac{S_a\,S_b}{|\psi_a - \psi_b|} \qquad (5.34)$$
The values $S_a$, $S_b$ can therefore be understood as generalized charges situated in the wave probabilistic space, as shown in Fig. 5.1. The value $S_{a,b}(d)$ represents a link between two components (events, functions, processes). Generalized charges with different signs are repelled, while those with the same signs are attracted.
FIGURE 5.1 Representation of wave probabilistic functions together with the strengths of a, b components.
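A minimal sketch of the generalized-charge interaction (5.34), with hypothetical strengths and wave functions:

```python
import numpy as np

# Sketch of Eq. (5.34): interaction between two components as generalized
# charges divided by their wave distance (up to a normalization constant).

def interaction(S_a, psi_a, S_b, psi_b):
    d = abs(psi_a - psi_b)                 # wave distance, Eq. (5.30)
    return S_a * S_b / d                   # S_{a,b}(d), Eq. (5.34)

psi_a = np.sqrt(0.4) * np.exp(1j * 0.2)
psi_b = np.sqrt(0.5) * np.exp(1j * 1.2)
print("close pair:  ", round(interaction(+1.0, psi_a, +1.0, psi_b), 3))
print("distant pair:", round(interaction(+1.0, psi_a, +1.0, -psi_b), 3))
# Halving the wave distance doubles the link strength; the sign of S_a*S_b
# decides whether the link is organizing or disorganizing.
```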
The strength potential expresses how an observer $\psi$ interacts with a selected a-component $\psi_a$:

$$SP_a(\psi) \propto \frac{S_a}{|\psi - \psi_a|} \qquad (5.35)$$

For N different components, we can write the strength potential of the whole system:

$$SP(\psi) \propto \sum_{a=1}^{N} \frac{S_a}{|\psi - \psi_a|} \qquad (5.36)$$
The strength potential energy $SE_c$ describes how a new, (N+1)-th component c perceives its interactions with all other N components:

$$SE_c(\psi) \propto S_c \sum_{\substack{a=1 \\ a \neq c}}^{N} \frac{S_a}{|\psi_c - \psi_a|} \qquad (5.37)$$
The smaller the modulus $|\psi_c|$ of a new c-component, the bigger the influences it perceives from the surrounding components. On one side, this is an energy source, but on the other side, such a component is fully dependent on its surroundings (very small resilience). In the case of rapid changes in the outside information environment, the c-component can lose its source of energy and not survive.
5.10 Quantum node

For more complicated structures of quantum circuits, we need to introduce other rules. In Fig. 5.2, there is a quantum node $\{\psi, S\}$ with two inputs $\{\psi_a, S_a\}$, $\{\psi_b, S_b\}$ and two outputs $\{\psi_c, S_c\}$, $\{\psi_d, S_d\}$. Generally, we can introduce a Strength Moment Conservation Law that leads to an analogy of the Information Kirchhoff's Law:

$$\sum_{k=1}^{N} \psi_{IN,k}\,S_k - \sum_{n=1}^{M} \psi_{OUT,n}\,S_n = \psi\,S \qquad (5.38)$$

FIGURE 5.2 Strength Moment Conservation law and Information Kirchhoff's law.
The node strength moment $\psi S$ is equal to the sum of the N inputs' strength moments minus the M outputs' strength moments. Let us demonstrate the Strength Moment Conservation Law on the example of a small company consisting of three employees, whose mutual relations are given by the functions $\psi_a$, $\psi_b$, $\psi_c$, where the moduli show their effectiveness and the phases their ability to cooperate. However, each of the employees has a different benefit for the team, expressed in terms of the strengths $S_a$, $S_b$, $S_c$. This benefit can be either unique knowledge or a connection to a good customer. At the same time, the group of workers consumes energy cost, expressed by the strength $S_E$, and space rental price, $S_S$. The assigned wave probabilistic functions $\psi_E$, $\psi_S$ show their adequacy (moduli of the wave probabilistic functions) and the compliance of employees with these items (phases of the wave probabilistic functions). Because they are outputs from the node, the strength moments $\psi_E S_E$, $\psi_S S_S$ have a negative sign. The goal is to determine the performance of such a team, which leads to the application of the Strength Moment Conservation Law:

$$\psi_a S_a + \psi_b S_b + \psi_c S_c - (\psi_E S_E + \psi_S S_S) = \psi\,S \qquad (5.39)$$
The determination of the parameters $\psi$ and S of the quantum node is based on the normalization conditions of the wave probabilistic functions. A quantum node is most effective if the phases are synchronized in the same direction; in this case, the potential of the created team is best used. At a higher resolution level, it is of course possible to monitor the performance of a distributed network of groups of quantum nodes created in this way.
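The team example can be sketched numerically; all moduli, phases, and strengths below are invented for illustration:

```python
import numpy as np

# Sketch of Eq. (5.39): strength-moment balance of a small team. The moduli,
# phases, and strengths are illustrative values, not data from the book.

def moment(modulus, phase, strength):
    return modulus * np.exp(1j * phase) * strength

inputs = [moment(0.9, 0.1, 2.0),    # employee a: effective, cooperative
          moment(0.8, 0.2, 1.5),    # employee b
          moment(0.7, -0.1, 1.0)]   # employee c
outputs = [moment(0.6, 0.0, 1.2),   # energy cost psi_E * S_E
           moment(0.5, 0.0, 0.8)]   # space rental psi_S * S_S

node = sum(inputs) - sum(outputs)   # psi * S of the node, Eq. (5.38)
print(f"node strength moment = {node:.3f}, |psi*S| = {abs(node):.3f}")
# Aligning the employees' phases (all near zero here) maximizes |psi*S|;
# dephased employees would partially cancel each other's contributions.
```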
6 Composition rules of quantum subsystems

6.1 Connected subsystems

Let us introduce two dynamical subsystems described by wave probabilistic functions:

$$\psi_{S1}(t) = M_{S1}\,e^{j\nu_1 t}, \qquad \psi_{S2}(t) = M_{S2}\,e^{j\nu_2 t} \qquad (6.1)$$
Connection means that both subsystems are firmly joined into one system and together order the near environment. This situation is common in quantum physics, and it is generally represented by the Kronecker operation for vectors, which here reduces to multiplication. We can thus provide the following operation for connected quantum subsystems:

$$\psi_{S1,2}(t) = \psi_{S1}(t) \otimes \psi_{S2}(t) = M_{S1}\,M_{S2}\,e^{j(\nu_1 + \nu_2)t} \qquad (6.2)$$
In this case, the phase parameters $\nu_1$, $\nu_2$ are added. In analogy with a power line, it is a short-circuit connection with zero resistance.
6.2 Disconnected subsystems

Disconnection in quantum system theory means that both subsystems are encapsulated inside a superior system without a firm connection. In other words, they are connected with infinite resistance (for a power line, it is the same as an unterminated connection). We suppose both subsystems can exchange data. Imagine that the subsystem $S_1$ is the active one and sends its request to a passive subsystem $S_2$. Subsystem $S_2$ replies with an opposite-phase function: it actually replies to the environment of $S_2$ (its image), which has an opposite phase. From the point of view of $S_1$, the received data is a colored reflection of the subsystem $S_2$. The wave representation of such an operation can be written as:

$$\psi_{S1,2}(t) = \psi_{S1}(t) \otimes \psi^{*}_{S2}(t) = M_{S1}\,M_{S2}\,e^{j(\nu_1 - \nu_2)t} \qquad (6.3)$$

The rule for disconnected systems $S_{1,2}$ yields the subtraction of the phases $\nu_1$, $\nu_2$. If the activity comes from the subsystem $S_2$, we can speak about the rule for $S_{2,1}$ with the final phase $(\nu_2 - \nu_1)$.
6.3 Coexisting subsystems

Coexistence in quantum system theory represents the parallel existence of both subsystems $S_1$ and $S_2$ in one environment. The coexistence is expressed by the sum of wave functions:

$$\psi_{S1,2}(t) = \psi_{S1}(t) + \psi_{S2}(t) = M_{S1}\,e^{j\nu_1 t} + M_{S2}\,e^{j\nu_2 t} \qquad (6.4)$$
A union of wave functions describes how we can work with the subsystems S1 and S2 separately or with both of them at the same time.
6.4 Symmetrically disconnected subsystems

We expect that each subsystem is able both to send and to receive data from the other. If the first subsystem $S_1$ starts the data transfer and the second $S_2$ replies, we can apply the rule:

$$\psi_{S,1}(t) = \psi_{S1}(t) \otimes \psi^{*}_{S2}(t) = M_{S1}\,M_{S2}\,e^{j(\nu_1 - \nu_2)t} \qquad (6.5)$$

Conversely, if the second subsystem $S_2$ initiates communication and the first subsystem $S_1$ sends its reply, we have the wave function:

$$\psi_{S,2}(t) = \psi^{*}_{S1}(t) \otimes \psi_{S2}(t) = M_{S1}\,M_{S2}\,e^{-j(\nu_1 - \nu_2)t} \qquad (6.6)$$

Because both variants are equal (symmetric interaction), we can express the final waveform as the coexistence of both variants:

$$\psi_S(t) = \psi_{S1}(t) \otimes \psi^{*}_{S2}(t) + \psi^{*}_{S1}(t) \otimes \psi_{S2}(t) = 2\,M_{S1}\,M_{S2}\,\cos((\nu_2 - \nu_1)\,t) \qquad (6.7)$$
In quantum physics, we speak about indistinguishable subsystems because both have the same option to communicate. The result can be interpreted as a periodic exchange of a common probability/energy value $M_{S1} M_{S2}$ between the subsystems $S_1$ and $S_2$. In quantum physics, this principle is called the exchange of a virtual particle.
6.5 Symmetrically competing subsystems

Sometimes, there is only one state for only one subsystem. If two subsystems start their competition, only one of them can be the winner. This situation modifies the rule for the wave probabilistic function in the following way:

$$\psi_S(t) = \psi_{S1}(t) \otimes \psi^{*}_{S2}(t) - \psi^{*}_{S1}(t) \otimes \psi_{S2}(t) \qquad (6.8)$$

In quantum physics, this yields the well-known Pauli exclusion principle.
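The composition rules of this chapter reduce, for scalar wave functions, to a few complex-arithmetic operations. A minimal sketch with illustrative amplitudes:

```python
import numpy as np

# Sketch of the composition rules (6.2)-(6.7) for two scalar wave functions
# psi_Si(t) = M_i * exp(j * nu_i * t); amplitudes and phases are illustrative.

M1, nu1 = 0.8, 2.0
M2, nu2 = 0.6, 0.5
t = np.linspace(0, 2 * np.pi, 5)

psi1 = M1 * np.exp(1j * nu1 * t)
psi2 = M2 * np.exp(1j * nu2 * t)

connected    = psi1 * psi2                 # (6.2): phases add
disconnected = psi1 * np.conj(psi2)        # (6.3): phases subtract
coexisting   = psi1 + psi2                 # (6.4): superposition
symmetric    = psi1 * np.conj(psi2) + np.conj(psi1) * psi2   # (6.7)

print("symmetric (real cosine exchange):", np.round(symmetric.real, 3))
print("check 2*M1*M2*cos((nu2-nu1)t):  ", np.round(2*M1*M2*np.cos((nu2-nu1)*t), 3))
```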
6.6 Interactions with an environment

In this section, we suppose that only one subsystem $S_1$ exists within the environment $S_E$. If the environment is closed (not connected to other systems), then the energy (probability) conservation law must be fulfilled: the energy (probability) of subsystem $S_1$ must be equal to $M_{S1}^2$. This means that the environment $S_E$ must have an opposite phase (a mirror image of the subsystem $S_1$):

$$\psi_{SE}(t) = M_{S1}\,e^{-j\nu_1 t} \qquad (6.9)$$
Because the environment $S_E$ is connected with the subsystem $S_1$, we can write the final wave probabilistic operation between the subsystem $S_1$ and its environment $S_E$ as:

$$\psi_{S,E}(t) = \psi_{S1}(t) \otimes \psi_{SE}(t) = M_{S1}^2 \qquad (6.10)$$
We can imagine that the subsystem $S_1$ organizes itself (positive phase) at the expense of its surroundings $S_E$ (negative phase). A wave function with the opposite phase is a supplement of the subsystem. The connection of the subsystem with its environment leads to a zero phase. The main goal of an arranging process is to transform pure energy (zero phase) into a more sophisticated form at the expense of the energy of the environment.
6.7 Illustrative examples

There is a view that the world is composed of miniature (quantum), in some ways illusory, worlds. We create our world by how we think, what we focus on, and how we behave and act. The worlds of the various observers (wave functions) interact with each other and create a common information field, which then represents the complex knowledge through which our world is conveyed to us. We will present three examples to illustrate practical applications of our approach [62]. The first example shows the composition rules for three discrete quantum subsystems. Common connections and disconnections of the subsystems are analyzed, and the wave composition rule is applied to compute final probabilities. The second example simulates a set of many observers with different points of view on physical reality, modeled by shifted phase parameters. Quantum system theory can provide us with an approximate best result through a well-known democratic approach. This view is naturally characterized by both the modulus and the phase of a wave probabilistic function. What we call the truth is often the result of some social agreement, or even a consensus approximated by the average of different subjective wave functions. The third example extends this approach to different relationships among subsystems, reflecting each subsystem's own perception. It is possible to model reality while respecting the views and standpoints of all subsystems.
Example 6.1 Wave composition rules. Let us define three subsystems $S_1$, $S_2$, $S_3$ with their wave probabilistic functions $\psi_{S1}$, $\psi_{S2}$, $\psi_{S3}$ given by:

$$\psi_{S1} = \alpha_0\,|0\rangle_1 + \alpha_1\,|1\rangle_1 \qquad (6.11)$$

$$\psi_{S2} = \beta_0\,|0\rangle_2 + \beta_1\,|1\rangle_2 + \beta_2\,|2\rangle_2 \qquad (6.12)$$

$$\psi_{S3} = \gamma_0\,|0\rangle_3 + \gamma_1\,|1\rangle_3 \qquad (6.13)$$
The symbol $|k\rangle_i$ means the state k falling on the subsystem $S_i$. We suppose that $S_1$ and $S_2$ are firmly connected into $S_{1,2}$, which is represented by the wave function:

$$\psi_{S1,2} = \alpha_0\beta_0\,|00\rangle_{1,2} + \alpha_0\beta_1\,|01\rangle_{1,2} + \alpha_0\beta_2\,|02\rangle_{1,2} + \alpha_1\beta_0\,|10\rangle_{1,2} + \alpha_1\beta_1\,|11\rangle_{1,2} + \alpha_1\beta_2\,|12\rangle_{1,2} \qquad (6.14)$$
Both of them coexist with the subsystem $S_3$ in a common environment. This situation can be formulated as:

$$\psi_{S1,2,3} = \alpha_0\beta_0\,|00\rangle_{1,2} + \alpha_0\beta_1\,|01\rangle_{1,2} + \alpha_0\beta_2\,|02\rangle_{1,2} + \alpha_1\beta_0\,|10\rangle_{1,2} + \alpha_1\beta_1\,|11\rangle_{1,2} + \alpha_1\beta_2\,|12\rangle_{1,2} + \gamma_0\,|0\rangle_3 + \gamma_1\,|1\rangle_3 \qquad (6.15)$$
After interacting with the environment, we obtain the following probabilities assigned to all possible variants of the final outputs:

$$P_{1,2,3} = \psi_{S1,2,3}\,\psi^{*}_{S1,2,3} = p_{00}\,|00\rangle_{1,2} + p_{01}\,|01\rangle_{1,2} + p_{02}\,|02\rangle_{1,2} + p_{10}\,|10\rangle_{1,2} + p_{11}\,|11\rangle_{1,2} + p_{12}\,|12\rangle_{1,2} + p_{000}\,|000\rangle_{1,2,3} + p_{010}\,|010\rangle_{1,2,3} + p_{020}\,|020\rangle_{1,2,3} + p_{100}\,|100\rangle_{1,2,3} + p_{110}\,|110\rangle_{1,2,3} + p_{120}\,|120\rangle_{1,2,3} + p_{001}\,|001\rangle_{1,2,3} + p_{011}\,|011\rangle_{1,2,3} + p_{021}\,|021\rangle_{1,2,3} + p_{101}\,|101\rangle_{1,2,3} + p_{111}\,|111\rangle_{1,2,3} + p_{121}\,|121\rangle_{1,2,3} \qquad (6.16)$$
For the above computation, we used the logical operations:

$$\alpha_0\beta_0\,|00\rangle_{1,2} \cdot \alpha_0\beta_1\,|01\rangle_{1,2} = \frac{1}{2}\,\alpha_0\beta_0\,\alpha_0\beta_1\,(|00\rangle_{1,2} + |01\rangle_{1,2}) \qquad (6.17)$$

$$\alpha_0\beta_0\,|00\rangle_{1,2} \cdot \gamma_0\,|0\rangle_3 = \alpha_0\beta_0\,\gamma_0\,|000\rangle_{1,2,3} \qquad (6.18)$$
By using an intersection or a union operation, we can compute different probabilities, for example:
• $|00\rangle_{1,2}$ OR $|000\rangle_{1,2,3}$ is $p_{00} + p_{000}$
• ($|00\rangle_{1,2}$ OR $|000\rangle_{1,2,3}$) AND $|10\rangle_{1,2}$ is $(p_{00} + p_{000}) \cdot p_{10}$
The above example shows how we can compute the final probability for specified structures of the system with different links (connection, disconnection, coexistence) among the components/subsystems.
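A small numerical sketch of Example 6.1, treating the firm connection (6.14) as a Kronecker product of amplitude vectors (illustrative, normalized amplitudes):

```python
import numpy as np

# Sketch of Example 6.1: connect S1 (2 states) and S2 (3 states) via the
# Kronecker product, then let the pair coexist with S3 (2 states).

alpha = np.array([0.8, 0.6])                      # S1
beta = np.array([0.5, 0.5, np.sqrt(0.5)])         # S2
gamma = np.array([0.6, 0.8]) * np.exp(1j * 0.3)   # S3, common phase shift

pair = np.kron(alpha, beta)                       # Eq. (6.14): |00>...|12>
print("connected S1,S2 amplitudes:", np.round(pair, 3))
print("pair probabilities:", np.round(np.abs(pair)**2, 3))
print("S3 probabilities:  ", np.round(np.abs(gamma)**2, 3))
# Coexistence (6.15) keeps both groups of kets side by side; union and
# intersection queries such as p_00 + p_000 combine the squared moduli above.
```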
Example 6.2 Real and observed reality. Let us introduce two extended descriptions of dynamical subsystems:

$$\psi_{S1,O_k}(t) = M_{S1}\,e^{j(\nu_1 t + \Phi_{1,O_k})}, \qquad \psi_{S2,O_k}(t) = M_{S2}\,e^{j(\nu_2 t + \Phi_{2,O_k})} \qquad (6.19)$$
The added phase parameters $\Phi_{1,O_k}$ and $\Phi_{2,O_k}$ represent phase shifts. In other words, they explain how the k-th observer perceives the first and the second subsystem from its subjective point of view. If these phases are zeros, we can speak about the actual reality (without any observation error). For M observers, we have at our disposal M images of observed reality assigned to each observation process. The question in this example is how to approximate the real reality in the best way. We can apply a democratic principle and use the average of all observations:

$$\psi_{S1}(t) = M_{S1}\,e^{j\nu_1 t}\,\frac{1}{M}\sum_{k=1}^{M} e^{j\Phi_{1,O_k}} = \overline{M}_{S1}\,e^{j(\nu_1 t + \overline{\Phi}_1)}, \qquad (6.20)$$

$$\psi_{S2}(t) = M_{S2}\,e^{j\nu_2 t}\,\frac{1}{M}\sum_{k=1}^{M} e^{j\Phi_{2,O_k}} = \overline{M}_{S2}\,e^{j(\nu_2 t + \overline{\Phi}_2)}. \qquad (6.21)$$
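A minimal numerical sketch of this democratic averaging, with randomly generated observer phases standing in for the subjective shifts:

```python
import numpy as np

# Sketch of Eqs. (6.20)-(6.21): the "democratic" average of M observers'
# phase-shifted images of one subsystem. Observer phases are illustrative.

rng = np.random.default_rng(0)
M_S1 = 0.9                                  # true modulus
obs_phases = rng.normal(0.2, 0.3, size=25)  # subjective shifts Phi_{1,Ok}

mean_image = np.mean(np.exp(1j * obs_phases))   # (1/M) * sum of e^{j*Phi}
M_bar = M_S1 * abs(mean_image)
Phi_bar = np.angle(mean_image)
print(f"averaged modulus M_bar = {M_bar:.3f}, averaged phase = {Phi_bar:.3f}")
# Disagreement among observers shrinks the modulus (M_bar < M_S1): the more
# scattered the phases, the less "sharp" the consensus image of reality.
```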
The final values $\overline{M}_{S1}$, $\overline{\Phi}_1$, $\overline{M}_{S2}$, $\overline{\Phi}_2$ can be taken as the best approximation of reality. It is evident that this illustrative example can be extended to systems with more components (subsystems) and more sophisticated dynamical links among them (connection, disconnection, coexistence, competition, etc.). Through the presented methodology, we can also model time-varying observations, taking into account both the learning and the forgetting processes.

Example 6.3 Different relationships among subsystems. We can generalize the previous example and extend it to a set of N different subsystems, each with its own relations to the other subsystems. For the sake of simplicity, we assume that the modulus $M_{Si}$ assigned to the i-th subsystem is the same for all links. The wave function of the multilink i-th subsystem can be given in matrix form:
$$\psi_{Si} = M_{Si} \begin{bmatrix} e^{j\Phi_{i,1}} \\ e^{j\Phi_{i,2}} \\ \vdots \\ e^{j\Phi_{i,N}} \end{bmatrix} \qquad (6.22)$$
The phases $\Phi_{i,k}$ are parameters explaining how the i-th subsystem perceives the k-th one. The phase $\Phi_{i,i}$ describes how the i-th subsystem sees itself in relation to a common reference.
Coexistence of the N subsystems in an environment can be written by the wave function:

$$\psi_S = M_{S1} \begin{bmatrix} e^{j\Phi_{1,1}} \\ e^{j\Phi_{1,2}} \\ \vdots \\ e^{j\Phi_{1,N}} \end{bmatrix} |S_1\rangle + M_{S2} \begin{bmatrix} e^{j\Phi_{2,1}} \\ e^{j\Phi_{2,2}} \\ \vdots \\ e^{j\Phi_{2,N}} \end{bmatrix} |S_2\rangle + \ldots + M_{SN} \begin{bmatrix} e^{j\Phi_{N,1}} \\ e^{j\Phi_{N,2}} \\ \vdots \\ e^{j\Phi_{N,N}} \end{bmatrix} |S_N\rangle \qquad (6.23)$$
By using the matrix representation, we can perform all manipulations (union, intersection, etc.) in a higher-dimensional space. It is noticeable that each of the phase parameters $\Phi_{i,k}$ carries useful information.
7 Applicability of quantum models

7.1 Quantum processes

Let us imagine that, thanks to the complex wave probability function, a situation may arise where we monitor the probability of the union of several phenomena (that is, either the first phenomenon occurs, OR the second occurs, OR the third does not occur, etc.), and that this probability works out to equal zero. Naturally, this situation cannot arise under the classical theory of probability, because there the probabilities are merely added together and, at best, repeated overlaps of phenomena are subtracted. In the area of wave probability functions, through the influence of the existing phases and the subtraction of probabilities, it is possible under certain conditions to find such a constellation of phenomena whereby their union works out to zero probability. This, however, automatically means that the inverse phenomenon (intersection) of the given union (in our case, this inverse phenomenon would mean that the first phenomenon does not occur, AND at the same time the second phenomenon does not occur, AND the third phenomenon does occur) will occur with 100% probability, regardless of how the phenomena are arranged spatially.

Quantum entanglement is caused by the resonance of wave functions. Thanks to the way this resonance manifests itself, we move from a purely probabilistic world to a deterministic world, where the probabilistic characteristics of various phenomena are disrupted, and the links between the entangled phenomena become purely deterministic events that even show up in different places (generally even at different times); for that reason, they are also often designated as spatially (or generally temporospatially) distributed system states. Similarly, one may conclude that, thanks to the principle of resonance, selected (temporospatial) distributed states absolutely cannot occur in parallel, and this leads to an analogy with the Pauli exclusion principle. The selection of group entangled states can, of course, have a probabilistic character, as long as the entanglement is not 100%. This means that parallel behavior occurs only with a certain probability, which leads to the idea of selecting one variant according to the given probability function. In Ref. [35], we read that the behavior of entangled states is very odd. First, it spreads rapidly among various phenomena, making use of a property known as entanglement swapping. Here is a simple example of this behavior: if we have four phenomena, the first and second being entangled and the third and fourth being entangled as well, then as soon as an entanglement arises between the first and third phenomena, the second and fourth become entangled too, without any information being exchanged between them. Notwithstanding, those phenomena can be spatially quite remote from each other.
7.2 Quantum model of hierarchical networks

In many practical applications of complex systems analyses, there is a demand for the modeling of hierarchical networks, as shown in Fig. 7.1. We can assume that the first-layer subsystems A1, A2, A3, and A4 play the key roles, represented by the probabilities P(A1), P(A2), P(A3), and P(A4). The second and third layers are responsible for coordination activities: B1 coordinates A1 and A2, B2 coordinates A3 and A4, and C1 is responsible for the collaboration between B1 and B2. Let us apply the wave probabilistic approach to the network in Fig. 7.1. We can define wave probabilities assigned to the first layer's functions:

$$\psi(A1) = \sqrt{P(A1)}\,e^{j\varphi_1}, \qquad \psi(A2) = \sqrt{P(A2)}\,e^{j\varphi_2}, \qquad (7.1)$$

$$\psi(A3) = \sqrt{P(A3)}\,e^{j\varphi_3}, \qquad \psi(A4) = \sqrt{P(A4)}\,e^{j\varphi_4}. \qquad (7.2)$$
The effectiveness of the whole system can be described as follows:

$$P = |\psi(A1) + \psi(A2) + \psi(A3) + \psi(A4)|^2 = P(A1) + P(A2) + P(A3) + P(A4)$$
$$+\,2\sqrt{P(A1)\,P(A2)}\,\cos(\varphi_2 - \varphi_1) + 2\sqrt{P(A1)\,P(A3)}\,\cos(\varphi_3 - \varphi_1) + 2\sqrt{P(A1)\,P(A4)}\,\cos(\varphi_4 - \varphi_1)$$
$$+\,2\sqrt{P(A2)\,P(A3)}\,\cos(\varphi_3 - \varphi_2) + 2\sqrt{P(A2)\,P(A4)}\,\cos(\varphi_4 - \varphi_2) + 2\sqrt{P(A3)\,P(A4)}\,\cos(\varphi_4 - \varphi_3) \qquad (7.3)$$
Based on Eq. (7.3), we can see that the links (hierarchical coordinations) can be positive or negative with respect to the phase parameters $\varphi_1$, $\varphi_2$, $\varphi_3$, $\varphi_4$. We can introduce wave probabilities assigned to the components B1, B2, and C1:

$$P(B1) = 2\sqrt{P(A1)\,P(A2)}\,\cos(\varphi_2 - \varphi_1), \qquad (7.4)$$

$$P(B2) = 2\sqrt{P(A3)\,P(A4)}\,\cos(\varphi_4 - \varphi_3), \qquad (7.5)$$

$$P(C1) = 2\sqrt{P(A1)\,P(A3)}\,\cos(\varphi_3 - \varphi_1) + 2\sqrt{P(A1)\,P(A4)}\,\cos(\varphi_4 - \varphi_1) + 2\sqrt{P(A2)\,P(A3)}\,\cos(\varphi_3 - \varphi_2) + 2\sqrt{P(A2)\,P(A4)}\,\cos(\varphi_4 - \varphi_2) \qquad (7.6)$$

FIGURE 7.1 Hierarchical model of a complex system (first layer A1-A4, coordination layers B1, B2, and C1).
Optimal management of a hierarchical network consists of identifying the best arrangement of all subsystems (the amplitudes and phases of all components). The coordination process tries to eliminate negative links while supporting positive links in such a way that the working components A1, A2, A3, and A4 gain as much probability (proportional to energy) as possible. The example presented can be extended to more sophisticated networks with many links and more complicated component arrangements. The structure of the network can also cover serial, parallel, or feedback ordering of its components. In the future, a research methodology similar to Feynman diagrams [4,5] could be prepared as a part of quantum information physics.
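The decomposition (7.3)-(7.6) can be checked directly; the probabilities and phases below are illustrative:

```python
import numpy as np

# Sketch of Eqs. (7.1)-(7.6): effectiveness of the hierarchical network and
# the coordination terms B1, B2, C1.

P = np.array([0.3, 0.25, 0.25, 0.2])            # P(A1)..P(A4)
phi = np.array([0.0, 0.1, 0.5, 0.6])            # phases phi_1..phi_4

psi = np.sqrt(P) * np.exp(1j * phi)             # Eqs. (7.1)-(7.2)
effectiveness = abs(psi.sum())**2               # Eq. (7.3)

def link(i, j):                                 # 2*sqrt(Pi*Pj)*cos(phi_j - phi_i)
    return 2 * np.sqrt(P[i] * P[j]) * np.cos(phi[j] - phi[i])

B1 = link(0, 1)                                 # Eq. (7.4)
B2 = link(2, 3)                                 # Eq. (7.5)
C1 = link(0, 2) + link(0, 3) + link(1, 2) + link(1, 3)   # Eq. (7.6)
print(f"P = {effectiveness:.3f} = sum(P) + B1 + B2 + C1 "
      f"= {P.sum():.3f} + {B1:.3f} + {B2:.3f} + {C1:.3f}")
# Nearly aligned phases make all coordination terms positive, so the
# hierarchical links add energy; dephased components would subtract it.
```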
7.3 Time-varying quantum systems

Complex quantum systems are analyzed by a methodology that enables us to order different basic gates in the same way as in systems theory. A general description of a quasi-stationary quantum system [20] can be defined as follows:

$$\begin{bmatrix} \gamma_1(t+1) \\ \vdots \\ \gamma_n(t+1) \end{bmatrix} = k_1(t)\left\{ \begin{bmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{n,1} & \cdots & a_{n,n} \end{bmatrix} \begin{bmatrix} \gamma_1(t) \\ \vdots \\ \gamma_n(t) \end{bmatrix} + \begin{bmatrix} b_{1,1} & \cdots & b_{1,n} \\ \vdots & & \vdots \\ b_{n,1} & \cdots & b_{n,n} \end{bmatrix} \begin{bmatrix} \beta_1(t) \\ \vdots \\ \beta_n(t) \end{bmatrix} \right\} \qquad (7.7)$$

$$\begin{bmatrix} \alpha_1(t) \\ \vdots \\ \alpha_n(t) \end{bmatrix} = k_2(t)\left\{ \begin{bmatrix} c_{1,1} & \cdots & c_{1,n} \\ \vdots & & \vdots \\ c_{n,1} & \cdots & c_{n,n} \end{bmatrix} \begin{bmatrix} \gamma_1(t) \\ \vdots \\ \gamma_n(t) \end{bmatrix} + \begin{bmatrix} d_{1,1} & \cdots & d_{1,n} \\ \vdots & & \vdots \\ d_{n,1} & \cdots & d_{n,n} \end{bmatrix} \begin{bmatrix} \beta_1(t) \\ \vdots \\ \beta_n(t) \end{bmatrix} \right\} \qquad (7.8)$$
where the matrices A, B, C, and D are linear time-invariant (LTI) evolution $n \times n$ matrices, and the n-valued discrete input time series observed at the time instant t can be expressed in the wave probabilistic form as:

$$|\xi; t\rangle = \beta_1(t)\,|I_1\rangle + \ldots + \beta_n(t)\,|I_n\rangle, \qquad (7.9)$$
where $I_1, I_2, \ldots, I_n$ is the set of possible values that appear in the studied process, and $\beta_1(t), \beta_2(t), \ldots, \beta_n(t)$ is the vector of complex parameters assigned to the input probabilistic discrete values, normalized as follows (both normalization conditions are possible, depending on the context of the model):

$$|\beta_1(t)|^2 + |\beta_2(t)|^2 + \ldots + |\beta_n(t)|^2 = 1 \quad \text{or} \quad |\beta_1(t) + \beta_2(t) + \ldots + \beta_n(t)|^2 = 1. \qquad (7.10)$$
In the same way, we can define the n-valued output probabilistic discrete process/signal:

$$|\psi; t\rangle = \alpha_1(t)\,|I_1\rangle + \ldots + \alpha_n(t)\,|I_n\rangle \qquad (7.11)$$

with normalized complex parameters $\alpha_1(t), \alpha_2(t), \ldots, \alpha_n(t)$.
The constants $k_1(t)$, $k_2(t)$ guarantee the normalization conditions in each time instant t, and the complex parameters $\gamma_1(t), \gamma_2(t), \ldots, \gamma_n(t)$ represent the state-space process/signal (inner parameters):

$$|\zeta; t\rangle = \gamma_1(t)\,|I_1\rangle + \ldots + \gamma_n(t)\,|I_n\rangle \qquad (7.12)$$
A more general model can also be defined through time-varying evolution matrices A(t), B(t), C(t), D(t). Because of the difficulty of time-evolution modeling, we prefer to use the quasi-stationary model and apply approaches known from dynamic system theory, for example, exponential forgetting [27].

Example 7.1 Quantum system with two repeated eigenvalues and one distinct eigenvalue. Let us present an illustrative example of a quantum system with two repeated eigenvalues and one distinct eigenvalue, $\lambda_1 = -\frac{1}{2}$, $\lambda_2 = -\frac{1}{2}$, $\lambda_3 = -1$, as follows:

$$\begin{bmatrix} \alpha_1(t) \\ \alpha_2(t) \\ \alpha_3(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0.7 & 0.9 & -0.2 \end{bmatrix} \begin{bmatrix} \alpha_1(t-1) \\ \alpha_2(t-1) \\ \alpha_3(t-1) \end{bmatrix} \qquad (7.13)$$

The initial values were chosen as

$$\alpha_1(1) = \alpha_2(1) = \alpha_3(1) = \frac{1}{\sqrt{3}}, \qquad (7.14)$$

so that the initial probabilities were equal to

$$P_1(1) = P_2(1) = P_3(1) = \frac{1}{3}. \qquad (7.15)$$
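A minimal simulation of Example 7.1, assuming the normalization constant is applied at every step, as the quasi-stationary model (7.7) prescribes:

```python
import numpy as np

# Sketch of Example 7.1: iterate the evolution matrix of Eq. (7.13) and
# renormalize the amplitudes at each step so that sum |alpha_i|^2 = 1.

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.7, 0.9, -0.2]])
alpha = np.ones(3) / np.sqrt(3)        # Eq. (7.14): equal initial amplitudes

for t in range(60):
    alpha = A @ alpha
    alpha /= np.linalg.norm(alpha)     # normalization constant k(t)
print("limit probabilities:", np.round(np.abs(alpha)**2, 2))
# Prints roughly [0.25, 0.33, 0.42]; up to the ordering of the state labels
# and rounding, these are the limiting values quoted for Fig. 7.2.
```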
Fig. 7.2 presents the time evolution of the probabilities assigned to each state. The analysis shows that the evolution of the probabilities converges to the final values $P_1 = 0.42$, $P_2 = 0.32$, and $P_3 = 0.26$. We can extend the quantum modeling from the set of n values to the set of n multimodels [18]. Let the sequence with m output values $Y_z$, $z \in \{1, 2, \ldots, m\}$, be represented by a set of n models $P(Y_z|H_i)$, $i \in \{1, 2, \ldots, n\}$, and let the models be changed over with probability $P(H_i)$. Then, according to the well-known Bayes' formula, the probability of the z-th output value can be computed as follows:

$$P(Y_z) = \sum_{i=1}^{n} P(Y_z|H_i)\,P(H_i) \qquad (7.16)$$
Eq. (7.16) holds only if we know both the probabilities $P(H_i)$ and the model components $P(Y_z|H_i)$, $i \in \{1, 2, \ldots, n\}$.
FIGURE 7.2 Evolution of probabilities assigned to three states (* represents state P1, + represents state P2, and o represents state P3).
Model components $P(Y_z|H_i)$ represent, in our approach, the partial knowledge of the modeled system. In practical situations, the number of model components n is finite and is often chosen as a predefined set of multimodel components $P(Y_z|H_i, C)$, where C denotes that the model component is conditioned by the designer's decision (C meaning the context transition). The probabilities $P(H_i)$ are the combination factors of the model components. In the case where the real model components $P(Y_z|H_i)$ are the same as the designer's models $P(Y_z|H_i, C)$, Eq. (7.16) is fulfilled. In other cases, Bayes' formula must be changed so that the designer's decision is corrected:

$$P(Y_z) = \sum_{i=1}^{n} P(Y_z|H_i, C)\,P(H_i) + 2\sum_{k<L} \sqrt{P(Y_z|H_k, C)\,P(H_k)\,P(Y_z|H_L, C)\,P(H_L)}\;\lambda^{(z)}_{k,L}, \qquad (7.17)$$

where the coefficients $\lambda^{(z)}_{k,L} = \cos(\beta^{(z)}_{k,L})$ are normalized statistical deviations that can be computed by the algorithm in Ref. [18]. The form (7.17) represents a multidimensional law of cosines that, for the two-dimensional case, can be written as $a^2 = b^2 + c^2 + 2bc\cos(\phi)$, where $\phi$ is the angle between sides b and c [7,8].
The probability of the z-th output value $P(Y_z)$ can be characterized by a complex parameter $\psi(Y_z)$ with the following properties:

$$P(Y_z) = |\psi(Y_z)|^2, \qquad (7.18)$$

$$\psi(Y_z) = \sum_{i=1}^{n} \psi_i(Y_z), \qquad (7.19)$$

$$\psi_i(Y_z) = \sqrt{P(Y_z|H_i, C)\,P(H_i)}\;e^{j\beta_z(i)}. \qquad (7.20)$$
Because Eqs. (7.18)-(7.20) are independent of the selection of the models $P(Y_z|H_i, C)$, $i \in \{1, 2, \ldots, n\}$, these models can be chosen in advance to cover the whole range of probabilistic areas (universal models). The multimodel parameters $P(H_i)$ and $\beta_z(i)$ can be estimated from real data (such as amplitude and phase in the Fourier transform) to model real system dynamics. The amplitude and phase representation of multimodels can be expressed as in Fig. 7.3 (the number of a priori models is selected as 4), where the amplitudes define the probability of model occurrence and the phases represent the model composition rule needed to capture the original dynamics. For a better understanding, an illustrative example is shown. Let a two-valued time series $Y \in \{0, 1\}$ be composed of a mixture of three models described by the probabilities $P(Y|H_1)$, $P(Y|H_2)$, and $P(Y|H_3)$, where each component occurs with probabilities $P(H_1)$, $P(H_2)$, and $P(H_3)$. The probabilities $P(Y|H_i)$, $i \in \{1, 2, 3\}$, are defined in Table 7.1, and the probabilities $P(H_i)$ were chosen as:

$$P(H_1) = P(H_2) = P(H_3) = \frac{1}{3} \qquad (7.21)$$

FIGURE 7.3 Complex multimodels' spectrum.
Table 7.1 Real components $P(Y|H_i)$, $i \in \{1, 2, 3\}$.

Model identification Hi    H1     H2     H3
P(Y=1|Hi)                  0.9    0.5    0.4
P(Y=0|Hi)                  0.1    0.5    0.6
Table 7.2 Designer's decision of components $P(Y|H_i, C)$, $i \in \{1, 2, 3\}$.

Model identification Hi    H1     H2     H3
P(Y=1|Hi, C)               0.8    0.6    0.7
P(Y=0|Hi, C)               0.2    0.4    0.3
The designer's decision (universal models conditioned by "C") is given in Table 7.2. By using Eqs. (7.18)-(7.20) together with the algorithm in Appendix A5 [18], the following complex components can be numerically calculated:

$$\psi_1(Y=1) = \sqrt{P(Y=1|H_1, C)\,P(H_1)}\;e^{j\beta_1(1)} = 0.5164, \qquad (7.22)$$
$$\psi_2(Y=1) = \sqrt{P(Y=1|H_2, C)\,P(H_2)}\;e^{j\beta_1(2)} = 0.4472\,e^{j0.5166}, \qquad (7.23)$$
$$\psi_3(Y=1) = \sqrt{P(Y=1|H_3, C)\,P(H_3)}\;e^{j\beta_1(3)} = 0.4830\,e^{j2.4012}, \qquad (7.24)$$
$$\psi_1(Y=0) = \sqrt{P(Y=0|H_1, C)\,P(H_1)}\;e^{j\beta_0(1)} = 0.2582, \qquad (7.25)$$
$$\psi_2(Y=0) = \sqrt{P(Y=0|H_2, C)\,P(H_2)}\;e^{j\beta_0(2)} = 0.3651\,e^{j1.2371}, \qquad (7.26)$$
$$\psi_3(Y=0) = \sqrt{P(Y=0|H_3, C)\,P(H_3)}\;e^{j\beta_0(3)} = 0.3162\,e^{j2.1924}. \qquad (7.27)$$
Based on Eqs. (7.18)-(7.19), the two complex parameters can be computed:

$$\psi(Y=1) = \psi_1(Y=1) + \psi_2(Y=1) + \psi_3(Y=1) = 0.7746\,e^{j0.7837}, \qquad (7.28)$$
$$\psi(Y=0) = \psi_1(Y=0) + \psi_2(Y=0) + \psi_3(Y=0) = 0.6325\,e^{j1.2596}, \qquad (7.29)$$

and the probabilities of falling into 1 or 0 can be calculated as:

$$P(Y=1) = |\psi(Y=1)|^2 = 0.6, \qquad (7.30)$$
$$P(Y=0) = |\psi(Y=0)|^2 = 0.4. \qquad (7.31)$$
The outcomes of (7.30) and (7.31) are in agreement with the result achieved from the knowledge of the model components given in Table 7.1 and by using Bayes' formula:

$$P(Y=1) = P(Y=1|H_1)\,P(H_1) + P(Y=1|H_2)\,P(H_2) + P(Y=1|H_3)\,P(H_3) = \frac{1}{3}(0.9 + 0.5 + 0.4) = 0.6. \qquad (7.32)$$
The abovementioned numerical example shows that the theory of multimodels’ composition is feasible. In practical analysis, the amplitudes and phases of the model components will be estimated from real data.
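The numerical example can be reproduced in a few lines; the phases are taken from the values quoted in Eqs. (7.22)-(7.27) rather than recomputed by the algorithm of Ref. [18]:

```python
import numpy as np

# Numerical check of the multimodel example, Eqs. (7.22)-(7.31): sum the
# complex components and square the modulus of the result.

P_H = 1 / 3
psi_Y1 = (np.sqrt(0.8 * P_H)
          + np.sqrt(0.6 * P_H) * np.exp(1j * 0.5166)
          + np.sqrt(0.7 * P_H) * np.exp(1j * 2.4012))
psi_Y0 = (np.sqrt(0.2 * P_H)
          + np.sqrt(0.4 * P_H) * np.exp(1j * 1.2371)
          + np.sqrt(0.3 * P_H) * np.exp(1j * 2.1924))

print(f"P(Y=1) = {abs(psi_Y1)**2:.3f}")   # ~0.6, matching Bayes, Eq. (7.32)
print(f"P(Y=0) = {abs(psi_Y0)**2:.3f}")   # ~0.4
```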
7.4 Quantum information gyrator

Consciousness seems to be a superposition of parallel quantum processes (different components of the brain connected through entanglement). Each process represents different knowledge: either a historical experience or a future imagination (different scenarios of further development, or fantasy). All knowledge coded into the neural network is linked through quantum features. Rational thinking means reducing the complexity of the space of possibilities to some tractable dimension (falling into a subset). Thinking means going through the imaginable subspace of possibilities to recall forgotten ideas through their links to other information. Let us imagine the quantum example of the information gyrator presented in Section 3.2 with a set of superposed information flows and contents:
$$\psi(\Phi_1) \propto \alpha_{1,1}\,|\Phi_{1,1}\rangle + \alpha_{1,2}\,|\Phi_{1,2}\rangle + \ldots + \alpha_{1,N}\,|\Phi_{1,N}\rangle \qquad (7.33)$$

$$\psi(\Phi_2) \propto \alpha_{2,1}\,|\Phi_{2,1}\rangle + \alpha_{2,2}\,|\Phi_{2,2}\rangle + \ldots + \alpha_{2,N}\,|\Phi_{2,N}\rangle \qquad (7.34)$$

$$\psi(I_1) \propto \beta_{1,1}\,|I_{1,1}\rangle + \beta_{1,2}\,|I_{1,2}\rangle + \ldots + \beta_{1,N}\,|I_{1,N}\rangle \qquad (7.35)$$

$$\psi(I_2) \propto \beta_{2,1}\,|I_{2,1}\rangle + \beta_{2,2}\,|I_{2,2}\rangle + \ldots + \beta_{2,N}\,|I_{2,N}\rangle \qquad (7.36)$$
This means that we have at our disposal many overlapping variants of information flows and contents, both in short-term memory (STM) and long-term memory (LTM), that were created, for example, under different conditions or in different time periods. Because of the diversity of opinions and experiences, some information contents can be contradictory. The gyrator input can be composed of a set of i-th input components $I_{1,i}\,\Phi_{1,i}$ (the left side, marked as 1, can represent the STM) and the output of a set of j-th components $I_{2,j}\,\Phi_{2,j}$ (the right side, marked as 2, can represent the LTM). Theoretically, a set of parallel working gyrators with their unique resonance frequencies emerges. Each resonance maximizes the information content assigned to the combination $I_{1,i}, \Phi_{1,i}, I_{2,j}, \Phi_{2,j}$. In summary, this yields a superposition of different resonance frequencies. From radioelectronics it is known that, besides the pure resonance frequencies assigned to the combinations $I_{1,i}, \Phi_{1,i}, I_{2,j}, \Phi_{2,j}$, combined frequencies known as higher harmonic components are also created. The more variants of gyrators $I_{1,i}, \Phi_{1,i}, I_{2,j}, \Phi_{2,j}$ there are, the more different frequencies can occur, and the capacity for information coding increases. Considering all frequency variants, it is evident that a complex web of frequencies can be created. If we take each frequency as the carrier of modulated information, we can offer the speculative hypothesis that our consciousness is modulated in this brain network.
7.5 Quantum transfer functions

Let us define two finite-dimensional quantum systems:

$$\psi_1 = \sum_{i=0}^{N} \alpha_i\,|\Phi_i\rangle, \qquad \psi_2 = \sum_{j=0}^{M} \beta_j\,|\Phi_j\rangle, \qquad (7.37)$$
together with the additive principle among the indexes of combined quantum states $|\Phi_i\rangle$, $|\Phi_j\rangle$, as follows:

$$|\Phi_i\rangle \otimes |\Phi_j\rangle = |\Phi_{i+j}\rangle, \qquad (7.38)$$
where i, j are integers (positive or negative) and, for simplicity, we suppose that the quantum states do not depend on time t. Then the combined joint state can be expressed as:

$$|\psi_{1,2}\rangle = |\psi_1\rangle \otimes |\psi_2\rangle = \sum_{k=0}^{N+M} \gamma_k\,|\Phi_k\rangle, \tag{7.39}$$

where $\otimes$ is the Kronecker product and $\gamma_k$ is given as:

$$\gamma_k = \sum_{i=0}^{N} \alpha_i\,\beta_{k-i}. \tag{7.40}$$
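Under the additive index rule (7.38), the composition (7.39)–(7.40) is exactly a discrete convolution of the amplitude vectors, i.e., a polynomial multiplication. A minimal numerical sketch (the amplitudes below are illustrative values only):

```python
import numpy as np

# Amplitude vectors of |psi1>, |psi2> over the basis |Phi_0>, |Phi_1>, ...
# (example values; complex amplitudes are allowed)
alpha = np.array([0.6, 0.8j])                    # |psi1> = 0.6|Phi_0> + 0.8j|Phi_1>
beta  = np.array([1/np.sqrt(2), 1/np.sqrt(2)])   # |psi2>

# Eq. (7.40): gamma_k = sum_i alpha_i * beta_{k-i}  -> discrete convolution
gamma = np.convolve(alpha, beta)
print(gamma)  # amplitudes of |psi_{1,2}> over |Phi_0>, |Phi_1>, |Phi_2>
```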
The quantum state transform (QST) $Q[\,\cdot\,]$ and the inverse quantum state transform $Q^{-1}[\,\cdot\,]$ of a quantum system can be defined as follows:

$$Q\big[\alpha_0|\Phi_0\rangle + \alpha_1|\Phi_1\rangle + \alpha_2|\Phi_2\rangle + \cdots + \alpha_N|\Phi_N\rangle\big] = \alpha_0 + \alpha_1\eta + \alpha_2\eta^2 + \cdots + \alpha_N\eta^N \tag{7.41}$$

$$Q^{-1}\big[\alpha_0 + \alpha_1\eta + \alpha_2\eta^2 + \cdots + \alpha_N\eta^N\big] = \alpha_0|\Phi_0\rangle + \alpha_1|\Phi_1\rangle + \alpha_2|\Phi_2\rangle + \cdots + \alpha_N|\Phi_N\rangle, \tag{7.42}$$
where $Q[\,\cdot\,]$ is a complex function of the complex variable $\eta$. The quantum state transform $Q[\,\cdot\,]$ transforms the superposed quantum states into a polynomial function of the variable $\eta$, and vice versa. The time evolution in the z-transform is characterized by a function of the complex variable z; the QST evolution is characterized by a complex function of the variable $\eta$. In more general cases, a time-dependent quantum system can be represented by a function of both complex variables z and $\eta$. We can imagine a quantum system with the input/output quantum states defined as follows:

$$|\psi_{IN}\rangle = \sum_{i=1}^{N} \alpha_i\,|\Phi_i\rangle, \qquad |\psi_{OUT}\rangle = \sum_{j=1}^{M} \beta_j\,|\Phi_j\rangle \tag{7.43}$$
Let us suppose that our quantum system is characterized by an inner quantum state:

$$|\psi_{INNER}\rangle = \sum_{i=1}^{K} \gamma_i\,|\Phi_i\rangle. \tag{7.44}$$
Then the output state can be represented by a combination of input and inner states as follows:

$$|\psi_{OUT}\rangle = |\psi_{INNER}\rangle \otimes |\psi_{IN}\rangle. \tag{7.45}$$
The inner state should be understood as the quantum impulse function, because it is equal to the system output, $|\psi_{OUT}\rangle = |\psi_{INNER}\rangle$, in case the "quantum Dirac impulse" $|\psi_{IN}\rangle = 1 \cdot |0\rangle$ is applied to the input. If the inner state has a finite dimension (in our case K), we speak about the quantum finite impulse response (QFIR). On the other hand, for an infinite dimension the system is called the quantum infinite impulse response (QIIR). The quantum transfer function (QTF) can be defined as the ratio of the transformed output and input [72]:

$$QTF(\eta) = \frac{Q\big[|\psi_{OUT}\rangle\big]}{Q\big[|\psi_{IN}\rangle\big]} = \frac{\sum_{j=0}^{M} \beta_j\,\eta^j}{\sum_{i=0}^{N} \alpha_i\,\eta^i}. \tag{7.46}$$
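In analogy with a rational transfer function in the z-transform, the QTF (7.46) can be handled numerically as a ratio of two polynomials in $\eta$. A minimal sketch (the coefficients here are illustrative only):

```python
# QTF(eta) = (sum_j beta_j * eta**j) / (sum_i alpha_i * eta**i), Eq. (7.46).
beta  = [0.5, 0.5]    # numerator coefficients beta_0, beta_1 (example values)
alpha = [1.0, -0.4]   # denominator coefficients alpha_0, alpha_1 (example values)

def qtf(eta, num=beta, den=alpha):
    """Evaluate the quantum transfer function at a complex point eta."""
    n = sum(b * eta**j for j, b in enumerate(num))
    d = sum(a * eta**i for i, a in enumerate(den))
    return n / d

print(qtf(0.3 + 0.1j))
```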
Under condition (7.38), the numerator and denominator of the QTF have polynomial forms. This principle is analogous to the LTI conditions in the z-transform. Of course, there exist quantum systems that do not fulfill condition (7.38). In such a case, the QTF is a general function of $\eta$, and in special cases it can be approximated by the polynomial form given in (7.46) through, for example, the Padé approximation [71]. We suppose the quantum system has an infinite impulse response (QIIR) if the QTF denominator exists. The QIIR systems given by (7.46) can be divided into stable or unstable parts according to the position of the poles. The general QTF can be rewritten in the following form:

$$QTF(\eta) = \frac{(\eta - \tilde{\eta}_0)\cdots(\eta - \tilde{\eta}_M)}{(\eta - \hat{\eta}_0)\cdots(\eta - \hat{\eta}_N)} = \frac{k_0}{\eta - \hat{\eta}_0} + \cdots + \frac{k_N}{\eta - \hat{\eta}_N}, \tag{7.47}$$
where $\tilde{\eta}_0, \ldots, \tilde{\eta}_M$ are nulls, $\hat{\eta}_0, \ldots, \hat{\eta}_N$ are poles, and $k_0, \ldots, k_N$ are constants (all nulls, poles, and constants can be complex numbers). In analogy to the z-transform, Eq. (7.47) can be rewritten as:

$$QTF(\eta^{-1}) = \frac{k_0\,\eta^{-1}}{1 - \hat{\eta}_0\,\eta^{-1}} + \cdots + \frac{k_N\,\eta^{-1}}{1 - \hat{\eta}_N\,\eta^{-1}}, \tag{7.48}$$
where $\eta^{-1}$ points to the last index of the quantum state. The stability condition of the quantum system has to be defined for all poles:

$$|\hat{\eta}_x| < 1 \tag{7.49}$$
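The stability test (7.49) amounts to checking the moduli of the denominator roots, just as for discrete-time filters. A minimal sketch, assuming the denominator coefficients of (7.46) are given in ascending powers of $\eta$:

```python
import numpy as np

def is_stable(alpha):
    """Check Eq. (7.49): all poles eta_hat of the QTF must satisfy |eta_hat| < 1.

    alpha -- denominator coefficients [alpha_0, ..., alpha_N] of Eq. (7.46),
             so the poles are the roots of alpha_0 + alpha_1*eta + ... + alpha_N*eta^N.
    """
    poles = np.roots(alpha[::-1])   # np.roots expects the highest power first
    return bool(np.all(np.abs(poles) < 1.0))

print(is_stable([0.5, 1.0]))   # single pole at eta = -0.5 -> stable (True)
print(is_stable([1.0, 0.5]))   # single pole at eta = -2.0 -> unstable (False)
```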
For stable quantum systems, the normalization condition is finite and there exist real probabilities assigned to the different superposed quantum states. For unstable quantum systems [at least one pole not fulfilling condition (7.49)], the normalization condition (the sum of the moduli of the complex parameters of all superposed quantum states) is infinite, and all complex parameters of the output quantum system $|\psi_{OUT}\rangle$ are infinite. Such a system cannot give us reasonable measured values until it is passed through another quantum system whose nulls can remove the unstable poles of the QTF. The question is where in real life this kind of system exists and how to use this feature. What will happen if information is modulated on different states, for example by phase modulation, but the states are unstable? Could such information be accessible by measurement? Do the poles represent something like quantum information black holes? The next question is how such a system behaves on the boundary between the stability and nonstability areas. On the stability limit, there exists an infinite number of equally probable superposed states, where the probability of each state is, due to the normalization condition, extremely low (in the limit, zero). This principle can be interpreted as a way to create an empty space in which only basic rules and principles like quantization, the additive principle, etc. are incorporated (we can call it the quantum information vacuum). How can entanglement change the behavior of a system close to the boundary limit of quantum stability/nonstability? Can we, with the help of entanglement, code information into the quantum information vacuum beforehand? Such entangled information should change the behavior of future particles added into the system and influence the quantum system's evolution. Let us suppose we have available two quantum systems defined through their QTFs:
$$Q_1(\eta^{-1}) = \frac{Q\big[|\psi_{OUT,1}\rangle\big]}{Q\big[|\psi_{IN,1}\rangle\big]} = \frac{\sum_{j=0}^{M_1} \beta_j^1\,\eta^{-j}}{\sum_{i=0}^{N_1} \alpha_i^1\,\eta^{-i}} \tag{7.50}$$

$$Q_2(\eta^{-1}) = \frac{Q\big[|\psi_{OUT,2}\rangle\big]}{Q\big[|\psi_{IN,2}\rangle\big]} = \frac{\sum_{j=0}^{M_2} \beta_j^2\,\eta^{-j}}{\sum_{i=0}^{N_2} \alpha_i^2\,\eta^{-i}} \tag{7.51}$$
The serial ordering is given as:

$$Q(\eta^{-1}) = Q_1(\eta^{-1})\,Q_2(\eta^{-1}). \tag{7.52}$$

The parallel ordering can be expressed in the form:

$$Q(\eta^{-1}) = Q_1(\eta^{-1}) + Q_2(\eta^{-1}), \tag{7.53}$$

and for the feedback ordering, the QTF can be written as:

$$Q(\eta^{-1}) = \frac{Q_1(\eta^{-1})}{1 \mp Q_1(\eta^{-1})\,Q_2(\eta^{-1})}. \tag{7.54}$$
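These three orderings mirror block-diagram algebra for classical transfer functions, so they can be sketched directly with rational functions. A minimal illustration using SymPy (the coefficients are hypothetical examples, and the feedback sign is taken here as negative feedback):

```python
import sympy as sp

eta = sp.symbols('eta')

# Two example QTFs as rational functions of eta**-1 (illustrative coefficients).
Q1 = (0.5 + 0.5/eta) / (1 - 0.2/eta)
Q2 = 1 / (1 - 0.3/eta)

serial   = sp.simplify(Q1 * Q2)            # Eq. (7.52)
parallel = sp.simplify(Q1 + Q2)            # Eq. (7.53)
feedback = sp.simplify(Q1 / (1 + Q1*Q2))   # Eq. (7.54), negative-feedback sign

print(serial, parallel, feedback, sep='\n')
```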
In the past decade, much energy has been directed into the area of complex networks [6]. In a random network, nodes have approximately the same number of links, making the
FIGURE 7.4 Hierarchy of quantum states on various resolution levels.
FIGURE 7.5 Quantum feedback system.
distribution of connectivity homogeneous. In a scale-free network, the contribution of hubs, or highly connected nodes, to the overall connectivity dominates. The connectivity distribution P(k) is defined as the probability that a randomly chosen node in a network has exactly k links. For random networks, P(k) follows the Poisson distribution. In scale-free networks, a relatively small number of hubs dominates, and P(k) follows the power law. Let us consider a scale-free network in quantum system notation. The network's node is represented by a quantum state. The more nodes, the more superposed quantum states exist. Links between the quantum states can be represented through quantum links like entanglement or, more generally, through phase correlations. Remember that quantum links can be automatically propagated due to quantum swapping. Till now, we have assumed all quantum states to be on an equal resolution level. We can now move to quantum hierarchical systems (with an architecture similar to scale-free networks), where we must distinguish among various resolution levels, as given in Fig. 7.4. Now let us suppose the qubit $|\psi_{IN}\rangle = \alpha_0|0\rangle + \alpha_1|1\rangle$ is sent to the input of a very simple quantum feedback system given in Fig. 7.5. Let us suppose the quantum system S is defined by a quantum impulse response represented as the inner quantum state $|\psi_{INNER}\rangle = \beta_0|0\rangle + \beta_1|1\rangle$. It is easy to track each feedback loop and identify the quantum output state:
$$\begin{aligned}
|\psi_{OUT}\rangle_1 &= (\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes (\beta_0|0\rangle + \beta_1|1\rangle) \\
|\psi_{OUT}\rangle_2 &= (\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes (\beta_0|0\rangle + \beta_1|1\rangle)^2 + (\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes (\beta_0|0\rangle + \beta_1|1\rangle) \\
|\psi_{OUT}\rangle_3 &= (\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes (\beta_0|0\rangle + \beta_1|1\rangle)^3 + (\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes (\beta_0|0\rangle + \beta_1|1\rangle)^2 \\
&\quad + (\alpha_0|0\rangle + \alpha_1|1\rangle) \otimes (\beta_0|0\rangle + \beta_1|1\rangle) \\
&\;\;\vdots
\end{aligned} \tag{7.55}$$
In reality, all superposed states on all resolution levels must be available in the final quantum output state:

$$|\psi_{OUT}\rangle = |\psi_{OUT}\rangle_1 + |\psi_{OUT}\rangle_2 + |\psi_{OUT}\rangle_3 + \cdots \tag{7.56}$$
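Under the additive index rule (7.38), each Kronecker power in (7.55) is again a convolution of amplitude vectors, so the feedback hierarchy (7.56) can be sketched numerically. A minimal illustration, truncated after a few loops (all amplitudes are example values):

```python
import numpy as np

alpha = np.array([0.8, 0.6])   # input qubit amplitudes (example values)
beta  = np.array([0.9, 0.1])   # inner-state amplitudes (example values)

def kron_power(v, n):
    """n-th Kronecker power under the additive index rule (7.38) = repeated convolution."""
    out = np.array([1.0])
    for _ in range(n):
        out = np.convolve(out, v)
    return out

def add_padded(a, b):
    """Add amplitude vectors of different lengths (deeper levels = longer vectors)."""
    out = np.zeros(max(len(a), len(b)), dtype=complex)
    out[:len(a)] += a
    out[:len(b)] += b
    return out

# Eq. (7.55): the n-th loop output is sum_{k=1..n} alpha (x) beta^(x k);
# Eq. (7.56): the final state sums the first few loop outputs (truncated here).
psi_out = np.array([0.0 + 0j])
for n in range(1, 4):
    loop_n = np.array([0.0 + 0j])
    for k in range(1, n + 1):
        loop_n = add_padded(loop_n, np.convolve(alpha, kron_power(beta, k)))
    psi_out = add_padded(psi_out, loop_n)

print(psi_out)   # unnormalized amplitudes across resolution levels
```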
The generation of hierarchical systems by quantum feedback can yield more complex circuits, where links (entanglements) between states on different levels can be formed. Such networks can create interesting structures with many features. This instrument theoretically enables the storage of an infinite amount of information in links. Unfortunately, the deepest resolution levels are accessible only with a very low probability. There must exist special filters that amplify such states at lower resolution areas. Hierarchical structures similar to those computed by feedback could be obtained by a nonlinear quantum system. Let us suppose we have a quantum polynomial nonlinear impulse function:
$$|\psi_{INNER}\rangle = \lambda_N (\beta_0|0\rangle + \beta_1|1\rangle)^N + \lambda_{N-1} (\beta_0|0\rangle + \beta_1|1\rangle)^{N-1} + \cdots + \lambda_1 (\beta_0|0\rangle + \beta_1|1\rangle), \tag{7.57}$$
where $\lambda_i$, $i \in \{1, 2, \ldots, N\}$ are complex parameters. For the input qubit $|\psi_{IN}\rangle = \alpha_0|0\rangle + \alpha_1|1\rangle$ sent into such a nonlinear system, we can write the quantum output as:

$$|\psi_{OUT}\rangle = \lambda_N (\beta_0|0\rangle + \beta_1|1\rangle)^N \otimes (\alpha_0|0\rangle + \alpha_1|1\rangle) + \lambda_{N-1} (\beta_0|0\rangle + \beta_1|1\rangle)^{N-1} \otimes (\alpha_0|0\rangle + \alpha_1|1\rangle) + \cdots + \lambda_1 (\beta_0|0\rangle + \beta_1|1\rangle) \otimes (\alpha_0|0\rangle + \alpha_1|1\rangle) \tag{7.58}$$
It is evident that the output (7.58) has a similar behavior to the feedback system (7.56), which yields a scale-free structure of the quantum network.
8 Extended quantum models

A probability space consists of a sample space S and a probability function P(.), mapping the events of S to real numbers in [0,1], such that P(S) = 1, and if $A_1, A_2, \ldots, A_N$ is a sequence of disjoint events, then the union rule is fulfilled:

$$P\left(\bigcup_{i \in N} A_i\right) = \sum_{i \in N} P(A_i) \tag{8.1}$$
If the events $A_1, A_2, \ldots, A_N$ are not disjoint, the following intersection and union rules can be used:

$$P(A_1 \cap A_2 \cap \ldots \cap A_N) = P(A_1)\,P(A_2|A_1)\,P(A_3|A_1 \cap A_2) \cdots P(A_N|A_1 \cap \ldots \cap A_{N-1}) \tag{8.2}$$

$$P(A_1 \cup A_2 \cup \ldots \cup A_N) = \sum_{i=1}^{N} P(A_i) - \sum_{i<j} P(A_i \cap A_j) + \sum_{i<j<k} P(A_i \cap A_j \cap A_k) - \cdots + (-1)^{N-1}\,P(A_1 \cap A_2 \cap \ldots \cap A_N) \tag{8.3}$$
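The union rule (8.3) can be sketched directly by enumerating subsets of events. A minimal illustration, assuming for simplicity that the intersection probabilities factorize (independent events; the marginal values here are examples only):

```python
from itertools import combinations
from math import prod

# Example: three independent events with these marginal probabilities.
p = [0.9, 0.5, 0.4]

def p_intersection(indices):
    """P(A_i1 ∩ ... ∩ A_ik); independence is assumed purely for illustration."""
    return prod(p[i] for i in indices)

# Eq. (8.3): inclusion-exclusion over all nonempty subsets of events.
n = len(p)
p_union = sum(
    (-1) ** (k - 1) * sum(p_intersection(c) for c in combinations(range(n), k))
    for k in range(1, n + 1)
)
print(p_union)   # 0.97 for the values above
```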
Considering the basic laws of probability (8.3), we generally need $\frac{N(N+1)}{2}$ free parameters. Let us suppose N events $A_i$, $i \in \{1, 2, \ldots, N\}$ with defined probabilities $P(A_i)$, $i \in \{1, 2, \ldots, N\}$ and N wave probabilistic functions:

$$\psi(A_i) = \alpha_i\,e^{j\nu_i} = \sqrt{P(A_i)}\,e^{j\nu_i}, \quad i \in \{1, 2, \ldots, N\} \tag{8.4}$$
together with their superposition state $|\psi\rangle$ as a quantum object:

$$|\psi\rangle = \psi(A_1)\,|A_1\rangle + \psi(A_2)\,|A_2\rangle + \cdots + \psi(A_N)\,|A_N\rangle \tag{8.5}$$
with moduli $\sqrt{P(A_i)}$ and phases $\nu_i$, where the reference phase assigned to event $A_1$ is chosen as $\nu_1 = 0$. The intersection and union rules for quantum models were defined in Ref. [21]:

$$P(|A_1\rangle \cup |A_2\rangle \cup \ldots \cup |A_N\rangle) = \left|\sum_{i=1}^{N} \psi(A_i)\right|^2 \tag{8.6}$$

$$P(|A_r\rangle \cap |A_s\rangle) = \lim_{\substack{\sqrt{P(A_k)} \to 0 \\ k \neq r,s}} \big[\psi^*(A_r)\,\psi(A_s) + \psi(A_r)\,\psi^*(A_s)\big] \tag{8.7}$$

where the symbol $\psi^*$ expresses the complex conjugate of $\psi$.
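A quick numerical sketch of the wave probabilistic rules (8.6)–(8.7), using illustrative probabilities and phases (whether the resulting union stays within [0, 1] depends on the chosen phases):

```python
import numpy as np

# Illustrative events: probabilities and phases (nu_1 = 0 is the reference).
P  = np.array([0.30, 0.20, 0.10])
nu = np.array([0.0, 0.8, 2.1])

psi = np.sqrt(P) * np.exp(1j * nu)        # Eq. (8.4): wave probabilistic functions

# Eq. (8.6): probability of the union as the squared modulus of the superposition.
p_union = abs(psi.sum()) ** 2
print(p_union)

# Eq. (8.7): pairwise interference term, 2*sqrt(P_r*P_s)*cos(nu_s - nu_r).
r, s = 0, 1
p_rs = (np.conj(psi[r]) * psi[s] + psi[r] * np.conj(psi[s])).real
print(p_rs)
```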
The quantum models (8.4) and (8.5) provide only $(2N - 1)$ parameters—the moduli $|\psi(A_i)|$ and phases $\nu_i$ of the wave probabilistic functions $\psi(A_i) = |\psi(A_i)|\,e^{j\nu_i}$. Let us show the dimension limit of the quantum model in the following illustrative example.

Example 8.1 N-dimensional distribution and its approximation by a quantum model.

We will assume an N-dimensional distribution for which we need to specify all the values of the $N \times N$ covariance matrix $\sigma_{i,j}$. Due to symmetry, we need only $\frac{N(N+1)}{2}$ parameters. Since we cannot determine the covariance matrix $\sigma_{i,j}$ exactly, we need to come up with an approximate description, one that requires fewer parameters. Instead of representing each quantity $\delta_i$ as an N-dimensional vector $a_i = (a_{i,1}, \ldots, a_{i,N})$ corresponding to $\delta_i = \sum_{j=1}^{N} a_{ij} X_j$, where $\{X_1, \ldots, X_N\}$ are independent standard random variables, we select some value $k \ll N$ and represent each quantity $\delta_i$ as a k-dimensional vector corresponding to $\delta_i = \sum_{j=1}^{k} a_{ij} X_j$. For $k = 2$, the approximation leads to a quantum model [59]. In the quantum (wave) approximation, only $(N - 1)$ phase parameters among the N-dimensional events $A_i$, $i \in \{1, 2, \ldots, N\}$ are available.
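The dimension reduction in Example 8.1 can be sketched as a low-rank factor approximation of the covariance matrix (a standard construction used here for illustration; the case k = 2 corresponds to the quantum model):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 6, 2

# A random symmetric positive-definite covariance matrix (illustrative data).
A = rng.normal(size=(N, N))
sigma = A @ A.T

# Keep the k dominant eigen-directions: delta_i = sum_{j<=k} a_ij * X_j.
w, V = np.linalg.eigh(sigma)            # eigenvalues in ascending order
a = V[:, -k:] * np.sqrt(w[-k:])         # N x k factor loadings
sigma_k = a @ a.T                       # rank-k approximation of sigma

print(np.linalg.norm(sigma - sigma_k) / np.linalg.norm(sigma))
```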
8.1 Ordering models

Suppose a unique ordering of the events $\{A_1, A_2, \ldots, A_N\}$, where each index represents the event's order in the sequence, and the distance between two events l, m is defined as $d_{l,m} = |l - m|$. In quantum notation, the phase difference $\nu_{l,m} = \nu_l - \nu_m$ represents a correlation (link) between two events. The quantum (wave) model is fully applicable if the correlation between events l, m depends only on the distance $d_{l,m} = |l - m|$ between them and not on their position. For this example, new phase parameters $\tilde{\nu}_i$ can be introduced:

$$\begin{aligned}
\tilde{\nu}_1 &= \nu_2 - \nu_1 = \nu_3 - \nu_2 = \cdots = \nu_N - \nu_{N-1} \\
\tilde{\nu}_2 &= \nu_3 - \nu_1 = \nu_4 - \nu_2 = \cdots = \nu_N - \nu_{N-2} \\
&\;\;\vdots \\
\tilde{\nu}_{N-1} &= \nu_N - \nu_1
\end{aligned} \tag{8.8}$$

From (8.8), it is clear that we can describe the whole system by the $(N - 1)$ phase parameters $(\tilde{\nu}_1, \tilde{\nu}_2, \ldots, \tilde{\nu}_{N-1})$. This model is typical for time-invariant subsystems, where the correlation depends only on the time difference between two realizations.
8.2 Incremental models

Let us suppose the existence of a reference event (the phase of event $A_1$), typically set to $\nu_1 = 0$, from which we measure the correlations (links) to other events.
In an incremental model, due to additivity, all other correlations can be computed from the phases. For example, the phase difference between events k and (k + d) has to be equal to $\nu_d = \nu_{k+d} - \nu_k$. This situation is standard in quantum mechanics, where the reference represents zero energy and other energy levels are gradually increased by step functions. Such a system is represented by the following phase structure:

$$\begin{aligned}
\tilde{\nu}_1 &= \nu_1 \\
\tilde{\nu}_2 &= \nu_1 + \nu_2 \\
&\;\;\vdots \\
\tilde{\nu}_N &= \nu_1 + \nu_2 + \cdots + \nu_N
\end{aligned} \tag{8.9}$$
This model corresponds to the gradual evolution of complex systems. In the beginning, we have only the subsystem $S_1$. After adding the subsystem $S_2$, the correlation yields an encapsulation into the new subsystem $S_{1,2}$. Now we can imagine adding the subsystem $S_3$ to the subsystem $S_{1,2}$, which plays the role of a reference for $S_3$. We can continue up to the subsystem $S_N$, which will depend on the previously encapsulated subsystem $S_{1,2,\ldots,N-1}$.
8.3 Inserted models

The main idea of an extended quantum model is to include into the quantum superposition not only the events $A_i$, $i \in \{1, 2, \ldots, N\}$ but also n additional functions of the events, for example $f_k(A_1, \ldots, A_N)$, $k \in \{1, \ldots, n\}$. Then the modified quantum model can be written as:

$$\begin{aligned}
\psi(A) = {} & \psi(A_1)\,|A_1\rangle + \psi(A_2)\,|A_2\rangle + \cdots + \psi(A_N)\,|A_N\rangle \\
& + \psi(f_1(A_1, \ldots, A_N))\,|f_1(A_1, \ldots, A_N)\rangle + \cdots + \psi(f_n(A_1, \ldots, A_N))\,|f_n(A_1, \ldots, A_N)\rangle
\end{aligned} \tag{8.10}$$
Such an approach brings many possibilities for rapidly extending the dimensionality of complex systems.

Example 8.2 Dimensionality analysis of binary functions.

In the case of N binary events $A_i \in \{0, 1\}$, $i \in \{1, 2, \ldots, N\}$, we have $2^N$ different variants of output combinations. If we suppose a binary function applied to each combination of events, we can theoretically achieve $2^{2^N}$ different variants of function outputs. Let us suppose an example with N = 2, which means four combinations of binary events $(A_1, A_2) \in \{(0,0), (0,1), (1,0), (1,1)\}$. By applying a two-dimensional function $f(A_1, A_2)$, we can obtain $2^{2^2} = 16$ variants of different outputs. For N = 3, we have eight binary combinations of the events $A_1, A_2, A_3$ but 256 possible variants of binary functions. This example demonstrates how fast the number of free parameters increases by adding additional binary functions.
8.4 Intersectional extended models

For practical feasibility, we will restrict ourselves only to events' intersections $f_h(A_1, \ldots, A_N) = A_k \cap A_m \cap \ldots \cap A_r$; $k, m, \ldots, r \in \{1, 2, \ldots, N\}$. This extended quantum model can then be rewritten as:

$$\begin{aligned}
\psi(A) = {} & \psi(A_1)\,|A_1\rangle + \psi(A_2)\,|A_2\rangle + \cdots + \psi(A_N)\,|A_N\rangle \\
& + \psi(A_1 \cap A_2)\,|A_1 \cap A_2\rangle + \cdots + \psi(A_k \cap A_m \cap \ldots \cap A_r)\,|A_k \cap A_m \cap \ldots \cap A_r\rangle
\end{aligned} \tag{8.11}$$
There are of course other possibilities for including additional information into a quantum model; these intersections seem to be the most natural and easiest to apply. In this case, it is possible to manipulate different combinations of events' intersections ($\otimes$ is the Kronecker product):

$$\begin{aligned}
|A_i\rangle \otimes |A_i \cap A_j\rangle &\Rightarrow |A_i \cap A_j\rangle \\
|A_i \cap A_k \cap A_r\rangle \otimes |A_k \cap A_r\rangle &\Rightarrow |A_i \cap A_k \cap A_r\rangle \\
|A_i \cap A_k \cap A_r\rangle \otimes |A_p \cap A_q\rangle &\Rightarrow |A_i \cap A_k \cap A_r \cap A_p \cap A_q\rangle
\end{aligned} \tag{8.12}$$
Such logical rules give us mathematical instruments for wave probabilistic interferences that allow the modeling of new multidimensional complex systems with features like quantum entanglement. A superposition of different events together with some events' intersections yields the extended intersectional quantum model. We can divide events' intersections into two groups: inner (microscopic) and outer (macroscopic). Inner (microscopic) intersections represent emergent features of a complex system and are modeled by the phase differences of wave probabilistic functions. Due to their positive or negative signs, they can lead to either inner attraction or repulsion of events. These features lead to well-known quantum modeling. Outer (macroscopic) intersections can be seen as additional observable behavior that can be considered as a new event (quantity) of the studied system. Because of their macroscopic nature, we need to use only the classical probability theory that was developed for the description of macroscopic phenomena. For example, the extended quantum model enables the modeling of links between two different macroscopic intersections through their wave probabilistic phases. Entanglement can then be realized not only among pure events but also among events' intersections.

Example 8.3 The social model of relationships among company employees.

We can suppose we have N employees $A_i$, $i \in \{1, 2, \ldots, N\}$ with inner links that are defined psychologically, by the ability to take on responsibility, etc. Such characteristics can scarcely be measured. Their observation is limited to inner phase parameters that are extracted only from the holistic behavior of the team. On the other hand, there are outer (macroscopic) links that have strong impacts on the holistic system's behavior and are easily identifiable. We can state, for example, family
relationships and schoolmates. Such macroscopic links should be placed into our model to capture finer details. If I am an employee $A_r$, I am influenced by all other employees $A_k$, $k \neq r$, and also by links to additional (macroscopic) employee groups, for example $A_k \cap A_c \cap A_d$, $A_q \cap A_l \cap A_t \cap A_o$. Taking into consideration all the relations among the studied group of employees, the holistic social model can be better specified.

In many practical applications of quantum models, there is a demand for a description of complex networks. The optimal management of complex systems consists of the best arrangement of all network nodes represented by the amplitudes and phases of all components.

Example 8.4 The incremental/ordering quantum model.

We can assume events $A_1, A_2, A_3, A_4$ with probabilities $P(A_1), P(A_2), P(A_3), P(A_4)$, represented by the assigned wave probabilistic functions:

$$\psi(A_1) = \sqrt{P(A_1)}\,e^{j\varphi_1}, \qquad \psi(A_2) = \sqrt{P(A_2)}\,e^{j\varphi_2} \tag{8.13}$$

$$\psi(A_3) = \sqrt{P(A_3)}\,e^{j\varphi_3}, \qquad \psi(A_4) = \sqrt{P(A_4)}\,e^{j\varphi_4} \tag{8.14}$$
The union of all events can be given as follows:

$$\begin{aligned}
P(A_1 \cup A_2 \cup A_3 \cup A_4) = {} & \big|\psi(A_1) + \psi(A_2) + \psi(A_3) + \psi(A_4)\big|^2 \\
= {} & P(A_1) + P(A_2) + P(A_3) + P(A_4) \\
& + 2\sqrt{P(A_1)\,P(A_2)}\,\cos(\varphi_2 - \varphi_1) + 2\sqrt{P(A_1)\,P(A_3)}\,\cos(\varphi_3 - \varphi_1) \\
& + 2\sqrt{P(A_1)\,P(A_4)}\,\cos(\varphi_4 - \varphi_1) + 2\sqrt{P(A_2)\,P(A_3)}\,\cos(\varphi_3 - \varphi_2) \\
& + 2\sqrt{P(A_2)\,P(A_4)}\,\cos(\varphi_4 - \varphi_2) + 2\sqrt{P(A_3)\,P(A_4)}\,\cos(\varphi_4 - \varphi_3)
\end{aligned} \tag{8.15}$$
Comparing with the classical probabilistic rule, we can extract:

$$\begin{aligned}
P(A_1 \cap A_2) &= 2\sqrt{P(A_1)\,P(A_2)}\,\cos(\varphi_2 - \varphi_1) \\
P(A_1 \cap A_3) &= 2\sqrt{P(A_1)\,P(A_3)}\,\cos(\varphi_3 - \varphi_1) \\
P(A_1 \cap A_4) &= 2\sqrt{P(A_1)\,P(A_4)}\,\cos(\varphi_4 - \varphi_1) \\
P(A_2 \cap A_3) &= 2\sqrt{P(A_2)\,P(A_3)}\,\cos(\varphi_3 - \varphi_2) \\
P(A_2 \cap A_4) &= 2\sqrt{P(A_2)\,P(A_4)}\,\cos(\varphi_4 - \varphi_2) \\
P(A_3 \cap A_4) &= 2\sqrt{P(A_3)\,P(A_4)}\,\cos(\varphi_4 - \varphi_3)
\end{aligned} \tag{8.16}$$
To obtain the ordering quantum model, we provide a transformation and compute the following phases:

$$\tilde{\nu}_1 = \varphi_2 - \varphi_1 = \varphi_3 - \varphi_2 = \varphi_4 - \varphi_3 \tag{8.17}$$

$$\tilde{\nu}_2 = \varphi_3 - \varphi_1 = \varphi_4 - \varphi_2 \tag{8.18}$$

$$\tilde{\nu}_3 = \varphi_4 - \varphi_1 \tag{8.19}$$
The phases ν~ 1 ; ν~ 2 ; ν~ 3 fully describe the quantum ordering model, and it is not necessary to provide any approximation.
Example 8.5 The extended intersectional quantum model.

Let us use the previous example and add into this model one additional macroscopic intersection $P(A_3 \cap A_4)$, represented by the wave probabilistic function:

$$\psi(A_3 \cap A_4) = \sqrt{P(A_3 \cap A_4)}\,e^{j\varphi_{3,4}} \tag{8.20}$$
The extended intersectional quantum model can be written in "bracket" notation as:

$$\psi(A_1, A_2, A_3, A_4) = \psi(A_1)\,|A_1\rangle + \psi(A_2)\,|A_2\rangle + \psi(A_3)\,|A_3\rangle + \psi(A_4)\,|A_4\rangle + \psi(A_3 \cap A_4)\,|A_3 \cap A_4\rangle \tag{8.21}$$
The union of all events (8.15) can be enlarged:

$$\begin{aligned}
P(A_1 \cup A_2 \cup A_3 \cup A_4) = {} & \psi(A_1, A_2, A_3, A_4)\;\psi^T(A_1, A_2, A_3, A_4) \\
= {} & \big[\psi(A_1)|A_1\rangle + \psi(A_2)|A_2\rangle + \psi(A_3)|A_3\rangle + \psi(A_4)|A_4\rangle + \psi(A_3 \cap A_4)|A_3 \cap A_4\rangle\big] \\
& \times \big[\psi^T(A_1)|A_1\rangle^T + \psi^T(A_2)|A_2\rangle^T + \psi^T(A_3)|A_3\rangle^T + \psi^T(A_4)|A_4\rangle^T + \psi^T(A_3 \cap A_4)|A_3 \cap A_4\rangle^T\big]
\end{aligned} \tag{8.22}$$
We can use the following "composition" rules:

$$\psi(A_i)\,|A_i\rangle\,\psi^T(A_j)\,|A_j\rangle^T + \psi(A_j)\,|A_j\rangle\,\psi^T(A_i)\,|A_i\rangle^T = 2\sqrt{P(A_i)\,P(A_j)}\,\cos(\varphi_i - \varphi_j)\,|A_i \cap A_j\rangle \tag{8.23}$$

$$\psi(A_i)\,|A_i\rangle\,\psi^T(A_j \cap A_k)\,|A_j \cap A_k\rangle^T + \psi(A_j \cap A_k)\,|A_j \cap A_k\rangle\,\psi^T(A_i)\,|A_i\rangle^T = 2\sqrt{P(A_i)\,P(A_j \cap A_k)}\,\cos(\varphi_i - \varphi_{j,k})\,|A_i \cap A_j \cap A_k\rangle \tag{8.24}$$

$$\psi(A_i)\,|A_i\rangle\,\psi^T(A_i \cap A_k)\,|A_i \cap A_k\rangle^T + \psi(A_i \cap A_k)\,|A_i \cap A_k\rangle\,\psi^T(A_i)\,|A_i\rangle^T = 2\sqrt{P(A_i)\,P(A_i \cap A_k)}\,\cos(\varphi_i - \varphi_{i,k})\,|A_i \cap A_k\rangle \tag{8.25}$$
The probabilistic union of the enlarged intersectional model can be computed:

$$\begin{aligned}
P(A_1 \cup A_2 \cup A_3 \cup A_4) = {} & \big|\psi(A_1) + \psi(A_2) + \psi(A_3) + \psi(A_4) + \psi(A_3 \cap A_4)\big|^2 \\
= {} & P(A_1) + P(A_2) + P(A_3) + P(A_4) + P(A_3 \cap A_4) \\
& + 2\sqrt{P(A_1)\,P(A_2)}\,\cos(\varphi_2 - \varphi_1) + 2\sqrt{P(A_1)\,P(A_3)}\,\cos(\varphi_3 - \varphi_1) \\
& + 2\sqrt{P(A_1)\,P(A_4)}\,\cos(\varphi_4 - \varphi_1) + 2\sqrt{P(A_2)\,P(A_3)}\,\cos(\varphi_3 - \varphi_2) \\
& + 2\sqrt{P(A_2)\,P(A_4)}\,\cos(\varphi_4 - \varphi_2) + 2\sqrt{P(A_3)\,P(A_4)}\,\cos(\varphi_4 - \varphi_3) \\
& + 2\sqrt{P(A_1)\,P(A_3 \cap A_4)}\,\cos(\varphi_{3,4} - \varphi_1) + 2\sqrt{P(A_2)\,P(A_3 \cap A_4)}\,\cos(\varphi_{3,4} - \varphi_2) \\
& + 2\sqrt{P(A_3)\,P(A_3 \cap A_4)}\,\cos(\varphi_{3,4} - \varphi_3) + 2\sqrt{P(A_4)\,P(A_3 \cap A_4)}\,\cos(\varphi_{3,4} - \varphi_4)
\end{aligned} \tag{8.26}$$
In addition to the intersections (8.16), $P(A_3 \cap A_4)$ was extended:

$$\tilde{P}(A_3 \cap A_4) = P(A_3 \cap A_4) + 2\sqrt{P(A_3)\,P(A_3 \cap A_4)}\,\cos(\varphi_{3,4} - \varphi_3) + 2\sqrt{P(A_4)\,P(A_3 \cap A_4)}\,\cos(\varphi_{3,4} - \varphi_4) \tag{8.27}$$
where $\tilde{P}(A_3 \cap A_4)$ is a modified extended intersection probability $P(A_3 \cap A_4)$ enriched with inner links to the events $A_3, A_4$. Third-order intersections appeared due to the added wave function (8.20):

$$\begin{aligned}
P(A_1 \cap A_3 \cap A_4) &= 2\sqrt{P(A_1)\,P(A_3 \cap A_4)}\,\cos(\varphi_{3,4} - \varphi_1) \\
P(A_2 \cap A_3 \cap A_4) &= 2\sqrt{P(A_2)\,P(A_3 \cap A_4)}\,\cos(\varphi_{3,4} - \varphi_2)
\end{aligned} \tag{8.28}$$
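As with (8.15), the expansion (8.26) is easy to explore numerically by treating the macroscopic intersection as one extra superposed term. A small sketch with illustrative values:

```python
import numpy as np

# Four events plus one macroscopic intersection A3 ∩ A4 (illustrative values).
P      = np.array([0.25, 0.20, 0.15, 0.10])
phi    = np.array([0.0, 1.9, 2.4, 3.0])
P_34   = 0.05
phi_34 = 2.7

psi = list(np.sqrt(P) * np.exp(1j * phi))
psi.append(np.sqrt(P_34) * np.exp(1j * phi_34))   # extra term of Eq. (8.21)

# Eq. (8.26): union of the enlarged intersectional model.
p_union = abs(sum(psi)) ** 2
print(p_union)

# Eq. (8.28): third-order intersections induced by the added wave function.
for i in (0, 1):
    print(2 * np.sqrt(P[i] * P_34) * np.cos(phi_34 - phi[i]))
```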
In a similar way, more variants of macroscopic intersections can be included in extended intersectional quantum models. The more intersections (pieces of information) and additional parameters that are available, the more possibilities there are to model a multidimensional complex system. The benefit of the extended model is that entanglement can exist not only among the pure events but also among the different intersections, or among combinations of intersections and pure events. These features bring new possibilities for modeling, especially for soft systems with enormous numbers of links and interconnections.
9 Complex adaptive systems

A system, or a situation, is said to be complex [74] if it is open (exchanges information with its environment), consists of many diverse, partially autonomous, and richly connected components, called Agents, and if it has no central control. The behavior of complex systems is uncertain without being random. The global behavior emerges from the interaction of constituent agents. The autonomy of agents is limited by constraints imposed on them by the system to which they belong. Complexity is discussed in detail in Ref. [73].
9.1 Basic agent of smart services

The digital ecosystem is a number of smart services that can be represented by agents in the virtual world (twin) [66]. The Basic Smart Service agent [67] includes four parts, as shown in Fig. 9.1. Let us briefly describe its principle for energy applications; in other domains, the situation is similar. Every node of a smart grid (numbered 1, 2, . . .) can at the same time be a producer of energy (solar panels on the roof), a consumer of energy (appliances), a storage of energy (batteries), or a purchaser of energy (buying energy from an external energy supplier or borrowing it from other smart grid nodes). This solution enables optimization among different nodes of complex smart grid networks (different buildings, street lights, transportation, etc.) on both the demand side (optimization of energy consumption) and the supply side (tailor-made energy production/purchase or sharing of energy storage). Some nodes could only be simple consumers buying energy from an external energy supplier, others could also produce their own energy, and the more advanced (intelligent) nodes can effectively work with stored/shared/borrowed energy. Every smart grid node can be modeled as an alliance of up to four agents, as shown in Fig. 9.1; a minimal sketch of such a node is given below. The whole system can be optimized by negotiation among the nodes' agents to achieve a good cooperation model among the different components. Other networks can also be modeled in the same way as the energy networks by incorporating the basic smart agents (transportation, health, safety and security, etc.) along with their different specifications. The negotiation among all network agents yields a cross-disciplinary optimization of the whole city area—for example, it could be that there is not enough energy in the morning for new electrical transport vehicles. A new virtual round table will be required to find appropriate decisions to solve the problem, for example, by changing the opening times of offices, schools, etc.
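To make the four-role node concrete, the following hypothetical Python sketch models a node's hourly energy balance; all names and numbers here are illustrative assumptions, not the author's implementation:

```python
from dataclasses import dataclass

@dataclass
class SmartGridNode:
    """One smart grid node acting as producer, consumer, storage, and purchaser."""
    production: float       # kWh produced this hour (e.g., solar panels)
    consumption: float      # kWh consumed this hour (appliances)
    stored: float = 0.0     # kWh currently in the battery
    capacity: float = 10.0  # battery capacity in kWh

    def settle_hour(self) -> float:
        """Balance the hour locally; return kWh that must be purchased/borrowed."""
        surplus = self.production - self.consumption
        if surplus >= 0:
            # Store what fits; the remainder could be offered to other nodes.
            self.stored = min(self.capacity, self.stored + surplus)
            return 0.0
        deficit = -surplus
        from_battery = min(self.stored, deficit)
        self.stored -= from_battery
        return deficit - from_battery   # remainder to purchase or borrow

node = SmartGridNode(production=3.0, consumption=5.5, stored=1.0)
print(node.settle_hour())   # 1.5 kWh must be purchased from the supplier
```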
FIGURE 9.1 Basic Smart City agent.
9.2 Smart resilient cities

Contemporary cities face unprecedented challenges relating to ongoing urbanization, climate change, extreme weather events, and, last but not least, the rapid advances in technology. Clearly, new approaches to building and managing smart cities in the 21st century need to be guided by principles of flexibility, adaptability, complexity, and inclusiveness. The Smart City concept [64] tries to use modern technologies in suitable ways to invoke synergic effects between various subsystems (transportation, logistics, safety and security, energetics, building administration, etc.) with reference to the energetic intensity and the quality of life of its citizens. The key point of this definition is that for a city to be "smart", it must be adaptive, resilient, and sustainable; that is basically what the term "smart" means. Smart individuals, businesses, societies, and nations, very much like natural ecosystems, are adaptive, resilient, and sustainable. Therefore cities should also be "smart". Adaptability, resilience, and sustainability are emergent properties of complex adaptive systems. It follows that cities that aspire to be smart must undergo a thorough digital transformation, that is, the acceptance of advanced digital technology and, in particular, multiagent systems (MAS), artificial intelligence (AI), the internet of things (IoT), social websites, and a variety of digital sensors for measuring the effectiveness of services to citizens and visitors.
The essence of the current problem with cities is that the complexity of their political, social, and economic environment has increased exponentially, while their administration and technological infrastructure has remained rigid and therefore unable to operate effectively under new volatile, dynamic conditions (conditions of complexity). As a consequence, citizens are frustrated, there is an increase of health hazards from pollution, precious resources are wasted, and the natural environment is damaged. Key problems that must be addressed are as follows:

1. Wasteful, uncoordinated, and environment-damaging services to citizens and visitors,
2. Inadequate communication channels between citizens, visitors, and the city administration, which prevent citizens and visitors from conveying their real requirements and expectations,
3. Inadequate performance monitoring and lack of knowledge on how to effectively manage a city under conditions of complexity,
4. Outdated strategic planning.

Consider an urban settlement such as a town, a city, a megacity, or a city-state. Typically, it provides a large variety of services, which may include:

• Education
• Health care
• Social services
• Roads and transport
• Food supply
• Water supply
• Drainage
• Waste disposal
• Rubbish collection
• Economic development
• Planning
• Protecting the public (from crime, fire, elements, etc.)
• Libraries
• Environmental health
• Tourism
• Leisure and amenities
• Planning permissions
• Housing services
• Collection of council tax
• Local elections
Every one of these services can be improved by using ecofriendly resources and by scheduling these resources in real time. Even greater improvements may be made by coordinating the delivery of services using advanced digital technology, ensuring that services compete for all available resources and,
when necessary, share them. At present, every vital service is managed individually, and the coordination of services is almost nonexistent. Meetings called ad hoc (if at all) are used to attempt coordination of the different services. While this may have been quite adequate in the past, when city authorities worked under stable operating conditions, it is currently unacceptable. There is a considerable waste of resources causing significant damage to the natural environment and an increased health hazard from pollution.

This is primarily due to the recent exponential increase in the complexity of the political, social, and economic environment of the city, which increases the frequency of unpredictable disruptive events. These events consist of changes in policy (due to political instability), changes in demands for services (due to the influx of immigrants, diversity of needs, excessive tourism), failures of overstretched resources, and delays (caused by accidents, traffic jams, unexpected shopping sprees, festivities, tourist seasons, epidemics, terrorist attacks, etc.). At the same time, there is a marked increase in the awareness of citizens of environmental issues and of health hazards, resulting in frustration and, frequently, in demonstrations and demands for improvements. Important contributing factors are the lack of creativity and imagination among city administrators; a rigid, departmentalized structure of the administration, preventing the effective coordination of services; and the use of conventional information technology unable to cope with the new dynamics. In addition, our research identified that very few cities, if any, provide effective communication channels for the interaction of citizens and visitors with city authorities. As a result, city councils are rarely well informed about the requirements and expectations of those for whom cities exist and tend to rule rather than facilitate. No city seems to know how to effectively organize the living, working, and leisure of its citizens and visitors under conditions of unpredictable changes in the political, social, and economic environment. It is imperative to spread the knowledge of managing complexity as widely as possible.

The issue of smart cities is a complex field that cannot be covered by one expert within one organization. Therefore cooperation between different professionals is required, with a mutual understanding of the various perspectives of the city. An architect, a lawyer, a financier, a transport engineer, an informatician, a politician, an energy specialist, or a sociologist each view the smart city differently. The primary objective of the whole community in the field of smart cities should be to create a communication platform for the cooperation of all relevant areas.

It is possible to apply the smart city concept towards defined goals or generic targets. The defined goals are optimization of energy consumption, improvement of air quality, reduction in noise levels, regulation of the transportation systems, etc. Alternatively, the generic targets support the identity of a given place and its urbanistic structure, that is, its own historic, cultural, ecological, or esthetic essence. The knowledge obtained can also be applied to territorial units of other sizes, such as smart villages or smart regions. The meaning of the term "smart", therefore, is to be seen in a balanced relationship between man and technical systems. Smart solutions must make cities more humane and not only technologically advanced. To solve the problem of smart cities, interdisciplinary
teams, including experts in the humanities (sociology, psychology, the environment, etc.), naturally arise.

Currently, many methodologies are used to evaluate city smartness: the smart city index. It involves evaluating the degree of digitization of different processes; evaluating subsets of the functional subsystems such as mobility, energy, and security; or assessing information links between smart service operators and their users. Based on the different methodologies, an annual assessment of the best smart cities takes place at the global level.

The key factor in all smart solutions is the human being. As part of the smart city concept, we must therefore talk about the human-machine interface (HMI) between technical systems and humans, which should be intuitive and easy to understand for all categories of the city's population (children, disabled people, etc.). Interaction with the human factor can be verified using different types of simulators on a selected sample of inhabitants, and these systems can be designed to be as user-friendly as possible. Social networks are an example of a new type of communication interface that ensures effective interconnection between different population groups. Using targeted information, we can influence behavioral patterns and improve bidirectional communication between city management and citizens. Because every citizen perceives the quality of life differently, it must remain possible to live without using new technologies to communicate with the municipality and government agencies.

The perspective for the energy network nodes (such as buildings, means of transport, smart stops, and lampposts) is no longer only that of appliances; some nodes can also be energy sources (solar panels) or energy storages (batteries). From this point of view, the flows in the energy network are not one-directional but bidirectional. The smart energy network (smart grid) must be enhanced by an information network, with algorithms that optimize energy streams in the given territory. Some energy nodes are smart buildings and households using information and communication technologies to obtain knowledge about their operations. This information is combined with external knowledge, such as weather forecasts and buildings in a similar situation. Smart buildings are already designed using digital technologies and parametric programs like BIM (Building Information Modeling) or VDC (Virtual Design and Construction). Ideally, these models will be used for advanced facility and property management. Specific types of intelligent buildings are created within smart factories (Industry 4.0) or as part of urban agriculture (Agriculture 4.0), for example, objects of the so-called smart vertical farm. Individual BIM models can be further integrated into a single land model and create a knowledge base for streets, districts, and the entire city: CIM (City Information Modeling). For the complete knowledge base of the city, data and information from other areas, such as transport, environment, security, water and waste management, and health care, must of course be provided. Only by integrating them can we talk about the knowledge platform (ontology model) of a smart city.

Future developments will require completely new approaches to smart mobility, which will gradually become a service (MaaS, Mobility-as-a-Service) with specific guaranteed parameters, as is the case today for telecom operators. A new phenomenon is also city logistics
that should respond to the actual online demand. Autonomous vehicles using artificial intelligence (AI) algorithms to optimize the serviceability and logistics of the entire territory are a great challenge for the future of smart cities. It is not just about transporting people and goods, but also about robotic vehicles designed for street cleaning, snow removal, and other activities that people are doing today. These vehicles will, of course, use alternative fuels, probably electricity.

Smart City Management uses a variety of sensors, starting with physical detectors and cameras and ending with satellite pictures (weather prediction, city temperature maps, emission maps). It should be noted that even a vehicle or a mobile phone in this concept becomes an intelligent sensor providing important data. The public lighting infrastructure can accommodate sensors to ensure the availability of telecommunication services throughout the city.

New technologies enable better project management and public participation in urban development. New forms of participatory methods of citizen engagement are being promoted. The presentation of variant solutions can be shown using various advanced visualization tools like virtual or augmented reality (VR, AR), for example, simulation models, where the advantages and disadvantages of individual variants can be traced so that more qualified decisions can be made at the city management level.

From a technical point of view, the Internet of Things (IoT), Internet of People (IoP), Internet of Energy (IoE), Industrial Internet of Things (IIoT), or Internet of Services (IoS) is used for common communication. From a theoretical point of view, a Cyber-Physical System (CPS) or, in the case of a smart city, a Social-Cyber-Physical System (S-CPS) is created. From a complex system's point of view, the following features should be fulfilled [65]:

• Interoperability: The city has interconnected particular parts of its S-CPS, such as buildings; engineering networks; the integrated transport system; public space elements such as street lighting, sorted waste containers, schools, hospitals, and accommodation complexes; water and energy supply infrastructure; citizen social networks; stores; or logistic supply centers.
• Virtualization: The city as a system (or system of systems) will be visually mirrored in its virtual copy (Twin City Model). It is possible to follow the physical processes of the city thanks to sensors, which are directly interconnected with this virtual model.
• Decentralization: The possible decentralization of management enables the adoption of independent decisions in terms of the city and creates alliances with intelligent components to solve various situations and, at the same time, to escalate the resilience of the whole system.
• Decision-making in near real-time: The city management systems will be able to make decisions based on the obtained data and its immediate analysis in near real-time.
• Smart service orientation: In terms of the interaction between specific city parts, citizens, visitors, and businesses, the active service business product is offered by means of a service network. The smart service providers are focused on CX (Client eXperience) with the intention to predict the needs of the future client.
• Modularity: The flexible adaptation of the city to change as directed by the client will be solved by the interchange of unsatisfactory modules or by adding new modules, which meet new requirements.
Large-scale supercomputers, including cloud computing, are increasingly used to process large volumes of data (Big Data). City management, thanks to current data, moves from the original predefined dynamic plans to adaptive control algorithms that ensure the coordination of entire territorial units. Different simulation tools are used to validate individual strategies. In virtual space, it is much easier to model responses to different types of extraordinary events. Verified strategies can then be projected into real-estate management through actuators, which may be both physical infrastructure facilities and navigation or assistance services, and the prospective operation of autonomous systems such as unmanned vehicles.

With regard to the above remarks, the interconnected subsystems will have an impact on all the processes of the economy, so we are talking about Society 4.0 or Thinking 4.0. This is actually the (fourth) revolution, not just an evolution. Current technologies allow the emergence of new business models based on mutual sharing, such as bike-sharing or car-sharing. However, it is also possible to use well-proven business models. A Public-Private Partnership (PPP) is an arrangement in which a public service is funded and operated through a partnership between a public organization and one or more private companies. For some forms of PPP (sometimes referred to as a private finance initiative), the required capital is provided by a private investor on the basis of a contract with the contracting authority. This private investor, the concessionaire, provides the requested public service for a contractual term on the basis of the concession contract. The contracting authority gradually pays for this service, and thus for the investment, taking into account its quality (e.g., in the case of unpaid transport infrastructure), or it grants the private partner the right to collect payments for the provision of the service directly from users (e.g., for toll infrastructure). The Energy Performance Contracting (EPC) method is based on financing energy investment projects, that is, facilities for the supply and utilization of energy (usually heat and electricity) in buildings and other objects, from the energy savings achieved. In this case, the costs associated with the implementation of the project are on the contractor's side, who also bears full responsibility for the appropriate choice of technology, delivery, and subsequent operation.

A major phenomenon of smart cities is the gradual strengthening of their resilience against various natural disasters and terrorist attacks, but also against cyberattacks or blackouts. New technologies allow for improved prevention methods based on a better understanding of the individual processes in the city, as well as more advanced interventions in the event of these emergencies occurring. In practical terms, demand agents and resource agents, which can negotiate among themselves, represent all the requirements and resources needed. Such a holistic model plays the role of a "dynamical digital resource-demand market place" with limited time-varying resources. Due to this quality, the Smart Resilient City (SRC) is a new vision of a city as a digital platform and ecosystem of smart services where agents of people, things, documents, robots, and other entities can directly negotiate with each other on resource-demand principles, providing the best solutions possible. It creates a smart environment enabling the self-organization of individuals, groups, and the objectives of the whole system in a sustainable or, if needed, resilient way.
The SRC must be designed with respect to its graceful degradation, in which some of its parts are disrupted in predefined steps. The system will lose some of its functionality, but it will be reconfigured to keep its most important functions. An example of this may be the transport system where, in the event of a failure of the central control system, subnodes are controlled autonomously. Even if these do not function, drivers still have to follow the road traffic signs. This is not an optimal way of managing traffic, but the flow of traffic does not stop. After central control is restored, the system goes back to its original configuration. Similarly, it is possible to proceed to the design of a comprehensive smart city management system combining all scenarios for all the different types of foreseeable degradation.

It is clear that the concept of smart resilient cities is an interdisciplinary issue that presents a new way of managing the city with the help of all available knowledge and technologies that were unimaginable until recent times. In this concept, it is possible to use models of different situations that can occur in the city, analyze them, and ensure the implementation of the best response with respect to any given criteria. In general, the interconnection of subcomponents brings new and unanticipated benefits into the field of information physics. Only future development will show the full importance of the sustainable development of urban agglomerations. Generally, technology can be bought, but the smart city system cannot; it has to be built over years, taking into account the specifics of the territory, its history, cultural traditions, economic possibilities, etc. It is by no means a quick short-term project but a long activity lasting several decades, where information physics can significantly help.
9.3 Intelligent transport systems

Telematics is a result of the convergence and subsequent progressive synthesis of telecommunication technology and informatics [25]. The effects of telematics are based on the synergy of both disciplines and can be taken as an application of information physics. Telematics can be found in a wide spectrum of user areas, from individual multimedia communications to the intelligent use and management of large-scale networks (e.g., transport, telecommunications, public service). Transport Telematics/Intelligent Transport Systems (ITS) connect information and telecommunication technologies with transport engineering to achieve better management of transport, travel, and forwarding processes by using the existing transport infrastructure. The processes are defined by chaining system components through information links. The chains of functions (processes) are mapped onto physical subsystems or modules, and information flows between functions specify the communication links between subsystems or modules. The groupings of functions yield the definition of "market packages", taking into account the market availability of modules/applications.
Generally, the ITS architecture is presented as a system abstraction, which was designed to create a uniform national or international ITS development and implementation environment. In other words, it should give us a direction for producing interoperable physical interfaces, system application parts, data connections, etc. The ITS architecture reflects several aspects of the examined complex system and therefore can be differentiated as [25]:

• Reference architecture—defines the main actors/terminators of the ITS (the reference architecture yields the definition of the boundary between the ITS system and its environment),
• Functional architecture—defines the structure and hierarchy of ITS functions (the functional architecture yields the definition of the functionality of the whole ITS system),
• Information architecture—defines information links between functions and actors/terminators (the goal of the information architecture is to provide cohesion between the different functions),
• Physical architecture—defines the physical subsystems and modules (the physical architecture can be adapted according to user requirements, e.g., legislative rules and organization structure),
• Communication architecture—defines the telecommunication links between physical devices (a correctly selected communication architecture optimizes telecommunication tools),
• Organization architecture—specifies the competencies of single management levels (a correctly selected organization architecture optimizes management and competencies at all management levels).

The ITS architecture covers the following "macro-functions":

• Provides electronic payment facilities - toll collection systems based on GNSS/CN (Global Navigation Satellite System/Cellular Network), DSRC (Dedicated Short-Range Communications), etc.
• Provides safety and emergency facilities - emergency call, navigation of rescue services, etc.
• Manages traffic - traffic control, maintenance management, etc.
• Manages public transport operations - active preferences of public transport, etc.
• Provides advanced driver assistance systems - car navigation services, etc.
• Provides traveler journey assistance - personal navigation services, etc.
• Provides support for law enforcement - speed limit monitoring, etc.
• Manages freight and fleet operations - fleet management, monitoring of dangerous goods, etc.
• Supplies archive information - location-based information, etc.

Each market package defines a group of subsystems, actors/terminators, and data links (logical and physical) dedicated to covering the functions coming directly from these elements. Basic market package sets are as follows:

• Transport management
• Management of integrated and safety systems
• Traffic information
• Public transport
• Commercial vehicles' management
• Data management and archiving
• Advanced vehicle safety systems
The ITS designer must also take into account the economic aspects. Strong emphasis is placed on the definition of ITS effectiveness, because cost is an essential issue. Therefore, the internationally reputable methodology of Cost-Benefit Analysis (CBA) is chosen and connected with the effectiveness definition, so the effectiveness values are represented by, for example, the net present value, internal rate of return, and payoff period.
9.4 Ontology and multiagent technologies

The creation of an ontology as the knowledge base at the top requires the formalization of all aspects of the knowledge; it enables simple access to this knowledge for all the different services and supports interactions within the digital platforms and ecosystems. Ontology seems to be one way of overcoming interoperability problems; it is a basic building block for the Semantic Web, usually in the form of the RDF (Resource Description Framework) Schema or OWL (Ontology Web Language). The basic principle of creating a new ontology is using an incremental approach. It should start from the basic generic components (streets, roads, buildings, citizens, etc.), subsequently injecting more complicated cases. Those objects can have a detailed description—a building can be a restaurant, a business center, or a residential building with many other properties stored as attributes (number of floors, date of construction, etc.). The ontology can also describe people, their activities or requests, requirements, or business processes.

When describing a system structure, one should first determine the hierarchy of each layer. Such a hierarchy is described as a pyramid (Fig. 9.2). When dealing with topography, we consider using a three-dimensional mapping system of the proposed area. Having detailed and precise maps is imperative to achieve any advance in traffic organization and management, energy savings, a proper communication infrastructure, or additional optimizations. This would include records of every part of the infrastructures involved, with a detailed description of their parameters and of how they could be maintained. This leads to an optimal overview of how the system can be managed long term and allows for the creation of models optimizing the "as is" and "to be" situations. We consider a data platform or a data communication network as a vital step prior to describing the system and its components. The possibilities of the current data infrastructure and its ability for further development/modernization greatly influence which technologies and parts of the systems should be included in the greater system. Different ontology layers can be created (more specific ontologies: energy, ecology, transport, security, etc.). But when creating the new ontology, there is a need to design and consider the "primary data source", rather than collecting knowledge from existing ontologies.
FIGURE 9.2 Hierarchy of system layers.
A system can be considered "smart" only by providing competition and/or cooperation between services on a platform level [76]. The use of ontologies is the most promising method of ensuring semantic interoperability (SI) as a key factor for those systems which are required to exchange or understand information through a shared meaning. Building SI-based networks is a way to help services communicate with each other and to act on good information.

An agent is an entity which perceives its environment and acts upon it. The theory of multiagent systems (MAS) states that a multiagent system consists of multiple agents, which interact within the environment. Each agent is capable of autonomous action to achieve its delegated objective. There are several different architectures that can be used for the implementation of agents within MAS; the most typical are logic-based architectures, reactive architectures, belief-desire-intention (BDI) architectures, or layered architectures [77]. Generic BDI reasoning is based on the definition of the following key terminologies:

• Beliefs—represent the agent's perception of the environment, that is, what it believes about the world,
• Desires—represent the motivations of the agent, that is, objectives that the agent would like to accomplish,
• Intentions—represent what the agent has chosen to do, that is, desires to which the agent has committed.

In the design process, it is possible to use a comprehensive management system with scenarios for the different types of degradation modes (a toy sketch of such a mode-switching agent follows the list below):

1. Forecasting mode—forward-thinking simulations, forecasting, what-if analysis;
2. Normal mode—real-time resource allocation, planning, optimization, monitoring, and control under given global and local criteria while simultaneously looking for the best service quality;
3. Critical mode—reacting to global disruptive events or critical priorities to guarantee resiliency, antifragility, or graceful degradation, together with the fastest possible return to the normal operation mode;
4. Postcritical mode—the process of restoring the system to an acceptable level, which may be better or worse than the original.

For managing this digital ecosystem, the managers only need to adjust the set of city Key Performance Indicators (KPIs) that will be considered by the agents in their decision-making.
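The following is a minimal, hedged sketch of how a BDI-style agent could map perceived KPI values (its beliefs) and KPI targets (its desires) to one of the degradation modes listed above. The KPI names, thresholds, and mode-selection rules are invented for illustration; a real deployment would derive them from the city's actual KPI set.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Mode(Enum):
    FORECASTING = auto()
    NORMAL = auto()
    CRITICAL = auto()
    POSTCRITICAL = auto()

@dataclass
class Agent:
    """BDI-style skeleton: beliefs are perceived KPI values, desires are
    KPI targets, and the intention is the currently committed mode.
    Thresholds below are illustrative assumptions."""
    beliefs: dict = field(default_factory=dict)   # e.g. {"congestion": 0.4}
    desires: dict = field(default_factory=dict)   # e.g. {"congestion": 0.5}
    intention: Mode = Mode.NORMAL

    def perceive(self, kpis: dict) -> None:
        self.beliefs.update(kpis)

    def deliberate(self) -> Mode:
        # Commit to CRITICAL when some KPI badly exceeds its target;
        # fall back through POSTCRITICAL while the KPI recovers.
        worst = max(self.beliefs[k] / self.desires[k] for k in self.desires)
        if worst > 2.0:
            self.intention = Mode.CRITICAL
        elif worst > 1.0:
            self.intention = (Mode.POSTCRITICAL
                              if self.intention is Mode.CRITICAL else Mode.NORMAL)
        else:
            self.intention = Mode.NORMAL
        return self.intention

agent = Agent(desires={"congestion": 0.5, "emissions": 1.0})
agent.perceive({"congestion": 1.2, "emissions": 0.8})
print(agent.deliberate())   # Mode.CRITICAL (congestion ratio 2.4)
```

Adjusting the desires dictionary is the code-level counterpart of the managers tuning the city KPIs: the agents' decision rules stay fixed while their targets change.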
10 Conclusion

In this publication, a new theory of complex systems has been presented by using analogies with physics—electric and magnetic circuits, quantum mechanics, etc. The achieved results demonstrate the feasibility of these analogies for tackling difficult complex system models. The main idea is that we recognize the information flow and the information content as the central quantities of informatics, much as electric current and voltage are used in electrical engineering. With this idea, the known mathematical instruments of electric and magnetic circuit modeling can be applied to the modeling of complex information systems. Such an approach leads us to a better definition of an event's source that includes both an information content (the quality of the information source) and an information flow (how the information source is distributed). Another result is the clear separation between an information source and an information recipient. The recipient is sometimes not able to process all of the transmitted information and uses only a part of it. The presented model can yield more complicated information circuits.

Other parts of the publication explain the quantum approach to system theory, which can be understood as an extension of classical system models. The main idea is that we have at our disposal a lot of incomplete pieces of overlapping information that must be composed together to find the best consistent model of the whole. The incomplete information can be understood as a set of nonexclusive observers' results, each of which represents reality through its limited capabilities. Because they are nonexclusive, the observers register different pictures of reality.

Quantum models with information pieces can be interconnected in serial or parallel feedback ordering. They can also be time varying, with a lot of interesting features such as entanglement, emergence, or self-organization. The general opinion is that such a mathematical instrument can be applied to many fields, including the human sciences. In quantum physics, moduli typically represent energy. In our approach, the mathematical instrument of wave probabilities has, therefore, much broader applications than in physics. In system sciences, for example, we can evaluate with quantum models other system features such as the ability to create alliances and the ability to adapt (the quickest response to changes).

The presented methodology can be further enlarged not only to entanglement among pure events and events' functions, but also to different system processes and their functions. If we imagine the complexity of such a model [60], we are coming closer to Kauffman's idea [6] of a network representation of a complex system.

We can also use quantum models for system optimization, both on the side of energy consumption and on the side of possible system benefits. When used to minimize energy consumption, the system tries to have as many as possible of the negative links that characterize
the maximal sharing of cost items. Alternatively, profit maximization means tuning the positive links to bring more synergetic benefits. Taking into consideration minimal consumption and maximization of profits, we can define a new cost–benefit analysis of complex systems, similar to the presented examples of the smart resilient city or transport telematics.

The future will very likely bring a convergence of the physical sciences, life sciences, and engineering. I would even allow myself to go a bit further and consider a convergence with the humanities, because I am convinced that the laws of behavior of human society described, for example, in sociology or political science will become better understood when using the tools of information physics. The quantum system approach can capture Soft Systems Models, which can bring a new quality of understanding to complex systems. Such an approach can enrich our learning, and we can then speak about quantum cybernetics [21] or quantum system theory [26].

We have also enlarged quantum informatics and introduced positive or negative strengths of components [75] acting on their surroundings. A component may be improbable but have a great positive effect (it leads to a better organization of its surroundings) or a great negative effect (it makes for chaos in its surroundings). If the signs are different, the positive and negative effects cancel each other out. We interpret this situation as a mutual attraction. With the same signs, the positive or negative effects add up according to the components' arrangements.

The presented work should not be treated as finished but as the beginning of a journey. It is easy to understand that a lot of the mentioned theoretical approaches should be continued and tested in practical applications.
Appendix A Mathematical supplement

A1 Schrödinger wave function

Let us present the complex wave function (a plane wave model) given by:

$$\psi = A\, e^{j(kx-\omega t)} \tag{A1.1}$$

with its first and second derivatives with respect to x:

$$\frac{d\psi}{dx} = j\,k\,A\,e^{j(kx-\omega t)} \tag{A1.2}$$

$$\frac{d^2\psi}{dx^2} = -k^2 A\, e^{j(kx-\omega t)} = -k^2\,\psi \tag{A1.3}$$

If we differentiate ψ with respect to time t, we find:

$$\frac{d\psi}{dt} = -j\,\omega\,A\,e^{j(kx-\omega t)} = -j\,\omega\,\psi \tag{A1.4}$$

The operators $\frac{d}{dx}, \frac{d^2}{dx^2}, \ldots$ and $\frac{d}{dt}, \frac{d^2}{dt^2}, \ldots$ operating on ψ represent eigenvalue equations because, after operating on the function ψ, they return the original function ψ multiplied by some constant factor. If we take into account the Max Planck equation from Ref. [49] (ω is the frequency of radiation):

$$E = \hbar\,\omega \tag{A1.5}$$

and multiply Eq. (A1.4) by the parameter ħ, we can modify it as:

$$j\hbar\,\frac{d\psi}{dt} = \hbar\,\omega\,\psi \tag{A1.6}$$

If we identify $j\hbar\frac{d}{dt}$ as the energy operator $\tilde{E}$, then when it operates on ψ, the result we get back is the energy eigenvalue of the wave. From wave mechanics [49], we know that

$$p = \hbar\,k \tag{A1.7}$$

where p is momentum and k is the wave number. It is easy to rewrite $k = p/\hbar$. Substituting this into (A1.3), we have:

$$\frac{d^2\psi}{dx^2} = -\frac{p^2}{\hbar^2}\,\psi \tag{A1.8}$$

or, by rewriting:

$$-\hbar^2\,\frac{d^2\psi}{dx^2} = p^2\,\psi \tag{A1.9}$$

where the operator $\tilde{p}^2$ is defined:

$$\tilde{p}\,\tilde{p} = -\hbar^2\,\frac{d^2}{dx^2} \tag{A1.10}$$

From (A1.10), it is evident that the momentum operator $\tilde{p}$ can be found:

$$\tilde{p} = \pm\, j\hbar\,\frac{d}{dx} \tag{A1.11}$$

If we define the momentum operator (A1.11) as the negative one, then the momentum eigenvalue is positive. By multiplying both sides of (A1.9) by $\frac{1}{2m}$, we get:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = \frac{p^2}{2m}\,\psi \tag{A1.12}$$

In mechanics, energy can be defined as $E = \frac{p^2}{2m}$, and then the alternative expression of the energy operator can be written in the form:

$$\tilde{E} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} \tag{A1.13}$$

We can also write the complete Schrödinger equation for nonzero potential in this form:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V\,\psi = \tilde{E}\,\psi \tag{A1.14}$$

or we can often see Eq. (A1.14) in an operator form:

$$\tilde{H}\,\psi = \tilde{E}\,\psi \tag{A1.15}$$

where the Hamilton operator

$$\tilde{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V \tag{A1.16}$$

and the energy operator

$$\tilde{E} = j\hbar\,\frac{d}{dt} \tag{A1.17}$$

are used.

One of the interpretations of the quantum wave function can be found in Ref. [47]; the main idea of this approach comes from the complementarity principle of position–momentum distributions [4] that can be found in the same wave function ψ(x). The momentum wave function can be computed as:

$$\psi(p) = \frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\psi(x)\,e^{-\frac{jxp}{\hbar}}\,dx \tag{A1.18}$$

From the Schrödinger equation, the local conservation law of probability can be expressed in the form of a continuity equation:

$$\frac{\partial\rho(\vec r,t)}{\partial t} + \nabla\cdot\vec J(\vec r,t) = 0 \tag{A1.19}$$

where $\vec J$ is the probability current, defined by:

$$\vec J(\vec r,t) = \frac{\hbar}{2mj}\left[\psi^*(\nabla\psi) - \psi(\nabla\psi^*)\right] = \rho(\vec r,t)\,\mathrm{grad}\,\frac{\hbar\,\Phi(\vec r,t)}{m} \tag{A1.20}$$

From (A1.20), it is apparent that there is no probability current if the phase function Φ does not depend on the space parameter r. It can also be said that the probability current is caused by the phase parameter Φ of the wave function, and it represents the time change of probability density (the flow of probability density).
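As a numerical illustration of the eigenvalue property behind (A1.7) and (A1.11), the sketch below applies the momentum operator −jħ d/dx to a sampled plane wave and checks that the returned eigenvalue is ħk. The wave parameters and grid are arbitrary choices.

```python
import numpy as np

hbar = 1.0545718e-34              # J*s
k, omega, A = 2.0e6, 3.0e9, 1.0   # illustrative wave parameters
x = np.linspace(0.0, 1e-5, 20001)

psi = A * np.exp(1j * (k * x - omega * 0.0))   # snapshot of (A1.1) at t = 0

# Momentum operator p = -j*hbar*d/dx applied on the sampled grid
p_psi = -1j * hbar * np.gradient(psi, x)

# Eigenvalue check: p_psi / psi should equal hbar*k everywhere, as in (A1.7)
ratio = (p_psi / psi).real
print(np.allclose(ratio[1:-1], hbar * k, rtol=1e-6))   # True (interior points)
```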
A2 Bohmian interpretation of wave functions

Information events are incorporated in the modulus of the wave function R, and the information current is described by its phase function Φ. The time and space distribution of potentials, together with the mass of the medium carrying information (e.g., a particle), are the main parts of the Schrödinger wave equation (for simplicity, we assume one dimension along the x-axis):

$$j\hbar\,\frac{\partial\psi(t,x)}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\psi(t,x)}{\partial x^2} + V(x)\,\psi(t,x) \tag{A2.1}$$

It can be said that all physical parameters—energy, mass, and information—are bound by the Schrödinger equation, and all the consequences, such as information current, entropy, etc., come from this equation.

Let us use the form:

$$\psi(t,x) = R(t,x)\,e^{\frac{j\Phi(t,x)}{\hbar}} \tag{A2.2}$$

to provide its first and second derivatives:

$$j\hbar\,\frac{\partial\psi(t,x)}{\partial t} = j\hbar\,\frac{\partial R(t,x)}{\partial t}\,e^{\frac{j\Phi(t,x)}{\hbar}} - R(t,x)\,\frac{\partial\Phi(t,x)}{\partial t}\,e^{\frac{j\Phi(t,x)}{\hbar}} \tag{A2.3}$$

$$\frac{\partial^2\psi(t,x)}{\partial x^2} = \left[\frac{\partial^2 R(t,x)}{\partial x^2} + \frac{2j}{\hbar}\,\frac{\partial R(t,x)}{\partial x}\frac{\partial\Phi(t,x)}{\partial x} + \frac{j}{\hbar}\,R(t,x)\,\frac{\partial^2\Phi(t,x)}{\partial x^2} - \frac{R(t,x)}{\hbar^2}\left(\frac{\partial\Phi(t,x)}{\partial x}\right)^2\right] e^{\frac{j\Phi(t,x)}{\hbar}} \tag{A2.4}$$

From (A2.1), (A2.3), and (A2.4), we obtain two differential equations:

$$\frac{\partial R(t,x)}{\partial t} = \frac{-1}{2m}\left[2\,\frac{\partial R(t,x)}{\partial x}\frac{\partial\Phi(t,x)}{\partial x} + R(t,x)\,\frac{\partial^2\Phi(t,x)}{\partial x^2}\right] \tag{A2.5}$$

$$-R(t,x)\,\frac{\partial\Phi(t,x)}{\partial t} = -\frac{\hbar^2}{2m}\left(\frac{\partial^2 R(t,x)}{\partial x^2} - \frac{R(t,x)}{\hbar^2}\left(\frac{\partial\Phi(t,x)}{\partial x}\right)^2\right) + V(x)\,R(t,x) \tag{A2.6}$$

By manipulating Eqs. (A2.5) and (A2.6), we have the two following results [48]:

$$\frac{\partial R(t,x)^2}{\partial t} + \frac{1}{m}\,\frac{\partial}{\partial x}\left(R(t,x)^2\,\frac{\partial\Phi(t,x)}{\partial x}\right) = 0 \tag{A2.7}$$

$$\frac{\partial\Phi(t,x)}{\partial t} + \frac{1}{2m}\left(\frac{\partial\Phi(t,x)}{\partial x}\right)^2 + V(x) - \frac{\hbar^2}{2m\,R(t,x)}\frac{\partial^2 R(t,x)}{\partial x^2} = 0 \tag{A2.8}$$

We remark that Eq. (A2.7) can be understood as the evolution of the probability function. If one uses the probabilistic interpretation of the wave function, then

$$\rho(t,x) = R(t,x)^2 = |\psi(t,x)|^2 \tag{A2.9}$$

gives the probability. Eq. (A2.8) converges to the classical Hamilton–Jacobi equation if the term

$$\frac{\hbar^2}{2m} \ll 1 \tag{A2.10}$$

is neglected.

Bohm [47] interpreted Eq. (A2.8) so that the classical potential V(x) is perturbed by an additional "quantum potential":

$$U(t,x) = \frac{\hbar^2}{2m\,R(t,x)}\,\frac{\partial^2 R(t,x)}{\partial x^2} \tag{A2.11}$$

From (A2.11), it is apparent that the quantum potential U(t,x) is itself driven by the field of the Schrödinger Eq. (A2.1). The force created by the quantum potential is given by:

$$g(t,x) = \frac{\partial U(t,x)}{\partial x} \tag{A2.12}$$

Let us analyze Eq. (A2.11) mathematically. An increase in amplitude does not necessarily increase the quantum potential energy because the amplitude appears in the denominator. The second spatial derivative in the numerator indicates that the shape or form of the wave is more important than its magnitude. The quantum (pilot) field [48] is nonmechanical, or organic, with no preassigned interactions between the parts. An interesting aspect of the Bohmian interpretation of quantum mechanics is the fact that particles are connected through a wavefield over large distances (the presented interpretation is not local). The wavefield reflects the image of "space" ordering; it has no external source, and it is some form of internal energy, split off from the kinetic energy.

There are many examples in the physics literature of similarities between Schrödinger's quantum mechanics and fluid dynamics [48]. It was discovered that a certain complex mapping converts the two-dimensional time-dependent Schrödinger equation exactly into the format of a viscous compressible Navier–Stokes fluid. In this interpretation, the square of the wave function is understood as the distribution of vorticity in a viscous fluid and not as the position probability.
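The following sketch evaluates the quantum potential (A2.11) numerically for an assumed Gaussian amplitude R(x) (with ħ = m = 1) and compares it against the closed-form value, illustrating that U depends on the shape of R rather than on its magnitude.

```python
import numpy as np

# Quantum potential U = (hbar^2 / (2 m R)) d^2R/dx^2 for an assumed
# Gaussian amplitude R(x); units set to hbar = m = 1 for illustration.
hbar, m, sigma = 1.0, 1.0, 1.0
x = np.linspace(-5, 5, 2001)
R = np.exp(-x**2 / (4 * sigma**2))          # assumed amplitude profile

d2R = np.gradient(np.gradient(R, x), x)     # second spatial derivative
U = hbar**2 * d2R / (2 * m * R)

# Analytic value for this Gaussian: U = (x^2/(4 sigma^2) - 1/2) * hbar^2/(2 m sigma^2)
U_exact = (x**2 / (4 * sigma**2) - 0.5) * hbar**2 / (2 * m * sigma**2)
print(np.max(np.abs(U[50:-50] - U_exact[50:-50])) < 1e-3)   # True
```

Note that multiplying R by any constant leaves U unchanged, which is the numerical counterpart of the remark that the amplitude cancels between numerator and denominator.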
A3 Gnostic theory

The wave probabilities have some similarities with the gnostic theory [42], which presents an ideal data quantification as:

$$Z = Z_0\,e^{S\Phi} \tag{A3.1}$$

The additive form of a possible data model is supposed to be

$$A = A_0 + S\,\Phi \tag{A3.2}$$

where $A_0$ represents ideal data, S is a scale parameter (for S = 1, we speak about a unified scale), and Φ is an uncertain parameter. The unique gnostic transformation of uncertain data is given as:

$$x = Z_0\,\cosh(\Phi), \qquad y = Z_0\,\sinh(\Phi) \tag{A3.3}$$

where $Z_0 = e^{A_0}$ is the transformed ideal data under the following invariance (Minkowskian geometry):

$$Z_0^2 = x^2 - y^2 \tag{A3.4}$$

for each fixed $Z_0 \in \mathbb{R}^+$ and for all real values of the uncertain parameter. Due to the geometrical equations:

$$\cosh(\Phi_1+\Phi_2) = \cosh(\Phi_1)\cosh(\Phi_2) + \sinh(\Phi_1)\sinh(\Phi_2) \tag{A3.5}$$

$$\sinh(\Phi_1+\Phi_2) = \cosh(\Phi_1)\sinh(\Phi_2) + \sinh(\Phi_1)\cosh(\Phi_2) \tag{A3.6}$$

the transformation (A3.3) is unchanged even though we rename (exchange) the coordinates x and y. The estimation channel is the Euclidean orthogonal rotation of the gnostic event, which is a process dual to quantification [42]. The gnostic theory describes both quantification (Q-) and estimation (E-) characteristics of uncertainty (data weights and irrelevances) represented in the Minkowskian and Euclidean geometries. Data weights quantify data quality, and irrelevances measure data errors using certain Riemannian nonlinear geometries. This nonlinearity gives the E-characteristics robustness with respect to outliers and the Q-characteristics robustness with respect to inliers.
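A short numerical check of the Minkowskian invariance (A3.4): whatever the value of the uncertain parameter Φ, the quantified pair (x, y) from (A3.3) keeps x² − y² at Z₀². The values of A₀ and S below are arbitrary.

```python
import numpy as np

# Gnostic quantification (A3.3) and its Minkowskian invariant (A3.4).
A0 = 1.2                           # illustrative ideal data (S = 1, unified scale)
Z0 = np.exp(A0)
Phi = np.linspace(-2.0, 2.0, 9)    # a sweep of uncertainty values

x = Z0 * np.cosh(Phi)
y = Z0 * np.sinh(Phi)

# x^2 - y^2 must stay at Z0^2 no matter how large the uncertainty is
print(np.allclose(x**2 - y**2, Z0**2))   # True
```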
A4 Heisenberg's uncertainty limit

The well-known Fourier transform is defined as:

$$X(\omega) = F[x(t)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} x(t)\,e^{-j\omega t}\,dt, \qquad x(t) = F^{-1}[X(\omega)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} X(\omega)\,e^{j\omega t}\,d\omega \tag{A4.1}$$

In the short-time Fourier transform case, the function to be transformed is multiplied by a window function, which is nonzero for only a short period of time. Mathematically, this is written as:

$$X(\omega,\tau) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} x(t)\,w(t-\tau)\,e^{-j\omega t}\,dt \tag{A4.2}$$

where w(t) is the window function and x(t) is the signal to be transformed. X(τ,ω) is essentially the Fourier transform of x(t)w(t−τ), a complex function representing the phase and magnitude of the signal over time and frequency. Let us suppose the rectangle signal defined in a time interval $\langle -\frac{\tau}{2}, \frac{\tau}{2}\rangle$ as follows:

$$x(t) = A\,\mathrm{rect}\!\left(\frac{t}{\tau}\right) \tag{A4.3}$$
The frequency spectrum of this pulse signal consists of the main frequency part (main lobe) 2π given in the frequency interval h2 2π τ ; τ i and other frequency parts (side lobes), which are not important for our discussion. Mathematically we can write the spectrum of (A4.3) as: X ðωÞ 5 A τ
sin ω τ2 ω τ2
(A4.4)
The time and frequency representation of the rectangle signal for τ 5 2 is shown in Fig. A4.1. Comparison of time and frequency intervals introduces the well-known timefrequency uncertainty limit, which states that the resolution in the time and frequency domain cannot be achieved simultaneously. In the case that τ is smaller (better resolution in the time domain), the frequency range 2π τ is higher (worse resolution in the frequency domain). This principle is known as Heisenberg’s uncertainty limit [4]. It is also known that time and frequency representations are bound by the Fourier transform (A4.1), which is a nonlocal global transformation, where basic elements are monochromatic harmonic plane waves (infinite both in time and space). From the uncertainty principle, it is difficult to recognize the properties of x ðtÞ from the properties of X ðωÞ. The product of the duration Δt and the bandwidth Δω is bounded by uncertainty Δt=Δω $ 1=4π. The inspiration of the method of how to exceed Heisenberg’s limit comes from quantum physics, where the entangled two-photon states exhibited both momentummomentum and positionposition correlations [63]. Let us take the sum and difference in the Dirac pulses δðt1 1 t2 Þ and δðt1 2 t2 Þ in twodimensional time domain ðt1 ; t2 Þ, then the two-dimensional Fourier transform in twodimensional frequency domain ðω1 ; ω2 Þ is given as follows: F½δðt1 2 t2 Þ 5 δðω1 1 ω2 Þ
(A4.5)
F½δðt1 1 t2 Þ 5 δðω1 2 ω2 Þ
(A4.6)
FIGURE A4.1 Rectangle signal in time and frequency domain.
Let us define the two-dimensional Fourier transform of δ(t₁−t₂):

$$F[\delta(t_1-t_2)] = \frac{1}{2\pi}\iint \delta(t_1-t_2)\,e^{-j\omega_1 t_1}\,e^{-j\omega_2 t_2}\,dt_1\,dt_2 = \frac{1}{2\pi}\cdot\frac{1}{2}\int e^{-j\frac{(\omega_1+\omega_2)(t_1+t_2)}{2}}\,d(t_1+t_2)\int \delta(t_1-t_2)\,e^{-j\frac{(\omega_1-\omega_2)(t_1-t_2)}{2}}\,d(t_1-t_2) = \delta(\omega_1+\omega_2) \tag{A4.7}$$

The backward Fourier transform fulfills the following form:

$$F^{-1}[\delta(\omega_1+\omega_2)] = \frac{1}{2\pi}\iint \delta(\omega_1+\omega_2)\,e^{j\omega_1 t_1}\,e^{j\omega_2 t_2}\,d\omega_1\,d\omega_2 = \frac{1}{2\pi}\cdot\frac{1}{2}\int \delta(\omega_1+\omega_2)\,e^{j\frac{(\omega_1+\omega_2)(t_1+t_2)}{2}}\,d(\omega_1+\omega_2)\int e^{j\frac{(\omega_1-\omega_2)(t_1-t_2)}{2}}\,d(\omega_1-\omega_2) = \delta(t_1-t_2) \tag{A4.8}$$

This principle yields the following consequences:

1. The conjugate variables are:
$$(t_1+t_2) \leftrightarrow (\omega_1+\omega_2) \tag{A4.9}$$
$$(t_1-t_2) \leftrightarrow (\omega_1-\omega_2) \tag{A4.10}$$
2. The nonconjugate variables are:
$$(t_1+t_2) \leftrightarrow (\omega_1-\omega_2) \tag{A4.11}$$
$$(t_1-t_2) \leftrightarrow (\omega_1+\omega_2) \tag{A4.12}$$
3. The nonconjugate variables (A4.11, A4.12) are not limited by Heisenberg's uncertainty principle.
4. Entanglement in the signal processing area can be defined as the strict existence of Dirac pulses in both Eqs. (A4.5, A4.6) simultaneously. Eq. (A4.5) is represented by lines in the two-dimensional time and frequency domains. The lines are given by the equations:
$$t_1 - t_2 = 0 \tag{A4.13}$$
$$\omega_1 + \omega_2 = 0 \tag{A4.14}$$
In a similar way, Eq. (A4.6) can be represented by lines given by:
$$t_1 + t_2 = 0 \tag{A4.15}$$
$$\omega_1 - \omega_2 = 0 \tag{A4.16}$$
This approach leads directly to the precise localization of Dirac pulses (as an intersection) in both the two-dimensional time domain $(t_1,t_2)$ and the frequency domain $(\omega_1,\omega_2)$. Due to the precise localization of the Dirac pulses, we can approach arbitrary time and frequency resolutions.
5. A violation of Heisenberg's uncertainty principle is caused by the entanglement principle realized in (extended) two dimensions. Perfect entanglement of two systems yields an ideal resolution in the time and frequency domains (Dirac pulses in both time and frequency domains), and nonperfect entanglement yields a resolution with some uncertainties that can still be better than Heisenberg's limit.
6. The localized Dirac pulses in both time and frequency domains can be used as the ideal window function w(t) in the short-time Fourier transform (A4.2).

Let us suppose that the fixed time resolution is $\langle-\frac{\tau}{2},\frac{\tau}{2}\rangle$. The presented example will try to improve the frequency resolution by adding an inner structure while the time resolution $\langle-\frac{\tau}{2},\frac{\tau}{2}\rangle$ is kept unchanged. The method/algorithm is explained in the frequency domain because it is easier there; the same explanation can, of course, be provided in the time domain while keeping the frequency resolution fixed. We provide right and left frequency shifts of the original pulse spectrum. The right shift is supposed to be up to $+\frac{\pi}{\tau}$, and the left shift is supposed to be up to $-\frac{\pi}{\tau}$. From the original pulse spectrum (A4.4), two new shifted pulse spectra are created:

$$X_1(\omega) = A\,\tau\,\frac{\sin\left(\left(\omega+\frac{\pi}{\tau}\right)\frac{\tau}{2}\right)}{\left(\omega+\frac{\pi}{\tau}\right)\frac{\tau}{2}} \tag{A4.17}$$

$$X_2(\omega) = A\,\tau\,\frac{\sin\left(\left(\omega-\frac{\pi}{\tau}\right)\frac{\tau}{2}\right)}{\left(\omega-\frac{\pi}{\tau}\right)\frac{\tau}{2}} \tag{A4.18}$$

Fig. A4.2 shows these right- and left-shifted functions for τ = 2. The next step of the algorithm yields the sum $X_S(\omega)$ and difference $X_D(\omega)$ functions of the right- and left-shifted spectra as follows:

$$X_S(\omega) = X_1(\omega) + X_2(\omega) \tag{A4.19}$$

$$X_D(\omega) = X_1(\omega) - X_2(\omega) \tag{A4.20}$$

Fig. A4.3 shows both the sum $X_S(\omega)$ and the difference $X_D(\omega)$ functions of the shifted spectra for τ = 2. It is evident that the spectra sum and difference yield a worse frequency resolution. For comparison, the original pulse spectrum X(ω) is also presented in Fig. A4.3. Looking at Fig. A4.3, we can rewrite the sum and difference spectra in complex form using amplitudes and phases:

$$X_S(\omega) = |X_S(\omega)|\,e^{j\varphi(X_S(\omega))} \tag{A4.21}$$

$$X_D(\omega) = |X_D(\omega)|\,e^{j\varphi(X_D(\omega))} \tag{A4.22}$$
FIGURE A4.2 Original (solid), right- (dashed), and left- (dotted) shifted pulse spectrum signals.

FIGURE A4.3 Original (solid), sum (dashed), and difference (dotted) spectra.
The amplitudes in our case represent absolute values. The phase is zero for positive and π for negative values of the spectrum functions $X_S(\omega)$ and $X_D(\omega)$. Let us form the difference between the absolute values of the sum and difference spectra as follows (the computed spectrum):

$$X_H(\omega) = |X_S(\omega)| - |X_D(\omega)| \tag{A4.23}$$
FIGURE A4.4 Original (solid) and new computed (dashed) spectra.
The function $X_H(\omega)$ is shown in Fig. A4.4 for τ = 2 as the new computed spectrum. For comparison, Fig. A4.4 also shows the original pulse spectrum X(ω). We can see in Fig. A4.4 that the presented method improved the frequency resolution (main lobe) from the interval $\langle-\frac{2\pi}{\tau},\frac{2\pi}{\tau}\rangle$ to $\langle-\frac{\pi}{\tau},\frac{\pi}{\tau}\rangle$. This means that the frequency resolution is doubled while the time resolution $\langle-\frac{\tau}{2},\frac{\tau}{2}\rangle$ is kept fixed. The cost paid for this improvement is that it is necessary to double the processing channels (right- and left-shifted spectra). The nonlinear operation (A4.23) can be interpreted as nonperfect entanglement. The two channels (A4.21, A4.22) are mixed through irregular mathematical operations because we applied the operation (A4.23) only to the moduli (absolute values); the phase functions were set to zero. Such an operation can be understood as the basis for entanglement. The procedure can be repeated again and again. In the second run of the algorithm, the doubly precise pulse spectrum (the result of the first run) is used; this pulse spectrum is left- and right-shifted up to $\pm\frac{\pi}{2\tau}$. The new pulse spectrum obtained after the second run of the algorithm is characterized by quadruple precision in the time and frequency resolutions.
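The sketch below reproduces the first run of the algorithm numerically for A = 1 and τ = 2: it builds the shifted spectra (A4.17)–(A4.18), the sum and difference channels (A4.19)–(A4.20), and the nonlinear combination (A4.23), and then checks that the computed spectrum X_H already vanishes near ω = π/τ, where the original spectrum X is still large.

```python
import numpy as np

# First run of the resolution-doubling procedure (A4.17)-(A4.23), A = 1, tau = 2.
A, tau = 1.0, 2.0
w = np.linspace(-15, 15, 3001)

def sinc_spec(w):                        # original pulse spectrum (A4.4)
    return A * tau * np.sinc(w * tau / (2 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x)

X  = sinc_spec(w)
X1 = sinc_spec(w + np.pi / tau)          # shifted spectra (A4.17), (A4.18)
X2 = sinc_spec(w - np.pi / tau)

XS, XD = X1 + X2, X1 - X2                # sum and difference channels
XH = np.abs(XS) - np.abs(XD)             # nonlinear combination (A4.23)

# Main lobe of X spans (-2*pi/tau, 2*pi/tau); XH should vanish already at pi/tau
i = np.argmin(np.abs(w - np.pi / tau))   # sample nearest to omega = pi/tau
print(abs(XH[i]) < 1e-2, X[i] > 0.5)     # True True -> narrower main lobe
```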
A5 Wave multimodels theorem

Let n models $P(Y_z|H_i,C)$, $i\in\{1,2,\ldots,n\}$, with m output values $Y_z$, $z\in\{1,2,\ldots,m\}$, denote that the ith model component $H_i$ is conditioned by the designer's decision represented by the parameter C (the universal model), and let $P(H_i)$ represent the probability of occurrence of the model $H_i$. Then the probability of the zth output value $P(Y_z)$ can be characterized by a complex parameter $\psi(Y_z)$ with the following properties:

$$P(Y_z) = |\psi(Y_z)|^2 \tag{A5.1}$$

$$\psi(Y_z) = \sum_{i=1}^{n}\psi_i(Y_z) \tag{A5.2}$$

where

$$\psi_i(Y_z) = \sqrt{P(Y_z|H_i,C)\,P(H_i)}\;e^{j\beta_z(i)} \tag{A5.3}$$

and the parameters $\beta_z(i)$ are computed through the algorithm described below:

$$\begin{aligned} \beta_z(1) &= 0,\\ \beta_z(2) &= \gamma_z(1) - \alpha_z(2),\\ \beta_z(3) &= \beta_z(2) + \gamma_z(2) - \alpha_z(3),\\ &\ldots\\ \beta_z(m) &= \beta_z(m-1) + \gamma_z(m-1) - \alpha_z(m),\\ &\ldots\\ \beta_z(n-1) &= \beta_z(n-2) + \gamma_z(n-2) - \alpha_z(n-1),\\ \beta_z(n) &= \beta_z(n-1) + \Theta_z \end{aligned} \tag{A5.4}$$

where $i\in\{1,2,\ldots,n-2\}$. The parameters used in algorithm (A5.4) can be computed through the backward algorithm from the probabilities $P(Y_z|H_i,C)$, $P(Y_z|H_i)$, and $P(H_i)$ as¹:

$$\mu(Y_z|(H_i;H_{i+1}\cup\ldots\cup H_n)) = \frac{P(Y_z\cap(H_i\cup\ldots\cup H_n)) - P(Y_z|H_i,C)\,P(H_i) - P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n))}{2\sqrt{P(Y_z|H_i,C)\,P(H_i)\cdot P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n))}} \tag{A5.5}$$

$$\gamma_z(i) = \arccos\left(\mu(Y_z|(H_i;H_{i+1}\cup\ldots\cup H_n))\right) \tag{A5.6}$$

$$\alpha_z(i) = \arccos\left(\frac{\sqrt{P(Y_z|H_i,C)\,P(H_i)} + \mu(Y_z|(H_i;H_{i+1}\cup\ldots\cup H_n))\sqrt{P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n))}}{\sqrt{P(Y_z\cap(H_i\cup\ldots\cup H_n))}}\right) \tag{A5.7}$$

where for $i\in\{n-1,n\}$, the initial parameters of the recurrent algorithm (A5.4) are given (we suppose $\alpha_z(n)=0$) by:

$$\alpha_z(n-1) = \arccos\left(\frac{\sqrt{P(Y_z|H_{n-1},C)\,P(H_{n-1})} + \lambda(Y_z|H_{n-1};H_n)\sqrt{P(Y_z|H_n,C)\,P(H_n)}}{\sqrt{P(Y_z\cap(H_{n-1}\cup H_n))}}\right)$$

$$\Theta_z = \arccos\left(\lambda(Y_z|H_{n-1};H_n)\right) \tag{A5.9}$$

where the parameter $\lambda(Y_z|H_{n-1};H_n)$ is computed as follows:

$$\lambda(Y_z|H_{n-1};H_n) = \frac{P(H_{n-1})\left[P(Y_z|H_{n-1}) - P(Y_z|H_{n-1},C)\right] + P(H_n)\left[P(Y_z|H_n) - P(Y_z|H_n,C)\right]}{2\sqrt{P(Y_z|H_{n-1},C)\,P(H_{n-1})\cdot P(Y_z|H_n,C)\,P(H_n)}} \tag{A5.10}$$

¹ We will use the following probabilistic rule:
$$P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n)) = P(Y_z|H_{i+1})\,P(H_{i+1}) + \ldots + P(Y_z|H_n)\,P(H_n) \tag{A5.8}$$

For proving the wave multimodels theorem, the formula of total probability described in Khrennikov [7,8] was used as inspiration. In quantum mechanics, the opposite problem is compared and introduced with this theorem: there, the precise component models $P(Y_z|H_i)$, $i\in\{1,2,\ldots,n\}$, are known (a description of quantum states without interactions), and the context-transited result $P(Y_z|C)$ is computed.

Let us suppose that we have a complete set of models $\{H_1,H_2,\ldots,H_n\}$ with the property:

$$P(H_1\cup\ldots\cup H_n) = 1 \tag{A5.11}$$

Then the following equations can be derived from the probability rules:

$$P(Y_z) = P(Y_z\cap(H_1\cup\ldots\cup H_n)) = P(Y_z|H_1,C)\,P(H_1) + P(Y_z\cap(H_2\cup\ldots\cup H_n)) + 2\,\mu(Y_z|(H_1;H_2\cup\ldots\cup H_n))\sqrt{P(Y_z|H_1,C)\,P(H_1)\cdot P(Y_z\cap(H_2\cup\ldots\cup H_n))} \tag{A5.12}$$

$$\mu(Y_z|(H_1;H_2\cup\ldots\cup H_n)) = \frac{P(Y_z\cap(H_1\cup\ldots\cup H_n)) - P(Y_z|H_1,C)\,P(H_1) - P(Y_z\cap(H_2\cup\ldots\cup H_n))}{2\sqrt{P(Y_z|H_1,C)\,P(H_1)\cdot P(Y_z\cap(H_2\cup\ldots\cup H_n))}} \tag{A5.13}$$

This can be easily proven by the substitution of (A5.11) into (A5.12). If we suppose that

$$P(Y_z) = |\psi_1(Y_z)|^2 \tag{A5.14}$$

then Eq. (A5.14) can be rewritten in a complex form:

$$\psi_1(Y_z) = \sqrt{P(Y_z|H_1,C)\,P(H_1)} + e^{j\gamma_z(1)}\sqrt{P(Y_z\cap(H_2\cup\ldots\cup H_n))} \tag{A5.15}$$

$$\gamma_z(1) = \arccos\left(\mu(Y_z|(H_1;H_2\cup\ldots\cup H_n))\right) \tag{A5.16}$$

Because $\psi_1(Y_z)$ is a complex value, it can be represented by a modulus and an angle:

$$\psi_1(Y_z) = \sqrt{P(Y_z\cap(H_1\cup\ldots\cup H_n))}\;e^{j\alpha_z(1)} \tag{A5.17}$$

$$\alpha_z(1) = \arccos\left(\frac{\sqrt{P(Y_z|H_1,C)\,P(H_1)} + \mu(Y_z|(H_1;H_2\cup\ldots\cup H_n))\sqrt{P(Y_z\cap(H_2\cup\ldots\cup H_n))}}{\sqrt{P(Y_z\cap(H_1\cup\ldots\cup H_n))}}\right) \tag{A5.18}$$

In the same way as in (A5.12) and (A5.13), the following equations can be expressed as the second step of the derived algorithm (the proof is the same):

$$P(Y_z\cap(H_2\cup\ldots\cup H_n)) = P(Y_z|H_2,C)\,P(H_2) + P(Y_z\cap(H_3\cup\ldots\cup H_n)) + 2\,\mu(Y_z|(H_2;H_3\cup\ldots\cup H_n))\sqrt{P(Y_z|H_2,C)\,P(H_2)\cdot P(Y_z\cap(H_3\cup\ldots\cup H_n))} \tag{A5.19}$$

$$\mu(Y_z|(H_2;H_3\cup\ldots\cup H_n)) = \frac{P(Y_z\cap(H_2\cup\ldots\cup H_n)) - P(Y_z|H_2,C)\,P(H_2) - P(Y_z\cap(H_3\cup\ldots\cup H_n))}{2\sqrt{P(Y_z|H_2,C)\,P(H_2)\cdot P(Y_z\cap(H_3\cup\ldots\cup H_n))}} \tag{A5.20}$$

Then Eqs. (A5.15)–(A5.18) can be rewritten, using the same methodology as above, in the following forms:

$$\psi_2(Y_z) = \sqrt{P(Y_z|H_2,C)\,P(H_2)} + e^{j\gamma_z(2)}\sqrt{P(Y_z\cap(H_3\cup\ldots\cup H_n))} \tag{A5.21}$$

$$\gamma_z(2) = \arccos\left(\mu(Y_z|(H_2;H_3\cup\ldots\cup H_n))\right) \tag{A5.22}$$

$$\psi_2(Y_z) = \sqrt{P(Y_z\cap(H_2\cup\ldots\cup H_n))}\;e^{j\alpha_z(2)} \tag{A5.23}$$

$$\alpha_z(2) = \arccos\left(\frac{\sqrt{P(Y_z|H_2,C)\,P(H_2)} + \mu(Y_z|(H_2;H_3\cup\ldots\cup H_n))\sqrt{P(Y_z\cap(H_3\cup\ldots\cup H_n))}}{\sqrt{P(Y_z\cap(H_2\cup\ldots\cup H_n))}}\right) \tag{A5.24}$$

The procedure described in (A5.19)–(A5.24) can be generalized for the ith step of the algorithm, as presented in (A5.25)–(A5.30):

$$P(Y_z\cap(H_i\cup\ldots\cup H_n)) = P(Y_z|H_i,C)\,P(H_i) + P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n)) + 2\,\mu(Y_z|(H_i;H_{i+1}\cup\ldots\cup H_n))\sqrt{P(Y_z|H_i,C)\,P(H_i)\cdot P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n))} \tag{A5.25}$$

$$\mu(Y_z|(H_i;H_{i+1}\cup\ldots\cup H_n)) = \frac{P(Y_z\cap(H_i\cup\ldots\cup H_n)) - P(Y_z|H_i,C)\,P(H_i) - P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n))}{2\sqrt{P(Y_z|H_i,C)\,P(H_i)\cdot P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n))}} \tag{A5.26}$$

$$\psi_i(Y_z) = \sqrt{P(Y_z|H_i,C)\,P(H_i)} + e^{j\gamma_z(i)}\sqrt{P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n))} \tag{A5.27}$$

$$\gamma_z(i) = \arccos\left(\mu(Y_z|(H_i;H_{i+1}\cup\ldots\cup H_n))\right) \tag{A5.28}$$

$$\psi_i(Y_z) = \sqrt{P(Y_z\cap(H_i\cup\ldots\cup H_n))}\;e^{j\alpha_z(i)} \tag{A5.29}$$

$$\alpha_z(i) = \arccos\left(\frac{\sqrt{P(Y_z|H_i,C)\,P(H_i)} + \mu(Y_z|(H_i;H_{i+1}\cup\ldots\cup H_n))\sqrt{P(Y_z\cap(H_{i+1}\cup\ldots\cup H_n))}}{\sqrt{P(Y_z\cap(H_i\cup\ldots\cup H_n))}}\right) \tag{A5.30}$$

A new situation arises at steps n−1 and n of the algorithm. Let us describe the probability:

$$\begin{aligned} P(Y_z\cap(H_{n-1}\cup H_n)) &= P(Y_z|H_{n-1},C)\,P(H_{n-1}) + P(Y_z;H_n) + 2\,\mu(Y_z|(H_{n-1};H_n))\sqrt{P(Y_z|H_{n-1},C)\,P(H_{n-1})\cdot P(Y_z;H_n)}\\ &= P(Y_z|H_{n-1},C)\,P(H_{n-1}) + P(Y_z|H_n,C)\,P(H_n) + 2\,\lambda(Y_z|(H_{n-1};H_n))\sqrt{P(Y_z|H_{n-1},C)\,P(H_{n-1})\cdot P(Y_z|H_n,C)\,P(H_n)} \end{aligned} \tag{A5.31}$$

where the parameter μ(·) is replaced by the parameter λ(·) because the probability $P(Y_z;H_n)$ was replaced by the probability $P(Y_z;H_n|C) = P(Y_z|H_n,C)\,P(H_n)$ in (A5.31). The parameter λ(·) can then be defined as:

$$\lambda(Y_z|(H_{n-1};H_n)) = \frac{P(H_{n-1})\left[P(Y_z|H_{n-1}) - P(Y_z|H_{n-1},C)\right] + P(H_n)\left[P(Y_z|H_n) - P(Y_z|H_n,C)\right]}{2\sqrt{P(Y_z|H_{n-1},C)\,P(H_{n-1})\cdot P(Y_z|H_n,C)\,P(H_n)}} \tag{A5.32}$$

With respect to the abovementioned replacement of probabilities, the parameter α(·) in step n−1 must be changed as follows:

$$\alpha_z(n-1) = \arccos\left(\frac{\sqrt{P(Y_z|H_{n-1},C)\,P(H_{n-1})} + \lambda(Y_z|H_{n-1};H_n)\sqrt{P(Y_z|H_n,C)\,P(H_n)}}{\sqrt{P(Y_z\cap(H_{n-1}\cup H_n))}}\right) \tag{A5.33}$$

In the nth step (the last iteration), the parameter α(·) disappears, $\alpha_z(n) = 0$, and the parameter $\gamma_z(n-1)$ is replaced by the parameter $\Theta_z$:

$$\Theta_z = \arccos\left(\lambda(Y_z|H_{n-1};H_n)\right) = \arccos\left(\frac{P(H_{n-1})\left[P(Y_z|H_{n-1}) - P(Y_z|H_{n-1},C)\right] + P(H_n)\left[P(Y_z|H_n) - P(Y_z|H_n,C)\right]}{2\sqrt{P(Y_z|H_{n-1},C)\,P(H_{n-1})\cdot P(Y_z|H_n,C)\,P(H_n)}}\right) \tag{A5.34}$$

The complex function can be rewritten for the n−1 and n steps as follows:

$$\psi_n(Y_z) = \sqrt{P(Y_z|H_{n-1},C)\,P(H_{n-1})} + e^{j\Theta_z}\sqrt{P(Y_z|H_n,C)\,P(H_n)} \tag{A5.35}$$

By combining the different expressions of $\psi_1(Y_z),\ldots,\psi_n(Y_z)$ described above, the complex representations of the partial components $\psi_i(Y_z) = \sqrt{P(Y_z|H_i,C)\,P(H_i)}\,e^{j\beta_z(i)}$ for $i\in\{1,2,\ldots,n\}$, together with the expressions of their phases $\beta_z(i)$, can be derived:

$$\begin{aligned} \psi_1(Y_z) &= \sqrt{P(Y_z|H_1,C)\,P(H_1)} + e^{j\gamma_z(1)}\sqrt{P(Y_z\cap(H_2\cup\ldots\cup H_n))}\\ &= \sqrt{P(Y_z|H_1,C)\,P(H_1)} + e^{j(\gamma_z(1)-\alpha_z(2))}\,\psi_2(Y_z)\\ &= \sqrt{P(Y_z|H_1,C)\,P(H_1)} + e^{j(\gamma_z(1)-\alpha_z(2))}\left[\sqrt{P(Y_z|H_2,C)\,P(H_2)} + e^{j\gamma_z(2)}\sqrt{P(Y_z\cap(H_3\cup\ldots\cup H_n))}\right]\\ &= \sqrt{P(Y_z|H_1,C)\,P(H_1)} + e^{j(\gamma_z(1)-\alpha_z(2))}\sqrt{P(Y_z|H_2,C)\,P(H_2)} + e^{j(\gamma_z(1)-\alpha_z(2)+\gamma_z(2))}\sqrt{P(Y_z\cap(H_3\cup\ldots\cup H_n))}\\ &= \sqrt{P(Y_z|H_1,C)\,P(H_1)} + e^{j(\gamma_z(1)-\alpha_z(2))}\sqrt{P(Y_z|H_2,C)\,P(H_2)}\\ &\quad + e^{j(\gamma_z(1)-\alpha_z(2)+\gamma_z(2)-\alpha_z(3))}\left[\sqrt{P(Y_z|H_3,C)\,P(H_3)} + e^{j\gamma_z(3)}\sqrt{P(Y_z\cap(H_4\cup\ldots\cup H_n))}\right]\\ &= \ldots\\ &= \sqrt{P(Y_z|H_1,C)\,P(H_1)} + \sqrt{P(Y_z|H_2,C)\,P(H_2)}\;e^{j\beta_z(2)} + \ldots + \sqrt{P(Y_z|H_n,C)\,P(H_n)}\;e^{j\beta_z(n)} \end{aligned} \tag{A5.36}$$

From (A5.36), the algorithm for the phase representation $\beta_z(i)$ of the complex model probabilities can be easily derived:

$$\begin{aligned} \beta_z(1) &= 0,\\ \beta_z(2) &= \gamma_z(1) - \alpha_z(2),\\ \beta_z(3) &= \beta_z(2) + \gamma_z(2) - \alpha_z(3),\\ &\ldots\\ \beta_z(m) &= \beta_z(m-1) + \gamma_z(m-1) - \alpha_z(m),\\ &\ldots\\ \beta_z(n-1) &= \beta_z(n-2) + \gamma_z(n-2) - \alpha_z(n-1),\\ \beta_z(n) &= \beta_z(n-1) + \Theta_z \end{aligned} \tag{A5.37}$$

Thus the wave multimodels theorem is proved.
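For the simplest case n = 2, the theorem reduces to choosing the phase Θ_z from λ in (A5.10) and (A5.34). The sketch below verifies, for invented probability values, that the squared modulus of the composed wave function reproduces the total probability P(Y_z).

```python
import numpy as np

# Two-component case (n = 2) of the wave multimodels theorem: the phase
# Theta computed from lambda (A5.10, A5.34) makes |psi_1 + psi_2|^2 equal
# P(Y) = P(Y|H1)P(H1) + P(Y|H2)P(H2). All values below are invented.
PH1, PH2 = 0.6, 0.4          # P(H1), P(H2)
p1c, p2c = 0.5, 0.3          # P(Y|H1,C), P(Y|H2,C)  (context-conditioned models)
p1, p2 = 0.7, 0.2            # P(Y|H1),  P(Y|H2)

lam = (PH1 * (p1 - p1c) + PH2 * (p2 - p2c)) / (
    2 * np.sqrt(p1c * PH1 * p2c * PH2))
Theta = np.arccos(lam)

psi = np.sqrt(p1c * PH1) + np.sqrt(p2c * PH2) * np.exp(1j * Theta)
print(np.isclose(abs(psi)**2, p1 * PH1 + p2 * PH2))   # True
```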
A6 Conditional mixture of quantum subsystems

Let us define the basic operations with wave probabilistic functions. We start with the definition of two quantum systems represented by two "bra-ket" forms:

$$|\psi_1\rangle = \psi_1(Y_{1,1})\,|Y_{1,1}\rangle + \psi_1(Y_{1,2})\,|Y_{1,2}\rangle + \ldots + \psi_1(Y_{1,n})\,|Y_{1,n}\rangle \tag{A6.1}$$

$$|\psi_2\rangle = \psi_2(Y_{2,1})\,|Y_{2,1}\rangle + \psi_2(Y_{2,2})\,|Y_{2,2}\rangle + \ldots + \psi_2(Y_{2,n})\,|Y_{2,n}\rangle \tag{A6.2}$$

The combination of the two quantum systems (A6.1) and (A6.2) yields the common wave probabilistic function, defined with the help of the Kronecker product:

$$|\psi_{1,2}\rangle = |\psi_1\rangle\otimes|\psi_2\rangle = \psi_1(Y_{1,1})\,\psi_2(Y_{2,1})\,|Y_{1,1}\rangle|Y_{2,1}\rangle + \ldots + \psi_1(Y_{1,1})\,\psi_2(Y_{2,n})\,|Y_{1,1}\rangle|Y_{2,n}\rangle + \ldots + \psi_1(Y_{1,n})\,\psi_2(Y_{2,1})\,|Y_{1,n}\rangle|Y_{2,1}\rangle + \ldots + \psi_1(Y_{1,n})\,\psi_2(Y_{2,n})\,|Y_{1,n}\rangle|Y_{2,n}\rangle \tag{A6.3}$$

Let us rewrite Eq. (A6.3) into matrix form in the following way:

$$|\psi_{1,2}\rangle = \begin{pmatrix} |Y_{1,1}\rangle & |Y_{1,2}\rangle & \cdots & |Y_{1,n}\rangle \end{pmatrix} \begin{pmatrix} \psi(Y_{1,1}) & 0 & \cdots & 0\\ 0 & \psi(Y_{1,2}) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \psi(Y_{1,n}) \end{pmatrix} \begin{pmatrix} 1 & 1 & \cdots & 1\\ 1 & 1 & \cdots & 1\\ \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & \cdots & 1 \end{pmatrix} \begin{pmatrix} \psi(Y_{2,1}) & 0 & \cdots & 0\\ 0 & \psi(Y_{2,2}) & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \psi(Y_{2,n}) \end{pmatrix} \begin{pmatrix} |Y_{2,1}\rangle\\ |Y_{2,2}\rangle\\ \vdots\\ |Y_{2,n}\rangle \end{pmatrix} \tag{A6.4}$$

The weighted states given in (A6.1) or (A6.2) can be represented in matrix form by row or column vectors as follows:

$$|\psi_{1,2}\rangle = \begin{pmatrix} \psi(Y_{1,1})|Y_{1,1}\rangle & \psi(Y_{1,2})|Y_{1,2}\rangle & \cdots & \psi(Y_{1,n})|Y_{1,n}\rangle \end{pmatrix}\; \mathbf{P}\; \begin{pmatrix} \psi(Y_{2,1})|Y_{2,1}\rangle\\ \psi(Y_{2,2})|Y_{2,2}\rangle\\ \vdots\\ \psi(Y_{2,n})|Y_{2,n}\rangle \end{pmatrix} \tag{A6.5}$$

The newly introduced matrix P is given for the standard combination of quantum systems as [29]:

$$\mathbf{P} = \begin{pmatrix} 1 & 1 & \cdots & 1\\ 1 & 1 & \cdots & 1\\ \vdots & \vdots & \ddots & \vdots\\ 1 & 1 & \cdots & 1 \end{pmatrix} \tag{A6.6}$$

The component of the matrix P at the i,jth position can be interpreted as the transition probability between the states $|Y_{1,i}\rangle$ and $|Y_{2,j}\rangle$ caused by an external condition. In the event that some external conditions exist, or some transitions between states are not allowed, a matrix P will model this situation, and the form with a general matrix P will represent the conditional combination of the quantum states.

The Bell states are a concept in quantum informatics and represent the simplest possible examples of quantum entanglement between two q-bits [5]. The q-bits are usually thought to be spatially separated. They nevertheless exhibit perfect correlations, which cannot be explained without quantum mechanics. Let us define two q-bits represented by two "bra-ket" forms:

$$|\psi_1\rangle = \alpha_0\,|0\rangle_1 + \alpha_1\,|1\rangle_1 \tag{A6.7}$$

$$|\psi_2\rangle = \beta_0\,|0\rangle_2 + \beta_1\,|1\rangle_2 \tag{A6.8}$$

The combination of the two q-bits (A6.7) and (A6.8) yields the common wave probabilistic function, defined with the help of the Kronecker product:

$$|\psi_{1,2}\rangle = |\psi_1\rangle\otimes|\psi_2\rangle = \alpha_0\beta_0\,|0,0\rangle + \alpha_0\beta_1\,|0,1\rangle + \alpha_1\beta_0\,|1,0\rangle + \alpha_1\beta_1\,|1,1\rangle \tag{A6.9}$$

It is evident that no complex numbers $\alpha_0,\alpha_1,\beta_0,\beta_1$ in Eq. (A6.9) can be found that reach the well-known Bell state [5]:

$$|\psi_{1,2}\rangle_B = \gamma_1\,|0,1\rangle + \gamma_2\,|1,0\rangle \tag{A6.10}$$

If we use the conditional combination together with a general transition matrix P defined as:

$$\mathbf{P} = \begin{pmatrix} p_{1,1} & p_{1,2}\\ p_{2,1} & p_{2,2} \end{pmatrix} \tag{A6.11}$$

we can then write the conditional combination of the q-bits in the following form:

$$|\psi_{1,2}\rangle = |\psi_1\rangle\otimes|\psi_2\rangle = p_{1,1}\,\alpha_0\beta_0\,|0,0\rangle + p_{1,2}\,\alpha_0\beta_1\,|0,1\rangle + p_{2,1}\,\alpha_1\beta_0\,|1,0\rangle + p_{2,2}\,\alpha_1\beta_1\,|1,1\rangle \tag{A6.12}$$

It can be easily understood that the following conditions must be fulfilled to achieve the Bell state:

$$p_{1,1} = 0,\quad p_{2,2} = 0,\quad p_{1,2}\,\alpha_0\beta_1 = \gamma_1,\quad p_{2,1}\,\alpha_1\beta_0 = \gamma_2 \tag{A6.13}$$

The basic assumption for the Bell state is that the transitions $p_{1,1}$ and $p_{2,2}$ are denied. So the Bell states can be seen as practical examples of the conditional combination of two q-bits.
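The following sketch contrasts the standard Kronecker-product combination (A6.9) with a conditional combination in which the transitions p₁,₁ and p₂,₂ are denied (A6.13). The input amplitudes are arbitrary, and the renormalization step is an added assumption so that the conditional state stays a unit vector.

```python
import numpy as np

# Standard (unconditional) combination of two q-bits via the Kronecker
# product (A6.9), then a conditional combination (A6.12) with the
# transitions p11 and p22 denied, which produces a Bell-type state (A6.13).
a = np.array([1, 1j]) / np.sqrt(2)          # |psi1> = a0|0> + a1|1>
b = np.array([1, -1]) / np.sqrt(2)          # |psi2> = b0|0> + b1|1>

product = np.kron(a, b)                      # amplitudes of |00>,|01>,|10>,|11>

P = np.array([[0, 1],
              [1, 0]])                       # p11 = p22 = 0, as in (A6.13)
conditional = (P * np.outer(a, b)).flatten() # element-wise mask, then flatten
conditional /= np.linalg.norm(conditional)   # renormalize the allowed terms

print(product)       # all four basis states populated
print(conditional)   # only |01> and |10> survive: a Bell-type state
```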
A7 Information bosons, fermions, and quarks

Let us define a general binary quantum subsystem through the wave probabilistic function as follows:

$$|\psi\rangle = \sqrt{p_0}\,|0\rangle + \sqrt{p_1}\,e^{jm\Delta}\,|1\rangle \tag{A7.1}$$

We suppose the phase to be a linear function of a quantized phase Δ. The phase functions must be quantized to achieve single-valuedness also for the phases $(\Delta + 2\pi k)$, where k is an integer:

$$\psi(|A\rangle_\eta = 0) = \sqrt{p_0} \tag{A7.2}$$

$$\psi(|A\rangle_{\eta+1} = 1) = \sqrt{p_1}\,e^{jm(\Delta+2k\pi)} \tag{A7.3}$$

The probability union that $|A\rangle_\eta = 0$ falls on the first digit or $|A\rangle_{\eta+1} = 1$ on the second digit is given in the quantum world as:

$$P\left((|A\rangle_\eta = 0)\cup(|A\rangle_{\eta+1} = 1)\right) = \left|\sqrt{p_0} + \sqrt{p_1}\,e^{jm\Delta}\right|^2 = p_0 + p_1 + 2\sqrt{p_0 p_1}\cos(m\Delta) \tag{A7.4}$$

which is the quantum equivalent of the well-known classical probabilistic rule:

$$P\left((A_\eta = 0)\cup(A_{\eta+1} = 1)\right) = P(A_\eta = 0) + P(A_{\eta+1} = 1) - P\left((A_\eta = 0)\cap(A_{\eta+1} = 1)\right) \tag{A7.5}$$

The quantum rule (A7.4) enables both a negative and a positive sign of the interference term, according to the phase parameter. On the other hand, the classical rule (A7.5) enables only a negative sign.

A7.1 Information bosons with integer spin

Let us analyze all variants of the phase parameters, taking into account also the k-phase cycling in Eq. (A7.4):

$$P\left((|A\rangle_\eta = 0)\cup(|A\rangle_{\eta+1} = 1)\right) = p_0 + p_1 + 2\sqrt{p_0 p_1}\cos\left[m\,(\Delta + 2k\pi)\right] \tag{A7.6}$$

For information bosons with integer spin $m\in\{0,\pm1,\pm2,\pm3,\ldots\}$, we can guarantee a positive sign of (A7.4) no matter which value of k is chosen:

$$P\left((|A\rangle_\eta = 0)\cup(|A\rangle_{\eta+1} = 1)\right) = p_0 + p_1 + 2\sqrt{p_0 p_1}\cos(m\Delta) \tag{A7.7}$$

From the intersection rule of wave probabilities, we can alternatively write:

$$P\left((|A\rangle_\eta = 0)\cap(|A\rangle_{\eta+1} = 1)\right) = \psi^*(|A\rangle_\eta = 0)\,\psi(|A\rangle_{\eta+1} = 1) + \psi(|A\rangle_\eta = 0)\,\psi^*(|A\rangle_{\eta+1} = 1) = 2\sqrt{p_0 p_1}\cos(m\Delta + 2km\pi) = 2\sqrt{p_0 p_1}\cos(m\Delta) \tag{A7.8}$$

The final wave probability function for information bosons can be given as:

$$\psi(|A\rangle_\eta = 0) = \sqrt{p_0} \tag{A7.9}$$

$$\psi(|A\rangle_{\eta+1} = 1) = \sqrt{p_1}\,e^{jm\Delta} \tag{A7.10}$$

A7.2 Information fermions with a half-integer spin

For information fermions with a half-integer spin, we find the negative sign in the following way:

$$P\left((|A\rangle_\eta = 0)\cup(|A\rangle_{\eta+1} = 1)\right) = p_0 + p_1 + 2\sqrt{p_0 p_1}\cos\left(\frac{m_1}{2}(\Delta + 2k\pi)\right) = p_0 + p_1 + 2\sqrt{p_0 p_1}\cos\left(\frac{m_1}{2}\Delta + m_1 k\pi\right) = p_0 + p_1 \pm 2\sqrt{p_0 p_1}\cos\left(\frac{m_1}{2}\Delta\right) \tag{A7.11}$$

where $m_1\in\{\pm1,\pm3,\pm5,\ldots\}$. From the intersection rule of wave probabilities, we can alternatively write:

$$P\left((|A\rangle_\eta = 0)\cap(|A\rangle_{\eta+1} = 1)\right) = \psi^*(|A\rangle_\eta = 0)\,\psi(|A\rangle_{\eta+1} = 1) + \psi(|A\rangle_\eta = 0)\,\psi^*(|A\rangle_{\eta+1} = 1) = \sqrt{p_0 p_1}\,e^{jm(\Delta+2k\pi)} + \sqrt{p_0 p_1}\,e^{-jm(\Delta+2k\pi)} = 2\sqrt{p_0 p_1}\cos(m\Delta + 2km\pi) = \begin{cases} +2\sqrt{p_0 p_1}\cos\left(\frac{m_1}{2}\Delta\right) & \text{with probability } 1/2\\[2pt] -2\sqrt{p_0 p_1}\cos\left(\frac{m_1}{2}\Delta\right) & \text{with probability } 1/2 \end{cases} = 0 \tag{A7.12}$$

This means that the plus and minus probabilities given in (A7.12) cancel each other out, which explains the exclusion rule known for indistinguishable fermions. The final wave probability function for information fermions can be given as:

$$\psi(|A\rangle_\eta = 0) = \sqrt{p_0} \tag{A7.13}$$

$$\psi(|A\rangle_{\eta+1} = 1) = \begin{cases} +\sqrt{p_1}\,e^{j\frac{m_1}{2}\Delta}\\[2pt] -\sqrt{p_1}\,e^{j\frac{m_1}{2}\Delta} \end{cases} \tag{A7.14}$$

A7.3 Information quarks with a third-integer spin

We can also consider the k-multiple of one-third spin, and we can therefore find the third quantum states assigned to different information quarks as follows:

$$P\left((|A\rangle_\eta = 0)\cup(|A\rangle_{\eta+1} = 1)\right) = p_0 + p_1 + 2\sqrt{p_0 p_1}\cos\left(\frac{m_2}{3}(\Delta + 2k\pi)\right) = p_0 + p_1 + 2\sqrt{p_0 p_1}\cos\left(\frac{m_2}{3}\Delta + \frac{2m_2 k\pi}{3}\right) = \begin{cases} p_0 + p_1 + 2\sqrt{p_0 p_1}\cos\left(\frac{m_2}{3}\Delta\right)\\[2pt] p_0 + p_1 + 2\sqrt{p_0 p_1}\left[-\frac{1}{2}\cos\left(\frac{m_2}{3}\Delta\right) + \frac{\sqrt{3}}{2}\sin\left(\frac{m_2}{3}\Delta\right)\right]\\[2pt] p_0 + p_1 + 2\sqrt{p_0 p_1}\left[-\frac{1}{2}\cos\left(\frac{m_2}{3}\Delta\right) - \frac{\sqrt{3}}{2}\sin\left(\frac{m_2}{3}\Delta\right)\right] \end{cases} \tag{A7.15}$$

where $m_2\in\{\pm1,\pm2,\pm4,\pm5,\ldots\}$. Alternatively, we can rewrite the equation for the probability intersection as follows:

$$P\left((|A\rangle_\eta = 0)\cap(|A\rangle_{\eta+1} = 1)\right) = \psi^*(|A\rangle_\eta = 0)\,\psi(|A\rangle_{\eta+1} = 1) + \psi(|A\rangle_\eta = 0)\,\psi^*(|A\rangle_{\eta+1} = 1) = \sqrt{p_0 p_1}\,e^{j\frac{m_2}{3}(\Delta+2k\pi)} + \sqrt{p_0 p_1}\,e^{-j\frac{m_2}{3}(\Delta+2k\pi)} = 2\sqrt{p_0 p_1}\cos\left(\frac{m_2}{3}\Delta + \frac{2m_2 k\pi}{3}\right) = \begin{cases} 2\sqrt{p_0 p_1}\cos\left(\frac{m_2}{3}\Delta\right) & \text{with probability } 1/3\\[2pt] -\sqrt{p_0 p_1}\cos\left(\frac{m_2}{3}\Delta\right) + \sqrt{3}\sqrt{p_0 p_1}\sin\left(\frac{m_2}{3}\Delta\right) & \text{with probability } 1/3\\[2pt] -\sqrt{p_0 p_1}\cos\left(\frac{m_2}{3}\Delta\right) - \sqrt{3}\sqrt{p_0 p_1}\sin\left(\frac{m_2}{3}\Delta\right) & \text{with probability } 1/3 \end{cases} = -\frac{2}{3}\sqrt{p_0 p_1}\cos\left(\frac{m_2}{3}\Delta\right) \tag{A7.16}$$

The wave probability function for information quarks can be written as:

$$\psi(|A\rangle_\eta = 0) = \sqrt{p_0} \tag{A7.17}$$

$$\psi(|A\rangle_{\eta+1} = 1) = \sqrt{p_1}\,e^{j\frac{m_2}{3}(\Delta+2k\pi)} = \begin{cases} \sqrt{p_1}\,e^{j\frac{m_2}{3}\Delta}\\[2pt] \sqrt{p_1}\,e^{j\left(\frac{m_2}{3}\Delta + \frac{2}{3}\pi\right)}\\[2pt] \sqrt{p_1}\,e^{j\left(\frac{m_2}{3}\Delta - \frac{2}{3}\pi\right)} \end{cases} \tag{A7.18}$$

This means that the three variants of probabilities given in (A7.16) yield their mutual mixture. It is interesting that the final intersection rule leads to a negative probability. In quark physics [5], the following two quarks can be found (e ≈ 1.6 × 10⁻¹⁹ C is the charge of an electron):

• u (up quark) with charge $+\frac{2}{3}e$,
• d (down quark) with charge $-\frac{1}{3}e$,

together with their antiquarks:

• ū (anti-up quark) with charge $-\frac{2}{3}e$,
• d̄ (anti-down quark) with charge $+\frac{1}{3}e$.

Other features, like quark flavor or quark color, come out of the three variants of the wave function (A7.16). Quarks are recognized as the main components of particle physics; for example, a proton is composed of uud quarks (with different colors), and a neutron is composed of ddu quarks (also with different colors). The presented introduction of information bosons, fermions, or quarks comes from a different principle than the well-known "standard model of particle physics" [5], which is defined through the Lie group U(1) × SU(2) × SU(3). Our methodology is based only on the strict requirement of single-valuedness of the wave function.
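A quick numerical view of the single-valuedness argument above: the sketch evaluates the union probability (A7.6) over several phase cycles k. For an integer m the interference term is the same for every k (the boson case), while for a half-integer m it alternates in sign (the fermion case). All probability and phase values are illustrative.

```python
import numpy as np

# Single-valuedness check behind (A7.6)-(A7.11): the interference term
# cos(m*(Delta + 2*k*pi)) is k-independent for integer "spin" m (bosons)
# but alternates in sign for half-integer m (fermions).
p0, p1, Delta = 0.4, 0.6, 0.7

def union_prob(m, k):
    return p0 + p1 + 2 * np.sqrt(p0 * p1) * np.cos(m * (Delta + 2 * k * np.pi))

ks = np.arange(4)
print([round(union_prob(1.0, k), 6) for k in ks])   # constant: boson case
print([round(union_prob(0.5, k), 6) for k in ks])   # alternating sign: fermion case
```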
A8 Pure and mixed wave probabilistic states

The difference between pure and mixed states can be explained as the distinction between classical and quantum uncertainty [46]. Classical uncertainty is governed by probability theory. Quantum uncertainty expresses the intrinsic uncertainty of nature. Let us consider an example to stress the difference between a pure state and a mixed state. Let us assume that we have complete knowledge of the quantum state given by the wave probabilistic function:

$$|\psi\rangle = \alpha\,|1\rangle + \beta\,|0\rangle \tag{A8.1}$$

where α, β are complex parameters, and $|\alpha|^2, |\beta|^2$ give the probabilities of having the system in states $|1\rangle$ and $|0\rangle$, respectively. We speak about pure states if we know the complex parameters α, β (we know the phase between these two states) of the wave probabilistic function. On the other hand, we speak about mixed states if we only have knowledge about the moduli of the wave parameters, that is, the probabilities:

$$p_1 = |\alpha|^2, \qquad p_2 = |\beta|^2 \tag{A8.2}$$

We do not know the wave probabilities α, β.

In the case of mixed states, we cannot write the quantum state as a linear combination of $|1\rangle$ and $|0\rangle$; instead, we have to use the density matrix operator to describe it ($\rho_M$ means the density matrix of mixed states):

$$\rho_M = p_1\,|1\rangle\langle1| + p_2\,|0\rangle\langle0| \tag{A8.3}$$

which gives the exact information of having the system in state $|1\rangle$ with probability $p_1$ and, correspondingly, in state $|0\rangle$ with probability $p_2$. Mixed states can be seen as a way of integrating classical statistics into quantum informatics. Thus the quantum system is really in a pure state, but we do not know which one it is in. The reason is that there is no unique decomposition into pure states. The canonical decomposition into orthogonal projections is unique only if there is no degeneracy in the eigenvalues. If we drop the requirement that the projections have to be mutually orthogonal, then even if there is no degeneracy in the canonical decomposition, there will be an infinite number of decompositions into pure states.

Let us consider a pure state in the following numerical example:

$$|\psi\rangle = \frac{1}{\sqrt{2}}\,|1\,0\rangle + \frac{1}{\sqrt{2}}\,|0\,1\rangle \tag{A8.4}$$

The density matrix is given as ($\rho_P$ means the density matrix of pure states):

$$\rho_P = |\psi\rangle\langle\psi| = \left[\frac{1}{\sqrt{2}}\begin{pmatrix}1\\0\end{pmatrix} + \frac{1}{\sqrt{2}}\begin{pmatrix}0\\1\end{pmatrix}\right]\left[\frac{1}{\sqrt{2}}\begin{pmatrix}1&0\end{pmatrix} + \frac{1}{\sqrt{2}}\begin{pmatrix}0&1\end{pmatrix}\right] = \begin{pmatrix}\frac{1}{2} & \frac{1}{2}\\[2pt] \frac{1}{2} & \frac{1}{2}\end{pmatrix} \tag{A8.5}$$

The density matrix of the mixed states can be given in the vector representation as follows:

$$\rho_M = \frac{1}{2}\begin{pmatrix}1\\0\end{pmatrix}\begin{pmatrix}1&0\end{pmatrix} + \frac{1}{2}\begin{pmatrix}0\\1\end{pmatrix}\begin{pmatrix}0&1\end{pmatrix} = \begin{pmatrix}\frac{1}{2} & 0\\[2pt] 0 & \frac{1}{2}\end{pmatrix} \tag{A8.6}$$

It is evident from (A8.5) and (A8.6) that the mixed state lacks information with respect to the pure state. Mixed states are identical if they are described by the same density matrix. Wave probabilistic models can be seen as an instrument for the representation of quantum pure states. On the other hand, classical probabilistic models give us only information about mixed states (with incomplete knowledge).

The question of how to distinguish between pure and mixed states has been studied in many publications [35,46], with the following result:

$$\mathrm{Tr}(\rho^2) = 1 \quad \text{for pure states} \tag{A8.7}$$

$$\mathrm{Tr}(\rho^2) < 1 \quad \text{for mixed states} \tag{A8.8}$$

where Tr(.) means the matrix trace operator.
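The purity criterion (A8.7)–(A8.8) can be checked directly on the two density matrices (A8.5) and (A8.6):

```python
import numpy as np

# Purity test Tr(rho^2) for the pure state (A8.5) and the mixed state
# (A8.6), written in the {|10>, |01>} basis.
rho_pure  = np.array([[0.5, 0.5],
                      [0.5, 0.5]])
rho_mixed = np.array([[0.5, 0.0],
                      [0.0, 0.5]])

for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    purity = np.trace(rho @ rho)
    print(name, purity)    # pure 1.0, mixed 0.5
```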
A9 Performance parameters

If time, performance, or other constraints are assigned to different functions and information links, then the result of the analysis is represented by a table of different, sometimes even contradictory, system requirements assigned to each physical subsystem (module) and to the physical communication links between the subsystems:

• Accuracy—the degree of conformance between a true parameter and its estimated value, etc.;
• Reliability—the ability to perform a required function under given conditions for a given time interval;
• Availability—the ability to perform a required function at the initialization of the intended operation;
• Continuity—the ability to perform a required function without nonscheduled interruption during the intended operation;
• Integrity—the ability to provide timely and valid alerts to the user when a system must not be used for the intended operation;
• Safety—risk analysis, risk classification, risk tolerability matrix, system component certification, etc.

Quality of measured performance parameters is a unified approach applicable to all the abovementioned performance parameters.

The absolute measuring error ($\mu_a$) is the difference between a measured value and the real value or the accepted reference:

$$\mu_a = x_d - x_s \tag{A9.1}$$

where $x_d$ is the measured dynamic value and $x_s$ is the corresponding real value or accepted reference. The relative measuring error ($\mu_r$) is the absolute measuring error divided by the true value:

$$\mu_r = \frac{x_d - x_s}{x_s} \tag{A9.2}$$

The accuracy (δ) of a measuring system is the range around the real value in which the actual measured value must lie. The measurement system is said to have accuracy δ if:

$$x_s - \delta \le x_d \le x_s + \delta \tag{A9.3}$$

or, straightforwardly:

$$-\delta \le \mu_a \le +\delta \tag{A9.4}$$

Accuracy is often expressed as a relative value in ±δ%. The reliability (1−α) of a measuring system is the minimal probability that a measuring error $\mu_a$ lies within the accuracy interval $[-\delta,\delta]$:

$$(1-\alpha) \le P(|\mu_a| \le \delta) \tag{A9.5}$$

where P(.) means the probability value. The error probability (α) of a measuring system is the probability that a measured value lies further from the actual value than the accuracy:

$$\alpha \ge P(|\mu_a| > \delta) \tag{A9.6}$$

The reliability of the measuring system is often controlled by the end user of the measurement system, while the error probability is generally assessed by the International Organization of Legal Metrology. The dependability (β) of an acceptance test is the probability that, on the basis of the sample, a correct judgment is given on the accuracy and reliability of the tested system:

$$P\left(\alpha \le P(-\delta < \mu_a < \delta)\right) \ge \beta \tag{A9.7}$$

The desired dependability determines the size of the sample; the larger the sample, the higher the dependability of the judgment.
A9.1 Tests of normality With regard to Ref. [68], normal distribution will be expected because using the different kinds of statistics, such as order statistics (distribution independent) for small sample sizes, typical for performance parameters, the result may be fairly imprecise. Testing normality is important in the performance parameters procedure because in analyses containing a lot of data, these data are required to be at least approximately normally distributed. Furthermore, the confidence of the limit assessment requires the assumption of normality. Several kinds of normality tests are available, such as: • Pearson test (chi-squared goodness-of-fit test) • KolmogorovSmirnov test • AndersonDarling and Cramérvon Mises test
126
Appendix A
All the abovementioned tests for normality are based on the empirical distribution function (EDF) and are often referred to as EDF tests. The empirical distribution function is defined for a set of n independent observations X1 ; X 2 ; . . . ; Xn with a common distribution function F(x). Under the null hypothesis, F(x) is the normal distribution. It denotes the observations ordered from the smallest to the largest as Xð1Þ ; X ð2Þ ; . . . ; X ðnÞ . The empirical distribution function, Fn ðxÞ, is defined as F0 ðxÞ 5 0; x , X ð1Þ i Fi ðxÞ 5 ; X ðiÞ # x , X ði11Þ ; n Fn ðxÞ 5 1;
i 5 1; . . . ; n 2 1
(A9.8)
X ðnÞ # x
Note that F_n(x) is a step function that takes a step of height 1/n at each observation. This function estimates the distribution function F(x). At any value x, F_n(x) is the proportion of observations less than or equal to x, while F(x) is the probability of an observation less than or equal to x. EDF statistics measure the discrepancy between F_n(x) and F(x). In the following part, Pearson's test (the chi-squared goodness-of-fit test) is introduced as a practical example of such tests. The chi-squared goodness-of-fit statistic χ²_q for a fitted parametric distribution is computed as

$$\chi_q^2 = \sum_{i=1}^{L} \frac{(m_i - n\,p_i)^2}{n\,p_i} \qquad (A9.9)$$

where L is the number of histogram intervals, m_i is the observed frequency in the ith histogram interval, n is the number of observations, and p_i is the probability of the ith histogram interval computed by means of the theoretical distribution. The number of degrees of freedom for the chi-squared test is L − r − 1, where r is the number of parameters of the theoretical distribution (in the case of the normal distribution, r = 2).
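The test can be sketched numerically as follows (assuming Python with NumPy/SciPy; the sample data and the bin count are illustrative assumptions, not values from the text):

```python
import numpy as np
from scipy import stats

# Sketch of Pearson's chi-squared goodness-of-fit test for normality, Eq. (A9.9).
rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=500)          # sample under test

L = 10                                                # number of histogram intervals
edges = np.quantile(x, np.linspace(0.0, 1.0, L + 1))  # roughly equiprobable bins
m, _ = np.histogram(x, bins=edges)                    # observed frequencies m_i

mu, s = x.mean(), x.std(ddof=1)                       # fitted normal parameters (r = 2)
p = np.diff(stats.norm.cdf(edges, loc=mu, scale=s))   # cell probabilities p_i

chi2_q = np.sum((m - len(x) * p) ** 2 / (len(x) * p)) # Eq. (A9.9)
dof = L - 2 - 1                                       # L - r - 1 degrees of freedom
p_value = stats.chi2.sf(chi2_q, dof)
print(f"chi2 = {chi2_q:.2f}, dof = {dof}, p-value = {p_value:.3f}")
```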
A9.2 Estimation of measuring system's accuracy, reliability, and dependability

Let us assume we have a normally distributed set of n measurements of performance parameters: μ_{a,1}, μ_{a,2}, ..., μ_{a,n}. If the mean value or the standard deviation is not known, we can estimate both the mean value μ̄_a and the standard deviation s_a from the measured data as follows:

$$\bar{\mu}_a = \frac{1}{n} \sum_{i=1}^{n} \mu_{a,i}, \qquad s_a = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(\mu_{a,i} - \bar{\mu}_a\right)^2} \qquad (A9.10)$$
Let n be a positive integer, let α, β be given real numbers (0 < α, β < 1), and let μ_{a,1}, μ_{a,2}, ..., μ_{a,n}, μ_{a,y} be n + 1 identically distributed independent random variables. Tolerance limits L = L(μ_{a,1}, μ_{a,2}, ..., μ_{a,n}) and U = U(μ_{a,1}, μ_{a,2}, ..., μ_{a,n}) are defined as values such that the probability is equal to β that the limits include at least a proportion (1 − α) of the population. It means that such limits L and U satisfy:

$$P\{P(L < \mu_{a,y} < U) \ge 1 - \alpha\} = \beta \qquad (A9.11)$$
A confidence interval covers a population parameter with a stated confidence. A tolerance interval covers a fixed proportion of the population with a stated confidence. Confidence limits are limits within which we expect a given population parameter, such as the mean, to lie. Statistical tolerance limits are limits within which we expect a stated proportion of the population to lie. For the purpose of this chapter, we will present only results derived under the following assumptions:

• μ_{a,1}, μ_{a,2}, ..., μ_{a,n}, μ_{a,y} are n + 1 independent normally distributed random variables with the same mean μ_0 and variance σ_0² (equivalently, μ_{a,1}, μ_{a,2}, ..., μ_{a,n}, μ_{a,y} is a random sample of size n + 1 from the normal distribution with mean μ_0 and variance σ_0²).
• Symmetry about the mean or its estimate is required.
• The tolerance limits are restricted to the simple form μ̄_a − k s_a and μ̄_a + k s_a, where k is the so-called tolerance factor, and μ̄_a and s_a are the sample mean and sample standard deviation, respectively, given by (A9.10).

Under the above given assumptions, condition (A9.11) can be rewritten as follows:

$$P\left\{\Phi\!\left(\frac{U - \mu_0}{\sigma_0}\right) - \Phi\!\left(\frac{L - \mu_0}{\sigma_0}\right) \ge 1 - \alpha\right\} = \beta \qquad (A9.12)$$
where Φ is the distribution function of the normal distribution with zero mean and unit standard deviation:

$$\Phi(u) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{u} e^{-t^2/2}\, dt \qquad (A9.13)$$
The solution to the problem of constructing tolerance limits depends on the level of knowledge of the normal distribution, that is, on the level of knowledge of the mean value and the standard deviation. In the following parts, the accuracy, reliability, and dependability of the measuring system will be mathematically derived for a known mean value and standard deviation, for a known standard deviation and unknown mean value, for a known mean value and unknown standard deviation, and for both an unknown mean value and standard deviation.
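Before turning to the individual cases, the double-probability statement (A9.11) can be made concrete by simulation: over repeated samples, the interval μ̄_a ± k s_a should cover at least a proportion (1 − α) of the population in about a fraction β of the cases. A Monte Carlo sketch follows (assuming Python with NumPy/SciPy; the tolerance factor k ≈ 2.76 is an illustrative value roughly matching n = 20, 1 − α = 0.95, β ≈ 0.95):

```python
import numpy as np
from scipy import stats

# Monte Carlo illustration of the tolerance-limit statement (A9.11).
rng = np.random.default_rng(2)
n, alpha, k = 20, 0.05, 2.76       # illustrative sample size and tolerance factor

trials, covered = 5000, 0
for _ in range(trials):
    sample = rng.normal(0.0, 1.0, n)                    # N(0,1) population assumed
    lo = sample.mean() - k * sample.std(ddof=1)
    hi = sample.mean() + k * sample.std(ddof=1)
    coverage = stats.norm.cdf(hi) - stats.norm.cdf(lo)  # true population coverage
    covered += coverage >= 1.0 - alpha

print(f"estimated beta = {covered / trials:.3f}")       # should come out near 0.95
```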
A9.3 Known mean value and standard deviation

We can start with the equation [69]:

$$P\{P[\mu_0 - z_{(1-\alpha/2)}\,\sigma_0 \le \mu_{a,y} \le \mu_0 + z_{(1-\alpha/2)}\,\sigma_0] \ge (1 - \alpha)\} = 1 \qquad (A9.14)$$

where μ_{a,y} is the measured value, μ_0, σ_0 are the known mean value and standard deviation, and z_{(1−α/2)} is a percentile of the normal distribution (e.g., for α = 0.05 we can find in statistical tables z_{0.975} = 1.96). Based on (A9.14), we can conclude that the measuring system's accuracy δ = z_{(1−α/2)} σ_0 is guaranteed with the measuring system's reliability (1 − α). Because the mean value and standard deviation are known, the measuring system's dependability is equal to β = 1.
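In this fully known case, the accuracy follows directly from the normal percentile; a minimal sketch (Python with SciPy; α and σ_0 are illustrative values):

```python
from scipy import stats

# Accuracy in the fully known case, Eq. (A9.14): delta = z_(1-alpha/2) * sigma_0.
alpha, sigma0 = 0.05, 0.02                       # illustrative inputs
delta = stats.norm.ppf(1 - alpha / 2) * sigma0   # ~1.96 * sigma_0
print(f"delta = {delta:.4f}")                    # dependability beta = 1 here
```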
A9.4 Known standard deviation and unknown mean value

Now we expect that the mean value is estimated according to (A9.10). Then we can write the equation [69]:

$$P\{P[\bar{\mu}_a - k\,\sigma_0 \le \mu_{a,y} \le \bar{\mu}_a + k\,\sigma_0] \ge (1 - \alpha)\} = \beta \qquad (A9.15)$$

where σ_0² is the known variance and k is computed from the following equation:

$$\Phi\!\left(\frac{z_{(1+\beta)/2}}{\sqrt{n}} + k\right) - \Phi\!\left(\frac{z_{(1+\beta)/2}}{\sqrt{n}} - k\right) = 1 - \alpha \qquad (A9.16)$$

where the function Φ(u) was defined in (A9.13) and the sample mean μ̄_a is computed according to (A9.10). Based on Eq. (A9.16), we can say that for the predefined values of the measuring system's reliability (1 − α) and dependability β and the number of measurements n, the accuracy of the measuring system will be:

$$\delta = \left(z_{(1-\alpha/2)} + \frac{1}{\sqrt{n}}\, z_{(1+\beta)/2}\right) \sigma_0 \qquad (A9.17)$$
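Eq. (A9.16) has no closed-form solution for k, but a standard root finder handles it; the sketch below (Python with SciPy; n, α, β are illustrative assumptions) also evaluates the conservative closed form (A9.17) for comparison:

```python
import numpy as np
from scipy import stats, optimize

# Solve Eq. (A9.16) for the tolerance factor k (known sigma_0 case).
n, alpha, beta = 25, 0.05, 0.90                   # illustrative inputs
c = stats.norm.ppf((1 + beta) / 2) / np.sqrt(n)   # z_((1+beta)/2) / sqrt(n)

def eq_a916(k):
    return stats.norm.cdf(c + k) - stats.norm.cdf(c - k) - (1 - alpha)

k = optimize.brentq(eq_a916, 0.0, 10.0)                # root of (A9.16)
delta_over_sigma0 = stats.norm.ppf(1 - alpha / 2) + c  # Eq. (A9.17) with sigma_0 = 1
print(f"k (exact root)        = {k:.4f}")
print(f"delta/sigma_0 (A9.17) = {delta_over_sigma0:.4f}")
```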
A9.5 Known mean value and unknown standard deviation

For a known mean value and unknown standard deviation, we can write the equation:

$$P\left\{P\left[\mu_0 - z_{(1-\alpha/2)} \left(\frac{n}{\chi^2_{(1-\beta)}(n)}\right)^{1/2} s_a \le \mu_{a,y} \le \mu_0 + z_{(1-\alpha/2)} \left(\frac{n}{\chi^2_{(1-\beta)}(n)}\right)^{1/2} s_a\right] \ge (1 - \alpha)\right\} = \beta \qquad (A9.18)$$
where s_a is estimated according to (A9.10) and χ²_{(1−β)}(n) denotes the (1 − β) quantile of the chi-squared distribution with n degrees of freedom. Based on Eq. (A9.18), we can say that for predefined values of the measuring system's reliability (1 − α) and dependability β and the number of measurements n, the accuracy of the measuring system will be:

$$\delta = z_{(1-\alpha/2)} \left(\frac{n}{\chi^2_{(1-\beta)}(n)}\right)^{1/2} s_a \qquad (A9.19)$$
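A minimal numerical sketch of (A9.19) (Python with SciPy; the sample size, risk levels, and estimated s_a are illustrative assumptions):

```python
from scipy import stats

# Accuracy with known mean and estimated standard deviation, Eq. (A9.19).
n, alpha, beta, s_a = 25, 0.05, 0.90, 0.021   # illustrative inputs
chi2_low = stats.chi2.ppf(1 - beta, df=n)     # chi^2_(1-beta)(n) quantile
delta = stats.norm.ppf(1 - alpha / 2) * (n / chi2_low) ** 0.5 * s_a
print(f"delta = {delta:.4f}")                 # inflated versus the known-sigma case
```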
A9.6 Unknown mean value and standard deviation

This variant is the most important in many practical cases, but the exact solution is theoretically very difficult. However, many approximation forms exist that make practical computation feasible. We start with the task description

$$P\{P[\bar{\mu}_a - k\,s_a \le \mu_{a,y} \le \bar{\mu}_a + k\,s_a] \ge (1 - \alpha)\} = \beta \qquad (A9.20)$$

where the sample mean value μ̄_a and sample standard deviation s_a are estimated from n samples according to (A9.10). Howe [70] defines a very simple approximation form for k:

$$k \approx z_{(1-\alpha/2)} \left(\frac{n+1}{n}\right)^{1/2} \left(\frac{n-1}{\chi^2_{(1-\beta)}(n-1)}\right)^{1/2} \qquad (A9.21)$$
Bowker [81] defines:

$$k \approx z_{(1-\alpha/2)} \left(1 + \frac{z_\beta}{\sqrt{2n}} + \frac{5 z_\beta^2 + 10}{12 n}\right) \qquad (A9.22)$$
Ghosh [82] defines the next approximation form:

$$k \approx z_{(1-\alpha/2)} \left(\frac{n}{\chi^2_{(1-\beta)}(n-1)}\right)^{1/2} \qquad (A9.23)$$
If we take the approximation form for z_x for x > 0.5 [2] (the approximation error is not greater than 0.003):

$$z_x = u_x - \frac{2.30753 + 0.27061\, u_x}{1 + 0.99229\, u_x + 0.04481\, u_x^2}, \qquad u_x = \left[\ln (1 - x)^{-2}\right]^{1/2} \qquad (A9.24)$$
and for χ²_x(γ) [69] (for x ∈ ⟨0.01, 0.99⟩ and γ ≥ 20 the absolute error of the approximation is not greater than 0.001; the number of degrees of freedom is usually γ = n − 1):

$$\chi^2_x(\gamma) = \gamma + z_x \sqrt{2\gamma} + \frac{2}{3}\left(z_x^2 - 1\right) + \frac{z_x^3 - 7 z_x}{9\sqrt{2}}\, \gamma^{-1/2} - \frac{6 z_x^4 + 14 z_x^2 - 32}{405}\, \gamma^{-1} + \frac{9 z_x^5 + 256 z_x^3 - 433 z_x}{4860\sqrt{2}}\, \gamma^{-3/2} \qquad (A9.25)$$
or a much simpler approximation form from Ref. [69]:

$$\chi^2_x(\gamma) = \frac{1}{2}\left[z_x + (2\gamma - 1)^{1/2}\right]^2 \qquad (A9.26)$$

then the analytical estimate of the measuring system's accuracy δ, based on the mean value and standard deviation estimated from n-sample data with the predefined measuring system's reliability (1 − α) and dependability β, can be computed.
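The three approximation forms for k can be compared numerically; in the sketch below (Python with SciPy; n, α, β are illustrative assumptions) the Howe and Ghosh forms land close to the exact tabulated two-sided tolerance factor (about 2.75 for n = 20, 1 − α = 0.95, β = 0.95):

```python
import numpy as np
from scipy import stats

# Compare the Howe (A9.21), Bowker (A9.22), and Ghosh (A9.23) approximations.
n, alpha, beta = 20, 0.05, 0.95               # illustrative inputs
z_a = stats.norm.ppf(1 - alpha / 2)           # z_(1-alpha/2)
z_b = stats.norm.ppf(beta)                    # z_beta
chi2_n1 = stats.chi2.ppf(1 - beta, df=n - 1)  # chi^2_(1-beta)(n-1)

k_howe = z_a * np.sqrt((n + 1) / n) * np.sqrt((n - 1) / chi2_n1)
k_bowker = z_a * (1 + z_b / np.sqrt(2 * n) + (5 * z_b**2 + 10) / (12 * n))
k_ghosh = z_a * np.sqrt(n / chi2_n1)

print(f"Howe   k = {k_howe:.3f}")   # ~2.75 for these inputs
print(f"Bowker k = {k_bowker:.3f}")
print(f"Ghosh  k = {k_ghosh:.3f}")
```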
A10 “M from N” filtering

Suppose data from N sensors are available, where the probability of right error detection is denoted P_RD and the probability of noncorrect error detection P_FD. Because of the enormous safety and economic impact of a noncorrect error alert, the method of “M from N” filtering is presented. Let us have N sensors and, for simplicity, let us suppose the probabilities of correct detection P_RD and noncorrect detection P_FD to be the same on each sensor. If this assumption is not fulfilled, the method can be easily extended to a more general case. Hypothesis H_0 represents perfect system behavior (no system error, no sensor error) and hypothesis H_1 represents a state with a detected error (an error of the system or of the sensors). In the next equation, the probability of error detection on k of N sensors (N − k sensors do not detect errors) is given in case the system does not display any error (conditioned by hypothesis H_0):

$$P[k \mid H_0] = \binom{N}{k} P_{FD}^{\,k} \left(1 - P_{FD}\right)^{N-k} \qquad (A10.1)$$

In the same way, the probability of error detection by k of N sensors is given in case the system is in an error state (conditioned by hypothesis H_1):

$$P[k \mid H_1] = \binom{N}{k} P_{RD}^{\,k} \left(1 - P_{RD}\right)^{N-k} \qquad (A10.2)$$
The main idea of “M from N” filtering lies in the selection of the value M (threshold) defining the minimum number of sensors that must detect an error. If at least M sensors detect an error, then this error is taken as a real system error and the system starts sending error alert signals. The threshold M should be selected with respect to the following probabilities:

$$P_F = \sum_{k=M}^{N} \binom{N}{k} P_{FD}^{\,k} \left(1 - P_{FD}\right)^{N-k}, \qquad P_D = \sum_{k=M}^{N} \binom{N}{k} P_{RD}^{\,k} \left(1 - P_{RD}\right)^{N-k} \qquad (A10.3)$$

where P_F, P_D mean the probability of a false alert (an error is detected, but the system works without any errors) and the probability of right detection (the system error is correctly detected), respectively. The number of detectors N and the threshold M can be chosen based on the sensor parameters P_RD, P_FD and the required probabilities P_F, P_D. Methods of data fusion and comparison are the main tools for the estimation of the system performance parameters (accuracy, reliability, integrity, continuity, etc.) and can be used for the derivation of an exact definition of false alert and right detection probabilities. This method can be extended to quantum informatics, in other words to advanced quantum detection, taking into account all inner dependencies among quantum sensors.
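A short sketch of (A10.3) (Python with SciPy; N and the per-sensor probabilities are illustrative assumptions) sweeps the threshold M so that a suitable trade-off between false alerts and right detections can be read off:

```python
from scipy import stats

# "M from N" filtering, Eq. (A10.3): P(at least M of N sensors fire) under
# H0 (false alert) and H1 (right detection); all inputs are illustrative.
N, P_RD, P_FD = 5, 0.95, 0.05

for M in range(1, N + 1):
    P_F = stats.binom.sf(M - 1, N, P_FD)   # false-alert probability under H0
    P_D = stats.binom.sf(M - 1, N, P_RD)   # right-detection probability under H1
    print(f"M = {M}: P_F = {P_F:.5f}, P_D = {P_D:.5f}")
```

For these illustrative values, M = 2 already suppresses most false alerts (P_F ≈ 0.023) while keeping the right-detection probability essentially equal to one.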
Bibliography

References

[1] Y. Bar-Hillel, R. Carnap, Semantic information, Br. J. Philos. Sci. 4 (14) (1953) 147–157. Available from: https://doi.org/10.1093/bjps/IV.14.147.
[2] L.O. Chua, Memristor—the missing circuit element, IEEE Trans. Circuit Theory 18 (5) (1971) 507–519. Available from: https://doi.org/10.1109/TCT.1971.1083337.
[3] D.F. Lawden, The Mathematical Principles of Quantum Mechanics, Dover Publications, Inc., Mineola, New York, 1995. ISBN 0-486-44223-3.
[4] R. Feynman, R. Leighton, M. Sands, The Feynman Lectures on Physics, Addison Wesley Longman, Inc., USA, 1966.
[5] R. Feynman, QED: The Strange Theory of Light and Matter, Addison Wesley Longman, Inc., USA, 1966.
[6] S. Kauffman, Autonomous agents, in: J.D. Barrow, P.C.W. Davies, C.L. Harper Jr. (Eds.), Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity, Cambridge University Press, 2004. ISBN 9780521831130.
[7] A. Khrennikov, Linear representations of probabilistic transformations induced by context transitions, J. Phys. A Math. Gen. 34 (2001) 9965–9981. Available from: https://doi.org/10.1088/0305-4470/34/47/304.
[8] A. Khrennikov, Reconstruction of quantum theory on the basis of the formula of total probability, in: Proceedings of Conference Foundations of Probability and Physics-3, American Institute of Physics, Serial Conference Proceedings, vol. 750, 2005, pp. 187–219.
[9] P. Moos, V. Malinovský, Information Systems and Technologies, ČVUT, Praha, 2009.
[10] S.F. Novaes, Standard model: an introduction, Report IFT-P.010/2000, 2000, arXiv:hep-ph/0001283v1.
[11] V. Peterka, Bayesian approach to system identification, in: P. Eykhoff (Ed.), Trends and Progress in System Identification, Pergamon Press, Oxford, pp. 239–304. Available from: http://doi.org/10.1016/b978-0-08-025683-2.50013-2.
[12] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27 (1948) 379–623.
[13] T. Stonier, Information and the Internal Structure of the Universe, Springer-Verlag, London, 1990. Available from: http://doi.org/10.1007/978-1-4471-3265-3.
[14] M. Steiner, The Applicability of Mathematics as a Philosophical Problem, Harvard University Press, Cambridge, Massachusetts, 1998.
[15] M. Svítek, Wave probabilistic models, Neural Netw. World 17 (5) (2007) 469–481.
[16] M. Svítek, Quasi-non-ergodic probabilistic systems and wave probabilistic functions, Neural Netw. World 19 (3) (2009) 307–320.
[17] M. Svítek, Investigation to Heisenberg's uncertainty limit, Neural Netw. World 18 (6) (2008) 489–498.
[18] M. Svítek, Quantum system modelling, Int. J. Gen. Syst. 37 (5) (2008) 603–626. Available from: https://doi.org/10.1080/03081070701833039.
[19] M. Svítek, Wave probabilities and quantum entanglement, Neural Netw. World 5 (18) (2008) 401–406.
[20] M. Svítek, Applying wave probabilistic functions for dynamic system modelling, IEEE Trans. Syst. Man Cybern. C Appl. Rev. 41 (5) (2011) 674–681. Available from: https://doi.org/10.1109/TSMCC.2010.2093127.
[21] M. Svítek, Wave probabilistic functions for quantum cybernetics, IEEE Trans. Syst. Man Cybern. C Appl. Rev. 42 (2) (2012) 233–240.
[22] M. Svítek, Z. Votruba, P. Moos, Towards information circuits, Neural Netw. World 20 (2) (2010) 241–247.
[23] M. Svítek, Wave probabilistic information power, Neural Netw. World 21 (3) (2011) 269–276. Available from: https://doi.org/10.14311/nnw.2011.21.016.
[24] M. Svítek, Quantum subsystems connections, Neural Netw. World 23 (4) (2013) 287–298. Available from: https://doi.org/10.14311/NNW.2013.23.018.
[25] M. Svítek, Z. Votruba, T. Zelinka, V. Jirovský, M. Novák, Transport Telematics—Systemic View, first ed., WSEAS Press, New York, 2013, p. 305. ISBN 978-1-61804-144-9.
[26] M. Svítek, Quantum System Theory—Principles and Applications, VDM VSG, Saarbrucken, 2009, p. 140. ISBN 978-3-639-23402-2.
[27] M. Svítek, Dynamical Systems with Reduced Dimensionality, Edice monografií NNW č. 6, Neural Network World, Praha, 2006, 161 pages. ISBN 80-903298-6-1.
[28] M. Svítek, More Than Sum of Pieces—Systematic Approach to Knowledge, Academia, 2013, p. 225. ISBN 978-80-200-2286-8 (in Czech).
[29] M. Svítek, Conditional combinations of quantum systems, Neural Netw. World 21 (1) (2011) 67–73. Available from: https://doi.org/10.14311/NNW.2011.21.030.
[30] M. Svítek, Telematic approach into program of smart cities, in: Proceedings of the Seventh Euro American Conference on Telematics and Information Systems, EATIS 2014, Valparaíso, Chile, 2014. ISBN 978-1-4503-2435-9.
[31] M. Svítek, Quantum models for brain network, in: Proceedings of the Fourth International Conference on Mathematical Biology and Bioinformatics, Lomonosov Moscow State University, Moskva, 2012, pp. 170–171. ISBN 978-5-317-04214-1.
[32] M. Svítek, J. Novovičová, Performance parameters definition and processing, Neural Netw. World 15 (6) (2005) 567–577.
[33] T. Zelinka, M. Svítek, Multi-path communications access decision scheme, in: Proceedings of the 12th World Multi-Conference on Systemics, Cybernetics and Informatics, vol. III, IIIS—International Institute of Informatics and Systemics, Orlando, FL, 2008, pp. 233–237. ISBN 978-1-934272-33-6.
[34] N.N. Taleb, The Black Swan: The Impact of the Highly Improbable, Random House, New York, 2010. ISBN 978-1-4000-6351-2.
[35] V. Vedral, Introduction to Quantum Information Science, Oxford University Press, 2006. Available from: http://doi.org/10.1093/acprof:oso/9780199215706.001.0001.
[36] J. Vlček, Systémové inženýrství (Systems Engineering), ČVUT, Praha, 1999. ISBN 80-01-01905-5 (in Czech).
[37] J. Vlček, et al., Informační výkon (Information Power), ČVUT, Praha, 2002. ISBN 80-01-02505-5 (in Czech).
[38] Z. Votruba, M. Novák, Alliance approach to the modelling of interfaces in complex heterogenous objects, Neural Netw. World 20 (5) (2010) 609–619.
[39] L.A. Zadeh, Fuzzy sets, Inf. Control 8 (3) (1965) 338–353. Available from: https://doi.org/10.1016/S0019-9958(65)90241-X.
[40] R. Thom, Structural Stability and Morphogenesis: An Outline of a General Theory of Models, Addison-Wesley, Reading, MA, 1989. ISBN 0-201-09419-3.
[41] P. Moos, M. Svítek, Z. Votruba, M. Novák, Information model of resonance phenomena in brain neural networks, Neural Netw. World 3 (2018) 225–239. Available from: http://doi.org/10.14311/NNW.2018.28.014.
[42] P. Kovanic, M.B. Humber, The Economics of Information—Mathematical Gnostics for Data Analysis, 2013. <http://www.math-gnostics.eu/download/MG19-2015.pdf>.
[43] D.S. Bernstein, Ivory ghost, IEEE Control Syst. Mag. 27 (2007) 16–17.
[44] J. Braun, Topological analysis of networks with nullators and norators, Electron. Lett. 2 (1966) 427–428.
[45] S. Grossberg, Adaptive resonance theory: how a brain learns to consciously attend, learn, and recognize a changing world, Neural Netw. 37 (2012) 1–47.
[46] C.J.S. Clarke, The role of quantum physics in the theory of subjective consciousness, Mind Matter 5 (1) (2007) 45–81.
[47] D. Bohm, Quantum Theory, Prentice-Hall, Englewood Cliffs, New Jersey, 1951.
[48] D. Dürr, S. Teufel, Bohmian Mechanics, Springer, Berlin, 2009. ISBN 978-3-540-89343-1.
[49] J.F. Gold, Knocking on the Devil's Door—A Naive Introduction to Quantum Mechanics, Tachyon Publishing Company, 1995.
[50] T. Starek, M. Svítek, Estimation of socio-economic impacts using a combination of fuzzy-linguistic approximation and micro-simulation models, in: J. Mikulski (Ed.), Transport Systems Telematics, Monograph, Communications in Computer and Information Science, vol. 104, Springer, 2010, pp. 400–410. ISBN 978-3-642-16471-2.
[51] M.G.A. Mohamed, H.W. Kim, T.-W. Cho, Modeling of memristive and memcapacitive behaviors in metal-oxide junctions, Sci. World J. 2015 (2015), Article ID 910126. Available from: https://doi.org/10.1155/2015/910126.
[52] W. Marszalek, On the action parameter and one-period loops of oscillatory memristive circuits, Nonlinear Dyn. 82 (2015) 619–628. Available from: https://doi.org/10.1007/s11071-015-2182-2.
[53] S. Shi-Peng, S. Da-Shan, C. Yi-Sheng, S. Young, Realization of a flux-driven memtranstor at room temperature, Chin. Phys. B 25 (2) (2016) 027703.
[54] D.C. Hamill, Gyrator-capacitor modelling: a better way of understanding magnetic components, in: Proceedings of the Applied Power Electronics Conference and Exposition, APEC '94, vol. 1, 1994, pp. 326–332.
[55] M. Svítek, Towards to complex system theory, Neural Netw. World 15 (1) (2015) 5–33.
[56] G. Saxby, Practical Holography, Prentice-Hall, 1988.
[57] C.M. Vest, Holographic Interferometry, John Wiley and Sons, 1979.
[58] E.N. Leith, J. Upatnieks, Reconstructed wavefronts and communication theory, J. Opt. Soc. Am. 52 (10) (1962) 1123–1130.
[59] V. Kreinovich, et al. (Eds.), Beyond Traditional Probabilistic Methods in Economics, ECONVN 2019, Studies in Computational Intelligence, vol. 809, Springer Nature Switzerland AG, 2019.
[60] M. Svítek, Quantum multidimensional models of complex systems, Neural Netw. World 5 (2019) 363–371. Available from: https://doi.org/10.14311/NNW.2019.29.022.
[61] M. Svítek, Physics-information analogies, Neural Netw. World 6 (2018) 535–550. Available from: https://doi.org/10.14311/NNW.2018.28.030.
[62] M. Svítek, Wave composition rules in quantum system theory, Neural Netw. World 1 (2020). Available from: https://doi.org/10.14311/NNW.2020.30.004.
[63] M. D'Angelo, K. Yoon-ho, S.P. Kulik, Y. Shih, Identifying entanglement using quantum “ghost” interference and imaging, arXiv:quant-ph/0401007v2, 2004.
[64] O. Pribyl, P. Pribyl, M. Lom, M. Svitek, Modeling of smart cities based on ITS architecture, IEEE Intell. Transp. Syst. Mag. 1 (2019). Available from: https://doi.org/10.1109/MITS.2018.2876553.
[65] M. Postranecky, M. Svitek, E.Z. Carrillo, SynopCity Virtual HUB—a testbed for smart cities, IEEE Intell. Transp. Syst. Mag. 10 (2) (2018) 50–57. Available from: https://doi.org/10.1109/MITS.2018.2806642.
[66] V. Mařík, P. Kadera, G. Rzevski, A. Zoitl, G. Anderst-Kotsis, A.M. Tjoa, et al. (Eds.), Industrial Applications of Holonic and Multi-Agent Systems, HoloMAS 2019, Lecture Notes in Artificial Intelligence, vol. 11710, Springer AG, 2019. ISBN 978-3-030-27877-9 (Chapter: S. Kozhevnikov, P. Skobelev, O. Pribyl, M. Svítek, Development of resource-demand networks for Smart Cities 5.0, pp. 203–217). Available from: https://doi.org/10.1007/978-3-030-27878-6_16.
[67] T. Borangiu, D. Trentesaux, P. Leitão, A.G. Boggino, V. Botti (Eds.), Service oriented, holonic and multi-agent manufacturing systems for industry of the future, in: Proceedings of SOHOMA 2019, Studies in Computational Intelligence, vol. 853, Springer, Cham, 2020. ISBN 978-3-030-27476-4.
[68] Test of normality. <http://www.caspur.it/risorse/softappl/doc/sas_docs/qc/chap1/sect19.htm>.
[69] M. Jilek, Statistical Confidence Intervals, Teoreticka kniznice inzenyra, SNTL, Prague, 1988 (in Czech).
[70] W.G. Howe, Two-sided tolerance limits for normal populations, J. Am. Stat. Assoc. 64 (1969).
[71] E.B. Saff, Introduction to Pade approximants, Vanderbilt University. <www-sop.inria.fr/miaou/anao03/Padetalk.pdf>.
[72] M. Svítek, From quantum transfer functions to complex quantum circuits, Neural Netw. World 21 (6) (2011) 505–517. ISSN 1210-0552.
[73] G. Rzevski, P. Skobelev, Managing Complexity, WIT Press, Southampton, Boston, 2014. ISBN 978-1-84564-936-4.
[74] G. Rzevski, M. Svítek, S. Kozhevnikov, Smart City as a complex adaptive system, in: Smart City Symposium, Prague, 2020.
[75] M. Svítek, Quantum informatics and soft systems modeling, Neural Netw. World 2 (2020) 133–144. Available from: https://doi.org/10.14311/NNW.2020.30.010.
[76] M. Svítek, R. Dostál, S. Kozhevnikov, T. Janča, Smart City 5.0 Testbed in Prague, in: Proceedings of the IEEE Conference on Smart Cities Symposium, Prague, 2020.
[77] G. Weiss, Multiagent Systems, second ed., The MIT Press, London, 2013, xlviii + 867 p. ISBN 978-0-262-01889-0.
[78] W.H. Zurek (Ed.), Complexity, Entropy and the Physics of Information (Santa Fe Institute Studies in the Sciences of Complexity Proceedings), vol. 8, Addison-Wesley, Redwood City, CA, 1990.
[79] H. Atlan, Entre le Cristal et la Fumée, Essai sur l'Organisation du Vivant, Seuil, Paris, 1979.
[80] R.L. Amoroso (Ed.), Complementarity of Mind and Body: Realizing the Dream of Descartes, Einstein and Eccles, Nova Science, New York, 2010.
[81] A.H. Bowker, Computation of factors for tolerance limits on a normal distribution when sample is large, Ann. Math. Stat. 17 (1946).
[82] D.T. Ghosh, A note on computation of factor for tolerance limits for a normal distribution, Sankhya B 42 (1980).