SpringerBriefs in Information Security and Cryptography

Editor-in-Chief
Yang Xiang, Swinburne University of Technology, Melbourne, VIC, Australia

Series Editors
Liqun Chen, Department of Computer Science, University of Surrey, Guildford, UK
Kim-Kwang Raymond Choo, Department of Information Systems, The University of Texas at San Antonio, San Antonio, TX, USA
Sherman S. M. Chow, Chinese University of Hong Kong, Hong Kong, Hong Kong
Robert H. Deng, Singapore Management University, Singapore, Singapore
Dieter Gollmann, TU Hamburg-Harburg, Hamburg, Germany
Kuan-Ching Li, Department of Computer Science and Information Engineering, Providence University, Taichung, Taiwan
Javier Lopez, University of Malaga, Malaga, Spain
Kui Ren, University at Buffalo, Buffalo, NY, USA
Jianying Zhou, Singapore University of Technology and Design (SUTD), Singapore, Singapore
The series aims to develop and disseminate an understanding of innovations, paradigms, techniques, and technologies in the contexts of information and cybersecurity systems, as well as developments in cryptography and related studies. It publishes concise, thorough and cohesive overviews of state-of-the-art topics in these fields, as well as in-depth case studies. The series also provides a single point of coverage of advanced and timely, emerging topics and offers a forum for core concepts that may not have reached a level of maturity to warrant a comprehensive monograph or textbook. It addresses security, privacy, availability, and dependability issues, also welcoming emerging technologies such as artificial intelligence, cloud computing, cyber physical systems, and big data analytics related to cybersecurity research. Among some core research topics:

Fundamentals and theories
• Cryptography for cybersecurity
• Theories of cybersecurity
• Provable security

Cyber Systems and Secure Networks
• Cyber systems security
• Network security
• Security services
• Social networks security and privacy
• Cyber attacks and defense
• Data-driven cyber security
• Trusted computing and systems

Applications and others
• Hardware and device security
• Cyber application security
• Human and social aspects of cybersecurity
Daniela Mechkaroska · Aleksandra Popovska-Mitrovikj · Verica Bakeva
Cryptocoding Based on Quasigroups
Daniela Mechkaroska University of Information Science and Technology “St. Paul the Apostle” Ohrid, North Macedonia
Aleksandra Popovska-Mitrovikj Faculty of Computer Science and Engineering Saints Cyril and Methodius University Skopje, North Macedonia
Verica Bakeva Faculty of Computer Science and Engineering Saints Cyril and Methodius University Skopje, North Macedonia
ISSN 2731-9555 ISSN 2731-9563 (electronic) SpringerBriefs in Information Security and Cryptography ISBN 978-3-031-50124-1 ISBN 978-3-031-50125-8 (eBook) https://doi.org/10.1007/978-3-031-50125-8 © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.
Preface
Many studies show that quasigroups and quasigroup transformations can find applications in coding theory and cryptography. The application of quasigroups for cryptographic purposes is mainly due to the large number of quasigroup operations over a given finite set, so if those operations are used in defining the encryption and decryption functions, then the encrypted message is difficult for a third party to discover. Also, these operations can be used to define many other cryptographic primitives. On the other hand, investigations of properties of strings obtained by quasigroup transformations show that these transformations can be applied in coding theory for the design of error-detecting and error-correcting codes. In fact, quasigroup transformations are mappings on finite sequences over a finite alphabet and they manifest properties of discrete dynamical systems. Therefore, as a relatively new area, they represent a challenge for further research and application in these areas. The concept of cryptocoding arises from the need to obtain secure and accurate transmission. That is why it is necessary to constantly improve existing and develop new algorithms that will ensure accurate and safe data transmission. This led to the intensive development of coding theory and cryptography as scientific fields dealing with these problems. To ensure both efficient and secure data transmission, the concept of cryptocoding is increasingly being developed, in which the processes of coding and encryption are merged into one process. There are many designs in which these two scientific fields merge: ciphers in which codes are used in order to increase their security, and vice versa, codes in whose design encryption algorithms are implemented. In fact, cryptocodes provide correction of a certain number of errors that appear during transmission through a noisy channel and also data security, using only one algorithm. The main research in this area goes in the direction of defining new algorithms for error-detecting and error-correcting codes, random codes, stream ciphers, block ciphers, pseudo-random string generators, hash functions, etc. The initial idea for the application of quasigroups in random codes (Random Codes Based on Quasigroup—RCBQ) is given in (Gligoroski et al., 2007a) and (Gligoroski et al., 2007b). These codes are a combination of cryptographic algorithms and error-correction codes and depend on several parameters. If these codes are used to encode
messages, then the original data can only be retrieved if one knows exactly which parameters were used in the encoding process, even if the communication channel is noiseless. So, these random codes are cryptocodes. They are defined by using a cryptographic algorithm in the encoding/decoding process. These codes with only one algorithm make it possible not only to correct a certain number of errors that occur during transmission through a noisy channel, but at the same time ensure the confidentiality of data. Therefore, if the information being transmitted is of a secret nature, the proposed random codes are at a great advantage over the others. In this monograph, we mainly consider RCBQs as error-correcting codes and therefore will not analyze in detail their cryptographic properties here. Those cryptographic properties follow from results published in (Markovski, 1999, 2000, 2003), etc. Random codes based on quasigroups are based on the concept of a totally asynchronous stream cipher. The influence of parameters on the performance of these codes when the transmission is over a binary symmetric channel is investigated in (Popovska-Mitrovikj et al., 2009). From the results obtained in that research, it is concluded that the chosen quasigroup, the length of the initial key, and the choice of the redundancy pattern have a great influence on the performance of these codes. In (Mechkaroska et al., 2016), the authors examine the performance of these codes when transmitting through a Gaussian channel. Specifically, it is studied how the choice of the redundancy pattern, the key length, and the choice of the quasigroup affect the performance of these codes. The conclusions obtained are similar to those for transmission through a binary symmetric channel. From all the experiments done with RCBQ, it can be concluded that the decoding speed is one of the biggest problems of these codes. To improve the decoding speed, in the papers (Popovska-Mitrovikj et al., 2012, 2015), the authors define new decoding algorithms: Cut-Decoding Algorithm and 4-Sets-Cut-Decoding Algorithm. The decoding of these codes is a list decoding, hence the decoding speed depends on the size of the list (a smaller list means faster decoding). In all algorithms, the size of the list depends on Bmax (the assumed number of bit errors per block transmission). For smaller values of Bmax a smaller list is obtained, but the problem is that we can never know in advance the exact number of errors that occurred during the transmission of one block. If that number is larger than the predicted Bmax, the error will not be corrected, but if the predicted Bmax is too large, then we have too large lists and too slow decoding. To avoid this, especially for channels with lower noise, in (Popovska-Mitrovikj et al., 2017), Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms have been proposed, which provide decoding with smaller lists. Furthermore, in (Mechkaroska et al., 2019, Popovska-Mitrovikj et al., 2020), the performance of cryptocodes based on quasigroups for transmission through channels with burst errors has been studied and modifications (called Burst-Cut-Decoding, Burst-4-Sets-Cut-Decoding, Fast-Cut-Decoding, Fast-4-Sets-Cut-Decoding algorithms) of existing algorithms suitable for burst error correction have been proposed.
This monograph is organized as follows. Since the considered cryptocodes are based on quasigroups, in Chap. 1 we give a brief overview of quasigroups and their properties. In fact, string transformations based on quasigroups (called quasigroup string transformations) are used in the design of these codes. These quasigroup transformations are very useful for application in cryptography and coding theory, primarily for designing cryptographic primitives, error-detecting, and error-correcting codes. The reasons for this are the structure of quasigroups, their large number, and their properties. In Chap. 2, the history, basic concepts, and some of the encoding/decoding algorithms of Random Codes Based on Quasigroups (RCBQ) are explained. Also, we analyze the properties, disadvantages, and advantages of using these codes as error-correcting codes in a communication channel. In Chap. 3, we present experimental results for cryptocodes based on quasigroups for transmission through a binary symmetric channel (BSC). In the first section, we compare results for code (72,576) obtained using two different decoding algorithms (Cut-Decoding and 4-Sets-Cut-Decoding#3). At the end of this section, we give a conclusion (derived from many experiments made with these codes) about the different decoding algorithms, parameters, and methods for reducing decoding errors that give the best results for these codes. In the second section of this chapter, we consider the performances of these cryptocodes in the transmission of images through a binary symmetric channel. Experimental results for transmission through a Gaussian channel are given in Chap. 4. First, we briefly describe the Gaussian channel, and after the section with the results for the transmission of ordinary messages, we present some results for the transmission of images. At the end of this chapter, we consider a filter for enhancing the quality of images decoded with these cryptocodes. In Chap. 5, we consider coding/decoding algorithms for RCBQ called Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms. The goal of these algorithms is to increase the decoding speed and decrease the probability of bit-error. In this chapter, we present several experimental results obtained with these algorithms and analyze the results for bit-error and packet-error probabilities and decoding speed when messages are transmitted through Gaussian channels with different values of signal-to-noise ratio. Also, we investigate the performances of fast algorithms for the transmission of images and audio files. In the last Chap. 6 of this monograph, we consider a modification of existing cryptocodes suitable for transmission in burst channels. For the simulation of burst errors, we use the Gilbert-Elliott model. We consider two kinds of Gilbert-Elliott channels: in the first one, in each state, the channel is binary symmetric, and in the second one, the channel is Gaussian. Experimental results for bit-error and packet-error probabilities obtained for different channel and code parameters are presented and we make a comparison of the results obtained with these burst algorithms and previously considered algorithms. Also, we investigate the performances of these algorithms for the transmission of images through a burst channel. In all experiments,
for different values of bit-error probability (in BSC) and SNR (in Gaussian), the differences between transmitted and decoded images are considered. Further on, we consider an application of the filter for enhancing the quality of decoded images. With the considered filter, clearer images are obtained. At the end of this chapter, we investigate the adaptation of Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms for transmission through burst channels, called FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms. Also, we present and compare experimental results for images coded with both burst algorithms of these cryptocodes, with and without using the median filter defined in Chap. 4. This monograph is primarily intended for scholars and scientists who are doing research in the fields of coding theory, cryptography, and cryptocoding, but it can also be used for some more advanced undergraduate and graduate courses.

Skopje, Republic of North Macedonia
October 2023
Daniela Mechkaroska Aleksandra Popovska-Mitrovikj Verica Bakeva
References

Gligoroski, D., Markovski, S., & Kocarev, LJ. (2007a). Totally asynchronous stream ciphers + Redundancy = Cryptcoding. In S. Aissi, & H. R. Arabnia (Eds.), Proceedings of the International Conference on Security and Management, SAM (pp. 446–451). CSREA Press.
Gligoroski, D., Markovski, S., & Kocarev, LJ. (2007b). Error-correcting codes based on quasigroups. In Proceedings of the 16th International Conference on Computer Communications and Networks (pp. 165–172).
Markovski, S., Gligoroski, D., & Bakeva, V. (1999). Quasigroup string processing: Part 1. Contributions, Sec. Math. Tech. Sci., MANU, XX(1–2), 13–28.
Markovski, S., & Kusakatov, V. (2000). Quasigroup string processing: Part 2. Contributions, Sec. Math. Tech. Sci., MANU, XXI(1–2), 15–32.
Markovski, S. (14–16 April 2003). Quasigroup string processing and applications in cryptography. In Proceedings of the 1st MII 2003 Conference (pp. 278–290).
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2016). Cryptcodes based on Quasigroups in Gaussian channel. Quasigroups and Related Systems, 24(2), 249–268.
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2019). New cryptcodes for burst channels. In M. Ćirić, M. Droste, & J. É. Pin (Eds.), Algebraic Informatics, CAI 2019 (Vol. 11545, pp. 202–212). Lecture Notes in Computer Science, Springer.
Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2009). Performances of error-correcting codes based on quasigroups. In D. Davcev, & J. M. Gomez (Eds.), ICT-Innovations (Vol. 5, pp. 377–389). Springer.
Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2012). Increasing the decoding speed of random codes based on quasigroups. In S. Markovski, & M. Gusev (Eds.), ICT Innovations (pp. 93–102). Web proceedings, ISSN 1857-7288.
Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2015). 4-sets-cut-decoding algorithms for random codes based on quasigroups. International Journal of Electronics and Communications (AEU), Elsevier, 69(10), 1417–1428.
Popovska-Mitrovikj, A., Bakeva, V., & Mechkaroska, D. (2017). New decoding algorithm for cryptcodes based on Quasigroups for transmission through a low noise channel. In D. Trajanov, & V. Bakeva (Eds.), Communications in Computer and Information Science (CCIS) (Vol. 778, pp. 196–204). ICT Innovations. Springer.
Popovska-Mitrovikj, A., Bakeva, V., & Mechkaroska, D. (2020). Fast decoding with cryptcodes for burst errors. In V. Dimitrova, & I. Dimitrovski (Eds.), ICT Innovations 2020. Machine Learning and Applications. Communications in Computer and Information Science (Vol. 1316). Springer.
Contents
1 Quasigroups and Quasigroups String Transformation
  1.1 Quasigroups and Their Properties Useful in Cryptography and Coding Theory
  1.2 Quasigroups String Transformation
  1.3 Cryptographic Properties of Quasigroup Transformations
  1.4 Classification of Quasigroups by Fractality
  References
2 Cryptocodes Based on Quasigroups
  2.1 History of Random Codes Based on Quasigroups
  2.2 Description of Standard Algorithm for Random Codes Based on Quasigroup
    2.2.1 Totally Asynchronous Stream Cipher
    2.2.2 Coding Process with Standard Algorithm
    2.2.3 Decoding Process with Standard Algorithm
  2.3 Cut-Decoding Algorithm
    2.3.1 Encoding Process in Cut-Decoding Algorithm
    2.3.2 Decoding with Cut-Decoding Algorithm
  2.4 Methods for Reducing null-errors and more-candidate-errors
  2.5 4-Sets-Cut-Decoding Algorithm
    2.5.1 Coding with 4-Sets-Cut-Decoding Algorithm
    2.5.2 Decoding with 4-Sets-Cut-Decoding Algorithm
  2.6 Computing of PER and BER
  2.7 Cryptographic Properties of RCBQ
  References
3 Experimental Results for Cryptcodes Based on Quasigroups for Transmission Through a BSC
  3.1 Experimental Results for Transmission of Messages
  3.2 Experimental Results for Decoding Images Transmitted Through a BSC
  Reference
4 Experimental Results for Cryptocodes Based on Quasigroups for a Gaussian Channel
  4.1 Gaussian Channel
    4.1.1 Signal-to-Noise Ratio (SNR)
  4.2 Experimental Results for Transmission of Messages
  4.3 Experimental Results for Images
    4.3.1 Filter for Images Decoded by Cryptocodes Based on Quasigroups
  4.4 Experimental Results for Audio Files
  References
5 Fast Algorithms for Cryptocodes Based on Quasigroups
  5.1 Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding Algorithms
  5.2 Experimental Results for Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding Algorithms
  5.3 Experimental Results for Images
  5.4 Experimental Results for Audio Files
    5.4.1 Filter for Enhancing the Quality of Audio Decoded by Cryptocodes Based on Quasigroups
  References
6 Cryptocodes Based on Quasigroups for Burst Channels
  6.1 Gilbert-Elliott Burst Model
  6.2 New Cryptocodes for Burst Channels
  6.3 Experimental Results for Gilbert-Elliott Model
    6.3.1 Experiments for Gilbert-Elliott with BSC Channels
    6.3.2 Experiments for Gilbert-Elliott with Gaussian Channels
    6.3.3 Experimental Results for Transmission of Images Through Burst Channels
  6.4 Fast Decoding with Cryptocodes for Burst Errors
    6.4.1 Experimental Results
    6.4.2 Experimental Results for Transmission of Images Coded with Fast-Burst Algorithms
  References
Index
Chapter 1
Quasigroups and Quasigroups String Transformation
Abstract The cryptocodes considered in this monograph are based on quasigroups. Therefore, in this chapter, we give a brief overview of quasigroups and their properties. More details for quasigroups and their properties are given in Belousov (1972), Denes and Keedwell (1974) and Laywine and Mullen (1998). Using quasigroups, some string transformations have been defined in Markovski et al. (1997, 1999, 2001), Markovski (2003), Krapež (2010) and Dimitrova et al. (2012). They are called quasigroup string transformations. These quasigroup transformations are very useful for application in cryptography and coding theory, primarily for designing cryptographic primitives, error-detecting and error-correcting codes. The reasons for this are: the structure of quasigroups, their large number and their properties. Keywords Quasigroup · Quasigroup string transformation · Cryptography · Coding theory · Fractality
1.1 Quasigroups and Their Properties Useful in Cryptography and Coding Theory

A quasigroup (Q, ∗) is a groupoid, i.e., a set Q with a binary operation ∗ : Q² → Q, satisfying the law:

(∀u, v ∈ Q)(∃!x, y ∈ Q) (x ∗ u = v & u ∗ y = v)   (1.1)

In fact, (1.1) says that a groupoid (Q, ∗) is a quasigroup if and only if the equations x ∗ u = v and u ∗ y = v have unique solutions x and y for each given u, v ∈ Q. Further on, we will assume that the set Q is a finite set. Let the quasigroup (Q, ∗) be given. It has been noted that every quasigroup (Q, ∗) has a set of five quasigroups, called parastrophes, denoted by /, \, ·, //, \\, which are defined in Table 1.1.
Table 1.1 Parastrophes of the quasigroup operation ∗

x \ y = z   ⇐⇒   x ∗ z = y
x / y = z   ⇐⇒   z ∗ y = x
x · y = z   ⇐⇒   y ∗ x = z
x // y = z  ⇐⇒   y / x = z   ⇐⇒   z ∗ x = y
x \\ y = z  ⇐⇒   y \ x = z   ⇐⇒   y ∗ z = x
In the design of the considered codes we use only the first parastrophe given in Table 1.1, denoted by "\", which is defined as follows:

x ∗ y = z   ⇐⇒   y = x \ z.

The algebra (Q, ∗, \) satisfies the identities

x \ (x ∗ y) = y,   x ∗ (x \ y) = y   (1.2)
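To make these definitions concrete, here is a small illustrative sketch in Python (not part of the original monograph). The order-4 Cayley table below is an arbitrary example chosen only for illustration; the sketch checks the Latin-square (quasigroup) property, builds the left-division parastrophe "\", and verifies the identities (1.2).

```python
# A minimal sketch: an order-4 quasigroup given by a Latin square, its
# left-division parastrophe "\", and a check of identities (1.2).

# Cayley table of an (arbitrarily chosen) quasigroup (Q, *) with Q = {0, 1, 2, 3}.
MUL = [
    [1, 0, 2, 3],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
    [0, 1, 3, 2],
]

def is_quasigroup(table):
    """Check that every row and every column is a permutation of Q (Latin square)."""
    n = len(table)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in table)
    cols_ok = all({table[r][c] for r in range(n)} == symbols for c in range(n))
    return rows_ok and cols_ok

def left_division(table):
    """Build the parastrophe \\ : x \\ z = y  <=>  x * y = z."""
    n = len(table)
    ldiv = [[None] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            z = table[x][y]
            ldiv[x][z] = y
    return ldiv

LDIV = left_division(MUL)

assert is_quasigroup(MUL)
# Identities (1.2): x \ (x * y) = y  and  x * (x \ y) = y.
assert all(LDIV[x][MUL[x][y]] == y and MUL[x][LDIV[x][y]] == y
           for x in range(4) for y in range(4))
print("Quasigroup and parastrophe identities (1.2) verified.")
```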
and (Q, \) is also a quasigroup. Further on, we give some properties of quasigroups.

Definition 1.1 A quasigroup (Q, ∗) is commutative if it satisfies the identity

x ∗ y = y ∗ x.   (1.3)

Definition 1.2 A quasigroup (Q, ∗) is associative if it satisfies the identity

(x ∗ y) ∗ z = x ∗ (y ∗ z)   (1.4)

Definition 1.3 A quasigroup (Q, ∗) is a left loop if it has a left unit element e ∈ Q such that

(∀ x ∈ Q) (e ∗ x = x)   (1.5)

Definition 1.4 A quasigroup (Q, ∗) is a right loop if it has a right unit e ∈ Q such that

(∀ x ∈ Q) (x ∗ e = x)   (1.6)

Definition 1.5 A quasigroup (Q, ∗) is a loop if it has a unit e ∈ Q such that

(∀ x ∈ Q) (x ∗ e = x = e ∗ x)   (1.7)

Let us note that (Q, ∗) is a loop if and only if it is a left loop and a right loop.

Definition 1.6 A quasigroup (Q, ∗) is idempotent if it satisfies the identity

x ∗ x = x   (1.8)
Table 1.2 Number of quasigroups of order n ≤ 11

 n   Qn (number of quasigroups of order n)
 1   1
 2   2
 3   12
 4   576
 5   161280
 6   812851200
 7   61479419904000
 8   108776032459082956800
 9   5524751496156892842531225600
10   9982437658213039871725064756920320000
11   776966836171770144107444346734230682311065600000
Definition 1.7 A quasigroup (Q, ∗) of order n is shapeless if and only if it is non-idempotent, non-commutative, non-associative, it has neither a left nor a right unit, it does not contain proper sub-quasigroups, and there is no k < 2n for which the following identities are satisfied:

x ∗ (x ∗ ⋯ ∗ (x ∗ y)) = y,   y = ((y ∗ x) ∗ ⋯ ∗ x) ∗ x,   (1.9)

where x appears k times in each identity.

The condition k < 2n for the identities (1.9) means that any left and right translation of the quasigroup (Q, ∗) should have order k ≥ 2n + 1. For cryptographic purposes, it is preferable to choose a shapeless quasigroup (Gligoroski et al., 2007). One of the reasons for the great application of quasigroups in cryptography is the large number of different quasigroups. Namely, if a quasigroup is a key in a cryptographic primitive, then intruders should be unable to find the key in real time. In Table 1.2, we give the number of different quasigroups of order n = 1, 2, ..., 11. As we can see, the number of quasigroups increases enormously as the order n increases.
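As a rough illustration of how the conditions of Definition 1.7 can be checked for a quasigroup given by its Cayley table, consider the following Python sketch (not from the monograph). The proper sub-quasigroup test is omitted for brevity, and the helper names are our own.

```python
# A rough sketch of checking most of the shapelessness conditions of
# Definition 1.7 for a quasigroup given by its Cayley table.
# The proper sub-quasigroup test is omitted here for brevity.

def shapeless_checks(table):
    n = len(table)
    Q = range(n)
    mul = lambda x, y: table[x][y]

    non_idempotent = any(mul(x, x) != x for x in Q)
    non_commutative = any(mul(x, y) != mul(y, x) for x in Q for y in Q)
    non_associative = any(mul(mul(x, y), z) != mul(x, mul(y, z))
                          for x in Q for y in Q for z in Q)
    no_left_unit = not any(all(mul(e, x) == x for x in Q) for e in Q)
    no_right_unit = not any(all(mul(x, e) == x for x in Q) for e in Q)

    # Identities (1.9): there must be no k < 2n such that
    # x*(x*(...*(x*y))) = y  or  ((y*x)*...)*x = y  holds for all x, y.
    def left_id_holds(k):
        return all(_iter_left(mul, x, y, k) == y for x in Q for y in Q)

    def right_id_holds(k):
        return all(_iter_right(mul, x, y, k) == y for x in Q for y in Q)

    no_short_identity = not any(left_id_holds(k) or right_id_holds(k)
                                for k in range(1, 2 * n))
    return all([non_idempotent, non_commutative, non_associative,
                no_left_unit, no_right_unit, no_short_identity])

def _iter_left(mul, x, y, k):
    # x * (x * ( ... * (x * y))) with x applied k times
    for _ in range(k):
        y = mul(x, y)
    return y

def _iter_right(mul, x, y, k):
    # ((y * x) * ... ) * x with x applied k times
    for _ in range(k):
        y = mul(y, x)
    return y
```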
1.2 Quasigroups String Transformation

Here, we will describe the quasigroup transformations that we will use in the following chapters for designing a class of error-correcting codes. For a given alphabet Q, the set of all finite strings of elements of Q will be denoted by Q⁺, i.e., Q⁺ = {a₁a₂...aₙ | aᵢ ∈ Q, n ∈ ℕ}, or Q⁺ = Q ∪ Q² ∪ Q³ ∪ ... . Furthermore, we will consider quasigroups whose order is a power of 2, i.e., 2ᵏ for some k.
In this way, any element of Q can be presented as a k-tuple of bits. For example, if k = 4, the elements of Q are nibbles. Using the quasigroup operation ∗, two quasigroup string transformations are defined in Markovski et al. (1999). They are called e-transformation and d-transformation, and they are mappings from Q⁺ to Q⁺ (|Q| ≥ 2). Let l ∈ Q be a fixed element, called a leader, and aᵢ ∈ Q, i = 1, 2, ..., n. The e- and d-transformations are defined as follows:

e_{l,∗}(a₁...aₙ) = b₁...bₙ  ⇔  b₁ = l ∗ a₁,  bᵢ₊₁ = bᵢ ∗ aᵢ₊₁,  i = 1, 2, ..., n − 1   (1.10)

d_{l,\}(a₁...aₙ) = c₁...cₙ  ⇔  c₁ = l \ a₁,  cᵢ₊₁ = aᵢ \ aᵢ₊₁,  i = 1, 2, ..., n − 1   (1.11)
The graphical presentations of e_{l,∗} and d_{l,\} are given in Figs. 1.1 and 1.2. As a consequence of the identities (1.2), the following property holds.

Proposition 1.1 (Markovski et al., 1999) The mappings e and d are bijections (permutations) and, for each string α ∈ Q⁺,

d_{l,\}(e_{l,∗}(α)) = α = e_{l,∗}(d_{l,\}(α)),

i.e., d_{l,\} = e_{l,∗}⁻¹ is the inverse bijection of e_{l,∗}.

Fig. 1.1 Graphical representation of the function e_{l,∗}

Fig. 1.2 Graphical representation of the function d_{l,\}
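A small illustrative sketch of (1.10), (1.11) and Proposition 1.1 is given below (again not the book's code). It uses the same order-4 quasigroup as in the earlier sketch; the message and leader are arbitrary example values.

```python
# e- and d-transformations (1.10)-(1.11) over an order-4 quasigroup and its
# left-division parastrophe, with a check of Proposition 1.1.

MUL = [
    [1, 0, 2, 3],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
    [0, 1, 3, 2],
]
LDIV = [[row.index(z) for z in range(4)] for row in MUL]  # x \ z = y  <=>  x * y = z

def e_transform(leader, message):
    """e_{l,*}: b1 = l * a1, b_{i+1} = b_i * a_{i+1}."""
    out, prev = [], leader
    for a in message:
        prev = MUL[prev][a]
        out.append(prev)
    return out

def d_transform(leader, message):
    """d_{l,\\}: c1 = l \\ a1, c_{i+1} = a_i \\ a_{i+1}."""
    out, prev = [], leader
    for a in message:
        out.append(LDIV[prev][a])
        prev = a
    return out

msg = [0, 3, 1, 2, 2, 0, 1, 3]
leader = 2
enc = e_transform(leader, msg)
assert d_transform(leader, enc) == msg                        # d(e(α)) = α
assert e_transform(leader, d_transform(leader, msg)) == msg   # e(d(α)) = α
print("Proposition 1.1 verified for this example:", enc)
```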
Let ∗₁, ∗₂, ..., ∗ₙ be quasigroup operations defined over Q and let \₁, \₂, ..., \ₙ be the corresponding parastrophe operations over Q. Let the transformations e_{l₁,∗₁}, e_{l₂,∗₂}, ..., e_{lₙ,∗ₙ} be defined in the same way as in (1.10), and d_{l₁,\₁}, d_{l₂,\₂}, ..., d_{lₙ,\ₙ} as in (1.11), by selecting fixed leaders l₁, l₂, ..., lₙ ∈ Q. If we form the compositions

E = e_{lₙ,∗ₙ} ◦ e_{lₙ₋₁,∗ₙ₋₁} ◦ ... ◦ e_{l₁,∗₁},
D = d_{l₁,\₁} ◦ d_{l₂,\₂} ◦ ... ◦ d_{lₙ,\ₙ},

then, as a consequence of Proposition 1.1, E and D are bijections that are mutually inverse.
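The compositions E and D can be illustrated as follows. This sketch assumes the e_transform and d_transform helpers (and the order-4 alphabet) from the previous sketch are in scope, and, for simplicity, uses the same quasigroup with several different leaders; the leader values are arbitrary.

```python
# The compositions E = e_{ln} ∘ ... ∘ e_{l1} and D = d_{l1} ∘ ... ∘ d_{ln},
# illustrating that D(E(α)) = α.

leaders = [1, 3, 0, 2]          # l1, l2, ..., ln (arbitrary choices for the example)

def E(message):
    # Apply e_{l1} first, then e_{l2}, ..., finally e_{ln}
    for l in leaders:
        message = e_transform(l, message)
    return message

def D(message):
    # Undo the e-transformations in reverse order: d_{ln} first, d_{l1} last
    for l in reversed(leaders):
        message = d_transform(l, message)
    return message

msg = [3, 1, 0, 2, 1, 1, 0, 3]
assert D(E(msg)) == msg
print("E/D composition round-trip verified.")
```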
1.3 Cryptographic Properties of Quasigroup Transformations

Here we will consider some cryptographic properties of the quasigroup string (E- and D-) transformations. The E-transformation can be used for designing an encryption algorithm, and the D-transformation for the corresponding decryption algorithm. Let M be an original message and let C = E(M) be the corresponding encrypted message. Discovering the message M from the message C should be impossible without knowing the quasigroups used in the transformation E. The encrypted message should be resistant to any kind of attack. In this case, a brute-force attack for finding these quasigroups is also practically impossible due to the large number of quasigroups (especially for large order n). Application of the E-transformation in encryption algorithms also prevents statistical attacks. Namely, this transformation provides a uniform distribution of the symbols (letters, pairs, triples and so on). Therefore, it is impossible, using statistical methods (like the frequency of letters in a given language), to find the original message M from the encrypted message C. This property of the quasigroup E-transformation is given in the following theorem proved in Markovski et al. (1999).

Let (Q, ∗) be a given finite quasigroup and let p = (p₁, p₂, ..., p_{|Q|}) be the probability distribution of the symbols in Q, such that pᵢ > 0 for each i and Σᵢ pᵢ = 1. Then the following theorem holds.

Theorem 1.1 (Markovski et al., 1999) Consider a random string M = a₁a₂...aₙ (aᵢ ∈ Q) drawn i.i.d. according to the probability distribution p. Let C be obtained after k applications of an e-transformation on M. If n is a large enough integer, then the distribution of substrings of C of length t is uniform for each 1 ≤ t ≤ k. (We note that for t > k the distribution of substrings of C of length t is not uniform.)
Another important cryptographic property of the encrypted message C is its period. In Markovski et al. (2005), the authors proved that the period of quasigroup processed strings grows at least linearly, and the increase of the period depends on the chosen quasigroup. More about the period of quasigroup processed strings is given in Dimitrova and Markovski (2003).
1.4 Classification of Quasigroups by Fractality

The results of existing investigations tell us that the good cryptographic properties of quasigroup processed strings depend on the chosen quasigroups. Therefore, classifications of finite quasigroups are very important for successful application of quasigroups in cryptography and coding theory. This is a difficult problem, since the number of quasigroups (even of small order) is very large. A classification of quasigroups by graphical presentation of quasigroup processed strings is given in Dimitrova and Markovski (2007). In that paper, the authors classified the set of all quasigroups of order 4 into 2 disjoint classes: the class of so-called fractal quasigroups (if the graphical presentation of quasigroup processed strings has structure), and the class of so-called non-fractal quasigroups (if the graphical presentation of quasigroup processed strings has no structure). The number of fractal quasigroups of order 4 is 192 and the number of non-fractal quasigroups is 384. In Fig. 1.3 we present an example of an image pattern of a fractal (a) and a non-fractal (b) quasigroup. The class of fractal quasigroups is not recommended for designing cryptographic primitives. In order to obtain RCBQs with good cryptographic properties, the quasigroup used in our experiments is chosen to satisfy the above properties. This means that the chosen quasigroup is a shapeless and non-fractal quasigroup.
Fig. 1.3 Fractal and non-fractal quasigroup
References
Belousov, V. D. (1972). n-arnie Kvazigruppi (n-ary Quasigroups). Kisiniev: Stiinca.
Denes, J., & Keedwell, A. D. (1974). Latin squares and their applications. The English Universities Press Ltd.
Dimitrova, V., & Markovski, S. (2007). Classification of quasigroups by image patterns. In Proceedings of the Fifth International Conference for Informatics and Information Technology, Macedonia (pp. 152–160).
Dimitrova, V., & Markovski, J. (2003). On quasigroup pseudo random sequence generators. In Proceedings of the 1st Balkan Conference in Informatics, Thessaloniki, Greece (pp. 393–401).
Dimitrova, V., Bakeva, V., Popovska-Mitrovikj, A., & Krapež, A. (2012). Cryptographic properties of parastrophic quasigroup transformation. In S. Markovski, & M. Gusev (Eds.), ICT-Innovations 2012 (pp. 235–243). Springer.
Gligoroski, D., Markovski, S., & Kocarev, L. (2007). Totally asynchronous stream ciphers + Redundancy = Cryptcoding. In S. Aissi, & H. R. Arabnia (Eds.), Proceedings of the International Conference on Security and Management, SAM 2007 (pp. 446–451). Las Vegas: CSREA Press.
Krapež, A. (2010). An application of quasigroups in cryptology. Mathematics Macedonia, 8, 47–52.
Laywine, C. F., & Mullen, G. L. (1998). Discrete mathematics using Latin squares. Wiley.
Markovski, S. (2003). Quasigroup string processing and applications in cryptography. In Proceedings of the 1st MII 2003 Conference, Thessaloniki, April 14–16 (pp. 278–290).
Markovski, S., Gligoroski, D., & Andova, S. (1997). Using quasigroups for one-one secure encoding. In Proceedings of the VIII Conference on Logic in Computer Science "LIRA '97", Novi Sad (pp. 157–162).
Markovski, S., Gligoroski, D., & Bakeva, V. (1999). Quasigroup string processing: Part 1. Contributions, Sec. Math. Tech. Sci., MANU, XX(1–2), 13–28.
Markovski, S., Gligoroski, D., & Bakeva, V. (2001). Quasigroup and hash function. In Discrete Mathematics and Applications: Proceedings of the Sixth International Conference (pp. 43–50). Blagoevgrad, Bulgaria: South-West University.
Markovski, S., Gligoroski, D., & Kocarev, L. (2005). Unbiased random sequences from quasigroup string transformations. In Proceedings of Fast Software Encryption, LNCS 3557 (pp. 163–180). Springer.
Chapter 2
Cryptocodes Based on Quasigroups
Abstract In this chapter, some of the encoding/decoding algorithms of Random Codes Based on Quasigroups (RCBQ) are explained. RCBQs are cryptocodes, i.e., they combine encryption and encoding of messages in one algorithm. With a single algorithm, these codes not only correct a certain number of errors that occur when transmitting through a noisy channel, but also provide information security. The recipient of the information can access the original data only if he knows exactly which parameters were used in the coding process, even if the communication channel is without noise.

Keywords Random codes based on quasigroups · Cryptocodes · Totally asynchronous stream cipher · Bit-error probability · Packet-error probability
2.1 History of Random Codes Based on Quasigroups

The initial idea for applying quasigroups in random codes is given in Gligoroski et al. (2007a, b), where Random Codes Based on Quasigroups are proposed. RCBQs are cryptocodes, i.e., they are defined using a cryptographic algorithm during the encoding/decoding process. These codes correct errors by using encryption/decryption algorithms during the encoding and decoding process. The algorithms for encoding and decoding given in Gligoroski et al. (2007a, b) will be denoted as Standard Algorithms for RCBQ. They include several parameters, and their performances depend on the chosen parameters. The influence of the parameters on the performances of these codes has been studied in Popovska-Mitrovikj et al. (2009). In Popovska-Mitrovikj et al. (2011), the authors compared the performances of RCBQs with the performances of Reed-Muller (RMC) and Reed-Solomon codes (RSC). From the obtained results, the authors concluded that the RMC and the RSC have better decoding performances in a binary symmetric channel with bit-error probability p < 0.05. For larger bit-error probabilities, the RCBQs outperform them significantly. Nevertheless, the time efficiency of the RMC and the RSC is much higher than that of RCBQs. So, the speed of decoding of RCBQ is its disadvantage and it was a challenge for further
improvements. In order to improve the performances of these codes, the authors in Popovska-Mitrovikj et al. (2012a, b, 2013) defined new algorithms for encoding/decoding, called Cut-Decoding algorithm and 4-Sets-Cut-Decoding algorithm. In these papers, the performances of Random Codes Based on Quasigroups for data transmission through a binary-symmetric channel are considered. Similar investigations about the influence of the code parameters and performances of RCBQs for transmission through a Gaussian channel are given in Mechkaroska et al. (2016) and Bakeva et al. (2019). New algorithms that improve the decoding speed of RCBQs, especially for transmission through low-noise channels, are proposed in Popovska-Mitrovikj et al. (2017), and their performances for transmission of audio files are considered in Mechkaroska et al. (2018). All these algorithms do not give good results when the transmission is through a burst channel. Therefore, in Mechkaroska (2019) the authors define modifications of Cut-Decoding and 4-Sets-Cut-Decoding, called Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding, for obtaining efficient algorithms for using RCBQs in burst channels. Fast versions of these algorithms are defined in Popovska-Mitrovikj et al. (2010). Performances of Burst algorithms for transmission of images are analyzed in Mechkaroska et al. (2019) and Popovska-Mitrovikj et al. (2023).
2.2 Description of Standard Algorithm for Random Codes Based on Quasigroup In this section, we will present Standard algorithm for Random Codes Based on Quasigroups proposed by Gligoroski et al. (2006, 2007a, b).
2.2.1 Totally Asynchronous Stream Cipher

Random Codes Based on Quasigroups are designed using a class of totally asynchronous stream cipher (TASC), whose concept is defined in Gligoroski et al. (2007a) as follows:

Definition 2.1 A totally asynchronous stream cipher is one in which the keystream is generated as a function of the intermediate key and all previous plaintext letters.

The encryption function of a totally asynchronous stream cipher can be described by the equations:

k⁽ⁱ⁺¹⁾ = f(k⁽ⁱ⁾, mᵢ),   cᵢ = h(k⁽ⁱ⁾, mᵢ)

where k⁽⁰⁾ is the initial secret state of the key, k⁽ⁱ⁾ are intermediate keys, f is the key next-state function, and h is the output function. The decryption function of a totally asynchronous stream cipher can be described by the equations:
k⁽ⁱ⁺¹⁾ = f′(k⁽ⁱ⁾, cᵢ),   mᵢ = h′(k⁽ⁱ⁾, cᵢ).
From the definition, it is clear that a totally asynchronous stream cipher gives a ciphertext symbol cᵢ which depends on all previous plaintext letters m₀, m₁, ..., mᵢ. The authors of the Standard algorithm of RCBQ give one possible implementation of TASC by using quasigroup string transformations, called EdonZ.
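To make Definition 2.1 concrete, the following toy Python sketch shows one very small instance of a totally asynchronous stream cipher built from a quasigroup operation. It is not the EdonZ algorithm of Fig. 2.1; the key here is a single quasigroup element, and the particular choices of f and h are assumptions made only for this illustration.

```python
# A toy illustration of Definition 2.1: the next key is derived from the current key
# and the current plaintext symbol. Here f(k, m) = h(k, m) = k * m, so the decryptor
# can recompute the key chain from the ciphertext alone.

MUL = [
    [1, 0, 2, 3],
    [2, 3, 0, 1],
    [3, 2, 1, 0],
    [0, 1, 3, 2],
]
LDIV = [[row.index(z) for z in range(4)] for row in MUL]   # x \ z = y <=> x * y = z

def tasc_encrypt(k0, plaintext):
    k, out = k0, []
    for m in plaintext:
        c = MUL[k][m]        # c_i = h(k^(i), m_i) = k * m
        out.append(c)
        k = c                # k^(i+1) = f(k^(i), m_i) = c_i
    return out

def tasc_decrypt(k0, ciphertext):
    k, out = k0, []
    for c in ciphertext:
        out.append(LDIV[k][c])   # m_i = h'(k^(i), c_i) = k \ c
        k = c                    # k^(i+1) = f'(k^(i), c_i) = c_i
    return out

msg = [0, 1, 2, 3, 3, 2, 1, 0]
assert tasc_decrypt(2, tasc_encrypt(2, msg)) == msg
```

Note that each ciphertext symbol depends on all previous plaintext symbols through the key chain, which is exactly the totally asynchronous property stated in Definition 2.1.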
2.2.2 Coding Process with Standard Algorithm Let . Q be a set of all .a-bit symbols, i.e., . Q has .2a letters, (. Q, ∗) is a given quasigroup, and (. Q, \) is its parastrophe. Before encoding with Standard algorithm for RCBQ, the sequence of bits obtained from the information source is divided into blocks with a length of . Nblock bits and let . M be one of these blocks. We will assume that . M = M1 M2 . . . Ml , where . Mi ∈ Q, i.e., . Mi are symbols of .a-bits. Hence, it is clear that . Nblock = la. Then the coding process is performed in the following steps: • The message . M received from the source is expanded by adding redundant information. In this way, each message with a length . Nblock is mapped to a message with length . N > Nblock . This can be done in many different ways, but usually only zeros that are appropriately placed in the message are added, according to certain patterns and we get the expanded message: .
L = L⁽¹⁾L⁽²⁾ ... L⁽ˢ⁾ = L₁L₂ ... Lₘ,
where . L i ∈ Q, and . L (i) are sub-blocks of .r symbols. So, . N = ma and .m = r s. In this way, we obtain .(Nblock , N ) code with a rate . R = Nblock /N . • The extended message is encoded using the encrypting algorithm from TASC, which is given in Fig. 2.1. Notice that this encryption algorithm is based on quasigroup transformations . E defined in Chap. 1. So, the cryptographic properties of RCBQ follow directly from Theorem 1.1. • At the end of the coding algorithm, the codeword C = C1 C2 . . . Cm ,
.
is obtained, where .Ci ∈ Q. The coding process is schematically presented in Fig. 2.2.
Fig. 2.1 Algorithms for encryption and decryption
Fig. 2.2 Coding with Standard algorithm
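The redundancy step of the coding process can be sketched as follows (a simplified illustration, not the book's implementation). The zero-insertion pattern below is an arbitrary example, not one of the patterns used in the monograph.

```python
# Sketch of the redundancy step: message symbols are interleaved with zero symbols
# according to a pattern, giving an (Nblock, N) code of rate R = Nblock / N.

def add_redundancy(message, pattern):
    """pattern is a string over {'m', '0'}: 'm' takes the next message symbol,
    '0' inserts a redundant zero symbol."""
    it = iter(message)
    return [next(it) if p == 'm' else 0 for p in pattern]

def remove_redundancy(expanded, pattern):
    return [sym for sym, p in zip(expanded, pattern) if p == 'm']

# Example with 4-bit symbols (a = 4): 9 information symbols -> 18 symbols,
# i.e., Nblock = 36 bits, N = 72 bits, rate R = 1/2.
pattern = "m0m0m0m0m0m0m0m0m0"
msg = [7, 1, 15, 3, 0, 9, 12, 4, 2]
expanded = add_redundancy(msg, pattern)
rate = pattern.count('m') / len(pattern)
assert remove_redundancy(expanded, pattern) == msg
print(f"rate R = {rate}")
```

In the actual codes, the expanded message L would then be passed through the TASC encryption algorithm of Fig. 2.1 to obtain the codeword C.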
2.2.3 Decoding Process with Standard Algorithm In fact, the decoding is a procedure in which the codewords obtained as an output of the channel should be returned (if it is possible). Since errors occur during transmission, decoding is only possible if redundant information is used. In this code, decoding uses the fact that some letters in certain positions in the extended message are zero. During the decoding process, block by block is decoded sequentially. This
type of decoding allows part of the codeword to be decoded when it is impossible to decode the entire codeword. After transmission through a noisy channel, the codeword .C will be received as a message . D = D (1) . D (2) . . . D (s) .= D1 D2 . . . Dm where (i) .D are blocks of .r symbols from . Q and . Di ∈ Q. The decoding process consists of four steps: .(i) procedure for generating the sets with predefined Hamming distance, .(ii) inverse coding algorithm, .(iii) procedure for generating decoding candidate sets and .(iv) decoding rule. Procedure for Generating the Sets with Predefined Hamming Distance The probability that maximum .t bits in . D (i) are not correctly transmitted is
.
P(p; t) = Σ_{k=0}^{t} C(ra, k) pᵏ (1 − p)^(ra−k),
where . p is the probability of bit-error in the channel. Let . Bmax be a given integer which denotes the assumed maximum number of bit errors that occur in a block during transmission. We generate the sets . Hi = {α|α ∈ Q r , H (D (i) , α) ≤ Bmax }, for .i = 1, 2, . . . , s, where . H (D (i) , α) is a Hamming distance between . D (i) and .α. The cardinality of the sets . Hi is .
Bchecks = 1 + C(ra, 1) + C(ra, 2) + ··· + C(ra, Bmax)
and the number . Bchecks determines the complexity of the decoding procedure: to find the element .C (i) in the set . Hi , at most . Bchecks checks have to be made. Clearly, for efficient decoding the number of checks. Bchecks has to be reduced as much as possible. Inverse Coding Algorithm The inverse coding algorithm is the decrypting algorithm of TASC given in Fig. 2.1. Generating Decoding Candidate Sets The decoding candidate sets . S0 , . S1 , . S2 ,…,. Ss are defined iteratively. Let . S0 = (k1 . . . kn ; λ), where.λ is the empty sequence. Let. Si−1 be defined for.i ≥ 1. Then. Si is the set of all pairs.(δ, w1 w2 . . . wrai ) obtained by using the sets. Si−1 and. Hi as follows (.w j are bits). For each element .α ∈ Hi and each .(β, w1 w2 . . . .wra(i−1) ) ∈ Si−1 , we apply the inverse coding algorithm with input .(α, β). If the output is the pair .(γ , δ) and if both sequences .γ and . L (i) have the redundant zeros in the same positions, then the pair.(δ, w1 w2 . . . wra(i−1) c1 c2 . . . cr ) ≡ (δ, w1 w2 . . . wrai ) (.ci ∈ Q) is an element of . Si .
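The generation of the candidate sets Hᵢ and the size Bchecks can be sketched as follows (an illustrative Python fragment, not the book's code; the received block and Bmax are example values).

```python
# Sketch of the first decoding step: for a received block D_i of r*a bits, enumerate
# the candidate set H_i of all blocks within Hamming distance Bmax; its size is
# the number Bchecks above.

from itertools import combinations
from math import comb

def candidate_set(block_bits, b_max):
    """All bit strings within Hamming distance b_max of block_bits (a list of 0/1)."""
    n = len(block_bits)
    candidates = []
    for d in range(b_max + 1):
        for positions in combinations(range(n), d):
            flipped = list(block_bits)
            for pos in positions:
                flipped[pos] ^= 1
            candidates.append(flipped)
    return candidates

received = [1, 0, 1, 1, 0, 0, 1, 0]      # r*a = 8 bits in this toy example
Bmax = 2
H_i = candidate_set(received, Bmax)
Bchecks = sum(comb(len(received), k) for k in range(Bmax + 1))
assert len(H_i) == Bchecks == 1 + comb(8, 1) + comb(8, 2)   # 37 candidates
```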
Decoding Rule The decoding of the received message . D is given by the following rule: • If the set. Ss contains only one element.(d1 . . . dn , w1 . . . wras ) then. L = w1 . . . wras is the decoded (redundant) message. In this case, we say that we have a successful decoding. • If the decoded message is not the correct one then we have an undetected-error . • In the case when the set . Ss contains more than one element, the decoding of . D is unsuccessful and we say that more-candidate-error appears. • In the case when . S j = ∅ for some . j ∈ {1, . . . , s}, the process will be stopped and we say that an error of type null-error appears. We conclude that for some (i) .i ≤ j, D contains more than . Bmax errors, resulting in .Ci ∈ / Hi .
2.3 Cut-Decoding Algorithm

A. Popovska-Mitrovikj, S. Markovski and V. Bakeva in their paper (Popovska-Mitrovikj et al., 2009) investigated the influence of the parameters of Random Codes Based on Quasigroups on the code performances for transmission through a binary-symmetric channel. They conclude that all parameters used in the code (such as the pattern, the key length, and the quasigroup) have an impact on the performances of Random Codes Based on Quasigroups. The same conclusions about the influence of the parameters are derived in Mechkaroska et al. (2016) when the transmission is in a Gaussian channel. However, the biggest problem of these codes is the decoding speed. In order to improve the decoding speed, in the papers Popovska-Mitrovikj et al. (2012a, b, 2013), the authors define a new encoding/decoding algorithm called Cut-Decoding algorithm and they study its performances when messages are transmitted through a binary-symmetric channel. Here, we will explain the encoding and decoding process in the Cut-Decoding algorithm. Since decoding with RCBQ is actually a list decoding, the decoding speed and the probability of accurate decoding depend on the size of the lists of potential candidates for the decoded message. Therefore, the Cut-Decoding algorithm has been proposed to reduce the number of candidates for decoding in all iterations of the decoding process. In this algorithm, a message is coded twice using different parameters, and the candidates for the decoded message are obtained using the intersection of the corresponding sets S. In this way, the decoding process for code (72,288) is 4.5 times faster than the Standard algorithm in a binary-symmetric channel (Popovska-Mitrovikj et al., 2012a) and about 4 times faster in a Gaussian channel (Mechkaroska et al., 2016).
2.3.1 Encoding Process in Cut-Decoding Algorithm In the coding process of Standard algorithm described in the previous section, we used code (. Nblock , N ) with the rate . R = Nblock /N . In Cut-Decoding algorithm, two (. Nblock , N /2) codes with rate .2R are used to encode/decode the same message with . Nblock bits. The coding process consists of the following steps: • The input message . M = M1 M2 . . . Ml is expanded by adding redundant zero symbols (in the same way as in Standard coding algorithm) and thus we obtain a redundant message . L = L (1) L (2) . . . L (s/2) = L 1 L 2 . . . L m/2 of . N /2 bits, where (i) .L is a sub-block of .r symbols from the alphabet . Q, and . L i ∈ Q. • Then, for coding, we use twice the coding algorithm given in Fig. 2.1, on the same redundant message . L, using different parameters (different keys or quasigroups). • In this way we obtain the codeword for the input message . M as a concatenation of two codewords of . N /2 bits, i.e., C = C1 C2 . . . Cm/2 Cm/2+1 . . . Cm ,
.
where .Ci ∈ Q. The coding process is schematically presented in Fig. 2.3.
Fig. 2.3 Coding with Cut-Decoding algorithm
2.3.2 Decoding with Cut-Decoding Algorithm The encoded message is transmitted through a noisy channel and we obtain the outgoing message . D = D (1) D (2) . . . D (s) , where . D (i) are sub-blocks of .r symbols. The message. D is divided into two messages. D1 = D (1) D (2) . . . D (s/2) and. D2 = D (s/2+1) (s/2+2) .D . . . D (s) with equal length. Then, these two messages are decoded parallel with the corresponding parameters. In Cut-Decoding algorithm, a modification in the part .(iii)-procedure for generating decoding candidate sets, of the decoding process is made. In this algorithm, these sets are generated in the following way. Step 1. Let . S0(1) = (k1(1) . . . kn(1) ; λ) and . S0(2) = (k1(2) . . . kn(2) ; λ) where .λ is the empty sequence, .k1 = k1(1) . . . kn(1) and .k2 = k1(2) . . . kn(2) are the initials keys used for obtaining the two codewords. (1) (2) Step 2. Let . Si−1 and . Si−1 be defined for .i ≥ 1. Step 3. Let two decoding candidate sets. Si(1) and. Si(2) be obtained in the both decoding processes, in the same way as in the standard RCBQ. Step 4. Let .V1 = {w1 w2 . . . wrai |(δ, w1 w2 . . . wrai ) ∈ Si(1) }, (2) . V2 = {w1 w2 . . . wrai .|(δ, w1 w2 . . . wrai ) ∈ Si } and . V = V1 ∩ V2 . (1) Step 5. For each .(δ, w1 w2 . . . wrai ) ∈ Si , if .w1 w2 . . . wrai ∈ / V then (1) (1) . Si ← Si \ {(δ, w1 w2 . . . wrai )}. Also, for each .(δ, w1 w2 . . . wrai ) ∈ Si(2) , if .w1 w2 . . . wrai ∈ / V then . Si(2) ← Si(2) \ {(δ, w1 w2 . . . wrai )}. (* Actually, we eliminate from . Si(1) all elements whose second part does not match with the second part of an element in the . Si(2) , and vice versa. In the next iteration, the both processes use the corresponding reduced sets . Si(1) and . Si(2) . *) Step 6. If .i < s/2 then increase .i and go back to Step 3. The decoding rule in Cut-Decoding algorithm is defined in the following way. (1) (2) • After the last iteration, if the reduced sets . Ss/2 and . Ss/2 have only one element with same second component .w1 . . . war s/2 , then . L = w1 . . . war s/2 is the decoded redundant message. In this case, we say that we have a successful decoding. • If the decoded message is not the correct one then we have an undetected-error. (1) (2) • If the reduced sets. Ss/2 and. Ss/2 have more than one element, after the last iteration, we have more-candidate-error. • If we obtain . Si(1) = ∅, Si(2) /= ∅ or . Si(2) = ∅, Si(1) /= ∅ in some iteration then the decoding of the message continues only with the nonempty set . Si(2) or . Si(1) , correspondingly, by using Standard RCBQ decoding algorithm. • In the case when. Si(1) = Si(2) = ∅ in some iteration, then the process will be stopped (null-error appears).
In experiments with this method of decoding, a significant reduction in the number of elements in the sets S is noticed, and a big improvement in the speed of the decoding process is achieved.
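The key set-reduction step (Steps 4 and 5 above) can be illustrated with the following Python sketch (not the book's code). Candidate sets are modeled as lists of (key, message) pairs, and the concrete keys and messages are invented example values.

```python
# Sketch of the Cut-Decoding reduction step: from the two decoding candidate sets,
# keep only the pairs whose message parts appear in both sets.

def reduce_candidate_sets(s1, s2):
    v1 = {msg for _, msg in s1}
    v2 = {msg for _, msg in s2}
    v = v1 & v2                       # V = V1 ∩ V2
    return ([(k, m) for k, m in s1 if m in v],
            [(k, m) for k, m in s2 if m in v])

S1 = [("keyA", (0, 1, 1, 0)), ("keyB", (1, 1, 0, 0)), ("keyC", (0, 0, 0, 1))]
S2 = [("keyD", (0, 1, 1, 0)), ("keyE", (1, 0, 1, 1))]
S1_red, S2_red = reduce_candidate_sets(S1, S2)
# Only the message (0, 1, 1, 0) survives in both reduced sets.
assert [m for _, m in S1_red] == [m for _, m in S2_red] == [(0, 1, 1, 0)]
```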
The problem in Cut-Decoding algorithm is that for obtaining code with rate . R we need a pattern for code with twice larger rate. But, it is hard to make a good pattern for larger rates, since the number of redundant zeros in these patterns is smaller. Therefore, in the experiments with this decoding method, worse results in the number of unsuccessful decodings of type more-candidate-error are obtained, but the number of unsuccessful decodings with null-error is smaller. To resolve the problem of the greater number of more-candidate-errors in Popovska-Mitrovikj et al. (2012a) authors propose one heuristic in the decoding rule for decreasing this type of error. Namely, from the experiments with RCBQ authors found that when the decoding process ends with more elements in the reduced decoding candidate sets in the last iteration, almost always the correct message is in these sets (as the second part of an element in both sets). So, in this case, we can randomly select a message from one of the reduced sets in the last iteration and it can be taken as the decoded message. If the selected message is the correct one, then the bit-error is 0, so . B E R will also be reduced. In the experiments made with this modification, the correct message is selected in around half of the cases.
2.4 Methods for Reducing null-errors and more-candidate-errors In the papers Popovska-Mitrovikj et al. (2009, 2013) the authors proposed methods for reducing the number of null-errors and more-candidate-errors in the decoding process of RCBQ. Unsuccessful decoding with null-error occurs when in some of the sub-blocks of the encoded message, more than predicted . Bmax bit errors appear during transmission. Therefore, it is clear that some of these errors can be eliminated if we cancel a few iterations of the decoding process and reprocess all of them or part of them with a larger value of . Bmax . With this procedure only part of these unsuccessful decoded messages will be eliminated since we cannot know exactly in which iteration the correct sub-block does not enter into the set of candidates for decoding and exactly how many transmission errors (. Bmax + 1, . Bmax + 2 or more) occur in this sub-block. Moreover, the cancellation of the iterations slows down the decoding, and the number of elements in sets . Si can become too large, leading to unsuccessful decoding of type more-candidate-error. Experiments show that this modification gives a significant elimination of null-errors and the decoding process is only slightly slower. A similar modification with backtracking for decreasing the number of morecandidate-errors is proposed in Popovska-Mitrovikj et al. (2013). If the decoding process ends with more-candidate-error, then in order to obtain one candidate for the decoded message, some iterations can be reprocessed with a smaller value of . Bmax . Namely, when decoding ends with more elements in the last decoding candidate set, then a few iterations can be canceled and the first of canceled iterations is reprocessed using a smaller value of . Bmax . The next iterations use the previous . Bmax value.
From the experiments, it can be concluded that using this modification for reducing the number of more-candidate-errors, an improvement of PER and BER is achieved for all values of p. The improvements in packet-error and bit-error probabilities obtained with both methods for reducing errors by backtracking lead to the idea of using a combination of these two methods. Therefore, in the next chapters, we will present experimental results using a combination of these two methods for reducing unsuccessful decodings.
2.5 4-Sets-Cut-Decoding Algorithm

The improvement of the decoding speed obtained with the Cut-Decoding algorithm gives the idea of using intersections of more sets Sᵢ, in order to obtain a greater increase in decoding speed. Thus, in Popovska-Mitrovikj et al. (2015) the proposers of the Cut-Decoding algorithm modified this algorithm and used four transformations of the redundant message. With this algorithm, called 4-Sets-Cut-Decoding algorithm, a better improvement in decoding speed was obtained. Also, in order to improve the probabilities of packet error and bit error, several methods have been defined for generating reduced sets with decoding candidates.
2.5.1 Coding with 4-Sets-Cut-Decoding Algorithm In this modification of Cut-Decoding algorithm instead of .(Nblock , N ) code with rate . R, four .(Nblock , N /4) codes with rate .4R are used, that encode/decode a same message of . Nblock bits. The input message. M = M1 M2 . . . Ml is expanded by adding redundant zero symbols (in the same way as in Standard and Cut-Decoding algorithm) and a redundant message . L = L (1) L (2) . . . L (s/4) = L 1 L 2 . . . L m/4 of . N /4 bits is obtained, where . L (i) is a sub-block of .r symbols from the alphabet . Q, and . L i ∈ Q. Next, the encryption algorithm, given in Fig. 2.1 is applied four times on the redundant message . L using different parameters (different keys or quasigroups). The codeword .C of the message is a concatenation of the four obtained codewords of . N /4 bits.
2.5.2 Decoding with 4-Sets-Cut-Decoding Algorithm In Popovska-Mitrovikj et al. (2015), the authors proposed 4 different versions of 4-Sets-Cut-Decoding algorithm.
2.5.2.1 First Version of 4-Sets-Cut-Decoding Algorithm (4-Sets-Cut-Decoding Algorithm #1)
After transmission through a noisy channel, the outgoing message D = D(1) D(2) . . . D(s) is divided into four messages D1 = D(1) D(2) . . . D(s/4), D2 = D(s/4+1) D(s/4+2) . . . D(s/2), D3 = D(s/2+1) D(s/2+2) . . . D(3s/4) and D4 = D(3s/4+1) D(3s/4+2) . . . D(s) of equal lengths, and they are decoded in parallel with the corresponding parameters. Similarly as in Cut-Decoding algorithm (with two sets), in each iteration of the decoding process the decoding candidate sets (obtained in the four decoding processes) are reduced. In the first version of 4-Sets-Cut-Decoding algorithm, the decoding candidate sets are generated in the following way.

Step 1. Let S0(1) = (k1(1) . . . kn(1); λ), …, S0(4) = (k1(4) . . . kn(4); λ), where λ is the empty sequence, and k1 = k1(1) . . . kn(1), …, k4 = k1(4) . . . kn(4) are the initial keys used for obtaining the four codewords, respectively.
Step 2. Let Si−1(1), …, Si−1(4) be defined for i ≥ 1.
Step 3. Let four decoding candidate sets Si(1), …, Si(4) be obtained in the four decoding processes, in the same way as in the standard RCBQ.
Step 4. Let V1 = {w1 w2 . . . wr·a·i | (δ, w1 w2 . . . wr·a·i) ∈ Si(1)}, …, V4 = {w1 w2 . . . wr·a·i | (δ, w1 w2 . . . wr·a·i) ∈ Si(4)} and V = V1 ∩ V2 ∩ V3 ∩ V4.
Step 5. For each j = 1, 2, 3, 4 and for each (δ, w1 w2 . . . wr·a·i) ∈ Si(j), if w1 w2 . . . wr·a·i ∉ V then Si(j) ← Si(j) \ {(δ, w1 w2 . . . wr·a·i)}. (Actually, for each j = 1, 2, 3, 4, we eliminate from Si(j) all elements whose second part does not match the second part of an element in all other three sets. In the next iteration, the four processes use the corresponding reduced sets Si(1), Si(2), Si(3), Si(4). An illustrative sketch of this reduction step is given after the decoding rule below.)
Step 6. If i < s/4 then increase i and go back to Step 3.

The decoding rule in all versions of 4-Sets-Cut-Decoding algorithm is defined in the following way.
• After the last iteration, if all reduced decoding candidate sets Ss/4(1), Ss/4(2), Ss/4(3), Ss/4(4) have only one element with the same second component w1 . . . wr·a·s/4, then L = w1 . . . wr·a·s/4 is the decoded redundant message. In this case, we say that we have a successful decoding.
• If the reduced sets Ss/4(1), …, Ss/4(4) have more than one element after the last iteration, then we have a more-candidate-error. In this case, the same heuristic as in Cut-Decoding algorithm is applied. Namely, from the reduced sets in the last iteration, one message is randomly selected.
• If we obtain only one empty decoding candidate set (or two empty sets), then the decoding continues with the three (or two) nonempty sets (the set V in Step 4 is an intersection of the nonempty sets only).
• If we obtain only one nonempty set in some iteration, then the decoding continues with the nonempty set using the standard decoding algorithm of RCBQ.
• If we obtain . Si(1) = Si(2) = Si(3) = Si(4) = ∅ in some iteration, then the process will be stopped and a null-error appears.
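Steps 4 and 5 above reduce each of the four candidate sets to the elements whose decoded word also appears in all other nonempty sets. A minimal sketch of this reduction, with each candidate modelled as a pair (δ, word), could look as follows; the data representation is an assumption made for illustration.

```python
def reduce_candidate_sets(sets_i):
    """Keep in every decoding candidate set only the elements whose second component
    (the decoded word w1...w_{r·a·i}) occurs in all other nonempty candidate sets."""
    views = [{word for _, word in s} for s in sets_i if s]    # second components only
    common = set.intersection(*views) if views else set()
    return [{(delta, word) for delta, word in s if word in common} for s in sets_i]
```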
2.5.2.2 Second Version of 4-Sets-Cut-Decoding Algorithm (4-Sets-Cut-Decoding Algorithm #2)
The experiments with 4-Sets-Cut-Decoding algorithm #1 showed that when the decoding process ends with a null-error, i.e., when all four reduced sets are empty, very often the correct message is in three of the four non-reduced sets. Therefore, in the second version of 4-Sets-Cut-Decoding algorithm (4-Sets-Cut-Decoding algorithm #2) the following modification of Step 4 of the procedure for generating decoding candidate sets is made.

Step 4#2 Let V1 = {w1 w2 . . . wr·a·i | (δ, w1 w2 . . . wr·a·i) ∈ Si(1)}, …, V4 = {w1 w2 . . . wr·a·i | (δ, w1 w2 . . . wr·a·i) ∈ Si(4)} and V = V1 ∩ V2 ∩ V3 ∩ V4.
1. If V = ∅ then V' = V1 ∩ V2 ∩ V3 and V = V'.
2. If V' = ∅ then V'' = V1 ∩ V2 ∩ V4 and V = V''.
3. If V'' = ∅ then V''' = V1 ∩ V3 ∩ V4 and V = V'''.
4. If V''' = ∅ then V^iv = V2 ∩ V3 ∩ V4 and V = V^iv.
In fact, in this modification, if the intersection of the four sets V1, V2, V3, V4 is empty, then we try to find a nonempty intersection of three of them. In this way, a great improvement in the packet-error and bit-error probabilities is obtained without decreasing the decoding speed. However, for a more significant improvement in the performance, two more modifications of Step 4, for the case when the intersection of all four sets is empty, are considered.
2.5.2.3 Third Version of 4-Sets-Cut-Decoding Algorithm (4-Sets-Cut-Decoding Algorithm #3)
In the experiments, 4-Sets-Cut-Decoding algorithm #2 gives better results for P E R and B E R. However, analyzing the experiments that ended with null-error with this version, in Popovska-Mitrovikj et al. (2015) the authors noticed the following situation. Namely, in some experiments V' ≠ ∅, but the correct message is not in V'; instead it is in V'', V''' or V^iv. Similarly, it can happen that (V' = ∅ and V'' ≠ ∅) or (V' = ∅, V'' = ∅ and V''' ≠ ∅), while the correct message is in one of the subsequent intersections (which are not considered when a previous intersection is nonempty). Therefore, another modification of Step 4 is considered. Namely, if the intersection of all four sets is empty, then the set V = V' ∪ V'' ∪ V''' ∪ V^iv, i.e., the new modification of Step 4 is the following.
Step 4#3 Let V1 = {w1 w2 . . . wr·a·i | (δ, w1 w2 . . . wr·a·i) ∈ Si(1)}, …, V4 = {w1 w2 . . . wr·a·i | (δ, w1 w2 . . . wr·a·i) ∈ Si(4)} and V = V1 ∩ V2 ∩ V3 ∩ V4.
If V = ∅ then V = (V1 ∩ V2 ∩ V3) ∪ (V1 ∩ V2 ∩ V4) ∪ (V1 ∩ V3 ∩ V4) ∪ (V2 ∩ V3 ∩ V4).
In the experiments with this modification, if V ≠ ∅, a better improvement of the packet-error and bit-error probabilities than with 4-Sets-Cut-Decoding algorithm #2 is obtained. Also, this modification does not decrease the decoding speed.
2.5.2.4 Fourth Version of 4-Sets-Cut-Decoding Algorithm (4-Sets-Cut-Decoding Algorithm #4)
In the third version of 4-Sets-Cut-Decoding algorithm, if V = ∅ then we have an unsuccessful decoding with null-error. In order to reduce these errors, a new modification of the algorithm is made. Actually, if after Step 4#3 the set V is empty, then V is defined as the union of all intersections of two sets, i.e., the new version of Step 4 is the following.

Step 4#4 Let V1 = {w1 w2 . . . wr·a·i | (δ, w1 w2 . . . wr·a·i) ∈ Si(1)}, …, V4 = {w1 w2 . . . wr·a·i | (δ, w1 w2 . . . wr·a·i) ∈ Si(4)} and V = V1 ∩ V2 ∩ V3 ∩ V4.
If V = ∅ then V = (V1 ∩ V2 ∩ V3) ∪ (V1 ∩ V2 ∩ V4) ∪ (V1 ∩ V3 ∩ V4) ∪ (V2 ∩ V3 ∩ V4).
If V = ∅ then V = (V1 ∩ V2) ∪ (V1 ∩ V3) ∪ (V1 ∩ V4) ∪ (V2 ∩ V3) ∪ (V2 ∩ V4) ∪ (V3 ∩ V4).
(A small sketch covering the fallback rules of versions #2–#4 is given below.)
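The set V used for reducing the four candidate sets differs between versions #2, #3 and #4 only in the fallback applied when the four-way intersection is empty. A small sketch of these fallback rules, under the same (δ, word) candidate representation assumed above, is the following.

```python
from itertools import combinations

def candidate_words(sets_i, version=3):
    """Set V of decoded words used to reduce the four candidate sets.
    #2: if the 4-way intersection is empty, take the first nonempty 3-way intersection.
    #3: if it is empty, take the union of all 3-way intersections.
    #4: if that union is still empty, also take the union of all 2-way intersections."""
    views = [{word for _, word in s} for s in sets_i]
    v = set.intersection(*views)
    if v or version < 2:
        return v
    triples = [set.intersection(*c) for c in combinations(views, 3)]
    if version == 2:
        return next((t for t in triples if t), set())
    v = set().union(*triples)
    if v or version == 3:
        return v
    pairs = [set.intersection(*c) for c in combinations(views, 2)]
    return set().union(*pairs)
```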
In the experiments with this new modification, better results are obtained only for Bmax = 4. When Bmax = 5, the percentage of eliminated null-errors is good, but a larger number of more-candidate-errors is obtained. The best results are obtained using 4-Sets-Cut-Decoding algorithm #3. So, in the next chapters, we will present only experiments with the third version.
2.6 Computing of P E R and B E R

In all experiments whose results are presented in this book, the values of packet-error probabilities (P E R) and bit-error probabilities (B E R) are analyzed. They are computed in the following way. The outgoing message is decoded with the corresponding algorithm and, if the decoding process is completed successfully (the last set(s) of candidates for decoding has only one element), the decoded message
is compared with the input message. If they differ in at least one bit (an undetected-error appears), we compute the number of incorrectly decoded bits as the Hamming distance between the input and the decoded message. Experiments showed that this type of packet error occurs rarely. In our experiments, we also calculate the number of incorrectly decoded bits when the decoding process finishes with more-candidate-error or null-error. Then, that number is calculated as follows. When a null-error appears in the ith iteration, we take all the elements (without redundant symbols) from the reduced decoding candidate sets in the previous iteration and we find their maximal common prefix substring. If this substring has k bits and the length of the sent message is m bits (k ≤ m), then we compare this substring with the first k bits of the sent message. If they differ in b bits, then the number of incorrectly decoded bits is m − k + b. If a more-candidate-error appears, we take all the elements from the reduced decoding candidate sets in the last iteration, randomly select one message and compare it with the sent message. The number of incorrectly decoded bits is computed as in the case of undetected-error. The total number of incorrectly decoded bits is the sum of all previously mentioned numbers of incorrectly decoded bits. Therefore, we compute the probability of bit-error as
B E R = #(incorrectly decoded bits in all packets) / #(bits in all packets).
On the other side, we compute the probability of packet-error as

P E R = #(incorrectly decoded packets) / #(all packets),
where the incorrectly decoded packet is a decoded message with at least one bit error.
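The counting rules of this section translate directly into a few helper functions. The sketch below assumes messages are given as bit strings and that, for a null-error, the caller passes the candidate strings (without redundant symbols) from the last nonempty candidate sets; these assumptions are illustrative.

```python
def hamming(a, b):
    """Number of positions in which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def common_prefix_len(words):
    """Length of the maximal common prefix of a collection of strings."""
    first, rest = words[0], words[1:]
    n = 0
    while n < len(first) and all(n < len(w) and w[n] == first[n] for w in rest):
        n += 1
    return n

def wrong_bits(sent, decoded=None, candidates=None):
    """Incorrectly decoded bits of one packet, following Sect. 2.6:
    undetected-error / more-candidate-error -> Hamming distance to the sent message;
    null-error -> compare the maximal common prefix (k bits) of the candidates with
    the first k sent bits and count the remaining m - k bits as incorrect."""
    m = len(sent)
    if decoded is not None:
        return hamming(sent, decoded)
    k = common_prefix_len(candidates)
    b = hamming(sent[:k], candidates[0][:k])
    return m - k + b

# B E R = sum of wrong_bits over all packets / total number of bits,
# P E R = number of packets with wrong_bits > 0 / total number of packets.
```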
2.7 Cryptographic Properties of RCBQ

Random codes based on quasigroups with all coding/decoding algorithms presented in this chapter are cryptocodes, i.e., these codes also encrypt messages. So, the original message is protected from different kinds of attacks. Namely, if an intruder catches the encrypted message, she/he cannot discover the original message without knowing the parameters used in the encryption process. A brute-force attack for finding the parameters is infeasible, since the number of quasigroups is too large for the used quasigroup to be found in real time. Additionally, the used key and the pattern for adding redundancy would have to be found. On the other side, the coding/encrypting algorithm uses quasigroup transformations. According to Theorem 1.1, if the key (applied in the encryption of the message) has length n, then the k-tuples in the message are uniformly distributed, for all k = 1, 2, . . . , n. This means that the intruder cannot find the original message using statistical attacks (all letters, pairs, triplets, … will be uniformly distributed).

An illustration of how RCBQ encrypts messages is given in Fig. 2.4, where the image of “Lenna” is used. The first image is the original one, and the second image is the encrypted image obtained by using one of the encoding algorithms described in this chapter. A similar encrypted image is obtained with all previously mentioned algorithms. It is obvious that the content of the original image (or of the message) is completely hidden after coding with these codes. Further on, in this monograph, we analyze the performances of RCBQs as error-correcting codes.

Fig. 2.4 Original and encrypted image
References Bakeva, V., Popovska-Mitrovikj, A., Mechkaroska, D., Dimitrova, V., Jakimovski, B., & Ilievski, V. (2019). Gaussian channel transmission of images and audio files using cryptcoding. IET Communications,13(11), 1625–1632 (2019). (IF. 1.779) Gligoroski, D., Knapskog, S. J., & Andova, S. (2006). Cryptcoding - encryption and error-correction coding in a single step. In The 2006 International Conference on Security and Management. Las Vegas, Nevada, USA: CSREA Press. Gligoroski, D., Markovski, S., & Kocarev, L. (2007a). Totally asynchronous stream ciphers + Redundancy = Cryptcoding. In S. Aissi, H. R. Arabnia (Eds.) Proceedings of the International Conference Security and Management, SAM 2007 (pp. 446–451). CSREA Press: Las Vegas. Gligoroski, D., Markovski, S., & Kocarev, L. (2007b). Error-correcting codes based on quasigroups. In Proceeding of the 16th International Conference Computer Communications and Networks (pp. 165–172). Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2019). Cryptcoding of images for transmission trough a burst channels. Journal of Engineering Science and Technology Review (JESTR),
65–69. ISSN: 1791-2377. Proceedings of Fourth International Scientific Conference “Telecommunications, Informatics, Energy and Management”, September 2019, Kavala, Greece.
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva Smiljkova, V. (2018). Performances of fast algorithms for random codes based on quasigroups for transmission of audio files in Gaussian channel. In S. Kalajdziski, N. Ackovska (Eds.) ICT innovations 2018. Engineering and life sciences (pp. 286–296). Springer International Publishing.
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2019). New cryptcodes for burst channels. In M. Ćirić, M. Droste, J.-É. Pin (Eds.) Algebraic informatics. CAI 2019. Lecture notes in computer science (Vol. 11545, pp. 202–212). Cham: Springer.
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2016). Cryptcodes based on quasigroups in Gaussian channel. Quasigroups and Related Systems, 24(2), 249–268.
Popovska-Mitrovikj, A., Bakeva, V., & Mechkaroska, D. (2017). New decoding algorithm for cryptcodes based on quasigroups for transmission through a low noise channel. In D. Trajanov & V. Bakeva (Eds.), Communications in computer and information science series (CCIS). ICT innovations 2017 (Vol. 778, pp. 196–204). Springer.
Popovska-Mitrovikj, A., Bakeva, V., & Mechkaroska, D. (2020). Fast decoding with cryptcodes for burst errors. In V. Dimitrova, I. Dimitrovski (Eds.) ICT innovations 2020. Machine learning and applications. Communications in computer and information science (Vol. 1316). Cham: Springer.
Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2009). Performances of error-correcting codes based on quasigroups. In D. Davcev, J. M. Gomez (Eds.) ICT-innovations 2009 (pp. 377–389). Springer.
Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2012a). Increasing the decoding speed of random codes based on quasigroups. In S. Markovski, M. Gusev (Eds.) ICT innovations 2012, Web proceedings (pp. 93–102). ISSN 1857-7288.
Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2012b). On improving the decoding of random codes based on quasigroups. In Proceedings of the 9th Conference on Informatics and Information Technology with International Participants, Faculty of Computer Science and Engineering, University “Ss. Cyril and Methodius” (Macedonia), Bitola, Macedonia, April 2012 (pp. 214–217).
Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2013). Some new results for random codes based on quasigroups. In Proceedings of the 10th Conference on Informatics and Information Technology with International Participants, Faculty of Computer Science and Engineering, University “Ss. Cyril and Methodius” (Macedonia), Bitola, Macedonia, April 2013.
Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2015). 4-Sets-Cut-Decoding algorithms for random codes based on quasigroups. International Journal of Electronics and Communications (AEU), 69(10), 1417–1428. Elsevier.
Popovska-Mitrovikj, A., Bakeva, V., & Markovski, S. (2011). On random error correcting codes based on quasigroups. Quasigroups and Related Systems, 19, 301–316.
Popovska-Mitrovikj, A., Bakeva, V., & Mechkaroska, D. (2023). Fast decoding of images with cryptcodes for burst channels. IEEE Access, 11, 50823–50829. https://doi.org/10.1109/ACCESS.2023.3278051
Chapter 3
Experimental Results for Cryptcodes Based on Quasigroups for Transmission Through a BSC
Abstract In this chapter, we present experimental results for cryptcodes based on quasigroups for transmission through a binary-symmetric channel (BSC). In the first section, we compare results for code (72, 576) obtained using two different decoding algorithms (Cut-Decoding and 4-Sets-Cut-Decoding#3). At the end of this section, we give a conclusion (derived from many experiments made with these codes) about the different decoding algorithms, parameters, and methods for reducing decoding errors that give the best results for these codes. In the second section, we consider the performances of these cryptocodes in the transmission of images through a binary-symmetric channel.
Keywords Binary-symmetric channel · Cryptocodes · Images · Bit-error probability · Packet-error probability
3.1 Experimental Results for Transmission of Messages

Here, we present the experimental results for code (72, 576) with rate R = 1/8 using Cut-Decoding and 4-Sets-Cut-Decoding#3 algorithms (Popovska-Mitrovikj et al., 2015). In all experiments we used the alphabet Q = {0, 1, . . . , 9, a, b, c, d, e, f } of nibbles with the quasigroup operations ∗ and \ on Q given in Table 3.1, and blocks of 4 nibbles in the decoding process. In our experiments, we use the following parameters, which give the best results for code (72, 576):
• In Standard decoding algorithm—redundancy pattern: 1100 1000 0000 0000 0000 0000 1100 1000 0000 0000 0000 0000 1100 1000 0000 0000 0000 0000 1100 1000 0000 0000 0000 0000 1100 1000 0000 0000 0000 0000 1100 1000 0000 0000 0000 0000 (here, 1 denotes the place of a message symbol, and 0 the place of a redundant symbol) and an initial key of 10 nibbles.
• In Cut-Decoding algorithm—redundancy pattern: 1100 1100 1000 0000 1100 1000 1000 0000 1100 1100 1000 0000 1100 1000 1000 0000 0000 0000 for rate 1/4 and two different keys of 10 nibbles.
Table 3.1 Quasigroup of order 16 and its parastrophe used in the experiments
* 0 1 2 3 4 5 6 7 8 9 a b c d e f
0 3 0 1 6 4 f 2 e c b 9 7 5 a d 8
1 c 3 0 b 5 a f 9 7 e 4 8 2 6 1 d
2 2 9 e f 0 1 a c 6 4 d 5 b 8 3 7
3 5 d c 1 7 0 3 a 2 9 8 e 6 4 f b
4 f 8 4 9 6 e c 1 a d 0 2 7 3 b 5
5 7 1 5 4 b 2 8 d f 3 6 a 9 e 0 c
6 6 7 f e 9 4 d 8 b 1 5 3 0 c 2 a
7 1 b 9 a 3 c 0 6 5 f 7 4 e d 8 2
8 0 6 d 3 f 7 b 5 1 8 e c a 2 4 9
9 b 5 3 7 2 d e f 0 c 1 6 8 9 a 4
a d 2 6 8 a 3 9 b 4 5 f 0 c 1 7 e
b e a 7 0 8 b 4 2 9 6 3 d f 5 c 1
c 8 c a 2 d 5 6 4 e 7 b f 1 0 9 3
d 4 f 8 c e 9 1 0 d a 2 b 3 7 5 6
e 9 e b d c 8 5 7 3 2 a 1 4 f 6 0
f a 4 2 5 1 6 7 3 8 0 c 9 d b e f
\ 0 1 2 3 4 5 6 7 8 9 a b c d e f
0 8 0 1 b 2 3 7 d 9 f 4 a 6 c 5 e
1 7 5 0 3 f 2 d 4 8 6 9 e c a 1 b
2 2 a f c 9 5 0 b 3 e d 4 1 8 6 7
3 0 1 9 8 7 a 3 f e 5 b 6 d 4 2 c
4 d f 4 5 0 6 b c a 2 1 7 e 3 8 9
5 3 9 5 f 1 c e 8 7 a 6 2 0 b d 4
6 6 8 a 0 4 f c 7 2 b 5 9 3 1 e d
7 5 6 b 9 3 8 f e 1 c 7 0 4 d a 2
8 c 4 d a b e 5 6 f 8 3 1 9 2 7 0
9 e 2 7 4 6 d a 1 b 3 0 f 5 9 c 8
a f b c 7 a 1 2 3 4 d e 5 8 0 9 6
b 9 7 e 1 5 b 8 a 6 0 c d 2 f 4 3
c 1 c 3 d e 7 4 2 0 9 f 8 a 6 b 5
d a 3 8 e c 9 6 5 d 4 2 b f 7 0 1
e b e 2 6 d 4 9 0 c 1 8 3 7 5 f a
f 4 d 6 2 8 0 1 9 5 7 a c b e 3 f
Table 3.2 Experimental results for packet-error probabilities for Bmax = 5

p      PERcut    PER4−sets
0.03   0.00007   0
0.04   0.00086   0.00029
0.05   0.00257   0.00114
0.06   0.00571   0.00229
0.07   0.01429   0.00286
0.08   0.03057   0.01114
0.09   0.05086   0.01514
0.10   0.08800   0.02429
0.11   0.13629   0.04543
0.12   0.18886   0.07914
0.13   0.28948   0.10943
0.14   /         0.17200
0.15   /         0.22857
0.16   /         0.32314
• In the 4-Sets-Cut-Decoding algorithms—redundancy pattern: 1100 1110 1100 1100 1110 1100 1100 1100 0000 for rate 1/2 and four different keys of 10 nibbles.
• In all experiments we used the same quasigroup on Q given in Table 3.1.

Experimental results for packet-error and bit-error probabilities for Bmax = 5 and different values of the bit-error probability p of the binary-symmetric channel (BSC) are given in Tables 3.2 and 3.3. In these tables, P E Rcut and B E Rcut are the packet-error and bit-error probabilities obtained with Cut-Decoding algorithm, and P E R4−sets and B E R4−sets are the packet-error and bit-error probabilities obtained with 4-Sets-Cut-Decoding#3 algorithm. For both algorithms, we made experiments until we got B E R > p, since using the codes does not make sense when B E R > p (the number of incorrectly decoded bits is greater than the number of incorrectly received bits without coding).

From the values for P E Rcut and P E R4−sets in Table 3.2, we can conclude that for all values of p, using 4-Sets-Cut-Decoding#3 algorithm we obtain more than 2 times better results than with Cut-Decoding algorithm (for p = 0.07, P E R4−sets is almost 5 times smaller than P E Rcut). The same conclusions can be derived for the values of the bit-error probabilities. As we can see in Table 3.3, in all experiments the values of B E R4−sets are more than 2 times better than the corresponding values obtained with Cut-Decoding algorithm. Also, from the duration of our experiments, we concluded that 4-Sets-Cut-Decoding#3 algorithm is from 1.2 to 6.2 times (depending on the value of p) faster than Cut-Decoding algorithm.

As is mentioned in Chap. 2, a good improvement of the packet-error and bit-error probabilities is obtained using the methods with backtracking for reducing the number of null-errors and more-candidate-errors.
Table 3.3 Experimental results for bit-error probabilities for Bmax = 5

p      BERcut    BER4−sets
0.03   0.00005   0
0.04   0.00047   0.00011
0.05   0.00148   0.00049
0.06   0.00302   0.00152
0.07   0.00831   0.00178
0.08   0.01778   0.00636
0.09   0.02836   0.00899
0.10   0.05131   0.01505
0.11   0.07755   0.02546
0.12   0.10314   0.04704
0.13   0.16065   0.06365
0.14   /         0.10289
0.15   /         0.13609
0.16   /         0.19017
For transmission through a binary-symmetric channel, the best results for code (72, 576) are obtained using 4-Sets-Cut-Decoding#3 algorithm, the parameters given at the beginning of this section, Bmax = 5 and the following combination of the methods for reducing decoding errors. If we obtain a null-error, i.e., an empty set in some iteration of the decoding process, then we cancel two iterations and reprocess the first of the canceled iterations using Bmax + 2 = 7. If the decoding process ends with more elements in the sets after the last iteration, then we go two iterations back and reprocess the penultimate iteration with Bmax − 1 = 4. Also, if after the backtracking for null-error we obtain more candidates in the last iteration, then we make one more backtracking for more-candidate-error. In Table 3.4 we compare the packet-error probability P E R4−sets_back, obtained using the above combination of both methods with backtracking, and the probability P E R4−sets obtained with the third version of 4-Sets-Cut-Decoding algorithm without backtracking (from Table 3.2). Also, in Table 3.5, we compare the corresponding bit-error probabilities B E R4−sets_back and B E R4−sets obtained in the same experiments. With the proposed methods with backtracking, we also made experiments until we got B E R > p. From the results in Tables 3.4 and 3.5 we can see that with the proposed methods with backtracking, a great improvement of P E R and B E R is obtained for all values of p. From all experimental results presented in Popovska-Mitrovikj et al. (2015), it is concluded that the best results for code (72, 576) when the transmission is through a binary-symmetric channel are obtained with 4-Sets-Cut-Decoding algorithm #3, the parameters given in Sect. 3.1, and the previously described combination of methods
Table 3.4 Experimental results for P E R for Bmax = 5 with and without backtracking

p      PER4−sets   PER4−sets_back
0.03   0           0
0.04   0.00029     0
0.05   0.00114     0.00029
0.06   0.00229     0.00200
0.07   0.00286     0.00200
0.08   0.01114     0.00914
0.09   0.01514     0.01057
0.10   0.02429     0.01514
0.11   0.04543     0.03057
0.12   0.07914     0.05714
0.13   0.10943     0.07714
0.14   0.17200     0.12629
0.15   0.22857     0.17257
0.16   0.32314     0.26000
0.17   /           0.32514
0.18   /           0.44971
Table 3.5 Experimental results for B E R for Bmax = 5 with and without backtracking

p      BER4−sets   BER4−sets_back
0.03   0           0
0.04   0.00011     0
0.05   0.00049     0.00015
0.06   0.00152     0.00080
0.07   0.00178     0.00087
0.08   0.00636     0.00437
0.09   0.00899     0.00463
0.10   0.01505     0.00724
0.11   0.02546     0.01437
0.12   0.04704     0.02592
0.13   0.06365     0.03212
0.14   0.10289     0.05622
0.15   0.13609     0.07500
0.16   0.19017     0.11002
0.17   /           0.13777
0.18   /           0.19689
for reducing errors. Therefore, in the next section, we present the results for the transmission of images through a binary-symmetric channel obtained with this choice of decoding algorithm, parameters, and combination of methods with backtracking.
3.2 Experimental Results for Decoding Images Transmitted Through a BSC

In this section, we investigate the performances of the 4-Sets-Cut-Decoding algorithm #3 (using the same parameters and combination of both methods with backtracking given in the previous section) for coding/decoding images transmitted through a binary-symmetric channel. For that aim, we made experiments with the picture of Lenna (Fig. 3.1). Experiments are made for the following values of the bit-error probability in the channel: p = 0.05, p = 0.10, and p = 0.15. In all decoding algorithms for RCBQ, when a null-error appears, the decoding process ends early, and only a part of the message is decoded. Therefore, in the experiments with images, we use the following solution. In the cases when a null-error appears, i.e., all reduced sets Si(1), …, Si(4) are empty, we take the strings without redundant symbols from all elements in the sets Si−1(1), …, Si−1(4) and we find their maximal common prefix substring. If this substring has k symbols, then in order to obtain the decoded message of l symbols, we take these k symbols and add l − k zero symbols at the end of the message. In the experiments with images, we notice that this type of error makes the most visible changes in the decoded images. Images obtained for the considered values of the bit-error probability p in the binary-symmetric channel are presented in Figs. 3.2, 3.3 and 3.4. In these figures, the images in (a) are obtained after transmission through the channel without using any error-correcting code. In (b) we give the images obtained using RCBQ with 4-Sets-Cut-Decoding algorithm #3 with the previously mentioned combination of both methods with backtracking.
Fig. 3.1 The original image
Fig. 3.2 Images for . p = 0.05
Fig. 3.3 Images for . p = 0.10
Fig. 3.4 Images for . p = 0.15
From the figures, we can see that using RCBQ with 4-Sets-Cut-Decoding algorithm #3 (with backtracking), the obtained images are relatively clear even for large values of p. Therefore, these codes and the proposed algorithms can be applied for coding/decoding data transmitted through a channel with a large bit-error probability. As we explained above, in the experiments with RCBQ we put zero symbols in place of the non-decoded part of the message when the decoding process ends with null-error. In fact, these zero symbols are the horizontal lines that can be seen in Figs. 3.2b, 3.3b and 3.4b. On the other hand, the images obtained without using error-correcting codes (Figs. 3.2a, 3.3a and 3.4a) do not have these lines, but the entire images have points that are incorrectly transmitted symbols. In order to clear some of these horizontal lines (in the images decoded with RCBQ), a filter for clearer images can be defined by analyzing the surrounding pixels. This filter will be considered in the next sections. Moreover, due to their cryptographic properties (explained at the end of the previous section), these codes will also provide information security.
Reference Popovska-Mitrovikj, A., Markovski, S., & Bakeva, V. (2015). 4-Sets-Cut-Decoding algorithms for random codes based on quasigroups. International Journal of Electronics and Communications (AEU), 69(10), 1417–1428. Elsevier.
Chapter 4
Experimental Results for Cryptocodes Based on Quasigroups for a Gaussian Channel
Abstract In the previous chapters we explain the algorithms and properties of cryptocodes based on quasigroups. Also, we present some experimental results when the transmission is through a binary-symmetric channel. In this chapter, we give corresponding results for transmission through a Gaussian channel. In the first section, we briefly describe the Gaussian channel. After the section with the results for the transmission of ordinary messages, we present some results for the transmission of images. In the end, we consider a filter for enhancing the quality of images decoded with these cryptocodes. Keywords Gaussian channel · Cryptocodes · Images · Audio · Filter
4.1 Gaussian Channel

The most significant continuous data transmission channel is the Gaussian channel described in Fig. 4.1. It is a discrete-time channel where the output in time i is Yi, and that output is obtained as the sum of the input X i and the noise Z i. The noise Z i is drawn from independent and identically distributed random variables with a Gaussian distribution with mean 0 and variance N; we call it Additive White Gaussian Noise (AWGN), and the channel is called a Gaussian (AWGN) channel. In fact, Yi = X i + Z i, with Z i ∼ N(0, N). We assume that the noise Z i is independent of the signal X i. This is a good model that can describe several types of communication channels. If the noise variance is zero, then the receiver receives the transmitted signal correctly; because X can take any real value, the channel can then transmit any real number without error. If the noise variance is not zero and there is no input limit, we can choose an infinite subset of inputs arbitrarily far apart, so that the corresponding outputs are distinguishable with a low error probability. Such a scheme has infinite capacity (Cover and Thomas, 2005).
Fig. 4.1 Gaussian channel
Fig. 4.2 Simple block diagram with BPSK transmitter-receiver
The most common input limitation is the power limitation. For any codeword (x1, x2, . . . , xn) transmitted over the channel, we have

(1/n) Σ_{i=1}^{n} x_i² ≤ P.
For transmission through a Gaussian channel, a digital modulation is needed. Digital modulations are used to transfer data to mobile phones, scientific and geomagnetic instruments, etc. Any digital modulation scheme uses a finite number of distinct symbols to represent digital data. One of these modulations is PSK (Phase-shift keying). It uses a finite number of phases, each associated with a single pattern of binary digits. Usually, each phase encodes an equal number of bits. Each bit pattern forms a symbol, which is represented by a certain phase. The demodulator, which is designed specifically for the set of symbols used by the modulator, determines the phase of the received signal and maps it back to the symbol it represents, thus recovering the original data. The simplest form of PSK is BPSK (Binary Phase-shift keying), which uses only 2 phases. With BPSK, the binary digits 1 and 0 can be represented by the analog levels +√P and −√P, respectively. Here, P is the power limit. After the modulation, the signal is transmitted through a Gaussian channel. The receiver obtains the corresponding signal Y and tries to demodulate it. The optimal demodulating rule is the following one. If the received signal is Y > 0, the receiver supposes that the sent bit is 1. If the received signal is Y < 0, it supposes that the sent bit is 0. This model is shown in Fig. 4.2 (Sankar, 2007).
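The BPSK transmission chain of Fig. 4.2 is easy to simulate. The sketch below maps bits to ±√P, adds Gaussian noise and applies the threshold rule Y > 0 → 1. The noise variance is chosen as P/(2 · SNR_linear), i.e., S N R is read as Eb/N0, which is the interpretation that reproduces the values of Table 4.1 below; this interpretation is our assumption and is not stated explicitly in the book.

```python
import numpy as np

def bpsk_over_awgn(bits, snr_db, p=1.0, rng=None):
    """Transmit bits with BPSK (+sqrt(P) for 1, -sqrt(P) for 0) over an AWGN channel
    and demodulate with the threshold rule Y > 0 -> 1, Y < 0 -> 0."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.where(bits == 1, np.sqrt(p), -np.sqrt(p))           # modulation
    noise_var = p / (2 * 10 ** (snr_db / 10))                  # assumed SNR = Eb/N0
    y = x + rng.normal(0.0, np.sqrt(noise_var), size=x.shape)  # Y_i = X_i + Z_i
    return (y > 0).astype(int)                                 # demodulation

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 200_000)
received = bpsk_over_awgn(bits, snr_db=0.0, rng=rng)
print("empirical Pb at 0 dB:", np.mean(received != bits))     # about 0.079
```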
4.1.1 Signal-to-Noise Ratio (S N R)

The signal-to-noise ratio (S N R) is a measure used in science and engineering that compares the level of a desired signal to the level of the background noise. In fact, it is the ratio of the power of the desired signal to the power of the background noise (the undesired signal), i.e.,

S N R = Psignal / Pnoise.

If the variances of the signal and the noise are known and the signal has a mean equal to 0, then

S N R = σ²signal / σ²noise.

If the signal and noise are measured in the same unit, then S N R can also be obtained by calculating the ratio of the squares of the amplitudes:

S N R = (Asignal / Anoise)².

Since many signals have a very wide dynamic range, S N R is often expressed using a logarithmic decibel scale. In decibels, S N R is defined as follows:

S N Rdb = 10 log10(Psignal / Pnoise) = 20 log10(Asignal / Anoise).
Table 4.1 The probability of bit-error

S N R (dB)   Pb
−3           0.1584
−2           0.1306
−1           0.1038
0            0.0786
1            0.0563
2            0.0375
3            0.0229
4            0.0125
5            0.0060
6            0.0024
7            0.000773
8            0.000191
9            0.0000336
10           0.00000387
The values for the bit-error probability . Pb in the transmitted message (without using error-correcting codes) for . S N R values in the range of .−3 to .10 decibels are shown in Table 4.1. From the table, it can be seen that the bit-error probability . Pb decreases when . S N R increases.
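For reference, the values of Table 4.1 are reproduced (up to rounding) by the standard closed-form expression for uncoded BPSK over AWGN, Pb = Q(√(2 · SNR_linear)), with S N R read as Eb/N0; this formula is our reading of how the table was obtained, not something stated explicitly in the book.

```python
import math

def bpsk_bit_error_probability(snr_db):
    """Pb = Q(sqrt(2 * SNR_linear)) = 0.5 * erfc(sqrt(SNR_linear)) for BPSK over AWGN."""
    snr_linear = 10 ** (snr_db / 10)
    return 0.5 * math.erfc(math.sqrt(snr_linear))

for snr in (-3, 0, 4, 10):
    print(snr, bpsk_bit_error_probability(snr))   # ~0.1584, 0.0786, 0.0125, 3.87e-6
```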
4.2 Experimental Results for Transmission of Messages

As is mentioned in Chap. 2, the performances of these cryptocodes depend on the choice of the code parameters. The experimental results obtained with Standard algorithm and Cut-Decoding algorithm for code (72, 288) and transmission through a Gaussian channel are presented in Mechkaroska et al. (2016). There, the performances of these codes are investigated and it is concluded that they depend on the code parameters: pattern, key and quasigroup. In this chapter, we give the experimental results obtained with 4-Sets-Cut-Decoding algorithm #3 and we compare them with the corresponding results obtained with Cut-Decoding. The experiments in this section are made for code (72, 576) with rate R = 1/8. We made experiments with many different patterns, keys and quasigroups, and here we present the best results, obtained with the following parameters:
– In Cut-Decoding—redundancy pattern: 1100 1100 1000 0000 1100 1000 1000 0000 1100 1100 1000 0000 1100 1000 1000 0000 0000 0000 for rate 1/4 and two different keys of 10 nibbles.
– In 4-Sets-Cut-Decoding—redundancy pattern: 1100 1110 1100 1100 1110 1100 1100 1100 0000 for rate 1/2 and four different keys of 10 nibbles.
– In all experiments, we used the same quasigroup on the set of nibbles Q = {0, 1, . . . , 9, a, b, c, d, e, f } given in Table 3.1.
– In all experiments, the assumed maximum number of bit errors that occur in the transmission of a block is Bmax = 5.

In Table 4.2, we present experimental results for the bit-error probabilities B E Rcut and B E R4sets, obtained with Cut-Decoding algorithm and 4-Sets-Cut-Decoding algorithm, respectively, and the corresponding packet-error probabilities P E Rcut and P E R4sets for different values of S N R. For S N R smaller than −1, using Cut-Decoding algorithm does not make sense, since the values of the bit-error probabilities are larger than the bit-error probability in the channel without coding. Here, we also made experiments until we got B E R > Pb. Analyzing the results in Table 4.2, we can conclude that for all values of S N R, the results for B E R4sets are better than the corresponding results for B E Rcut. The same conclusion can be derived from the comparison of P E R4sets and P E Rcut. This is more significant for larger values of S N R, i.e., for low noise in the channel. Even more, for S N R ≥ 2, the values of B E R and P E R obtained with both algorithms are approximately equal to 0.
Table 4.2 Experimental results for R = 1/8

S N R   BERcut    BER4sets   PERcut    PER4sets
−2      /         0.06548    /         0.11283
−1      0.05591   0.02225    0.10491   0.03831
0       0.01232   0.00449    0.02376   0.00835
1       0.00224   0.00066    0.00425   0.00136
2       0.00037   0.00008    0.00058   0.00014
4.3 Experimental Results for Images

Here, we present the experimental results obtained for the transmission of images through a Gaussian channel for different signal-to-noise ratio (S N R) values (Bakeva et al., 2019). All experiments are for code (72, 576) with rate R = 1/8, using the same parameters as in the previous section. In all experiments (for different values of S N R in the channel) we consider the differences between the transmitted and decoded images. Also, we compare the experimentally obtained values for the bit-error probability (B E R) and packet-error probability (P E R) and the duration of the decoding processes with both algorithms.

As we mention in Sect. 3.2, in all decoding algorithms for RCBQ, when a null-error appears, the decoding process ends earlier and only a part of the message is decoded. Here, we use the same solution for this problem as previously. Namely, in the cases of a null-error (all reduced sets are empty in some iteration), we take the strings without redundant symbols from all elements in the sets from the previous iteration and find their maximal common prefix substring. If this substring has k symbols, then in order to obtain a decoded message of l symbols, we take these k symbols and add l − k zero symbols at the end of the message. Also, if a more-candidate-error appears, then a message is randomly selected from the reduced sets in the last iteration and that message is considered as the decoded message.

In this section, we present and compare the results obtained using Cut-Decoding and 4-Sets-Cut-Decoding algorithms with the following combination of the methods for reducing the number of unsuccessful decodings, explained in Sect. 2.4. If we obtain a null-error, i.e., an empty set in some iteration of the decoding process, then we cancel two iterations and we reprocess the first of the canceled iterations using Bmax + 2 = 7. If the decoding process ends with more elements in the sets after the last iteration, then we go two iterations back and we reprocess the penultimate iteration with Bmax − 1 = 4. Also, if after the backtracking for null-error we obtain more candidates in the last iteration, then we make one more backtracking for more-candidate-error.

In the experiments for image transmission, we use the image of “Lenna”. In Figs. 4.3, 4.4, 4.5, 4.6 and 4.7, we present the images obtained for S N R = −3, S N R = −2, S N R = −1, S N R = 0 and S N R = 1, respectively.
Fig. 4.3 . S N R = −3
Fig. 4.4 . S N R = −2
Fig. 4.5 . S N R = −1
In each figure, the first image is obtained after transmission through the channel without using any error-correcting code, the second image is obtained using Cut-Decoding algorithm and the third one using 4-Sets-Cut-Decoding algorithm. If we compare the second and the third images with the first one in Figs. 4.3, 4.4, 4.5, 4.6 and 4.7, we can see that Cut-Decoding and 4-Sets-Cut-Decoding algorithms correct many errors that appeared during transmission. Also, there is less damage (fewer lines) in the images decoded with 4-Sets-Cut-Decoding algorithm than in those decoded with Cut-Decoding algorithm. The values of B E R and P E R obtained with both algorithms (presented in Tables 4.3 and 4.4) confirm the conclusions derived from the images. There, B E Rcut and P E Rcut denote the probabilities for Cut-Decoding algorithm, and B E R4−sets and P E R4−sets the corresponding probabilities obtained with 4-Sets-Cut-Decoding algorithm.
Fig. 4.6 . S N R = 0
Fig. 4.7 S N R = 1

Table 4.3 Experimental results for B E R

S N R   BERcut    BER4−sets
−3      0.31351   0.10411
−2      0.13257   0.03487
−1      0.04029   0.01233
0       0.00990   0.00306
1       0.00161   0.00040
Table 4.4 Experimental results for P E R

S N R   PERcut    PER4−sets
−3      0.52898   0.24045
−2      0.24265   0.08019
−1      0.07429   0.02622
0       0.01675   0.00645
1       0.00316   0.00082
From these tables, we can see that for all values of S N R, B E R4−sets is more than 3 times smaller than B E Rcut. Also, the packet-error probabilities obtained with 4-Sets-Cut-Decoding algorithm are more than 2 times smaller than the probabilities obtained with Cut-Decoding algorithm.
Table 4.5 Experimental results with Cut-Decoding

S N R   PERnull   PERmore−candidate
−3      0.52719   0.00096
−2      0.24155   0.00041
−1      0.07402   0.00027
0       0.01675   0
1       0.00316   0
Table 4.6 Experimental results with 4-Sets-Cut-Decoding

S N R   PERnull   PERmore−candidate
−3      0.10588   0.07278
−2      0.02691   0.03131
−1      0.01181   0.00796
0       0.00247   0.00288
1       0.00041   0.00027
As we explained above, when a null-error appears we add l − k zero symbols at the end of the message, and this makes horizontal black lines on the decoded image. The horizontal white and gray lines on the decoded images are obtained in the case of more-candidate-error, when the randomly selected message from the reduced sets in the last iteration differs from the original message. On the other hand, the images obtained without using error-correcting codes do not have these lines, but the entire image has points that are incorrectly transmitted symbols. In Table 4.5 (for Cut-Decoding algorithm) and Table 4.6 (for 4-Sets-Cut-Decoding algorithm), the probabilities of null-error (P E Rnull) and more-candidate-error (P E Rmore−candidate) are given. From these tables, we can conclude that with Cut-Decoding algorithm we obtain a much greater number of unsuccessful decodings with null-error than with 4-Sets-Cut-Decoding algorithm, but the number of more-candidate-errors is smaller. Therefore, the images decoded with Cut-Decoding algorithm have more black lines than the images decoded with 4-Sets-Cut-Decoding algorithm, while most of the lines in the third images (obtained with 4-Sets-Cut-Decoding algorithm) are white or gray. We also analyzed the speed of the two algorithms and concluded that 4-Sets-Cut-Decoding algorithm is faster than Cut-Decoding algorithm. The difference in the speed of the algorithms depends on the value of S N R and increases as S N R increases. For example, for S N R = 1, 4-Sets-Cut-Decoding algorithm is two times faster than Cut-Decoding algorithm, and for S N R = −2, it is 16 times faster. In order to clear some of the damage (horizontal lines) on the decoded images, in the next subsection we propose a filter that visually enhances pixels damaged by null-errors and more-candidate-errors.
4.3.1 Filter for Images Decoded by Cryptocodes Based on Quasigroups

In Mechkaroska et al. (2017) the authors define a filter for enhancing the quality of images decoded with RCBQ. For repairing damage, a filter has to locate where the damage appears. Locating the null-errors is easy, since we add zero symbols in place of the undecoded part of the message. In order to locate more-candidate-errors, we change the decoding rule for this kind of error: instead of a random selection of a message from the reduced sets in the last iteration, we take a message of all zero symbols as the decoded message. Now, one pixel is considered as damaged if it belongs to a zero sub-block with at least four consecutive zero nibbles. The basic idea in the definition of this filter is to replace the damaged pixel intensity value with a new value taken over a neighborhood of fixed size. In this filter, the median of the nonzero gray values of the surrounding pixels is used, so this filter is a median filter. For each damaged pixel in the position (i, j), the filter uses the following algorithm:
1. take a 3 × 3 region centered around the pixel (i, j);
2. sort the nonzero intensity values of the pixels in the region into ascending order;
3. select the middle value (the median) as the new value of the pixel (i, j).
(A small illustrative sketch of this filtering step is given at the end of this subsection.)

In Figs. 4.8, 4.9, 4.10 and 4.11, we present the images obtained with Cut-Decoding and 4-Sets-Cut-Decoding algorithm before and after the application of the proposed filter for S N R = −2, S N R = −1, S N R = 0 and S N R = 1, respectively. In each figure, the images in the first row are for Cut-Decoding (without and with the filter, respectively) and the images in the second row are for 4-Sets-Cut-Decoding algorithm (without and with the filter). From the presented images, we can notice that the proposed filter provides a great improvement of the images for all considered values of S N R. Also, this filter gives better results for the images decoded with Cut-Decoding algorithm than for those decoded with 4-Sets-Cut-Decoding algorithm. The reason for this is the larger number of undetected-errors obtained with 4-Sets-Cut-Decoding algorithm; this filter cannot locate this kind of error.
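A minimal sketch of the filtering step, assuming an 8-bit grayscale image stored as a 2-D NumPy array and a boolean mask of already-located damaged pixels (the localisation itself, via runs of zero nibbles, is not repeated here):

```python
import numpy as np

def repair_damaged_pixels(img, damaged):
    """Replace every damaged pixel by the median of the nonzero intensities
    in its 3x3 neighbourhood (steps 1-3 of the filter described above)."""
    out = img.copy()
    h, w = img.shape
    for i, j in zip(*np.nonzero(damaged)):
        i0, i1 = max(i - 1, 0), min(i + 2, h)
        j0, j1 = max(j - 1, 0), min(j + 2, w)
        region = img[i0:i1, j0:j1].ravel()
        nonzero = region[region > 0]
        if nonzero.size:                 # leave the pixel unchanged if no usable neighbours
            out[i, j] = np.median(nonzero)
    return out
```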
4.4 Experimental Results for Audio Files

In these experiments, we use audio that consists of one 16-bit channel with a sampling rate of 44100 Hz; it is a part of Beethoven’s “Ode to Joy” with a total length of approximately 4.3 s. All experiments are for code (72, 576) with rate R = 1/8, using the same parameters as in the previous section. The results presented in this section are given in Bakeva et al. (2019). Here, we also present experimental results for different values of S N R obtained using Cut-Decoding and 4-Sets-Cut-Decoding algorithms.
Fig. 4.8 . S N R = −2
Fig. 4.9 . S N R = −1
Fig. 4.10 . S N R = 0
Fig. 4.11 . S N R = 1
Fig. 4.12 Original audio samples
In both decoding algorithms for RCBQ, when a null-error appears, the decoding process ends earlier and only a part of the message is decoded. Therefore, in the experiments with audio files, we use the same solution as for images. Namely, in the cases when a null-error appears, i.e., all reduced sets are empty in some iteration, we take the strings without redundant symbols from all elements in the sets from the previous iteration and we find their maximal common prefix substring. If this substring has k symbols, then in order to obtain a decoded message of l symbols we take these k symbols and add l − k zero symbols at the end of the message.

In all experiments, we consider the differences between the sample values of the original and transmitted signals. We present these analyses on graphs where the sample number in the sequence of samples constituting the audio signal is on the x-axis and the value of the sample is on the y-axis. In Fig. 4.12 the original audio samples are presented. The samples for S N R = −1 are given in Fig. 4.13. The first graph in this figure is obtained when the audio signal is transmitted through the channel without using any error-correcting code, the second graph is for Cut-Decoding and the third one is for 4-Sets-Cut-Decoding algorithm. In Fig. 4.14, we present only two graphs each for S N R = 0 (a) and S N R = 1 (b): the first one is for Cut-Decoding and the second is for 4-Sets-Cut-Decoding algorithm. The graphs obtained when the audio signal is transmitted through the channel without using any error-correcting code are very similar to the first graph in Fig. 4.13, so we do not show them. From the figures, it is evident that for all values of S N R, the results obtained using 4-Sets-Cut-Decoding algorithm are better than the results obtained with Cut-Decoding algorithm.

For S N R = −3 and S N R = −2, the experiments with Cut-Decoding algorithm did not finish in real time, since during the decoding of some messages we obtain a large number of elements in the decoding-candidate sets. Therefore, we conclude that for S N R < −1 it does not make sense to make experiments with this algorithm, and in Fig. 4.15 we present the results only for 4-Sets-Cut-Decoding algorithm.
Fig. 4.13 . S N R = −1
Fig. 4.14 Results for . S N R = 0 and . S N R = 1
Fig. 4.15 Results for S N R = −3 and S N R = −2

Table 4.7 Experimental results for B E R

S N R   BERcut    BER4−sets
−3      /         0.10386
−2      /         0.03658
−1      0.04741   0.01122
0       0.01114   0.00281
1       0.00175   0.00052
The first graph in these figures is obtained when the audio signal is transmitted through the channel without any error-correcting code, and the second one when 4-Sets-Cut-Decoding algorithm is used. In Tables 4.7 and 4.8, the experimental results for B E R and P E R are given. These tables confirm the conclusions obtained from the figures. We can see that for decoding audio files transmitted through a Gaussian channel, 4-Sets-Cut-Decoding algorithm is better than Cut-Decoding algorithm. All audio files obtained in our experiments for transmission through a Gaussian channel with different S N R can be found on the following link: https://www.dropbox.com/sh/hwhk63ylgukvfq9/AACymAdyUBE1K_QJ1SM39Jea?dl=0.
Table 4.8 Experimental results for P E R

S N R   PERcut    PER4−sets
−3      /         0.24074
−2      /         0.08451
−1      0.08623   0.02499
0       0.02208   0.00632
1       0.00383   0.00113
If one listens to these audio files, he/she would notice that as S N R decreases, the noise increases, but the original melody can still be heard completely in the background. In order to explain the reason behind this phenomenon, we calculate the Energy Spectral Density of the audio signal. The Energy Spectral Density shows how the energy of the signal is distributed over frequency, and it is defined as

E = ∫_{−∞}^{+∞} |s(t)|² dt,

where s is the signal. Since we consider finite discrete signals, in this case a 4.3 s long audio signal, using Parseval’s theorem we can express the Energy Spectral Density in the frequency domain using Fourier analysis, i.e.,

∫_{−∞}^{+∞} |s(t)|² dt = ∫_{−∞}^{+∞} |ŝ(f)|² df,
where ŝ(f) is the Fourier transform of the signal s. Therefore, using the Fast Fourier Transform algorithm from MatLab’s Signal Processing Toolbox, in Fig. 4.16 we plot the energy densities for the sounds. The first graph is for the original sound, the second is for the sound transmitted through a channel with S N R = 0 without using an error-correcting code, and the last two graphs are for the same sound coded/decoded using Cut-Decoding and 4-Sets-Cut-Decoding algorithms, respectively. From the graphs, it is evident that after the transmission of the signal, the frequencies carrying most of the signal’s energy are still present, although a bit altered, but all other energies are amplified. Consequently, this is the reason why we can still hear the original melody in the background, intermixed with the noisy sounds. The graphs confirm that the noise is reduced when Cut-Decoding, and especially 4-Sets-Cut-Decoding algorithm, is used. The same results are obtained for other values of S N R. Note that the previous link also contains the audio file crypt_audio.wav obtained by encrypting the original audio. In this encrypted file, we cannot hear anything from the original melody, which confirms that the algorithm also encrypts audio files.
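The discrete counterpart of |ŝ(f)|² can be computed with any FFT implementation; the book uses MATLAB’s Signal Processing Toolbox, and the following NumPy sketch is an assumed Python equivalent for a single-channel signal sampled at 44100 Hz.

```python
import numpy as np

def energy_spectral_density(samples, fs=44_100.0):
    """Return the positive frequencies and |FFT(s)|^2, the discrete analogue of the
    energy spectral density |s^(f)|^2 plotted in Fig. 4.16."""
    s = np.asarray(samples, dtype=float)
    spectrum = np.fft.rfft(s)
    freqs = np.fft.rfftfreq(s.size, d=1.0 / fs)
    return freqs, np.abs(spectrum) ** 2

# Parseval's theorem (discrete form): sum(|s[n]|^2) == sum(|S[k]|^2) / N
# for the full two-sided FFT, so the energy can be measured in either domain.
```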
Fig. 4.16 Energy spectral density for . S N R = 0
References
Bakeva, V., Popovska-Mitrovikj, A., Mechkaroska, D., Dimitrova, V., Jakimovski, B., & Ilievski, V. (2019). Gaussian channel transmission of images and audio files using cryptcoding. IET Communications, 13(11), 1625–1632.
Cover, T. M., & Thomas, A. J. (2005). Elements of information theory.
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2017). A filter for images decoded using cryptocodes based on quasigroups. In Proceedings of the 14th International Conference on Informatics and Information, Mavrovo, April 2017.
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2016). Cryptcodes based on quasigroups in Gaussian channel. Quasigroups and Related Systems, 24(2), 249–268.
Sankar, K. (2007). Bit Error Rate (BER) for BPSK modulation. August 2007. http://www.dsplog.com/2007/08/05/bit-error-probability-for-bpsk-modulation/
Chapter 5
Fast Algorithms for Cryptocodes Based on Quasigroups
Abstract In this chapter, we consider coding/decoding algorithms for RCBQ called Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms. The goal of these algorithms is to increase the decoding speed and decrease the probability of bit-error. Here, we present several experimental results obtained with these algorithms and analyze the results for bit-error and packet-error probabilities and decoding speed when messages are transmitted through Gaussian channels with different values of the signal-to-noise ratio. Also, we investigate the performances of the fast algorithms for the transmission of images and audio files.
Keywords Cryptocodes · Decoding speed · Gaussian channel · Images · Audio
5.1 Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding Algorithms

As we mentioned previously, the decoding with Cut-Decoding and 4-Sets-Cut-Decoding algorithms is actually a list decoding. Therefore, the speed of the decoding process depends on the list size (a shorter list gives faster decoding). In both algorithms, the list size depends on Bmax (the maximal assumed number of bit errors in a block). For smaller values of Bmax, shorter lists are obtained. However, we do not know in advance how many errors appear during the transmission of a block. If this number of errors is larger than the assumed number of bit errors Bmax in a block, the errors will not be corrected. On the other side, if Bmax is too large, we have long lists and the process of decoding is too slow. Also, a larger value of Bmax can lead to the decoding process ending with a more-candidate-error (the correct message will be in the list of the last iteration if there are no more than Bmax errors during transmission). Therefore, with all decoding algorithms for RCBQ, more-candidate-errors can be obtained even though the bit-error probability of the channel is small and the number of bit errors in a block is not greater than Bmax (or no errors occur during transmission). In order to solve this problem, in Popovska-Mitrovikj et al. (2017) the authors
propose new modifications of Cut-Decoding and 4-Sets-Cut-Decoding algorithms, called Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms. Here, instead of a fixed value of Bmax in both algorithms, we start with Bmax = 1. If we have successful decoding, the procedure is done. If not, we increase the value of Bmax by 1 and repeat the decoding process with the new value of Bmax, etc. The decoding finishes with Bmax = 4 (for rate 1/4) or with Bmax = 5 (for rate 1/8); decoding with larger values (Bmax > 5) leads to too long lists and decoding that is too slow. These algorithms try to decode the message using the shorter lists, and in the case of successful decoding with a small value of Bmax (Bmax < 4), we avoid long lists and slower decoding. Also, we decrease the number of more-candidate-errors.
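The control flow of the fast algorithms is a thin wrapper around a complete decoding run. The sketch below assumes a function decode(received, bmax) that performs a full Cut-Decoding or 4-Sets-Cut-Decoding run and reports whether it succeeded; this function and its signature are placeholders for illustration.

```python
def fast_decode(received, decode, max_bmax=5):
    """Fast-(4-Sets-)Cut-Decoding idea: start with Bmax = 1 and, while decoding is
    unsuccessful, repeat the whole decoding with Bmax increased by 1, up to
    max_bmax (4 for rate 1/4, 5 for rate 1/8)."""
    for bmax in range(1, max_bmax + 1):
        ok, message = decode(received, bmax)
        if ok:
            return message
    return message      # last attempt; may still end with a null- or more-candidate-error
```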
5.2 Experimental Results for Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding Algorithms

In this section, we give the experimental results presented in Popovska-Mitrovikj et al. (2017), obtained with Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms for rates R = 1/4 and R = 1/8, for different values of S N R in a Gaussian channel. We compare these results with the corresponding results obtained with Cut-Decoding and 4-Sets-Cut-Decoding algorithms. In order to show the efficiency of these algorithms, we present the percentages of messages whose decoding finished with Bmax = 1, 2, 3, 4 or 5.

First, we present results for code (72, 288) with rate R = 1/4. For this code, we made experiments with Cut-Decoding algorithm and Fast-Cut-Decoding algorithm using the following parameters:
• Redundancy pattern: 1100 1100 1000 0000 1100 1000 1000 0000 1100 1100 1000 0000 1100 1000 1000 0000 0000 0000 for rate 1/2,
• Two different keys of 10 nibbles, and
• Quasigroup (Q, ∗) given in Table 3.1 and the corresponding parastrophe.
In the experiments with Cut-Decoding algorithm we use Bmax = 4, and for Fast-Cut-Decoding algorithm the maximum value of Bmax is 4. In Table 5.1, we give experimental results for the bit-error probabilities B E Rcut and packet-error probabilities P E Rcut (obtained with Cut-Decoding algorithm) and the corresponding probabilities B E R f−cut and P E R f−cut (obtained with Fast-Cut-Decoding algorithm). For this code, the authors concluded that for values of S N R smaller than 0 the coding does not make sense, since the bit-error probability obtained with Cut-Decoding algorithm is larger than the bit-error probability in the channel (without coding). Therefore, in Table 5.1 we present experimental results only for S N R ≥ 0. Analyzing the results in Table 5.1, we can conclude that for all values of S N R, the results for B E R f−cut are better than the corresponding results for B E Rcut. For S N R = 4, B E R f−cut is even 50 times smaller than B E Rcut. The same conclusions can be derived from the comparison of P E R f−cut and P E Rcut.
Table 5.1 Experimental results with R = 1/4

SNR   BERcut    BERf-cut   PERcut    PERf-cut
0     0.07153   0.06122    0.10001   0.08993
1     0.01830   0.01493    0.02722   0.02275
2     0.00249   0.00155    0.00410   0.00274
3     0.00073   0.00006    0.00230   0.00014
4     0.00052   0.00001    0.00252   0.00007
Table 5.2 Percentage of messages decoded with different values of Bmax

SNR   Bmax = 1 (%)   Bmax = 2 (%)   Bmax = 3 (%)   Bmax = 4 (%)
0     2.74           26.35          37.28          33.63
1     14.66          45.83          28.98          10.53
2     44.62          41.90          11.41          2.07
3     76.45          20.50          2.79           0.26
4     93.68          5.89           0.42           0.01
In Table 5.2, we present the percentage of messages whose decoding ended with Bmax = 1, Bmax = 2, Bmax = 3 or Bmax = 4. From the results given there, we can see that for smaller values of SNR (0 or 1), we have a larger percentage of messages whose decoding needed Bmax = 3 or 4. On the other hand, for SNR = 4, the decoding of more than 90% of the messages finished successfully with Bmax = 1. From these results, we can conclude that for larger values of SNR (low noise in the channel) decoding with the Fast-Cut-Decoding algorithm is much faster than with the old one. Further on, we present the results for the code (72, 576) with rate R = 1/8. We compare experimental results obtained with the Cut-Decoding, Fast-Cut-Decoding, 4-Sets-Cut-Decoding, and Fast-4-Sets-Cut-Decoding algorithms. In the experiments, we used the following parameters:
– In the Cut-Decoding and Fast-Cut-Decoding algorithms: redundancy pattern 1100 1100 1000 0000 1100 1000 1000 0000 1100 1100 1000 0000 1100 1000 1000 0000 0000 0000 for rate 1/4 and four different keys of 10 nibbles.
– In the 4-Sets-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms: redundancy pattern 1100 1110 1100 1100 1110 1100 1100 1100 0000 for rate 1/2 and four different keys of 10 nibbles.
– In all experiments we used the same quasigroup on Q given in Table 3.1.
Here, in the experiments with the old algorithms we use Bmax = 5, and for the Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms the maximum value of Bmax is 5. In Table 5.3, we present experimental results for the bit-error probabilities BERcut, BERf-cut, BER4sets, BERf-4sets obtained with the Cut-Decoding, Fast-Cut-Decoding, 4-Sets-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms, respectively, and the corresponding packet-error probabilities PERcut, PERf-cut, PER4sets, PERf-4sets.
Table 5.3 Experimental results with R = 1/8

SNR   BERcut    BERf-cut   BER4sets   BERf-4sets
−2    /         /          0.06548    0.04905
−1    0.05591   0.03920    0.02225    0.00795
0     0.01232   0.00872    0.00449    0.00074
1     0.00224   0.00069    0.00066    0.00013
2     0.00037   0          0.00008    0

SNR   PERcut    PERf-cut   PER4sets   PERf-4sets
−2    /         /          0.11283    0.09086
−1    0.10491   0.07171    0.03831    0.01656
0     0.02376   0.01598    0.00835    0.00151
1     0.00425   0.00187    0.00136    0.00021
2     0.00058   0          0.00014    0
Table 5.4 Percentage of messages decoded with different values of Bmax

Fast-Cut-Decoding
SNR   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−2    /          /          /          /          /
−1    0%         1.57%      25.32%     44.46%     28.64%
0     0.07%      13.80%     49.38%     27.88%     8.86%
1     1.71%      48.12%     39.16%     9.43%      1.58%
2     18.84%     65.41%     13.88%     1.68%      0.18%

Fast-4-Sets-Cut-Decoding
SNR   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−2    0%         1.14%      7.68%      51.39%     39.79%
−1    0%         6.99%      32.13%     49.54%     11.02%
0     3.39%      28.74%     48.76%     17.62%     1.48%
1     20.24%     51.47%     25.88%     2.33%      0.09%
2     57.86%     37.67%     4.37%      0.10%      0.01%
For SNR smaller than −1, using the Cut-Decoding algorithm does not make sense since the values of the bit-error probabilities are larger than the bit-error probability in the channel. From the results given in Table 5.3, we can derive similar conclusions for rate 1/8 as for rate 1/4. Namely, for all values of SNR, the results for BER and PER obtained with the fast algorithms are better than the corresponding results obtained with the old versions of these algorithms. Again, this improvement is more significant for larger values of SNR, i.e., for low noise in the channel.
Moreover, for SNR ≥ 2, the values of BER and PER obtained with the fast algorithms are equal to 0. In Table 5.4, we give the percentage of messages whose decoding ended with Bmax = 1, Bmax = 2, Bmax = 3, Bmax = 4 or Bmax = 5 for both fast algorithms. From Table 5.4, we can see that with the Fast-4-Sets-Cut-Decoding algorithm, for smaller values of SNR we have a larger percentage of messages whose decoding needed Bmax = 4 or 5. On the other hand, for SNR = 2, the decoding of more than 80% of the messages finished successfully with Bmax = 1 or Bmax = 2. Therefore, we can conclude that for larger values of SNR (low noise in the channel) decoding with the proposed fast algorithms is much faster than with the old ones.
5.3 Experimental Results for Images

In this section, we present the results of an investigation of the performance of the fast algorithms for transmission of images. For this goal, the authors made many experiments using a Gaussian channel and RCBQ as an error-correcting code. Here, we compare the results obtained with four algorithms for RCBQ: Cut-Decoding, Fast-Cut-Decoding, 4-Sets-Cut-Decoding, and Fast-4-Sets-Cut-Decoding, with the modifications for reducing the number of unsuccessful decodings (defined in Sect. 2.4). All experiments are made for the code (72, 576) with rate R = 1/8 and different values of SNR in the Gaussian channel. In the experiments, the same parameters as previously are used:
– In the Cut-Decoding and Fast-Cut-Decoding algorithms: redundancy pattern 1100 1100 1000 0000 1100 1000 1000 0000 1100 1100 1000 0000 1100 1000 1000 0000 0000 0000 for rate 1/4 and four different keys of 10 nibbles.
– In the 4-Sets-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms: redundancy pattern 1100 1110 1100 1100 1110 1100 1100 1100 0000 for rate 1/2 and four different keys of 10 nibbles.
– In all experiments we used the same quasigroup on Q given in Table 3.1.
Here, in the experiments with the Cut-Decoding and 4-Sets-Cut-Decoding algorithms we use Bmax = 5, and for the Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms the maximum value of Bmax is 5. In the case of a null-error or a more-candidate-error, we apply the same heuristic as in Sect. 3.2. In the experiments for image transmission, we use the same image of “Lenna”, given in Fig. 3.1. The image is transmitted through a Gaussian channel (with different values of SNR) and the corresponding decoding algorithm is applied. In Figs. 5.1, 5.2, 5.3 and 5.4, we present the images obtained for SNR = −2, SNR = −1, SNR = 0 and SNR = 1, correspondingly. In each figure, the images in the first row are obtained using the Cut-Decoding and 4-Sets-Cut-Decoding algorithms, and in the second row, the images are obtained using the corresponding fast version of the algorithm. If we compare the images, we can conclude that the images obtained with the Fast-Cut-Decoding algorithm and the Fast-4-Sets-Cut-Decoding algorithm are clearer than the corresponding images obtained with the Cut-Decoding and 4-Sets-Cut-Decoding algorithms.
Fig. 5.1 SNR = −2
Fig. 5.2 SNR = −1
Fig. 5.3 SNR = 0
Fig. 5.4 SNR = 1
Table 5.5 Experimental results with R = 1/8

SNR   BERcut    BERf-cut   BER4sets   BERf-4sets
−2    0.13257   0.14189    0.03487    0.04886
−1    0.04029   0.05664    0.01233    0.00823
0     0.00990   0.00594    0.00306    0.00062
1     0.00161   0.00140    0.00040    0

SNR   PERcut    PERf-cut   PER4sets   PERf-4sets
−2    0.24265   0.24045    0.24952    0.09063
−1    0.07429   0.08019    0.07127    0.01634
0     0.01675   0.02622    0.01332    0.00151
1     0.00316   0.00645    0.00357    0
Table 5.6 Percentage of messages decoded with different values of Bmax

SNR   Bmax = 1 (%)   Bmax = 2 (%)   Bmax = 3 (%)   Bmax = 4 (%)   Bmax = 5 (%)
−2    0              0.12           6.04           33.92          59.91
−1    0              1.61           25.89          43.27          29.24
0     0.08           14.16          48.15          28.93          8.68
1     1.99           47.68          39.29          9.37           1.68
2     18.76          66.03          13.43          1.68           0.07
Also, the images obtained with the Fast-4-Sets-Cut-Decoding algorithm are clearer than the corresponding images obtained with the Fast-Cut-Decoding algorithm for all values of SNR. The previous conclusions obtained from the images can be confirmed with the values of the bit-error and packet-error probabilities. In Table 5.5, we present experimental results for the bit-error probabilities BERcut, BERf-cut, BER4sets, BERf-4sets obtained with the Cut-Decoding, Fast-Cut-Decoding, 4-Sets-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms, respectively, and the corresponding packet-error probabilities PERcut, PERf-cut, PER4sets, PERf-4sets. The percentages of messages whose decoding ended with Bmax = 1, Bmax = 2, Bmax = 3, Bmax = 4 or Bmax = 5 with the Fast-Cut-Decoding algorithm are presented in Table 5.6. The conclusions are very similar to the conclusions for ordinary messages. From the results given there, we can see that for greater values of SNR (1 or 2), we have a larger percentage of messages whose decoding needed Bmax = 2. From these results, we can again conclude that for greater values of SNR (low noise in the channel) decoding with the fast algorithm is much faster than with the old one. In Table 5.7, we present the corresponding percentages for the Fast-4-Sets-Cut-Decoding algorithm. From the results given in this table, we can derive the same conclusions as previously.
Table 5.7 Percentage of messages decoded with different values of Bmax

SNR   Bmax = 1 (%)   Bmax = 2 (%)   Bmax = 3 (%)   Bmax = 4 (%)   Bmax = 5 (%)
−2    0.03           1.15           7.84           51.11          39.87
−1    0.36           7.28           32.74          48.81          10.82
0     3.53           28.00          49.77          17.19          1.51
1     20.54          51.37          25.84          2.14           0.10
2     57.79          37.70          4.49           0.03           0
5.4 Experimental Results for Audio Files

In this section, we investigate the performance of the fast algorithms for the transmission of audio files. The experiments are made for the Gaussian channel and the results are given in Mechkaroska et al. (2018). Similarly as for images, we compare the results for the code (72, 576) obtained using four algorithms for RCBQ: Cut-Decoding, Fast-Cut-Decoding, 4-Sets-Cut-Decoding, and Fast-4-Sets-Cut-Decoding, for rate R = 1/8 and different values of SNR in a Gaussian channel. In these experiments, we use an audio file that consists of one 16-bit channel with a sampling rate of 44100 Hz; it is a part of Beethoven’s “Ode to Joy” with a total length of approximately 4.3 s. In the experiments with audio files, the same code parameters as previously are used. Similarly as for images, in the experiments with the Cut-Decoding and 4-Sets-Cut-Decoding algorithms we use Bmax = 5 and for the fast algorithms the maximum value of Bmax is 5. In all decoding algorithms for RCBQ, when a null-error appears, we use the same solution described in Sect. 4.4. In Table 5.8, we present experimental results for the bit-error probabilities BERcut, BERf-cut, BER4sets, BERf-4sets obtained with the Cut-Decoding, Fast-Cut-Decoding, 4-Sets-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms, respectively, and the corresponding packet-error probabilities PERcut, PERf-cut, PER4sets, PERf-4sets. For SNR smaller than −1, the usage of the Cut-Decoding algorithm does not make sense since the values of the bit-error probabilities are larger than the bit-error probability in the channel. As previously, from the experimental results given in Table 5.8 we can conclude that for all values of SNR, the results for PER and BER obtained with the fast algorithms are better than the results obtained with the Cut-Decoding and 4-Sets-Cut-Decoding algorithms, especially for larger values of SNR (low noise). Even more, for SNR ≥ 2 the bit-error and packet-error probabilities obtained with the fast algorithms are 0. Also, decoding with these new algorithms is much faster than with the old ones. For all experiments, we also consider the differences between the sample values of the original and the decoded signal. We present these analyses on graphs where the sample number in the sequence of samples constituting the audio signal is on the
Table 5.8 Experimental results for BER and PER

SNR   BERcut    BERf-cut   BER4sets   BERf-4sets
−2    /         /          0.03658    0.04782
−1    0.04741   0.04019    0.01122    0.00825
0     0.01114   0.00713    0.00281    0.00081
1     0.00175   0.00086    0.00052    0.00003

SNR   PERcut    PERf-cut   PER4sets   PERf-4sets
−2    /         /          0.08451    0.08917
−1    0.08623   0.07589    0.02499    0.01644
0     0.02208   0.01418    0.00632    0.00165
1     0.00383   0.00177    0.00113    0.00012
Fig. 5.5 Original audio samples
x-axis and the value of the sample is on the y-axis. In Fig. 5.5 the original audio samples are presented. Graphs of the decoded audio files for the considered values of SNR are given in Figs. 5.6, 5.7, 5.8 and 5.9. In Fig. 5.6 (for SNR = −2) we present only two graphs, the first one for the 4-Sets-Cut-Decoding algorithm and the second one for the Fast-4-Sets-Cut-Decoding algorithm, since coding/decoding with the Cut-Decoding algorithm does not make sense for this value of SNR (the obtained value of BER is larger than the bit-error probability in the channel). For the other values of SNR, in Figs. 5.7, 5.8 and 5.9, we present four graphs (for the decoded audio files obtained using all four mentioned algorithms) in the following order: Cut-Decoding, Fast-Cut-Decoding, 4-Sets-Cut-Decoding and Fast-4-Sets-Cut-Decoding. From these figures, we can derive the same conclusions as from the values of PER and BER given in Table 5.8. From the graphs, it is evident that for all values of SNR, the results obtained using the 4-Sets-Cut-Decoding algorithm are better than the results obtained with the Cut-Decoding algorithm.
Fig. 5.6 Results for SNR = −2
Fig. 5.7 Results for SNR = −1
Fig. 5.8 Results for SNR = 0
Fig. 5.9 Results for SNR = 1
Also, the results obtained with the fast algorithms are better than the corresponding results obtained with the older versions of these algorithms. All audio files obtained in our experiments for transmission of audio files through a Gaussian channel with different SNR can be found at the following link: https://www.dropbox.com/sh/5zq5ly6qtiho8d6/AACTQBgUDopFq9psdbaMb8BKa?dl=0.
If someone listened to these audio files, they would notice that as SNR decreases, the noise increases, but the original melody can still be completely heard in the background. This is also evident in the graphs. If we compare the graphs for the files decoded with all four algorithms with the graph of the original audio samples, we can see that the samples of the original audio are contained in all graphs. This can be confirmed by analyzing the energy spectral density of the audio signal, as in Sect. 4.4. Therefore, we can still hear the original melody in the background, intermixed with noisy sounds. In order to clear some of these noisy sounds, in the next subsection we propose a filter that can repair some of the damage done by null-errors and more-candidate-errors.
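As an illustration of that spectral check (a sketch under our own assumptions; the exact analysis in Sect. 4.4 is not reproduced here), the energy spectral density can be computed with a discrete Fourier transform and compared between the original and the decoded samples:

```python
import numpy as np

def energy_spectral_density(samples, fs=44100):
    """Return the frequencies and |X(f)|^2 of a real-valued audio signal."""
    spectrum = np.fft.rfft(np.asarray(samples, dtype=float))
    esd = np.abs(spectrum) ** 2                        # energy per frequency bin
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)  # fs = 44100 Hz in our experiments
    return freqs, esd

# The melody's dominant peaks appear in both densities, while the decoding
# noise mostly adds a broadband floor, e.g. (hypothetical variable names):
# f_orig, esd_orig = energy_spectral_density(original_samples)
# f_dec,  esd_dec  = energy_spectral_density(decoded_samples)
```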
5.4.1 Filter for Enhancing the Quality of Audio Decoded by Cryptocodes Based on Quasigroups

For repairing damage, a filter has to locate the position where it appears. Locating null-errors is easy, since we add zero symbols in place of the undecoded part of the message. In order to locate more-candidate-errors, we change the decoding rule for this kind of error in the same way as for the filter for decoded images, given
in Sect. 4.3.1. Namely, instead of a random selection of a message from the reduced sets in the last iteration, we take a message of all zero symbols as the decoded message. The basic idea of this filter is to replace damaged (erroneously decoded) nibbles with a new value derived from the values of several previous nibbles. So, we take all decoded messages as one list of nibbles, and a nibble is considered an erroneously decoded symbol if it belongs to a zero sub-list with at least four consecutive zero nibbles. Then we replace each erroneously decoded symbol (nibble) with the median of the previous 2k + 1 nibbles in the list. If the erroneous nibble is near the beginning of the list and there are not 2k + 1 previous ones, then the median of all previous nibbles (to the beginning of the list) is taken. For repairing a nibble, we use only the previous 2k + 1 nibbles, since the next nibbles are zeros (the erroneously decoded symbol belongs to a zero sub-list with at least four consecutive zero nibbles) and they are probably erroneously decoded as well. We made experiments with 2k + 1 equal to 3, 5, 7, and 9; the results were similar, but they are a little better for 2k + 1 equal to 7 or 9. Further on, we present results obtained with the median of the 7 previous nibbles. Notice that we take an odd number of previous nibbles since a median of these nibbles is computed, and if this number were even, the median would be the mean of the two middle values, so it could be a number that is not in Q. A minimal sketch of this repair procedure is given below.
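The sketch assumes the decoded audio is available as a flat list of nibbles (integers from 0 to 15) and that damaged positions are exactly the runs of at least four consecutive zero nibbles, as described above; repairing in place (so that later positions of a run see already-repaired predecessors) is our own simplification.

```python
from statistics import median

def repair_nibbles(nibbles, window=7, min_zero_run=4):
    """Replace every nibble inside a run of >= min_zero_run zeros with the median
    of the (up to) `window` previous nibbles; window = 7 gave the best results."""
    repaired = list(nibbles)
    n = len(repaired)
    i = 0
    while i < n:
        j = i
        while j < n and repaired[j] == 0:      # measure the zero run starting at i
            j += 1
        if j - i >= min_zero_run:              # long zero run -> erroneously decoded
            for pos in range(i, j):
                prev = repaired[max(0, pos - window):pos]
                if prev:                       # at the very beginning there may be none
                    repaired[pos] = int(median(prev))
        i = j if j > i else i + 1
    return repaired
```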
In Figs. 5.10, 5.11, 5.12 and 5.13, we present graphs of samples of audio files decoded with the Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms before and after using the proposed filter, for SNR = −2, SNR = −1, SNR = 0 and SNR = 1, correspondingly. In Fig. 5.10, the first graph is for the file decoded with the Fast-4-Sets-Cut-Decoding algorithm without the filter, and the second graph is obtained using the filter. In the other figures, the first two graphs are for Fast-Cut-Decoding (without and with the filter, correspondingly), and the last two are for the Fast-4-Sets-Cut-Decoding algorithm (without and with the filter). Also, the audio files obtained after application of the proposed filter can be found at the following link: https://www.dropbox.com/sh/5zq5ly6qtiho8d6/AACTQBgUDopFq9psdbaMb8BKa?dl=0. From the given graphs and audio files, we can notice that the proposed filter provides a great improvement in audio quality for all considered values of SNR.
Fig. 5.10 SNR = −2
Fig. 5.11 SNR = −1
Fig. 5.12 SNR = 0
Fig. 5.13 SNR = 1
References

Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva Smiljkova, V. (2018). Performances of fast algorithms for random codes based on quasigroups for transmission of audio files in Gaussian channel. In S. Kalajdziski & N. Ackovska (Eds.), ICT Innovations 2018. Engineering and Life Sciences (pp. 286–296). Springer International Publishing.
Popovska-Mitrovikj, A., Bakeva, V., & Mechkaroska, D. (2017). New decoding algorithm for cryptcodes based on quasigroups for transmission through a low noise channel. In D. Trajanov & V. Bakeva (Eds.), Communications in Computer and Information Science (CCIS), ICT Innovations 2017 (Vol. 778, pp. 196–204). Springer.
Chapter 6
Cryptocodes Based on Quasigroups for Burst Channels
Abstract In this chapter we consider a modification of the existing cryptocodes suitable for transmission over burst channels. For the simulation of burst errors, we use the Gilbert-Elliott model. We consider two kinds of Gilbert-Elliott channels: in the first one, the channel in each state is binary-symmetric, and in the second one, it is Gaussian. Experimental results for bit-error and packet-error probabilities obtained for different channel and code parameters are presented, and we make a comparison of the results obtained with these burst algorithms and the previously considered algorithms. Also, we investigate the performance of these algorithms for the transmission of images through a burst channel. In all experiments, for different values of the bit-error probability (in BSC) and SNR (in Gaussian channels), the differences between the transmitted and decoded images are considered. Further on, we consider an application of the filter for enhancing the quality of decoded images; with this filter, clearer images are obtained. At the end of this chapter, we investigate an adaptation of the Fast-Cut-Decoding and Fast-4-Sets-Cut-Decoding algorithms for transmission through burst channels, called the FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms.
Keywords Cryptocodes · Burst channel · Gilbert-Elliott model · Decoding speed · Images
6.1 Gilbert-Elliott Burst Model

The Gilbert-Elliott burst model is a channel model introduced by Edgar Gilbert and E. O. Elliott. The model is based on a Markov chain with two states: G (good or gap) and B (bad or burst). In the good state, the probability of incorrect transmission of a bit is small, and in the bad state, this probability is large. The model is widely used for describing burst error patterns in transmission channels, which enables simulation of the digital error performance of communication links. The model is shown in Fig. 6.1, where G represents the good state and B represents the bad state. The transition probability from the bad to the good state is PBG and the transition probability from the good to the bad state is PGB (Knag et al., 1998; Labiod, 1999).
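For orientation (this is a standard property of two-state Markov chains rather than something stated in the cited works), the long-run fractions of time the channel spends in the two states are

π_G = P_BG / (P_GB + P_BG),    π_B = P_GB / (P_GB + P_BG).

For example, P_GG = 0.8 and P_BB = 0.2 (i.e., P_GB = 0.2 and P_BG = 0.8) keep the channel in the good state about 80% of the time, whereas P_GG = 0.2 and P_BB = 0.8 keep it in the bad state about 80% of the time. This is useful to keep in mind when reading the experimental results later in this chapter.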
Fig. 6.1 Gilbert-Elliott Burst Model
Fig. 6.2 Coding/decoding with new algorithms
We made experiments with two kinds of Gilbert-Elliott channels. In the first one, the channel in each state is binary-symmetric, with bit-error probability Pe(G) in the good state and Pe(B) in the bad state. In the second one, the channels are Gaussian, where the SNR is high in the good state and low in the bad state. A simulation sketch of the binary-symmetric variant is given below.
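The following small sketch shows how such a channel can be simulated; the function name, the per-bit state update and the default error probabilities are our own illustrative assumptions, not the simulator used in the cited experiments.

```python
import random

def gilbert_elliott_bsc(bits, p_gg, p_bb, pe_good=0.01, pe_bad=0.16):
    """Flip each bit with Pe(G) in the good state and Pe(B) in the bad state,
    staying in the current state with probability PGG (good) or PBB (bad)."""
    state = "G"
    out = []
    for b in bits:
        pe = pe_good if state == "G" else pe_bad
        out.append(b ^ (random.random() < pe))   # transmit the bit, possibly flipped
        stay = p_gg if state == "G" else p_bb
        if random.random() >= stay:              # leave the current state
            state = "B" if state == "G" else "G"
    return out

# Example: a bursty channel that spends most of its time in the bad state
# noisy = gilbert_elliott_bsc(codeword_bits, p_gg=0.2, p_bb=0.8)
```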
6.2 New Cryptocodes for Burst Channels

In experiments with burst channels, the previously explained algorithms do not give good results, and therefore in Mechkaroska et al. (2019b) the authors define new coding/decoding algorithms called the Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding algorithms. In these algorithms, an interleaver after coding and the corresponding deinterleaver before decoding are added. It is known that interleaving and deinterleaving are useful for reducing errors caused by burst errors in a communication system. In the proposed cryptocodes for burst channels, after coding, the interleaver rearranges (by rows) the m nibbles of the codeword in a matrix of order (m/k) × k. The output of the interleaver is the message obtained by reading this matrix by columns. The mixed message is transmitted through a burst channel, and before decoding, the received message is rearranged with the corresponding deinterleaver. This process of coding/decoding in the new algorithms is schematically presented in Fig. 6.2, and a minimal sketch of the interleaver/deinterleaver pair is given below.
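The sketch assumes the codeword length m is divisible by k (the experiments later in this chapter use k = 9); the function names are illustrative.

```python
def interleave(symbols, k):
    """Write m symbols row-wise into an (m/k) x k matrix and read them column-wise."""
    rows = len(symbols) // k
    matrix = [symbols[r * k:(r + 1) * k] for r in range(rows)]
    return [matrix[r][c] for c in range(k) for r in range(rows)]

def deinterleave(symbols, k):
    """Invert interleave(): element (r, c) was written to position c * rows + r."""
    rows = len(symbols) // k
    return [symbols[c * rows + r] for r in range(rows) for c in range(k)]

# interleave(list(range(6)), 3) == [0, 3, 1, 4, 2, 5]
# deinterleave(interleave(list(range(6)), 3), 3) == list(range(6))
```

A burst of channel errors that hits consecutive transmitted symbols is spread by the deinterleaver over different rows, i.e., over different parts of the codeword, which is what makes the errors correctable by the decoding algorithms.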
6.3 Experimental Results for Gilbert-Elliott Model

In this section, we present the experimental results obtained with RCBQ for transmission through a burst channel. These results are given in Mechkaroska et al. (2019b). For the simulation of the channel, we use the Gilbert-Elliott model. We compare the
values of the packet-error probability (PER) and bit-error probability (BER) obtained with the Cut-Decoding, 4-Sets-Cut-Decoding, Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding algorithms. We consider the code (72, 288) with rate 1/4 using the Cut-Decoding algorithm and the Burst-Cut-Decoding algorithm, and also the code (72, 576) with rate 1/8 using all four algorithms (Cut-Decoding, 4-Sets-Cut-Decoding, Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding). In the experiments, we use the same code parameters as previously:
• for the code (72, 288), in the Cut-Decoding and Burst-Cut-Decoding algorithms, the parameters are:
– redundancy pattern: 1100 1110 1100 1100 1110 1100 1100 1100 0000 for rate 1/2 and two different keys of 10 nibbles.
• for the code (72, 576), the code parameters are:
– in Cut-Decoding/Burst-Cut-Decoding: redundancy pattern 1100 1100 1000 0000 1100 1000 1000 0000 1100 1100 1000 0000 1100 1000 1000 0000 0000 0000, for rate 1/4 and two different keys of 10 nibbles.
– in 4-Sets-Cut-Decoding/Burst-4-Sets-Cut-Decoding: redundancy pattern 1100 1110 1100 1100 1110 1100 1100 1100 0000 for rate 1/2 and four different keys of 10 nibbles.
For all experiments, we use the same quasigroup on Q given in Table 3.1. For the burst algorithms, we made experiments with different values of k, i.e., with all divisors of 36 (the number of nibbles of the two (or four) concatenated codewords in the Cut-Decoding/Burst-Cut-Decoding (or 4-Sets-Cut-Decoding/Burst-4-Sets-Cut-Decoding) algorithms is 36). The best results are obtained for k = 9, so further on we present only the results for this value of k. In Sect. 6.3.1, we present experimental results for the Gilbert-Elliott model with binary-symmetric channels, for different values of the bit-error probability and different transition probabilities. The experimental results for different values of SNR and different transition probabilities in the Gilbert-Elliott model with Gaussian channels are given in Sect. 6.3.2.
6.3.1 Experiments for Gilbert-Elliott with BSC Channels

In all experiments for the Gilbert-Elliott model with binary-symmetric channels, we use the bit-error probability Pe(G) = 0.01 in the good state and several different values of the bit-error probability in the bad state, i.e., Pe(B) ∈ {0.2, 0.16, 0.13, 0.1}.
Table 6.1 Experimental results for Gilbert-Elliott channel with BSC and R = 1/4

PGG = 0.8, PBB = 0.8
Pe(B)   PERcut    PERb-cut   BERcut    BERb-cut
0.1     0.14069   0.06818    0.10453   0.05030
0.13    0.34014   0.18173    0.26055   0.13535
0.16    0.58078   0.32466    0.45424   0.24698
0.2     0.81271   0.51087    0.67081   0.40820

PGG = 0.5, PBB = 0.5
Pe(B)   PERcut    PERb-cut   BERcut    BERb-cut
0.1     0.13464   0.04665    0.09799   0.03376
0.13    0.32711   0.12283    0.24417   0.08970
0.16    0.56365   0.23394    0.43088   0.17073
0.2     0.82358   0.43581    0.65848   0.33149

PGG = 0.2, PBB = 0.8
Pe(B)   PERcut    PERb-cut   BERcut    BERb-cut
0.1     0.20470   0.14177    0.14853   0.10266
0.13    0.46781   0.34619    0.35206   0.25621
0.16    0.73048   0.59497    0.57155   0.45788
0.2     0.94247   0.84951    0.79485   0.68579

PGG = 0.8, PBB = 0.2
Pe(B)   PERcut    PERb-cut   BERcut    BERb-cut
0.1     0.05709   0.00921    0.04201   0.00560
0.13    0.14292   0.01555    0.10386   0.01066
0.16    0.27728   0.03333    0.20678   0.02259
0.2     0.48898   0.07250    0.37152   0.05260
In Table 6.1, we give experimental results for the bit-error probabilities BERcut and packet-error probabilities PERcut (obtained with the Cut-Decoding algorithm) and the corresponding probabilities BERb-cut and PERb-cut (obtained with the Burst-Cut-Decoding algorithm) for the code with rate 1/4, for the following combinations of the transition probability from good to good state PGG and from bad to bad state PBB:
• PGG = 0.8 and PBB = 0.8
• PGG = 0.5 and PBB = 0.5
• PGG = 0.2 and PBB = 0.8
• PGG = 0.8 and PBB = 0.2.
Analyzing the results in Table 6.1, we can conclude that for all values of Pe(B) and for all values of PGG and PBB, the results for BERb-cut are better than the corresponding results for BERcut. The same conclusions can be derived from the comparison of PERb-cut and PERcut. Namely, from the results we can derive the following conclusions:
• for PGG = 0.8 and PBB = 0.8, BERb-cut is from 1.6 to 2.5 times better than BERcut;
• for PGG = 0.5 and PBB = 0.5, BERb-cut is from 1.9 to 2.8 times better than BERcut;
• for PGG = 0.2 and PBB = 0.8, BERb-cut is about 1.2 times better than BERcut;
• for PGG = 0.8 and PBB = 0.2, BERb-cut is from 7.5 to 10 times better than BERcut.
In Table 6.2, we give experimental results for the code with rate 1/8, for the same combinations of the transition probability from good to good state PGG and from bad to bad state PBB as for the code with rate 1/4. There, BERcut, BERb-cut, BER4sets and BERb-4sets are the bit-error probabilities obtained with the Cut-Decoding, Burst-Cut-Decoding, 4-Sets-Cut-Decoding and Burst-4-Sets-Cut-Decoding algorithms, correspondingly. Also, PERcut, PERb-cut, PER4sets and PERb-4sets are the corresponding packet-error probabilities. From the results in Table 6.2, we can conclude that for all values of Pe(B) and for all values of PGG and PBB, the results for BERb-cut are better than the corresponding results for BERcut, and the results for BERb-4sets are better than the corresponding results for BER4sets. Also, if we compare the results for the burst algorithms, we can conclude that the Burst-4-Sets-Cut-Decoding algorithm gives from 2 to 7 times better results than the Burst-Cut-Decoding algorithm, depending on the channel parameters. The same conclusions can be derived for the corresponding packet-error probabilities.
6.3.2 Experiments for Gilbert-Elliott with Gaussian Channels

In this subsection, we present the experimental results for the Gilbert-Elliott model with Gaussian channels with SNR_G = 4 and different values of SNR_B = −3, −2, −1, and the same transition probabilities as in the experiments with binary-symmetric channels. First, in Table 6.3 we give experimental results for the code with rate 1/4, where we use the same notation as previously. From Table 6.3, we can see that the results obtained with the Burst-Cut-Decoding algorithm are 2 to 8 times better than the corresponding results obtained with the Cut-Decoding algorithm. In Table 6.4, we give experimental results for the bit-error and packet-error probabilities for the code with rate 1/8. The notation is the same as previously. From this table, we can draw similar conclusions for rate 1/8 as for rate 1/4. Namely, for all values of SNR, the results for BER and PER obtained with the burst algorithms are better than the corresponding results obtained with the other versions of the algorithms. Also, the Burst-4-Sets-Cut-Decoding algorithm gives from 2 to 8 times better results than the Burst-Cut-Decoding algorithm.
Table 6.2 Experimental results for Gilbert-Elliott channel with BSC and R = 1/8

PGG = 0.8, PBB = 0.8
Pe(B)   BERcut    BERb-cut   BER4sets   BERb-4sets
0.1     0.09183   0.04224    0.01796    0.00601
0.13    0.23069   0.11561    0.08060    0.02616
0.16    0.41820   0.22276    0.20798    0.07405
0.2     0.64522   0.39519    0.45947    0.20217

Pe(B)   PERcut    PERb-cut   PER4sets   PERb-4sets
0.1     0.15790   0.07445    0.03578    0.01252
0.13    0.37334   0.19549    0.16957    0.05429
0.16    0.62600   0.35476    0.41101    0.15207
0.2     0.85671   0.56624    0.75302    0.34288

PGG = 0.5, PBB = 0.5
Pe(B)   BERcut    BERb-cut   BER4sets   BERb-4sets
0.1     0.08336   0.02759    0.01619    0.00412
0.13    0.21774   0.07662    0.07504    0.01353
0.16    0.20663   0.15975    0.20663    0.04268
0.2     0.63793   0.31081    0.47000    0.01298

Pe(B)   PERcut    PERb-cut   PER4sets   PERb-4sets
0.1     0.22753   0.16028    0.07099    0.03729
0.13    0.52584   0.39026    0.29586    0.17713
0.16    0.80105   0.66229    0.66993    0.45636
0.2     0.97227   0.89840    0.95542    0.82632

PGG = 0.2, PBB = 0.8
Pe(B)   BERcut    BERb-cut   BER4sets   BERb-4sets
0.1     0.13132   0.09052    0.03269    0.01683
0.13    0.32319   0.23406    0.13834    0.08086
0.16    0.54950   0.43033    0.36372    0.22914
0.2     0.77847   0.67042    0.66579    0.50899

Pe(B)   PERcut    PERb-cut   PER4sets   PERb-4sets
0.1     0.22753   0.16028    0.07099    0.03729
0.13    0.52584   0.39026    0.29586    0.17713
0.16    0.80105   0.66229    0.66993    0.45636
0.2     0.97227   0.89840    0.95542    0.82632

PGG = 0.8, PBB = 0.2
Pe(B)   BERcut    BERb-cut   BER4sets   BERb-4sets
0.1     0.03391   0.00320    0.00576    0.00044
0.13    0.09260   0.00866    0.02041    0.00124
0.16    0.18494   0.02014    0.06019    0.00256
0.2     0.34766   0.04408    0.16918    0.00803

Pe(B)   PERcut    PERb-cut   PER4sets   PERb-4sets
0.1     0.05889   0.00583    0.01094    0.00079
0.13    0.16503   0.01684    0.04133    0.00230
0.16    0.31617   0.03578    0.12802    0.00518
0.2     0.55465   0.07913    0.34569    0.01569
Table 6.3 Experimental results for Gilbert-Elliott channel with Gaussian and R = 1/4

PGG = 0.8, PBB = 0.8
SNR_B   PERcut    PERb-cut   BERcut    BERb-cut
−3      0.56322   0.32366    0.44058   0.24605
−2      0.46867   0.35419    0.35223   0.26129
−1      0.16467   0.08179    0.12212   0.05911

PGG = 0.5, PBB = 0.5
SNR_B   PERcut    PERb-cut   BERcut    BERb-cut
−3      0.56322   0.32366    0.44058   0.24605
−2      0.32754   0.12348    0.24303   0.08867
−1      0.15243   0.05609    0.10993   0.03993

PGG = 0.2, PBB = 0.8
SNR_B   PERcut    PERb-cut   BERcut    BERb-cut
−3      0.73127   0.57783    0.57146   0.44170
−2      0.46867   0.35419    0.35223   0.26129
−1      0.22775   0.16561    0.16732   0.12115

PGG = 0.8, PBB = 0.2
SNR_B   PERcut    PERb-cut   BERcut    BERb-cut
−3      0.27282   0.03513    0.20121   0.02449
−2      0.15056   0.01980    0.10891   0.01393
−1      0.06732   0.00986    0.04852   0.00589
6.3.3 Experimental Results for Transmission of Images Through Burst Channels

In this section, we present experimental results obtained with the burst algorithms for RCBQ for the transmission of images through a burst channel. The results are given in Mechkaroska et al. (2019a), where the authors investigate the performance of the Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding algorithms for the transmission of images through a Gilbert-Elliott channel. Experiments are made with the two (previously explained) kinds of Gilbert-Elliott channels: the first one with two binary-symmetric channels and the second one with two Gaussian channels. All experiments presented in this subsection are made for the code (72, 576) with rate R = 1/8, Bmax = 5 and the code parameters given at the beginning of this section. First, in Fig. 6.3 we present some images obtained after transmission through burst channels without using any error-correcting code. The first one is obtained after transmission through a Gilbert-Elliott channel with BSCs with Pe(G) = 0.01, Pe(B) = 0.16 and transition probabilities PGG = 0.2 and PBB = 0.8. The second one is obtained after transmission through a Gilbert-Elliott channel with Gaussian channels with SNR_G = 4, SNR_B = −3 and the same transition probabilities. From the images in Fig. 6.3 we can notice that, in both cases, there are incorrectly transmitted pixels over the entire image.
Table 6.4 Experimental results for Gilbert-Elliott channel with Gaussian and R = 1/8

PGG = 0.8, PBB = 0.8
SNR_B   BERcut    BERb-cut   BER4sets   BERb-4sets
−3      0.40468   0.21957    0.202399   0.07334
−2      0.23806   0.11587    0.08246    0.02539
−1      0.10796   0.04945    0.02266    0.00871

SNR_B   PERcut    PERb-cut   PER4sets   PERb-4sets
−3      0.61031   0.34835    0.40048    0.15092
−2      0.38256   0.19693    0.17461    0.05457
−1      0.17821   0.08640    0.048603   0.01771

PGG = 0.5, PBB = 0.5
SNR_B   BERcut    BERb-cut   BER4sets   BERb-4sets
−3      0.38576   0.15412    0.20372    0.043289
−2      0.21698   0.07825    0.07806    0.01490
−1      0.09497   0.03024    0.02002    0.00385

SNR_B   PERcut    PERb-cut   PER4sets   PERb-4sets
−3      0.60865   0.26116    0.40710    0.09454
−2      0.37028   0.13637    0.16748    0.03204
−1      0.16402   0.05501    0.04169    0.00835

PGG = 0.2, PBB = 0.8
SNR_B   BERcut    BERb-cut   BER4sets   BERb-4sets
−3      0.53986   0.42327    0.35535    0.22462
−2      0.33082   0.24201    0.15007    0.08345
−1      0.15119   0.10676    0.03686    0.02265

SNR_B   PERcut    PERb-cut   PER4sets   PERb-4sets
−3      0.78650   0.65300    0.65797    0.44758
−2      0.53269   0.40408    0.31820    0.18130
−1      0.26101   0.18613    0.08208    0.04910

PGG = 0.8, PBB = 0.2
SNR_B   BERcut    BERb-cut   BER4sets   BERb-4sets
−3      0.17904   0.02085    0.05540    0.00328
−2      0.09442   0.01037    0.02074    0.00134
−1      0.04009   0.00503    0.00703    0.00060

SNR_B   PERcut    PERb-cut   PER4sets   PERb-4sets
−3      0.30645   0.03809    0.11880    0.00604
−2      0.16345   0.01886    0.04428    0.00252
−1      0.07063   0.00907    0.01404    0.00115
Fig. 6.3 Images obtained after transmission through the channel without using any error-correcting code
Experiments for Gilbert-Elliott with BSC Channels
In all experiments for the Gilbert-Elliott model with binary-symmetric channels, we use the bit-error probability in the good state Pe(G) = 0.01, a few different values of the bit-error probability in the bad state, Pe(B) ∈ {0.16, 0.13, 0.1}, and the following combinations of the transition probability from good to good state PGG and from bad to bad state PBB:
• PGG = 0.2 and PBB = 0.8
• PGG = 0.8 and PBB = 0.2
In Fig. 6.4, in the first row we present the images coded with the Burst-Cut-Decoding algorithm and transmitted through Gilbert-Elliott channels with PGG = 0.2, PBB = 0.8 and all considered values of Pe(B) (the first image is for Pe(B) = 0.16, the second one for Pe(B) = 0.13, and the last one for Pe(B) = 0.1). The corresponding images obtained after the application of the filter (proposed in Sect. 4.3.1) are given in the second row of the same figure. The images obtained for PGG = 0.8 and PBB = 0.2 are given in Fig. 6.5 (in the first row without the filter, and in the second row with the filter). Images obtained with the Burst-4-Sets-Cut-Decoding algorithm for PGG = 0.2, PBB = 0.8 are presented in Fig. 6.6, while the images for PGG = 0.8, PBB = 0.2 are presented in Fig. 6.7. From the images for PGG = 0.2, PBB = 0.8, we can conclude that the Burst-4-Sets-Cut-Decoding algorithm gives better results than the Burst-Cut-Decoding algorithm for all considered values of Pe(B). Comparing the images before and after applying the filter, we can conclude that the filter significantly enhances the quality of images decoded with both algorithms. However, it is visible that the filter gives better results for the images decoded with the Burst-Cut-Decoding algorithm than for those decoded with the Burst-4-Sets-Cut-Decoding algorithm. The reason for this is the larger number of undetected errors in the experiments with the Burst-4-Sets-Cut-Decoding algorithm; namely, the filter cannot detect this kind of error. The images for PGG = 0.8, PBB = 0.2 (in Figs. 6.5 and 6.7) are clearer due to the smaller number of errors in the channels with
Fig. 6.4 Images obtained with Burst-Cut-Decoding algorithm without and with the filter for PGG = 0.2, PBB = 0.8
Fig. 6.5 Images obtained with Burst-Cut-Decoding algorithm without and with the filter for PGG = 0.8, PBB = 0.2
Fig. 6.6 Images obtained with Burst-4-Sets-Cut-Decoding algorithm without and with the filter for PGG = 0.2, PBB = 0.8
Fig. 6.7 Images obtained with Burst-4-Sets-Cut-Decoding algorithm without and with the filter for PGG = 0.8, PBB = 0.2
Fig. 6.8 Images obtained with Burst-Cut-Decoding algorithm without and with the filter for PGG = 0.2, PBB = 0.8
these transition probabilities. Therefore, for these images there is no great difference between the images decoded with the two algorithms, before or after applying the filter.
Experiments for Gilbert-Elliott with Gaussian Channels
Here, we present the experimental results for the Gilbert-Elliott model with Gaussian channels for SNR_G = 4 and different values of SNR_B = −3, −2, −1. For these channels, we made experiments with the same transition probabilities as in the experiments with binary-symmetric channels. Images obtained with the Burst-Cut-Decoding algorithm for PGG = 0.2, PBB = 0.8 and all considered values of SNR_B are given in the first row of Fig. 6.8 (the first image is for SNR_B = −3, the second one for SNR_B = −2, and the last one for SNR_B = −1). The corresponding images obtained after the application of the filter are given in the second row. Images for PGG = 0.8, PBB = 0.2 are given in Fig. 6.9. In Fig. 6.10 we present the images obtained with the Burst-4-Sets-Cut-Decoding algorithm for PGG = 0.2, PBB = 0.8, and in Fig. 6.11 for PGG = 0.8, PBB = 0.2. Analyzing the images given in this part (transmitted through a Gilbert-Elliott channel with Gaussian channels), we can derive the same conclusions as for the images transmitted through a Gilbert-Elliott channel with BSCs.
Fig. 6.9 Images obtained with Burst-Cut-Decoding algorithm without and with the filter for PGG = 0.8, PBB = 0.2
Fig. 6.10 Images obtained with Burst-4-Sets-Cut-Decoding algorithm without and with the filter for PGG = 0.2, PBB = 0.8
Fig. 6.11 Images obtained with Burst-4-Sets-Cut-Decoding algorithm without and with the filter for PGG = 0.8, PBB = 0.2
6.4 Fast Decoding with Cryptocodes for Burst Errors

In this section, we consider fast versions of the burst algorithms for RCBQ given in the previous section. For this goal, in Popovska-Mitrovikj et al. (2020) the authors combine the previous two ideas in two new coding/decoding algorithms called the FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms. Namely, at first, one of the fast coding algorithms (Fast-Cut-Decoding or Fast-4-Sets-Cut-Decoding) is applied to the original message. After that, interleaving is applied to the obtained codewords (before concatenation). The interleaved message is transmitted through a burst channel. The received message, after transmission through the channel, is divided into two (or four) parts; deinterleaving is then applied to each part of the message, and all parts are decoded using the appropriate fast decoding algorithm. Namely, in the decoding process we first try to decode the message with a smaller value of Bmax. A schematic sketch of this pipeline is given below.
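For concreteness, the sketch below only chains the pieces already sketched in Sects. 5.1 and 6.2 (fast_decode, interleave, deinterleave). The encoder, the channel and the per-Bmax decoder are passed in as callables because their internals are not reproduced in this book, so this is a schematic composition under our own assumptions, not the authors' implementation.

```python
def fastb_transmit(message, encode, channel, decode_with_bmax, k=9, max_bmax=5):
    """Schematic FastB pipeline: fast coding, interleaving, burst channel,
    deinterleaving, then fast decoding with increasing Bmax."""
    codewords = encode(message)                        # Fast-Cut or Fast-4-Sets coding
    sent = [interleave(cw, k) for cw in codewords]     # interleave each codeword
    received = [channel(cw) for cw in sent]            # e.g. a Gilbert-Elliott channel
    parts = [deinterleave(cw, k) for cw in received]   # undo the interleaving
    return fast_decode(parts, decode_with_bmax, max_bmax)
```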
6.4.1 Experimental Results

Here, we present experimental results obtained with the FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms for rates R = 1/4 and R = 1/8, for transmission through a burst channel. The results in this subsection are given in
Popovska-Mitrovikj et al. (2020). The simulation of burst errors is made using the Gilbert-Elliott model with Gaussian channels. We compare the results obtained with the FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms with the corresponding results for the Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding algorithms. Also, in order to show the efficiency of the fast algorithms, we present the percentages of messages whose decoding finished with Bmax = 1, 2, 3, 4 or 5. In the experiments, we use the same code parameters as previously. Also, for the experiments with Burst-Cut-Decoding for the code (72, 288) we use Bmax = 4, and in the experiments with the FastB-Cut-Decoding algorithm the maximum value of Bmax is 4. For the code (72, 576), in the Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding algorithms we use Bmax = 5, and in the FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms the maximum value of Bmax is 5. In all experiments, the value of SNR in the good state is SNR_G = 4 and we present experimental results for different values of SNR in the bad state, SNR_B ∈ {−3, −2, −1}. We consider channels with the following combinations of the transition probability from good to good state PGG and from bad to bad state PBB in the Gilbert-Elliott model:
• PGG = 0.8 and PBB = 0.8
• PGG = 0.5 and PBB = 0.5
• PGG = 0.2 and PBB = 0.8
• PGG = 0.8 and PBB = 0.2
In Table 6.5, we give experimental results for the bit-error probabilities BER and packet-error probabilities PER for the code (72, 288). With BERb-cut and PERb-cut we denote the probabilities obtained with the Burst-Cut-Decoding algorithm, and with BERf-cut and PERf-cut the probabilities obtained with the FastB-Cut-Decoding algorithm. The results for BERb-cut and PERb-cut for the Burst-Cut-Decoding algorithm are taken from Sect. 6.3.2. Analyzing the results in Table 6.5, we can conclude that for all values of SNR_B and all combinations of transition probabilities, the results for BERf-cut and PERf-cut are slightly better than the corresponding results for BERb-cut and PERb-cut. When decoding in the FastB-Cut-Decoding algorithm ends with Bmax = 1 or Bmax = 2, decoding with this algorithm is much faster than with Burst-Cut-Decoding. Therefore, in Table 6.6 we give the percentage of messages for which decoding ended with Bmax = 1, Bmax = 2, Bmax = 3 or Bmax = 4 in the experiments with the FastB-Cut-Decoding algorithm. From the results given there, we can see that the percentages depend not only on the value of SNR_B but also on the transition probabilities for the bad-to-bad and good-to-good states. For PGG = 0.8, PBB = 0.2 and all values of SNR_B, for more than 75% of the messages decoding successfully finished with Bmax = 1 or Bmax = 2. For PGG = 0.8, PBB = 0.8 or PGG = 0.5, PBB = 0.5 and SNR_B = −1, more than 50% of the decodings successfully finished with Bmax = 1 or Bmax = 2. In the other experiments, where the channel is in the bad state with greater probability, we have smaller percentages of successful decoding with Bmax = 1 or Bmax = 2.
Table 6.5 Experimental results for code (72, 288)

PGG = 0.8, PBB = 0.8
SNR_B   PERf-cut   PERb-cut   BERf-cut   BERb-cut
−3      0.27657    0.32366    0.20524    0.24605
−2      0.15431    0.17727    0.11143    0.13127
−1      0.06588    0.08179    0.04531    0.05911

PGG = 0.5, PBB = 0.5
SNR_B   PERf-cut   PERb-cut   BERf-cut   BERb-cut
−3      0.20838    0.23135    0.14832    0.16945
−2      0.10023    0.12348    0.06993    0.08867
−1      0.04097    0.05609    0.02839    0.03993

PGG = 0.2, PBB = 0.8
SNR_B   PERf-cut   PERb-cut   BERf-cut   BERb-cut
−3      0.57402    0.57783    0.43486    0.44170
−2      0.35088    0.35419    0.25447    0.26129
−1      0.15315    0.16561    0.10795    0.12115

PGG = 0.8, PBB = 0.2
SNR_B   PERf-cut   PERb-cut   BERf-cut   BERb-cut
−3      0.02254    0.03513    0.01529    0.02449
−2      0.00994    0.01980    0.00669    0.01393
−1      0.00382    0.00986    0.00253    0.00589
So, we can conclude that the fast-burst algorithms improve the decoding speed of RCBQ for transmission through a Gilbert-Elliott channel with a smaller probability of the bad state. In Tables 6.7 and 6.8, we present the experimental results for the code (72, 576) with rate R = 1/8. We compare the bit-error (BER) and packet-error (PER) probabilities obtained with the Burst-Cut-Decoding, FastB-Cut-Decoding, Burst-4-Sets-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms. With BERb-cut, PERb-cut, BERb-4sets, PERb-4sets we denote the bit-error and packet-error probabilities obtained with the burst algorithms (Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding), and with BERf-cut, PERf-cut, BERf-4sets and PERf-4sets the corresponding probabilities obtained with the fast-burst algorithms (FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding). From the results given in Tables 6.7 and 6.8, we can conclude that for all combinations of transition probabilities and all values of SNR_B, the results for BER and PER obtained with the fast-burst algorithms are better than the corresponding results obtained with the burst algorithms. Comparing the results of all algorithms, we can see that the best results (for all channel parameters) are obtained with the FastB-4-Sets-Cut-Decoding algorithm. For PGG = 0.8, PBB = 0.2 and all considered values of SNR_B, the values of BER and PER obtained with this algorithm are equal to 0. Also, from the duration of the experiments, we concluded that the FastB-4-Sets-Cut-Decoding algorithm is much faster than the other three algorithms considered in this section.
Table 6.6 Percentage of messages decoded with different values of Bmax

PGG = 0.8, PBB = 0.8
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4
−3      15.79%     18.23%     17.71%     48.26%
−2      20.06%     22.56%     22.34%     35.04%
−1      24.95%     29.59%     24.25%     21.21%

PGG = 0.5, PBB = 0.5
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4
−3      5.11%      20.39%     26.30%     48.19%
−2      8.92%      29.10%     30.34%     31.64%
−1      18.17%     37.82%     27.70%     16.31%

PGG = 0.2, PBB = 0.8
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4
−3      0.17%      3.28%      10.95%     85.60%
−2      0.42%      8.55%      21.45%     69.58%
−1      2.17%      20.99%     32.60%     44.24%

PGG = 0.8, PBB = 0.2
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4
−3      42.42%     32.80%     16.32%     8.46%
−2      50.97%     32.00%     12.66%     4.37%
−1      60.10%     29.56%     8.13%      2.22%

Table 6.7 Experimental results for BER for code (72, 576)

PGG = 0.8, PBB = 0.8
SNR_B   BERf-cut   BERb-cut   BERf-4sets   BERb-4sets
−3      0.06381    0.09182    0.01421      0.03798
−2      0.02148    0.03935    0.00322      0.01631
−1      0.00624    0.01347    0.00044      0.00518

PGG = 0.5, PBB = 0.5
SNR_B   BERf-cut   BERb-cut   BERf-4sets   BERb-4sets
−3      0.03417    0.05539    0.00588      0.02263
−2      0.01257    0.02198    0.00138      0.00853
−1      0.00250    0.00707    0.00022      0.00251

PGG = 0.2, PBB = 0.8
SNR_B   BERf-cut   BERb-cut   BERf-4sets   BERb-4sets
−3      0.16994    0.19376    0.06586      0.08592
−2      0.06408    0.08687    0.01560      0.03244
−1      0.01697    0.02736    0.00258      0.01085

PGG = 0.8, PBB = 0.2
SNR_B   BERf-cut   BERb-cut   BERf-4sets   BERb-4sets
−3      0.00200    0.00477    0            0.00225
−2      0.00063    0.00174    0            0.00091
−1      0.00013    0.00075    0            0.00026
Table 6.8 Experimental results for PER for code (72, 576)

PGG = 0.8, PBB = 0.8
SNR_B   PERf-cut   PERb-cut   PERf-4sets   PERb-4sets
−3      0.11470    0.16734    0.02657      0.06552
−2      0.04155    0.07273    0.00662      0.02931
−1      0.01202    0.02556    0.00079      0.00900

PGG = 0.5, PBB = 0.5
SNR_B   PERf-cut   PERb-cut   PERf-4sets   PERb-4sets
−3      0.06509    0.10765    0.01202      0.03910
−2      0.02304    0.04198    0.00324      0.01419
−1      0.00540    0.01375    0.00043      0.00446

PGG = 0.2, PBB = 0.8
SNR_B   PERf-cut   PERb-cut   PERf-4sets   PERb-4sets
−3      0.29551    0.35628    0.12025      0.14653
−2      0.11874    0.16381    0.02988      0.05688
−1      0.03283    0.05357    0.00490      0.01879

PGG = 0.8, PBB = 0.2
SNR_B   PERf-cut   PERb-cut   PERf-4sets   PERb-4sets
−3      0.00374    0.00886    0            0.00410
−2      0.00115    0.00382    0            0.00158
−1      0.00029    0.00158    0            0.00043
In Tables 6.9 and 6.10, we give the percentages of messages decoded with the FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms, correspondingly, whose decoding ended with Bmax = 1, Bmax = 2, Bmax = 3, Bmax = 4 or Bmax = 5. From Tables 6.9 and 6.10, we can see that in all experiments we obtained better percentages (larger percentages for smaller Bmax and smaller percentages for larger Bmax) with the FastB-4-Sets-Cut-Decoding algorithm than with the FastB-Cut-Decoding algorithm. In almost all cases, the percentage of messages whose decoding needed Bmax = 5 with this algorithm is below 8%, and for more than 60% of the messages the decoding ended with Bmax ≤ 3. The only exceptions are the cases PGG = 0.2, PBB = 0.8 with SNR_B ≤ −2 and PGG = 0.8, PBB = 0.8 with SNR_B = −3. Also, for PGG = 0.8, PBB = 0.2 more than half of the decodings ended with Bmax = 1. This means that for these channel parameters, the FastB-4-Sets-Cut-Decoding algorithm is faster than the Burst-4-Sets-Cut-Decoding algorithm. From all results given in this subsection, we can conclude that with the fast-burst algorithms, better results for the packet-error and bit-error probabilities are obtained compared to the previously defined algorithms (Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding) of RCBQ for transmission through burst channels. Also, from the presented percentages of messages whose decoding ends with different values of Bmax, we can conclude that for some channel parameters, the FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms provide faster decoding.
Table 6.9 Percentage of messages decoded with FastB-Cut-Decoding algorithm with different values of Bmax

PGG = 0.8, PBB = 0.8
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−3      3.49%      14.38%     24.80%     28.57%     28.76%
−2      5.16%      22.52%     32.39%     24.98%     14.96%
−1      8.10%      34.92%     34.77%     16.15%     6.06%

PGG = 0.5, PBB = 0.5
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−3      0.22%      9.41%      33.84%     33.83%     22.70%
−2      0.78%      20.39%     43.13%     25.25%     10.45%
−1      2.62%      39.57%     40.81%     13.28%     3.72%

PGG = 0.2, PBB = 0.8
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−3      0%         0.18%      6.57%      29.51%     63.75%
−2      0%         1.56%      20.51%     41.17%     36.76%
−1      0.04%      7.75%      40.79%     35.68%     15.74%

PGG = 0.8, PBB = 0.2
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−3      17.09%     49.87%     24.72%     6.55%      1.78%
−2      25.58%     52.37%     17.68%     3.64%      0.73%
−1      36.12%     51.60%     10.25%     1.68%      0.35%
Table 6.10 Percentage of messages decoded with FastB-4-Sets-Cut-Decoding algorithm with different values of Bmax

PGG = 0.8, PBB = 0.8
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−3      23.22%     18.02%     22.20%     25.42%     11.13%
−2      27.61%     25.09%     26.56%     17.04%     3.70%
−1      35.64%     32.21%     24.14%     7.33%      0.68%

PGG = 0.5, PBB = 0.5
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−3      7.13%      20.07%     35.07%     30.54%     7.19%
−2      12.36%     31.98%     38.33%     15.50%     1.84%
−1      23.95%     42.21%     28.65%     4.85%      0.34%

PGG = 0.2, PBB = 0.8
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−3      0.17%      2.12%      8.19%      45.62%     43.90%
−2      0.68%      6.96%      25.23%     50.45%     16.68%
−1      3.18%      19.99%     44.45%     28.76%     3.62%

PGG = 0.8, PBB = 0.2
SNR_B   Bmax = 1   Bmax = 2   Bmax = 3   Bmax = 4   Bmax = 5
−3      54.50%     31.98%     11.89%     1.55%      0.09%
−2      63.89%     29.17%     6.44%      0.48%      0.01%
−1      73.37%     23.83%     2.68%      0.12%      0%
6.4.2 Experimental Results for Transmission of Images Coded with Fast-Burst Algorithms

In this subsection, we present experimental results obtained with RCBQ for transmission of images through a burst channel with the algorithms considered in this chapter. We compare the values of the bit-error probability (BER) and the images obtained with the Burst-Cut-Decoding, Burst-4-Sets-Cut-Decoding, FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms. The presented results are given in Popovska-Mitrovikj et al. (2023). The experimental results are for the Gilbert-Elliott model with Gaussian channels with SNR_G = 4 (SNR in the good state) and different values of SNR_B = −3, −2, −1 (SNR in the bad state). We made experiments with the following two combinations of the transition probability PGG from good to good state and PBB from bad to bad state:
• PGG = 0.2 and PBB = 0.8
• PGG = 0.8 and PBB = 0.2.
We choose these combinations of transition probabilities in order to see the different behavior of the channel when the transition probability from bad to bad state is larger than the transition probability from good to good state, and vice versa. Namely, with the first combination of probabilities we obtain a channel with a bigger probability of being in the bad state, which corresponds to high noise. On the other side, the situation with the second combination of probabilities is opposite: the channel is more often in the good state, i.e., the noise is low. The image is transmitted through the channel and the corresponding decoding algorithm is applied. In Table 6.11, we present experimental results for the bit-error probabilities BERfb-cut, BERb-cut, BERfb-4sets, BERb-4sets obtained with the FastB-Cut-Decoding, Burst-Cut-Decoding, FastB-4-Sets-Cut-Decoding and Burst-4-Sets-Cut-Decoding algorithms, respectively.
Table 6.11 Experimental results for BER

PGG = 0.2, PBB = 0.8
SNR_B   BERfb-cut   BERb-cut   BERfb-4sets   BERb-4sets
−3      0.17161     0.19323    0.06478       0.08006
−2      0.06829     0.08234    0.01663       0.03463
−1      0.01843     0.03461    0.00362       0.01895

PGG = 0.8, PBB = 0.2
SNR_B   BERfb-cut   BERb-cut   BERfb-4sets   BERb-4sets
−3      0.00200     0.00571    0.00012       0.002258
−2      0.00063     0.00192    0             0.000814
−1      0.00026     0.00065    0             0.000139
Fig. 6.12 Images for PGG = 0.2, PBB = 0.8 and SNR = −3
Fig. 6.13 Images for PGG = 0.2, PBB = 0.8 and SNR = −2
We can conclude that for transmission of images through burst channels with the FastB-Cut-Decoding and FastB-4-Sets-Cut-Decoding algorithms, we obtain better results for the bit-error probabilities than with the Burst-Cut-Decoding and Burst-4-Sets-Cut-Decoding algorithms. Also, we can see that the FastB-4-Sets-Cut-Decoding algorithm gives from 2 to 16 times better results than the FastB-Cut-Decoding algorithm. For visual comparison, in Figs. 6.12, 6.13 and 6.14 we present images transmitted through the channels with PGG = 0.2, PBB = 0.8 and SNR_B = −3, SNR_B = −2, and SNR_B = −1, correspondingly. In each figure, the first image is obtained using the Burst-Cut-Decoding algorithm, the second image using the Burst-4-Sets-Cut-Decoding algorithm, the third image using the FastB-Cut-Decoding algorithm and the fourth one using the FastB-4-Sets-Cut-Decoding algorithm. In the second row, we give the corresponding images obtained after applying the filter defined in Sect. 4.3.1.
Fig. 6.14 Images for PGG = 0.2, PBB = 0.8 and SNR = −1
Fig. 6.15 Images for PGG = 0.8, PBB = 0.2 and SNR = −3
In Figs. 6.15, 6.16 and 6.17, we present the images for the channels with PGG = 0.8, PBB = 0.2, obtained for SNR_B = −3, SNR_B = −2, and SNR_B = −1, correspondingly. The images in the first row of each figure (Figs. 6.12, 6.13, 6.14, 6.15, 6.16 and 6.17) give visual confirmation of the conclusions derived from Table 6.11. Namely, we can see that with the FastB algorithms we obtain clearer images (with fewer damaged pixels) than with the Burst algorithms, for both considered combinations of PGG and PBB and all values of SNR_B. This confirms the smaller values of BER (bit-error probability) given in Table 6.11. Comparing the images
Fig. 6.16 Images for PGG = 0.8, PBB = 0.2 and SNR = −2
Fig. 6.17 Images for PGG = 0.8, PBB = 0.2 and SNR = −1
before and after applying the filter (the corresponding images in the first and second rows of each figure), we can conclude that the proposed filter enhances the quality of the images coded/decoded with these algorithms of RCBQ for all considered values of SNR_B and transition probabilities in the channel.
References

Knag, J., Stark, W., & Hero, A. (1998). Turbo codes for fading and burst channels. In IEEE Theory Mini Conference (pp. 40–45).
Labiod, H. (1999). Performance of Reed Solomon error-correcting codes on fading channels. In IEEE International Conference on Personal Wireless Communications (Cat. No.99TH8366), Jaipur, India (pp. 259–263).
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2019a). Cryptcoding of images for transmission trough a burst channels. Journal of Engineering Science and Technology Review (JESTR), 65–69. ISSN: 1791-2377. Proceedings of the Fourth International Scientific Conference “Telecommunications, Informatics, Energy and Management”, September 2019, Kavala, Greece.
Mechkaroska, D., Popovska-Mitrovikj, A., & Bakeva, V. (2019b). New cryptcodes for burst channels. In M. Ćirić, M. Droste, & J. É. Pin (Eds.), Algebraic Informatics. CAI 2019. Lecture Notes in Computer Science (Vol. 11545, pp. 202–212). Cham: Springer.
Popovska-Mitrovikj, A., Bakeva, V., & Mechkaroska, D. (2020). Fast decoding with cryptcodes for burst errors. In V. Dimitrova & I. Dimitrovski (Eds.), ICT Innovations 2020. Machine Learning and Applications. Communications in Computer and Information Science (Vol. 1316). Cham: Springer.
Popovska-Mitrovikj, A., Bakeva, V., & Mechkaroska, D. (2023). Fast decoding of images with cryptcodes for burst channels. IEEE Access, 11, 50823–50829. https://doi.org/10.1109/ACCESS.2023.3278051
Index
A
Associative quasigroup, 2
Audio files, 57

B
Binary Phase-shift keying modulation, 34
Binary-symmetric channel (BSC), 26
Bit-error probability (BER), 21
Burst channel, 65
Burst-Cut-Decoding algorithm, 66
Burst-4-Sets-Cut-Decoding algorithm, 66

C
Commutative quasigroup, 2
Cut-Decoding algorithm, 14, 36, 41

D
Decoding candidate sets, 13
Decoding rule, 14
Deinterleaver, 66, 78

F
Fast-Cut-Decoding algorithm, 50, 78
Fast-4-Sets-Cut-Decoding algorithm, 50, 78
Filter for audio files, 60
Filter for decoded images, 41
4-Sets-Cut-Decoding algorithm, 18, 41
4-Sets-Cut-Decoding algorithm #1, 19
4-Sets-Cut-Decoding algorithm #2, 20
4-Sets-Cut-Decoding algorithm #3, 20, 25, 36
4-Sets-Cut-Decoding algorithm #4, 21
Fractal quasigroup, 6

G
Gaussian channel, 33
Gilbert-Elliott channel, 65

I
Idempotent quasigroup, 2
Images, 30, 37, 53
Interleaver, 66, 78
Inverse coding algorithm, 13

L
Left loop, 2
Loop, 2

M
Methods for reducing decoding errors, 17
More-candidate-error, 14

N
Non-fractal quasigroup, 6
Null-error, 14

P
Packet-error probability (PER), 21

Q
Quasigroup, 1
Quasigroup string transformation, 4

R
Random codes based on quasigroups (RCBQ), 9
Right loop, 2

S
Sets with predefined Hamming distance, 13
Shapeless quasigroup, 3
Signal-to-noise ratio (SNR), 35
Standard algorithm of RCBQ, 10
Standard coding algorithm, 11, 12

T
TASC, 10

U
Undetected-error, 14