Lecture Notes in Networks and Systems 541
Irfan Awan • Muhammad Younas • Jamal Bentahar • Salima Benbernou (Editors)
The International Conference on Deep Learning, Big Data and Blockchain (DBB 2022)
Lecture Notes in Networks and Systems Volume 541
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey
Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada; Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems, and others. Of particular value to both the contributors and the readership are the short publication timeframe and the worldwide distribution and exposure, which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them.

Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science.

For proposals from Asia please contact Aninda Bose ([email protected]).
More information about this series at https://link.springer.com/bookseries/15179
Irfan Awan • Muhammad Younas • Jamal Bentahar • Salima Benbernou

Editors
The International Conference on Deep Learning, Big Data and Blockchain (DBB 2022)
Editors

Irfan Awan, Department of Computer Science, University of Bradford, Bradford, UK
Muhammad Younas, School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford, UK
Jamal Bentahar, Concordia University, Montreal, QC, Canada
Salima Benbernou, Université de Paris, Paris, France
ISSN 2367-3370 / ISSN 2367-3389 (electronic)
Lecture Notes in Networks and Systems
ISBN 978-3-031-16034-9 / ISBN 978-3-031-16035-6 (eBook)
https://doi.org/10.1007/978-3-031-16035-6

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
It was a great pleasure to welcome all the participants of the 3rd International Conference on Deep Learning, Big Data and Blockchain (DBB 2022). The conference was held during August 22–24, 2022, in the historic city of Rome, Italy. Rome is one of the World Heritage sites and attracts millions of visitors from all over the world. It is full of museums, squares, Roman landmarks and sights, and other attractions such as shopping areas and restaurants.

The DBB 2022 conference involved hard work, time, and commitment from the conference organizing and technical committees. The goal was to provide participants with an opportunity to share and exchange ideas on different topics related to the conference's theme, including machine/deep learning, blockchain, big data, and their integration in modern applications and convergence in new and emerging research and development areas. The call for papers of the conference included innovative and timely topics in the aforementioned areas and their sub-topics, such as learning-based models; clustering, classification, and regression; data analysis, insights, and hidden patterns; blockchain protocols and applications; verification; security and trust; and applications of deep learning, blockchain, and big data in areas such as business, finance, and healthcare, among others. Blockchain and smart contract tools and methods are increasingly used in new and emerging systems to ensure transparency of data and transactions. Similarly, machine learning techniques have been used by businesses to analyze large volumes of (big) data and to identify useful patterns in the data so that they can be used for intelligent and timely decision making.

The conference technical committee created a fascinating technical program to provide a forum where participants could present, discuss, and provide constructive feedback on different aspects of deep learning, big data, and blockchain. Though the ongoing pandemic affected the number of submissions, the DBB conference attracted many good-quality papers from different countries worldwide. The conference followed a rigorous review process wherein all submitted papers were reviewed by multiple members of the technical program
committee. Based on the reviews, ten papers were accepted for the conference, giving an acceptance rate of 34% of the total submissions. The accepted papers include interesting work on different topics such as deep learning and biologically inspired methods; security, privacy, and trust; blockchain algorithms and protocols; smart contracts; reinforcement learning; smart video surveillance systems; identifying illicit accounts; and proof-of-stake consensus protocols. The papers also include work on practical applications such as clinical trials, crime detection, and financial applications and transactions.

We sincerely thank all the members of the program committee who spent their valuable time reviewing the submitted papers and providing useful feedback to the authors. We are also thankful to all the authors for their contributions to the conference. We are grateful to the conference organizers: General Chair, Prof Salima Benbernou; Workshop Coordinator, Dr. Filipe Portela; Publicity Chair, Dr. Mourad Ouziri; and Journal Special Issue Coordinator, Prof Natalia Kryvinska. We sincerely thank Springer's team for the time and support they provided throughout the production of the conference proceedings.

August 2022
Irfan Awan
Muhammad Younas
Jamal Bentahar
Organization
DBB 2022 Organizing Committee

General Chair
Salima Benbernou, University of Paris, France

Program Co-chairs
Irfan Awan, University of Bradford, UK
Jamal Bentahar, Concordia University, Canada

Publication Chair
Muhammad Younas, Oxford Brookes University, UK

Journal Special Issue Coordinator
Natalia Kryvinska, University of Vienna, Austria

Workshop Coordinator
Filipe Portela, University of Minho, Portugal

Publicity Chair
Mourad Ouziri, University of Paris, France
Program Committee
Ahmad Javaid, The University of Toledo, USA
Antonio Dourado, University of Coimbra, Portugal
Bruno Veloso, INESC Technology and Science, Portugal
Chirine Ghedira Guegan, Université Lyon 3, France
Chouki Tibermacine, Université de Montpellier, France
Daniela Zaharie, West University of Timisoara, Romania
Daniele Apiletti, Polytechnic University of Turin, Italy
Dhiya Al-Jumeily, Liverpool John Moores University, UK
Fahimeh Farahnakian, University of Turku, Finland
Hassina Meziane, University of Oran, Algeria
Huiru (Jane) Zheng, Ulster University, UK
Jus Kocijan, University of Nova Gorica, Slovenia
Lei Zhang, East China Normal University, China
Nizar Bouguila, Concordia University, Canada
Rabiah Ahmad, Universiti Teknikal Malaysia, Malaysia
Rosangela Ballini, University of Campinas, Brazil
Sotiris Kotsiantis, University of Patras, Greece
Sung-Bae Cho, Yonsei University, Korea
Tomoyuki Uchida, Hiroshima City University, Japan
Zografoula Vagena, Université de Paris, France
Contents
Blockchain and Applications

Apply Trust Computing and Privacy Preserving Smart Contracts to Manage, Share, and Analyze Multi-site Clinical Trial Data
Yusen Wu, Chao Liu, Lawrence Sebald, Phuong Nguyen, and Yelena Yesha . . . 3

Design Principles for Interoperability of Private Blockchains
Suha Bayraktar and Sezer Gören . . . 15

Blockchain for Proposal Management
Mustafa Sanli . . . 27

Machine and Deep Learning

One-Shot Federated Learning-based Model-Free Reinforcement Learning
Gaith Rjoub, Jamal Bentahar, Omar Abdel Wahab, and Nagat Drawel . . . 39

A New Approach for Selecting Features in Cancer Classification Using Grey Wolf Optimizer
Halah AlMazrua and Hala AlShamlan . . . 53

A Smart Video Surveillance System for Helping Law Enforcement Agencies in Detecting Knife Related Crimes
Raed Abdallah, Salima Benbernou, Yehia Taher, Muhammad Younas, and Rafiqul Haque . . . 65

Biologically Inspired Variational Auto-Encoders for Adversarial Robustness
Sameerah Talafha, Banafsheh Rekabdar, Christos Mousas, and Chinwe Ekenna . . . 79

Blockchain Technology and Protocols

Detecting Illicit Ethereum Accounts Based on Their Transaction History and Properties and Using Machine Learning
Amel Bella Baci, Kei Brousmiche, Ilias Amal, Fatma Abdelhédi, and Lionel Rigaud . . . 97

Identifying Incentives for Extortion in Proof of Stake Consensus Protocols
Alpesh Bhudia, Anna Cartwright, Edward Cartwright, Julio Hernandez-Castro, and Darren Hurley-Smith . . . 109

Three-Valued Model Checking Smart Contract Systems with Trust Under Uncertainty
Ghalya Alwhishi, Jamal Bentahar, and Ahmed Elwhishi . . . 119

Author Index . . . 135
Blockchain and Applications
Apply Trust Computing and Privacy Preserving Smart Contracts to Manage, Share, and Analyze Multi-site Clinical Trial Data

Yusen Wu1,2(B), Chao Liu2, Lawrence Sebald2, Phuong Nguyen1,2, and Yelena Yesha1

1 University of Miami, Coral Gables, FL 33146, USA
[email protected]
2 University of Maryland, Baltimore County, Halethorpe, MD 21227, USA
{ywu5,chaoliu717,lsebald1,phuong3}@umbc.edu
Abstract. Multi-site clinical trial systems face security challenges in streamlining information sharing while protecting patient privacy. In addition, patient enrollment, transparency, traceability, data integrity, and reporting in clinical trial systems are all critical aspects of maintaining data compliance. Blockchain-based clinical trial frameworks have recently been proposed by many researchers and industrial companies, but their lack of data governance, limited confidentiality, and high communication overhead make such data-sharing systems insecure and inefficient. We propose Soteria, a privacy-preserving smart contracts framework, to manage, share, and analyze clinical trial data on Fabric Private Chaincode (FPC). Compared to a public Blockchain, Fabric has fewer participants and an efficient consensus protocol. Soteria consists of several modules: patient consent and clinical trial approval management chaincode, secure execution for confidential data sharing, an API gateway, and decentralized data governance with an adaptive threshold signature (ATS). We implemented two versions of Soteria: a non-SGX version deployed on AWS Managed Blockchain and an SGX-based version deployed in a local data center. We evaluated the response time of all the access endpoints on AWS Managed Blockchain and demonstrated the utilization of SGX-based smart contracts for data sharing and analysis.

Keywords: Permissioned Blockchain · Healthcare · Smart contracts · Clinical trials · Patient consent

1 Introduction
Clinical trials are experiments done in clinical research (e.g., to determine the safety or effectiveness of drugs) that involve human subjects. Centralized clinical trial systems are commonly used but are insecure and inefficient when managing and
sharing data across multiple disparate organizations, and doing so without compromising patient and data privacy is difficult. In addition, patient enrollment, data confidentiality and privacy, traceability, data integrity, and reporting in centralized systems are all critical aspects of maintaining data compliance. Traditional solutions use informed consent [7] or electronic consents (E-consents) to create a process of communication between patients and health care providers that generates agreement or permission for care, treatment, or services. As every patient owns the right to ask questions and obtain all sensitive information before treatment, current electronic documents, such as E-consents, are just electronic paperwork whose centralized signatures can lead to a lack of traceability and trustworthiness. Furthermore, multiple parties, such as hospitals, cannot audit consents stored in electronic medical records (e.g., EMRs [13]), and clinical research is generally managed in local systems, such as REDCap [8].

Permissioned Blockchain, such as Hyperledger Fabric [12], is a shared, immutable ledger that facilitates the process of recording transactions and tracking assets in a decentralized network. Its advantages of immutability, visibility, and traceability bring unprecedented trust to sensitive data, for example, by recording medical transactions in a multi-copy, immutable ledger shared with different organizations. In fact, Blockchain has recently seen adoption by a wide range of applications in healthcare, such as Akiri [1], BurstIQ [3], and Factom [5]. These applications implement different types of functions in healthcare with both public and permissioned Blockchain, for example, keeping a decentralized and transparent log of all patient data in a permissioned Blockchain to share information quickly and safely, or verifying the sources and destinations of data in real time.

Leveraging Blockchain technology for informed consent processes and patient engagement in clinical trial pilots is a new research field that has recently been proposed [23,28,29]. The rationale for the use of Blockchain technologies is to give patients control over who can access their data and when the consent expires. The innovation here is to surface data ownership, increase data confidence, and prevent the leakage of sensitive information. Current work, however, has at least the following limitations:

(L1): Heavy communication overhead. Some patient consents and clinical trials are stored in a public Blockchain, such as Ethereum. Patient consents in a public Blockchain are shared with all the organizations or users of that Blockchain transparently; sensitive data must be encrypted via cryptographic functions, and only the user who holds the private key can access the ciphertext. These public Blockchains can have a strongly negative impact on data sharing; their limited scalability and speed are core limitations. A public Blockchain network typically requires all the nodes to validate transactions; the consensus and validation across all the nodes in a network increase storage usage, bandwidth, and communication costs.

(L2): Limited confidentiality for public smart contracts. The advantages of using a permissioned Blockchain to store patient data are explicit. For example, as a new member needs to be invited and approved by a plurality of participants, and there are typically fewer participants in a permissioned Blockchain, the communication overhead is lower than on a public Blockchain. A permissioned Blockchain, however, is less resistant to malicious attacks, abusive behaviors, and arbitrary faults. For instance, a smart contract on a permissioned Blockchain cannot keep a secret, because its data is replicated on all peer nodes. A trusted member, though accepted by a majority of the participants, can easily get access to the smart contract and distribute sensitive data to a third party.

(L3): Lack of data governance. Most of the applications and research papers on Blockchain in healthcare lack clear data governance. Data governance in our platform is the way that investigators (an investigator is an individual who conducts a clinical investigation), e.g., attending doctors, patients, or directors, can decide whether a sensitive record can be stored in the ledger with a valid signature, or grant permission to other users for access.

To remedy the current limitations, we first propose to use Fabric Private Chaincode (FPC) and Trusted Execution Environments (TEEs), in particular Intel Software Guard Extensions (SGX), to protect the privacy of chaincode data and computation from potentially curious peers in the same Blockchain network. We also propose an adaptive threshold signature (ATS) to strengthen data governance. We list the advantages as follows:

Confidential and integrity-protected chaincodes. Fabric Private Chaincode provides a secure solution for a smart contract executing on a permissioned Blockchain using Intel SGX. The outputs of the consensus algorithm are always final, which avoids the protocol-inherent rollback attack [15,24]. In addition, FPC extends Hyperledger Fabric Blockchain (Fabric) to execute a chaincode in an enclave and isolate the execution even from system applications, the OS, and the hypervisor.

Confidential ledger data. FPC clients can send encrypted data to chaincodes inside an SGX enclave; these chaincodes then commit encrypted data as key-value pairs to the ledger. Enclaves can be programmed (and verified) to process and release data following regulatory compliance procedures (regulatory compliance is an organization's adherence to laws, regulations, guidelines, and specifications relevant to its business processes).

Trusted channels for access control. FPC can establish secure channels for access control based on hardware attestation. Authorized members can be invited into different channels, and members can only access the ledger of their own channels.

Reducing delegated privileges. FPC chaincode is an active actor that manages data compliance. It can prevent sharing and using data without prior consent, and it can prevent using data that does not belong to registered patients. Moreover, investigators for data governance and experimenters are more constrained in their actions: investigators cannot authorize data sharing for unapproved trials, experimenters cannot run arbitrary experiments, and experimenters cannot use
arbitrary data. As we can see, FPC chaincode uses real-time compliance to reduce the trust-but-verify approach (https://en.wikipedia.org/wiki/Trust_but_verify) and delegated privileges.

Decentralized data governance. As we mentioned in limitation (L3), a decentralized governance system can guarantee that even if some investigators are faulty or offline, the transactions can still be delivered correctly, and the trials can be stored in the immutable ledger shared with different organizations only after a plurality of the investigators has signed the clinical trials; that is to say, a trial needs to be certified by a majority of the investigators.

Contributions. We list the main contributions of this research here:

– We propose Soteria, an FPC-based clinical trial sharing platform, using SGX-based chaincodes and private ledgers.
– We implemented an API to verify the FPC clients' requests; only the verified requests can be committed to the enclave chaincodes and stored in the ledger. The API wires up all the functions and components between the front end and the back end (a Blockchain network).
– In order to eliminate centralized data governance, we use an adaptive threshold signature to strengthen decentralized data governance in clinical trials and to tolerate arbitrary faults among different investigators.
– We finally evaluated Soteria, including the latency of SGX-based endorsement and the response time (GET/POST) of clinical trial requests, on the AWS cloud and our local Intel clusters.

Organization. Section 2 introduces related work. We present the detailed privacy-preserving patient consent and IRB chaincodes, to better explain the role of FPC and the framework of the entire system, in Sect. 3. A detailed IRB clinical trial example is discussed in Sect. 4. We introduce the implementations in Sect. 5, the evaluation of Soteria in Sect. 6.1, the discussion in Sect. 6.2, and the conclusion in Sect. 7.
2 Related Work
A number of researchers have highlighted the potential of using Blockchain technology to address existing challenges in healthcare. For instance, Mettler [26] aims to illustrate possible influences, goals, and potentials connected to Blockchain technology in healthcare; he implemented a smart health management system with Blockchain to fight counterfeit drugs in the pharmaceutical industry. McGhin [25] lists security challenges in healthcare, such as access control, authentication, and non-repudiation of records, and proposes using a Blockchain network as the underlying approach to manage data securely. J. Gordon [20] proposes to use Blockchain to facilitate the transition to patient-driven interoperability through the data management mechanisms of Blockchain. Dwivedi [17] proposes a decentralized privacy-preserving healthcare Blockchain system for IoT, in which the concept of PoW is eliminated to make it suitable for smart IoT devices. Yesha proposed Chios [16], a lightweight permissioned publish/subscribe system, to securely collect HL7-format data in healthcare as well as other formats of medical records. One sub-module of the Chios system can also tolerate Byzantine faults in distributed machine learning [30] when training a model shared among different organizations or hospitals. In addition, several papers have proposed to store patient consent in a Blockchain to improve the security of patient records, surface data ownership, and increase data confidence [14,18,19,23,27]. Two recent papers from this year [11,22] propose to use Blockchain and IoT to manage healthcare data.

The papers above all address data security with Blockchain, but their limitations in data security and communication overhead still leave sensitive patient data vulnerable to malicious attacks. As a result, applying privacy-preserving smart contracts to manage, share, and analyze sensitive data becomes necessary.
3 The Soteria System
The Soteria framework is modular, allowing trade-offs between functionality, security, and efficiency. Soteria currently supports three main modules: the patient consent and clinical trial chaincode, the API gateway, and decentralized data governance, as shown in Fig. 1. We describe the detailed functions of these modules as follows.
Fig. 1. Soteria framework and workflow. A detailed IRB demo for secure data sharing is given in Sect. 4.
3.1 Patient Consent and Clinical Trials Chaincode
Our patient consent chaincode includes eight main functions for interacting with the Fabric client and Blockchain ledger, such as CreateRecord and QueryConsentByID. Patient consent can be stored in the ledger and queried by patient identity or record ID. All consents have a start time and an end time that bound legal access. A patient consent can be revoked by the admins or by the patient, but the revocation needs to be approved by half of the investigators (data governance). The clinical trials chaincode includes nine main functions for interacting with the Fabric client and Blockchain ledger. A trial needs to be registered and signed before being stored in the ledger; it can be queried by institution ID or patient ID; and investigators can change a trial's status to Approved, Pending, Completed, or Revoked after it is verified.
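To make the chaincode interface more concrete, the sketch below shows what a minimal Go implementation of the consent functions could look like with the Hyperledger Fabric contract API. Only the function names CreateRecord and QueryConsentByID come from the description above; the Consent struct, its fields, and the status values are illustrative assumptions, not the actual Soteria code.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// Consent is an assumed record layout; the real chaincode's fields may differ.
type Consent struct {
	RecordID  string `json:"recordId"`
	PatientID string `json:"patientId"`
	StartTime string `json:"startTime"` // start of legal access
	EndTime   string `json:"endTime"`   // end of legal access
	Status    string `json:"status"`    // e.g., Active or Revoked
}

// ConsentContract exposes the consent functions as chaincode transactions.
type ConsentContract struct {
	contractapi.Contract
}

// CreateRecord stores a new consent in the ledger under its record ID.
func (c *ConsentContract) CreateRecord(ctx contractapi.TransactionContextInterface,
	recordID, patientID, start, end string) error {
	consent := Consent{RecordID: recordID, PatientID: patientID,
		StartTime: start, EndTime: end, Status: "Active"}
	bytes, err := json.Marshal(consent)
	if err != nil {
		return err
	}
	return ctx.GetStub().PutState(recordID, bytes)
}

// QueryConsentByID reads a consent back from the ledger by record ID.
func (c *ConsentContract) QueryConsentByID(ctx contractapi.TransactionContextInterface,
	recordID string) (*Consent, error) {
	bytes, err := ctx.GetStub().GetState(recordID)
	if err != nil {
		return nil, err
	}
	if bytes == nil {
		return nil, fmt.Errorf("consent %s not found", recordID)
	}
	var consent Consent
	if err := json.Unmarshal(bytes, &consent); err != nil {
		return nil, err
	}
	return &consent, nil
}

func main() {
	cc, err := contractapi.NewChaincode(&ConsentContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```

A revocation function would follow the same pattern but, per the governance rule above, would first check that enough investigator approvals accompany the request.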
3.2 API Gateway
The API Gateway is an independent module deployed outside the Blockchain network as a middleware application written in NodeJS. We deploy it through the AWS Serverless architecture. The API can enroll a user, assign secret keys to a specific user, and load the Fabric client via a connection profile. The front end can then send GET and POST requests to the chaincode through different endpoints to register users, query patient records, create patient consent, and so on.
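For illustration, a front end (or any Go client) could call such an endpoint as sketched below. The base URL, endpoint path, and JSON body are hypothetical placeholders, and the bearer token is assumed to be the Cognito-issued JWT described in Sect. 5.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	base := "https://api.example.com" // hypothetical gateway URL
	token := "<Cognito-issued JWT>"   // placeholder credential

	// POST a new consent record; the path and payload are assumptions.
	payload := bytes.NewBufferString(`{"recordId":"r1","patientId":"p1"}`)
	req, err := http.NewRequest("POST", base+"/consent/register", payload)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```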
3.3 Decentralized Governance with (t, n) Adaptive Threshold Signature
We use a (t, n) adaptive threshold signature scheme as the core function for governing the clinical trials, as trials need to be signed before they are stored. The detailed steps are as follows.

Step 1: Parameter generation phase. Each investigator in a group generates two key pairs, a yes key and a no key. The yes key pair is (pri_i^yes, vk_i^yes), where investigator i keeps pri_i^yes secret and publishes the public verification key vk_i^yes. Similarly, each investigator generates a no key pair (pri_i^no, vk_i^no). The yes key pri_i^yes is used to sign a message that the investigator confirms as valid.

Step 2: Transaction submitting phase. A Fabric client wants to submit a message m to the Blockchain network. The investigator group consists of n members c_1, c_2, ..., c_n; we take four members as an example here. The Fabric client calculates the hash of the message, hash(m) or h(m), and sends <m, h(m)> to each of the investigators in the group.

Step 3: Sign a signature. Each investigator receives the message m and its hash value h(m) and first checks the content of the message (e.g., recalculates the hash of m as h'(m)). If a member c_i confirms that the message is valid (h(m) == h'(m)), then c_i uses its private yes key to generate the share signature sig_i = sign(pri_i^yes, m). After that, c_i sends this share signature sig_i to the consensus node as a vote.

Step 4: Make a consensus and deliver the result. A consensus node (an investigator) receives the message m, h(m), and the share signatures sig_i. The node first verifies each share signature sig_i against c_i's two public verification keys vk_i^yes and vk_i^no. When the consensus node has received at least t shares of the same kind (yes or no) from the peer investigators, it runs the combination algorithm to recover the final signature f_sig = combine(sig_1, sig_2, ..., sig_t). Finally, the consensus node verifies the final signature with the group public key pk^yes (or pk^no): true/false = verify(m, f_sig, pk^yes). After the final signature is verified under the yes or no public key, the consensus node can determine whether this message m (a patient trial) can be committed to the Blockchain ledger or not. Figure 1 displays the message flow from the client sending the message to the Blockchain ledger. We give a detailed IRB example for data sharing and analysis in Sect. 4 for a better understanding of the Soteria framework.
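The sketch below approximates Steps 1–4 in Go. It is a simplification under stated assumptions: instead of a true threshold scheme whose combine() recovers one group signature verifiable under a single group key, it collects individual ed25519 yes-share signatures and accepts once t of them verify; the no keys are omitted.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// Investigator holds a "yes" key pair; a real deployment would also hold
// a "no" pair and use a genuine threshold scheme whose shares combine
// into one group signature.
type Investigator struct {
	pub  ed25519.PublicKey
	priv ed25519.PrivateKey
}

func newInvestigator() Investigator {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)
	return Investigator{pub, priv}
}

// signYes produces the investigator's share signature over a trial m.
func (inv Investigator) signYes(m []byte) []byte {
	return ed25519.Sign(inv.priv, m)
}

// acceptTrial plays the consensus-node role: it verifies each share
// against the matching verification key and accepts once t valid "yes"
// shares are collected (the stand-in for combine()/verify() in the ATS).
func acceptTrial(m []byte, shares [][]byte, invs []Investigator, t int) bool {
	valid := 0
	for i, sig := range shares {
		if sig != nil && ed25519.Verify(invs[i].pub, m, sig) {
			valid++
		}
	}
	return valid >= t
}

func main() {
	invs := []Investigator{newInvestigator(), newInvestigator(),
		newInvestigator(), newInvestigator()}
	trial := []byte("clinical trial record m")

	// Three of four investigators sign "yes"; one abstains (nil share).
	shares := [][]byte{invs[0].signYes(trial), invs[1].signYes(trial),
		invs[2].signYes(trial), nil}

	fmt.Println("commit to ledger:", acceptTrial(trial, shares, invs, 3))
}
```

The structure mirrors the protocol: investigators sign shares over m, and the consensus-node role counts verified shares against the threshold t before the trial is committed.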
4 A Detailed IRB Use Case for Data Sharing
We introduce a detailed IRB use case and its workflow in this section for a better understanding of the Soteria architecture.
Fig. 2. Workflow overview.
Workflow Overview. Figure 2 shows an example of how a user, Tom, creates his clinical study and how the other involved parties get notified and verify transactions. In this example, the role of Institutional Review Boards (IRBs) is clinical trial approval, investigators conduct the clinical trials, and researchers typically monitor subjects and assess changes. In our IRB demo, a patient needs to be registered before the patient consents and clinical trials are stored (step 9). Every patient's sensitive data is uploaded to an AWS cloud database, and only the experimenters or researchers who get permission and a private key from the Blockchain can retrieve data (step 10) from the AWS data storage. Patients can submit their consents (step 8) through user interfaces (a Web page or an app) to grant permission to experimenters or researchers. When an IRB member sends a clinical trial to the FPC client (step 2), the FPC client broadcasts (step 1) the h(m) and m to every investigator; each investigator then verifies the input trial m, signs a signature, and sends its share σ_i to the combination function (step 3). If the output is true, the clinical trial is committed to the Blockchain ledger (step 4). The experimenters can be researchers or students; they need permission to access the patient trials (steps 5 and 6) for research. All researchers and experimenters can securely download patient data for their research once they obtain consent and private keys approved by the investigators.
5 Implementations
Soteria consists of a Golang/C++ module (chaincode), a Python module (for data analysis), a NodeJS module (the API gateway), and a Terraform automation tool, with about 40,000 lines of code in total. We deploy Soteria on AWS Managed Blockchain (Hyperledger Fabric). The IRB/trials chaincodes are written in Golang. We also implemented a consent API in NodeJS, with about 1,000 lines of code, to interact with the front end, which we implemented through AWS Amplify. We deploy SGX-based FPC locally in simulation mode to evaluate the IRB. It allows writing chaincode applications where the data is encrypted on the ledger and can only be accessed in the clear by authorized parties. The SGX-based IRB chaincode is written in C++ with 1,000 lines of code, because FPC currently only supports C/C++. The Soteria client is written in Golang, and we use gRPC [6] to commit transactions between components written in different languages. A Blockchain user is registered (created) in the Hyperledger Fabric Certificate Authority, and their enrollment credentials are stored in AWS Secrets Manager [2]. A corresponding user is also created within a Cognito User Pool [4], with a custom attribute, fabricUsername, that identifies this user within the Certificate Authority. Each portal authenticates the user (via username and password, through sign-up or participant invitation) against the Cognito User Pool. Upon successful authentication, Cognito returns an identity token, which is a JSON Web Token (JWT). The client application includes this JWT in requests sent to the API Gateway, which authorizes the user to invoke the API route, as shown in Fig. 3. The API Gateway retrieves the fabricUsername custom attribute from the JWT and sends it to the Lambda function that will be executing the Blockchain transaction. The Lambda retrieves the Blockchain user's private key from AWS Secrets Manager and retrieves the connection profile for connecting to the Amazon Managed Blockchain network from Amazon Systems Manager (Parameter Store).
Fig. 3. Sequence diagram. This diagram shows the sequence of events that transpire to authenticate a user and invoke Blockchain transactions on their behalf.
IAM policies [9] are used to restrict the Lambda function's access to only Secrets Manager and Systems Manager [10]. The query and update functions are written in NodeJS using the Hyperledger Fabric NodeJS API. An AWS IAM user is needed for provisioning the AWS Blockchain network. We also implemented a sample IAM policy that can be associated with this IAM user. The default IAM policies associated with users provide credentials to bootstrap AWS Managed Blockchain and other AWS resources.
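For illustration, the helper below shows how the fabricUsername custom attribute could be pulled out of a Cognito-issued JWT. We assume Cognito's custom: claim prefix; the actual Soteria Lambdas are written in NodeJS, and a production handler must verify the token's signature against the user pool's keys before trusting any claim (this sketch deliberately skips verification).

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// fabricUsernameFromJWT extracts the custom attribute the gateway
// forwards to the Lambda. NOTE: this decodes the payload WITHOUT
// verifying the signature; do not use it as-is for authorization.
func fabricUsernameFromJWT(token string) (string, error) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return "", fmt.Errorf("malformed JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return "", err
	}
	var claims map[string]any
	if err := json.Unmarshal(payload, &claims); err != nil {
		return "", err
	}
	name, _ := claims["custom:fabricUsername"].(string)
	return name, nil
}

func main() {
	// A toy token whose payload is {"custom:fabricUsername":"tom"}.
	payload := base64.RawURLEncoding.EncodeToString(
		[]byte(`{"custom:fabricUsername":"tom"}`))
	token := "eyJhbGciOiJub25lIn0." + payload + ".sig"
	name, err := fabricUsernameFromJWT(token)
	fmt.Println(name, err)
}
```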
6 Evaluations and Discussion

6.1 Evaluations
Experimental setup. For the non-SGX version of the chaincode, as we said before, all of the functions are deployed on AWS Managed Blockchain. An AWS IAM user is needed for provisioning the AWS Blockchain network. AWS Managed Blockchain created a unique ordering service endpoint and a VPC endpoint for our access. We create one member, and each member has a unique certificate authority endpoint and several peer nodes (peer endpoints). For the SGX version of the chaincode, the code is managed on GitHub (https://github.com/hyperledger/fabric-private-chaincode/tree/main/samples/demos/irb).

Latency of SGX-based endorsement. We referenced the latency with an increasing number of clients from FPC [15]. The best endorsement latency is at 8 and 16 clients (around 15 ms), and it starts to increase after 16 clients. The latency breakdown for submitting transactions with 4 clients shows the following average response times: the mean time to decrypt a transaction is 0.2 ms, getState takes 0.37 ms, ledger enclave time is 0.68 ms, and decryption and state verification take 0.06 ms.

Latency of consent and clinical modules on AWS Managed Blockchain. After deploying the API and Blockchain correctly, all the endpoints can be accessed through GET/POST requests. These include user registration, data confirmation, grant, queries, trial revocation, trial registration, trial validation, querying registered institutions, querying trials by institution, querying all trials, inviting participants to a trial, updating trial status, listing participants by study number, acknowledgment, participant invitation, link registration, etc. We tested the main endpoints' response times, shown in Table 1; Tw refers to the response time of each request in the WAN setting. We ran each request 10 times and took the average value. In addition, some requests' Tw, such as Query (GET), are affected by the number of patient trials stored in the ledger; the time increases with more trials in store.

Table 1. Endpoints response time.

Request                         Method   Tw
Register Consent                POST     233 ms
Grant Consent                   POST     5.23 s
PatientId Consent               GET      623 ms
Revoke Consent                  GET      256 ms
Acknowledge Consent             POST     516 ms
Consent Validate StudyNumber    GET      3.91 s
IRB Trials (all trials)         GET      4.64 s
IRB Trials Register             GET      4.97 s
IRB Trials Status               POST     4.36 s
IRB Trials Institutions         POST     153 ms
IRB Trials Join                 GET      4.51 s
IRB Trials StudyNumber Status   POST     751 ms
Hospital Trials StudyNumber     POST     45 ms
Invitation                      POST     38 ms

6.2 Security Discussion
For permissioned Blockchain and FPC. Though SGX encrypts sections of memory using security instructions native to the CPU, attackers can inject malicious data into a running program, and stealing sensitive data and keys is possible. That is the reason we involve permissioned Blockchain and FPC as a private platform only for authorized organizations, together with decentralized data governance for other, uncertain third parties; sensitive data can then be securely exchanged between different hospitals and organizations.

For SGX. TEEs cannot be directly used for non-final consensus protocols, such as PoW in Bitcoin or Ethereum, because TEEs generally are stateless [21] and only work when consensus decisions are final. As a consequence, we use TEEs with Fabric Blockchain because it supports finality. In each round, the BFT consensus always delivers a result, so enclaves do not need to keep state for the next round of consensus. By running all the ledger operations and smart contracts within an enclave, the smart contracts maintain confidentiality and secure chaincode execution.
7 Conclusions
We proposed Soteria, an SGX-based privacy-preserving smart contracts framework for sensitive clinical trials in healthcare, including three main modules: the patient consent and clinical trials chaincode, the API gateway, and decentralized data governance. We evaluated the response time of clinical trial requests through the API endpoints and the latency of SGX-based endorsement.

Acknowledgement. We gratefully acknowledge the support of the NSF through grant IIP-1919159. We also acknowledge the support of Andrew Weiss and Mic Bowman from Intel.
References

1. Akiri. https://akiri.com/
2. AWS Secrets Manager. https://aws.amazon.com/secrets-manager/
3. BurstIQ. https://burstiq.com/
4. Cognito. https://docs.aws.amazon.com/cognito
5. Factom. https://www.factomprotocol.org/
6. gRPC. https://grpc.io/
7. https://www.ama-assn.org/delivering-care/ethics/informed-consent
8. https://www.project-redcap.org/
9. IAM policy. https://docs.aws.amazon.com/iam/latest/userguide/access.html
10. Lambda. https://aws.amazon.com/lambda/
11. Adere, E.M.: Blockchain in healthcare and IoT: a systematic literature review. Array, 100139 (2022)
12. Androulaki, E., et al.: Hyperledger Fabric: a distributed operating system for permissioned blockchains. In: Proceedings of the Thirteenth EuroSys Conference, pp. 1–15 (2018)
13. Bates, D.W., Ebell, M., Gotlieb, E., Zapp, J., Mullins, H.: A proposal for electronic medical records in US primary care. J. Am. Med. Inform. Assoc. 10(1), 1–10 (2003)
14. Benchoufi, M., Porcher, R., Ravaud, P.: Blockchain protocols in clinical trials: transparency and traceability of consent. F1000Research 6 (2017)
15. Brandenburger, M., Cachin, C., Kapitza, R., Sorniotti, A.: Blockchain and trusted computing: problems, pitfalls, and a solution for Hyperledger Fabric. arXiv preprint arXiv:1805.08541 (2018)
16. Duan, S., et al.: Intrusion-tolerant and confidentiality-preserving publish/subscribe messaging. In: 2020 International Symposium on Reliable Distributed Systems (SRDS), pp. 319–328. IEEE (2020)
17. Dwivedi, A.D., Srivastava, G., Dhar, S., Singh, R.: A decentralized privacy-preserving healthcare blockchain for IoT. Sensors 19(2), 326 (2019)
18. Genestier, P., et al.: Blockchain for consent management in the ehealth environment: a nugget for privacy and security challenges. J. Int. Soc. Telemed. eHealth 5, GKR-e24 (2017)
19. Gilda, S., Mehrotra, M.: Blockchain for student data privacy and consent. In: 2018 International Conference on Computer Communication and Informatics (ICCCI), pp. 1–5. IEEE (2018)
20. Gordon, W.J., Catalini, C.: Blockchain technology for healthcare: facilitating the transition to patient-driven interoperability. Comput. Struct. Biotechnol. J. 16, 224–230 (2018)
21. Kaptchuk, G., Miers, I., Green, M.: Giving state to the stateless: augmenting trustworthy computation with ledgers. Cryptology ePrint Archive (2017)
22. Mamun, Q.: Blockchain technology in the future of healthcare. Smart Health 23, 100223 (2022)
23. Mann, S.P., Savulescu, J., Ravaud, P., Benchoufi, M.: Blockchain, consent and prosent for medical research. J. Med. Ethics 47(4), 244–250 (2021)
24. Matetic, S., et al.: ROTE: rollback protection for trusted execution. In: 26th USENIX Security Symposium (USENIX Security 17), pp. 1289–1306 (2017)
25. McGhin, T., Choo, K.-K.R., Liu, C.Z., He, D.: Blockchain in healthcare applications: research challenges and opportunities. J. Netw. Comput. Appl. 135, 62–75 (2019)
26. Mettler, M.: Blockchain technology in healthcare: the revolution starts here. In: 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), pp. 1–3. IEEE (2016)
27. Rantos, K., Drosatos, G., Demertzis, K., Ilioudis, C., Papanikolaou, A., Kritsas, A.: ADvoCATE: a consent management platform for personal data processing in the IoT using blockchain technology. In: Lanet, J.-L., Toma, C. (eds.) SECITC 2018. LNCS, vol. 11359, pp. 300–313. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-12942-2_23
28. Rupasinghe, T., Burstein, F., Rudolph, C.: Blockchain based dynamic patient consent: a privacy-preserving data acquisition architecture for clinical data analytics (2019)
29. Tith, D., et al.: Patient consent management by a purpose-based consent model for electronic health record based on blockchain technology. Healthc. Inform. Res. 26(4), 265–273 (2020)
30. Wu, Y., Chen, H., Wang, X., Liu, C., Nguyen, P., Yesha, Y.: Tolerating adversarial attacks and byzantine faults in distributed machine learning. In: 2021 IEEE International Conference on Big Data (Big Data), pp. 3380–3389. IEEE (2021)
Design Principles for Interoperability of Private Blockchains

Suha Bayraktar(B) and Sezer Gören

Yeditepe University, Istanbul, Turkey
{sbayraktar,sgoren}@cse.yeditepe.edu.tr
Abstract. Interoperability is one of the most promising research areas of blockchain technologies. When state-of-the-art solution architectures are designed, interoperability is expected to address many business challenges and speed up time to market. We see many digital transformation projects integrated with interrelated business processes to improve business performance, and we also see many business and industry solutions built using blockchain technologies. Most of these blockchain solutions run as standalone industry processes. There is an increasing demand to interconnect and interoperate these blockchain solutions to enable end-to-end tracking and visibility. This demand also brings the need for the standardization of blockchain interoperability. Until now, most interoperability research projects have concentrated on interconnecting 2 independent blockchain networks, or they are built for public cryptocurrency blockchain networks. Most of these projects take little or no standardization approach and do not meet critical business privacy requirements. Some of these research projects also break the general decentralized architecture of blockchain in many ways. Our proposed architectural approach and design principles aim to serve as a foundation for future private interoperability projects. Our proposed solution uses smart contracts to design a publish & subscribe architecture, which can be used to integrate 2 or more private blockchains at the smart contract level.

Keywords: Blockchain · Decentralized · Interoperability · Smart contracts · Supply chain · Permissioned
1 Introduction

Blockchain technologies are being introduced in many industries as promising solutions for the future. These solutions have proved to solve many business challenges, such as better product quality, faster time to market, improved security, cost reduction, traceability, and immutability. The effort to introduce new blockchain solutions brought one major need: interoperability. Interoperability is defined in [1] as "Interoperability among components of large-scale, distributed systems is the ability to exchange services and data with one another." Interoperability can be considered a general term indicating that 2 separate software applications can talk to each other. Our paper mainly covers the interoperability of Business-to-Business (B2B) applications. Let us assume that 2 or more B2B applications, each governed by different parties, need to connect and exchange information required for the business. Based on [1], this integration can be described as interoperability. In a typical blockchain use case, interoperability is far more than simple integration; due to its complexity and security requirements, it requires integration on multiple levels, such as interconnectivity and secure identity management. As blockchain is already applied to a wide area, different business needs might still require alternative and multiple interoperability designs. Therefore, interoperability solutions shall not be considered and designed as a "one size fits all" approach. In our paper, we propose an alternative interoperability solution with its design principles. Our proposed design principles can be considered guidelines for private blockchain interoperability and aim to provide a standardized, reusable solution. For the initial use case, we assume that there will be at least 3 separate private business areas, each of these business processes deployed in a separate blockchain network. In the final solution, these 3+ business networks are interoperated to form a new, single end-to-end business process. The proposed design guidelines and candidate solution also aim to ensure high compatibility with existing blockchain decentralization concepts. In the background section, we explain the need for private blockchain interoperability and, by demonstrating existing challenges, show why we need a new design approach. In the related work chapter, we go through well-known interoperability solutions and research projects. The proposed approach first provides design principles for candidate interoperability solutions; finally, our proposed interoperability solution architecture is introduced and explained.
2 Background

The idea of integrating 3+ independent blockchain networks is triggered by the need to integrate interrelated businesses and their related processes. In our paper, we frequently mention end-to-end supply chain management, as it is a popular research area for the interoperability of businesses. The existing research examples give readers indications of the difficulties of 3+ blockchain network interoperability and its well-known shortcomings. Before going into the details of our research, we briefly cover crucial topics related to interoperability as follows.

Decentralization, as one of the most important features of blockchain, provides independence for blockchain networks. We build our research on a basic rule: "Each blockchain needs to stay independent and decentralized when it interacts with other blockchains." Therefore, any interoperability solution that offers an intermediary, 3rd-party solution, framework, or governing authority might introduce a dependency on the interoperated blockchains and raise privacy concerns. Jin et al. [2] show in their paper the disadvantages of centralized interoperability approaches.

Single Point of Failure is defined as any non-redundant part of a system that would cause the entire system to fail when it is dysfunctional. In the related work section, we will see some well-known use cases and projects that introduce intermediaries or hubs between blockchain networks. In these projects, terms such as mediator, hub, authority, intermediator, trustee, bridge, or relay chain are commonly used. This is a general warning sign for a candidate interoperability solution. Such intermediary designs might be suitable for public interoperability solutions, and they might address certain interoperability requirements. However, we need to keep in mind that such intermediaries might cause an end-to-end process to fail or stop in case the intermediator goes down. According to Bahtia [3], this is considered a single point of failure and shall be eliminated; [3] covers such approaches, called sidechain and notary schemes.

Privacy is a critical factor for business networks, and any interoperability solution built for business networks shall cover privacy needs. Hyperledger [4] provides a permissioned, decentralized, and distributed architecture for business privacy. Therefore, we have selected a Hyperledger-based [4] blockchain architecture for our existing project. Hyperledger-type blockchain architectures are good candidates for providing strict access rules and privacy features.

Interoperability approaches can be classified on different levels. The most common approaches are code-level, smart contract, intermediary, middleware, and side-by-side interoperability, which are discussed next.

Code Level Interoperability can be taken as the simplest approach. Although such a simple implementation might not be taken as a valid approach, we decided to mention it. A developer might write straightforward code to integrate 2 blockchain networks. However, code-level integration is considered a legacy, non-reusable solution; various research papers [5, 6] cover its known challenges. Legacy code is also laborious to migrate to reusable, standardized services. With simple code-level interoperability, all critical levels of interoperability might be hard to achieve. A legacy interoperation code might also break the rule of decentralization, remove blockchain independence, and will not serve standardized interoperability.

Smart Contracts Interoperability is also known as enhanced code-level interoperability. As interoperability is coded on multiple levels, a smart contract is the last mile of a blockchain transaction, where a final contract (public or private) or business agreement (private) is executed. Smart-contract-level integration allows interacting blockchains to talk to each other on the business process level. Once the other interoperability levels are achieved, the smart contract level can serve as the final integration level. Many researchers consider smart contracts the most critical level of interoperability. This approach is also commonly considered in dApps [7] interoperability [9].

Intermediary Interoperability is frequently seen in interoperability solutions that appear as an Intermediary, Hub, Relay Chain, Bridge, Trustee, or similar. This kind of interoperability aims to provide a bridge solution offered by a notary 3rd party or a consortium. When the interoperating blockchains are not the owners of such a bridge solution, this approach is likely to create a dependency on the notary party. These types of solutions are commonly seen for public cryptocurrency-based blockchain networks and are already in use. These interoperability solutions are often designed as the Internet of Blockchains (IoB). We cover some of these intermediary solutions in the related work section.

Middleware Interoperability is similar to intermediaries. An intermediary solution such as Overledger might be designed as a middleware architecture to provide interoperability.
Overledger [18], with its middleware design, aims to decouple the transaction and communication levels from the business logic. This approach aims to provide a more refined integration between blockchains. In [8], a middleware interoperability approach called Hermes is covered to create an internet of blockchains.

Side by Side Interoperability allows an interoperability solution to be deployed on each blockchain network, instead of using an intermediary or hub, to enable interoperation between independent blockchain networks. This requires each blockchain type to develop adapters on its side to integrate the standardized solution. This can be seen as a modular architecture where standard interoperability modules are deployed in each blockchain network. With this approach, the single point of failure is mostly eliminated. A single interoperated blockchain can still become a single point of failure once all distributed nodes of this blockchain are down; however, this has a very low probability and can be considered a failure-free single point of the solution.

Although we will not cover the details of interoperability challenges throughout this paper, it is still useful to recall some well-known challenges from existing research. According to existing research [14, 22], an interoperability solution still needs to cover critical challenges such as intercommunication, security, latency, scalability, and high availability before final deployment. As one of the biggest advantages of blockchain networks is to ensure always-on runtime for users, our concentration in this paper is mainly on high availability. Although this challenge is mainly addressed by general blockchain architectures, it needs to be ensured that interoperated blockchain networks also provide an always-on, highly available runtime and do not have any single point of failure.
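As a preview of the smart-contract-level publish & subscribe idea this paper proposes, the following Go chaincode sketch (using the Hyperledger Fabric contract API) shows one side-by-side building block: a Publish function that emits a chaincode event for the off-chain relays of subscribing networks, and a Deliver function those relays invoke on their local chaincode. All names and the relay mechanism are illustrative assumptions, not the paper's final design.

```go
package main

import (
	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// PubSubContract sketches smart-contract-level publish & subscribe:
// each private network deploys this module side by side, so no
// intermediary blockchain or hub is introduced.
type PubSubContract struct {
	contractapi.Contract
}

// Publish records the message on the local ledger and emits a chaincode
// event that peers deliver to registered listeners (the subscribing
// networks' relays).
func (p *PubSubContract) Publish(ctx contractapi.TransactionContextInterface,
	topic string, payload string) error {
	if err := ctx.GetStub().PutState(topic, []byte(payload)); err != nil {
		return err
	}
	return ctx.GetStub().SetEvent("pub."+topic, []byte(payload))
}

// Deliver is invoked on a subscribing blockchain by its relay when an
// event for a subscribed topic arrives from another network.
func (p *PubSubContract) Deliver(ctx contractapi.TransactionContextInterface,
	topic string, payload string) error {
	return ctx.GetStub().PutState("inbox."+topic, []byte(payload))
}

func main() {
	cc, err := contractapi.NewChaincode(&PubSubContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```

Because the module lives in each network's own chaincode, a failure of one participant does not stop the others, which is the availability property argued for above.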
3 Related Work In our paper, we examined well-known and highly cited interoperability approaches for independent blockchain networks. We also covered a well-known survey for interoperability approaches. After our analysis, we categorized interoperability solutions in two separate approaches: Intermediaries and Side-by-Side. We believe this approach will give a clearer view of interoperability approaches. Let us go through these projects: Cosmos Hub [10, 11] is offered for public blockchain networks. Cosmos provides a cryptocurrency swapping solution for different cryptocurrency blockchain networks. Cosmos simply concentrates on the interconnection level of interoperability. Cosmos is also called as Internet of Blockchains (IoB). Cosmos with its IBC (inter-blockchain communications) addresses the communication part of the interoperability. As shown in Fig. 1, Cosmos introduces a mediator architecture called Cosmos Hub. Integrated blockchain networks or zones use this hub to interoperate with other blockchains. Such a mediator architecture is likely to introduce a single point of failure when the mediator hub goes down. In general, intercommunication alone will not be sufficient for interoperability if private networks need to interoperate. In private networks, interoperability requires further integration areas such as smart contracts, and access control for interoperated networks. This is not addressed in the initial Cosmos design. To enhance the existing Cosmos scope, a new project called Cosmwasm [12] is deployed to address interoperability
Fig. 1. Intermediary Cosmos architecture (blockchains = zones) [8]
between smart contracts. Even though Cosmos provides a further interoperable solution with Cosmwasm, its nature as a public network makes it unsuitable as a solution candidate for private networks due to the missing access control and privacy requirements.

Polkadot [13] is known as an enhanced interoperability solution. Polkadot was initially designed to interoperate any type of blockchain network. In [13], a mediator Relay Chain is introduced, through which all blockchain networks interoperate and communicate with each other. Although Polkadot introduces smart contract interoperability, its nature as a mediator architecture introduces centralization for private network interoperability. In [13], access control is also an issue. As private businesses will not allow private transactions to be processed through a public mediator, the design is not suitable for a final solution.

Dinh et al. [14] offer an alternative interoperability approach as "a blueprint for interoperable blockchains". They mention three main challenges for an interoperable architecture: access control, cross-chain transactions, and communication. Dinh et al. clearly state that without access control interoperability will not be satisfied. For business use cases, private organizations require certain data to be accessible only by selected users. Knowing this, access control over any data exchanged between interconnected blockchains becomes an important issue. In public networks, users can see all data in the distributed nodes; therefore, Dinh et al. propose using a private network to provide controlled access. They extend this approach by offering further features such as a fine-tuned access policy based on data, time, provenance, and aggregates. The interoperability approach in [14] satisfies the requirements of private blockchains, and with the newly added modules the design also addresses the access issue. The solution offered by [14] can be taken as a possible candidate for further research and development. However, we see no signs of functionality for multi-blockchain interoperability in [14]; we consider this interoperability solution limited to two blockchains. The design introduces a reusable solution that can be deployed on both blockchains and can be part of a final interoperability solution. In Fig. 2, you can see the architecture from Dinh et al. We consider this interoperability approach side-by-side, with the added modules in each blockchain.

Ghaemi et al. [15] offer a traditional publish and subscribe architecture. This research project is supported by Hyperledger. The offered pub-sub architecture in Fig. 3 allows the interconnection of two blockchains. The broker blockchain is introduced as a
Fig. 2. Dinh et al. [14] interoperability architecture with added side-by-side components
mediator to provide the interoperability. We drop this solution from the candidate list due to its intermediary blockchain architecture: a broker blockchain simply means a possible single point of failure and a centralized approach. The publish and subscribe idea offered here can still be considered an idea for the final interoperability solution.
Fig. 3. Pub-Sub architecture with intermediary blockchain [15]
Bellavista et al. [16] offer, in a research paper from July 2021, an approach called the relay scheme. The relay scheme is deployed in each blockchain and is responsible for updating the interconnected blockchains. This paper aims to offer interoperable blockchains for a private business process. The project also covers real-world examples, which were missing in [14]. Bellavista et al. suggest not creating a mediator architecture that breaks the rule of decentralization. This approach, shown in Fig. 4, can also be considered a design idea for the final interoperability solution.
Fig. 4. Relay scheme side-by-side architecture from Bellavista et al. [16]
Hyperledger Cactus [17] is offered by Hyperledger, which is part of the Linux Foundation. Cactus aims to interconnect two homogeneous private blockchains. The project is designed for the interoperation of Hyperledger and Ethereum [18]. The architecture design shown in Fig. 5 is covered in the paper by Mihaiu et al. [19].
Fig. 5. Intermediary Hyperledger Cactus solution [21]
Cactus is designed as an intermediary. Its single-point-of-failure design and centralized approach eliminate Cactus from the candidate interoperability list.

Overledger: the whitepaper [20] was published by Verdian et al., who later formed a company called Quant [21]. Verdian et al. present their solution as a comprehensive solution for interoperability. According to Verdian et al., their solution decouples the complexity of interoperating blockchains from the business logic. Overledger introduces a mediator architecture and offers a network where interconnected blockchains communicate with each other using the Quant architecture. Like Polkadot, Quant can be considered more suitable for public networks due to its mediator-based middleware architecture.

Khan et al. [22] published a survey on 17 August 2021 as one of the most comprehensive papers on blockchain interoperability. They analyze both permissionless (public) and permissioned (private) networks. As public networks are out of our scope and not suitable for B2B interoperability, we eliminated these parts from our scope of work. Khan et al. mainly concentrate on the role of smart contracts for blockchain interoperability and claim smart contracts to be the most critical part of interoperability. For full interoperability, Khan et al. claim that Identity Management, Consensus Mechanisms, Cryptographic Management, and Smart Contracts (chaincode in Hyperledger) must all be addressed together. Like our research, Khan et al. propose Smart Contracts as the highest concentration area for interoperability; in [22] they analyze 13 solutions from existing research from the Smart Contracts perspective. Further in their research, they also deep dive into the proposed smart contract approaches and analyze the projects on four additional points: Privacy & Security, Scalability, Degree of Confidence, and Bidirectional Transactions, which they consider to be very critical for interoperability. Their comparison concludes that only some of the existing research projects cover the required smart contract areas. As Khan et al. mainly concentrate on Smart Contracts interoperability, we do not see them covering the critical design principles we address in our research.

After these examples of well-known interoperability solutions and research projects, we categorized the solution approaches in Table 1, and we would like to mention some important highlights from it. From a business perspective, Bellavista et al. [16] show how real business use cases can be deployed for a possible interoperability approach; [16] also provides a decentralized, single-point-of-failure-free side-by-side architecture. However, we cannot see any additional design approach for 3+ blockchain interoperability in the Bellavista et al. paper.
Table 1. Well-known interoperability solutions comparison
The Khan et al. [22] survey concentrates on interoperability using smart contracts. Khan et al. provide the most advanced survey, which can act as a rich research source for candidate interoperability solutions. Their approach of using smart contracts together with further critical interoperability areas provides good coverage for a final smart-contracts-based interoperability solution. Cosmos, Polkadot, and Quant intermediary-type solutions are mainly suitable for permissionless (public) networks and will not be considered candidates for a private network interoperability solution. We see the combination of the side-by-side approach in [14, 16] and the pub-sub approach in [15] serving as a design idea for our final interoperability architecture. We also take the Smart Contracts-based interoperability approach in [22] as a final design point for interoperability.
4 The Proposed Approach

By looking at the comparison in Table 1, we can conclude the following key design principles for the interoperability of private blockchain networks:

Decentralization is the key feature of blockchain and distributed ledger technologies (DLT) [23]. Although a single blockchain network is decentralized, an interoperability solution that introduces a centralized element such as a notary or intermediary might weaken the existing decentralization of a blockchain. While designing an interoperability solution, we need to consider decentralization as a whole and reconsider whether any part of the solution creates centralization.

Homogeneous Networks are ideal for faster and easier blockchain interoperability deployment. According to Khan et al. [22], homogeneous blockchains refer to similar blockchains, such as Ethereum or Hyperledger, while heterogeneous blockchains refer to different blockchains, such as Bitcoin and Ethereum. However, some heterogeneous blockchains might still be required due to existing business processes, parts of which might have already been deployed as heterogeneous blockchains. In such cases, much higher integration efforts will be required for deploying a heterogeneous blockchain between homogeneous blockchains. If the business processes are planned to
be designed from scratch, using only homogeneous blockchain networks will help minimize integration efforts according to [22].

Smart Contracts: According to Magazzeni et al. [27], smart contracts introduce an important feature for blockchains: the ability to identify participants. This means each user needs to be permissioned to be part of the process. Smart contracts also introduce the ability to execute workflows, which is critical for businesses. For this purpose, a blockchain network needs to offer smart contracts; Hyperledger, Ethereum, and Corda [24] are good examples of this. As they all use smart contracts to develop solutions, it is much easier to overcome interoperation challenges and execute private workflows. If we assume Hyperledger and Corda are interoperated as private networks, access control becomes much easier to address in comparison to heterogeneous networks.

Wide Range of Contributors: It is highly suggested to use blockchain solutions that have the largest contributor and developer communities and that address real-world use cases. Any solution that is bound to a limited group of developers or private investments might not serve the needs of an interoperability solution over a longer lifetime. Hyperledger is one of the key projects of the Linux Foundation and is supported by big organizations like IBM [25]; therefore, it can be taken as a good candidate for a final interoperability solution.

Single Point of Failure: It is a common approach to build an intermediary or relay-chain type of solution for interoperability. Keep in mind that intermediaries are more suitable for cryptocurrency-based public, permissionless networks. Coin exchanges are mostly designed for public end users, and in such solutions a single point of failure might not be very critical. For privacy reasons, a business network will have little interest in executing transactions through an intermediary solution, and it is likely to keep away from intermediaries and notaries that create dependency architectures. As such intermediaries are likely to be centralized architectures, they have a high potential to become a single point of failure in a mission-critical business network once they go down.

Side-by-Side Architecture: For interoperability, different architectural designs such as cross-chain, Blockchain of Blockchains (BoB), Internet of Blockchains (IoB), and Relay Chains are offered in the existing solutions. Most of these solutions create a single point of failure with their intermediary, notary designs. In comparison to public blockchains, private blockchains are more likely to stay decentralized and independent. Therefore, an interoperability solution shall be deployed on the side of each interoperating blockchain instead of creating an intermediary in between. In Fig. 6, we see that each business process contains a blockchain network. Each blockchain is also expected to stay independent and continue to operate as a standalone solution. Therefore, any intermediary between these blockchains might introduce centralization and cause a single point of failure when the intermediary goes down. To overcome this, we offer a standardized interoperability solution deployed in each blockchain so that all involved blockchains remain independent. A final side-by-side solution shall be designed with standardized modules and distributed ledgers. The solution we offer in Fig. 7 uses Table 1 as a reference.
To keep the solution decentralized and free of single points of failure, the offered architecture is deployed side-by-side in each interoperating blockchain. With
Fig. 6. Supply chain process: from supply to customs
this approach, the independence of each blockchain is ensured and the final end-to-end solution does not introduce any centralized, single-point-of-failure architecture. By using Hyperledger, we enable critical smart contract features such as privacy and workflows, as covered in [27].
Fig. 7. Proposed side-by-side modular interoperability architecture
The final architecture offers smart-contract-based publishing and subscription modules. For each feature described below, a smart contract rule is deployed and executed between the interoperated blockchains. Interoperability-specific separate DLTs are also deployed in each blockchain so that the independent blockchain processes continue to operate. In our final solution in Fig. 7, each blockchain network acts as a publisher and a subscriber at the same time. The following features are deployed as smart contracts in each network:
• Feature 1: Publishing the intention to interoperate
• Feature 2: Publishing services for the interoperability
• Feature 3: Allowing invited users to be part of the interoperating network
• Feature 4: Subscribing to services published by interoperating blockchains
• Feature 5: Intention to become part of the end-to-end chain
• Feature 6: Intention to operate as a member chain for tracing the end-to-end process
The above features are the main properties of our publish and subscribe architecture; a minimal sketch of this publish/subscribe flow is given below. Existing research and our own work brought us to the conclusion that such an interoperability approach offers a much higher level of standardization in comparison to the existing research for private networks.
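To make the intended side-by-side behavior concrete, the following is a minimal Python sketch of the publish/subscribe flow described above. It models Features 1 to 4 as local module calls on two independent chains; all class, method, and chain names are our own illustrative choices, not part of the Hyperledger-based implementation.

```python
# Illustrative sketch (not the deployed chaincode): a side-by-side pub-sub
# module, one instance per blockchain, modeling Features 1-4 as local calls.
from dataclasses import dataclass, field

@dataclass
class InteropModule:
    chain_id: str
    ledger: list = field(default_factory=list)        # interoperability-specific DLT
    published: dict = field(default_factory=dict)     # service name -> description
    subscriptions: set = field(default_factory=set)   # (peer chain, service) pairs
    members: set = field(default_factory=set)         # invited users (Feature 3)

    def _record(self, entry):
        # Every interoperability action is appended to the local ledger,
        # so each chain keeps its own tamper-evident interop history.
        self.ledger.append(entry)

    def publish_intent(self):                          # Feature 1
        self._record((self.chain_id, "INTENT_TO_INTEROPERATE"))

    def publish_service(self, name, description):      # Feature 2
        self.published[name] = description
        self._record((self.chain_id, "PUBLISH", name))

    def invite_user(self, user_id):                    # Feature 3
        self.members.add(user_id)
        self._record((self.chain_id, "INVITE", user_id))

    def subscribe(self, peer, service):                # Feature 4
        if service in peer.published:
            self.subscriptions.add((peer.chain_id, service))
            self._record((self.chain_id, "SUBSCRIBE", peer.chain_id, service))

# Two independent chains interoperating without any intermediary:
supplier, customs = InteropModule("supplier"), InteropModule("customs")
supplier.publish_intent()
supplier.publish_service("shipment-status", "trace goods end to end")
customs.subscribe(supplier, "shipment-status")
```

Because each chain runs its own module instance and keeps its own interoperability ledger, no central hub exists to go down, which is exactly the single-point-of-failure property argued for above.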
5 Conclusions

Before going into any interoperability design, we suggest researchers consider our design principles for reusable and standardized interoperability solutions. We offer a new side-by-side design approach based on our publish and subscribe model. This new architecture is expected to ease interoperability and speed up the deployment of blockchain-based private business processes. The design offered in this project needs further enhancement to increase the traceability and quality ranking of business processes that depend on suppliers and service providers, as shown in Fig. 6. In our research project, we will further concentrate on the publish and subscribe-based interoperability architecture and develop the modules in the next stage. We believe this architecture will deliver the expected quality and features for the interoperability of private blockchain business networks.
6 Future Research Potential

The results and conclusions of our research open up future research potential. As industry processes such as end-to-end supply chain tracking [26] become more advanced, customers will demand more traceability and visibility for better product quality and time to market. High-quality interoperability and standardized design can be an ideal way to reach this level.
References

1. Heiler, S.: Semantic interoperability. ACM Comput. Surv. 27 (1995). dl.acm.org
2. Jin, H., Dai, X., Xiao, J.: Towards a novel architecture for enabling interoperability amongst multiple blockchains. In: IEEE 38th International Conference on Distributed Computing Systems (2018). ieeexplore.ieee.org
3. Bhatia, R.: Interoperability solutions for blockchain. In: Conference on Smart Technologies in Computing (2020). ieeexplore.ieee.org
4. Hyperledger. https://www.hyperledger.org/
5. Weide, B.W., Heym, W.D., Hollingsworth, J.E.: Reverse engineering of legacy code exposed. In: Proceedings of the 17th International Conference on Software Engineering (1995). dl.acm.org
6. Furda, A., Fidge, C., Zimmermann, O., Kelly, W., Barros, A.: Migrating enterprise legacy source code to microservices: on multitenancy, statefulness, and data consistency. IEEE Softw. 35(3), 63–72 (2018). https://doi.org/10.1109/MS.2017.440134612
7. Madine, M., et al.: appXchain: application-level interoperability for blockchain networks (2021). https://ieeexplore.ieee.org/abstract/document/9455384/
8. Belchior, R., Vasconcelos, A., Correia, M., Hardjono, T.: Fault-tolerant middleware for blockchain interoperability. Future Gener. Comput. Syst. 129, 236–251 (2022)
9. Wu, K., Ma, Y., Huang, G., Zhe, X.: A first look at blockchain-based decentralized applications. arXiv:1909.00939 [cs.SE] (2019)
10. Cosmos. https://cosmos.network/
11. Cosmos Whitepaper. https://wikibitimg.fx994.com/attach/2020/12/16623142020/WBE16623142020_55300.pdf
12. Cosmwasm. https://cosmwasm.com/
13. Polkadot. https://polkadot.network/
14. Dinh, T.T.A., Datta, A., Ooi, B.C.: A blueprint for interoperable blockchains. arXiv preprint arXiv:1910.00985 (2019)
15. Ghaemi, S., Rouhani, S., Belchior, R., Cruz, R.S., Khazaei, H., Musilek, P.: A pub-sub architecture to promote blockchain interoperability (2021)
16. Bellavista, P., Esposito, C., Foschini, L., Giannelli, C., Mazzocca, N., Montanari, R.: Interoperable blockchains for highly integrated supply chains in collaborative manufacturing. Sensors 21, 4955 (2021). https://doi.org/10.3390/s21154955
17. Hyperledger Cactus. https://www.hyperledger.org/use/cactus
18. Ethereum. https://ethereum.org/en/
19. Mihaiu, I., Belchior, R., Scuri, S., Nunes, N.J.: A framework to evaluate blockchain interoperability solutions (2021). https://www.researchgate.net/publication/356907351
20. Verdian, G., Tasca, P., Paterson, C., Mondelli, G.: Quant Overledger whitepaper (2018). https://girishmehta.com/projects/quant/wp-content/uploads/2020/07/Quant_Overledger_Whitepaper-Sep-1.pdf
21. Quant. https://www.quant.network/
22. Khan, S., Amin, M.B., Azar, A.T., Aslam, S.: Towards interoperable blockchains: a survey on the role of smart contracts in blockchain interoperability. IEEE Access 9, 116672–116691 (2021). https://doi.org/10.1109/ACCESS.2021.3106384
23. Kuo, T.-T., Kim, H.-E., Ohno-Machado, L.: Blockchain distributed ledger technologies for biomedical and health care applications. J. Am. Med. Inform. Assoc. 24(6), 1211–1220 (2017). https://doi.org/10.1093/jamia/ocx068
24. Corda. https://www.corda.net/
25. IBM Blockchain. https://www.ibm.com/blockchain
26. Chang, Y., Iakovou, E., Shi, W.: Blockchain in global supply chains and cross border trade: a critical synthesis of the state-of-the-art, challenges and opportunities. Int. J. Prod. Res. 58(7), 2082–2099 (2020)
27. Magazzeni, D., McBurney, P., Nash, W.: Validation and verification of smart contracts: a research agenda. Computer 50, 50–57 (2017). ieeexplore.ieee.org
Blockchain for Proposal Management

Mustafa Sanli(B)

Aselsan, Ankara, Turkey
[email protected]
Abstract. In the business world, proposals are used to sell a company's products or services and to provide solutions in response to a need. Companies use a proposal management process to manage and coordinate the tasks required to submit proposals. The proposal management process includes a well-defined set of tasks, but inherently has its challenges when applied to the dynamic business environment. As a form of digital trust and a platform for immutable records, blockchain technology has a lot to offer in solving these problems. This paper describes how blockchain technology can help solve the challenges in proposal management. Our work presented in this paper includes the design and implementation of two blockchain platforms. The first is a decentralized autonomous organization that coordinates the proposal management workflow. The second is a decentralized application that manages document registration activities. These use cases act as two examples of how blockchain technology can contribute to proposal management.

Keywords: Blockchain · Proposal · Bid · Smart contract
1 Introduction

In today's business environment, companies are required to regularly respond to RFPs (Requests for Proposal) issued by government agencies or other private companies. The response is usually in the form of a proposal stating not only the products and services to be delivered, but also the price, timeline, technical specifications, and terms of sale for delivery [1]. Every RFP is a new business opportunity that can contribute to a company's growth. As a result, preparing and delivering a winning proposal greatly impacts the company's business. Companies use a proposal management process to manage and coordinate the tasks required to prepare proposals. While this process is often modified and tailored to the business sector, company profile, and many other factors specific to the proposal, the overall scheme and key steps can be applied to a wide range of proposals [2]. Although the proposal management process includes a well-defined set of tasks separated by decision gates, it has inherent problems and challenges when applied to the dynamic business environment. Lack of trust between the parties involved, slow and biased decisions, and late and incomplete delivery of documents are just a few examples of the problems caused by human behavior. These issues hinder the
success of submitting a winning proposal. As a form of digital trust and a platform for immutable records, blockchain technology has a lot to offer in solving these problems.

Blockchain technology provides a distributed network where non-trusting members interact with each other in a verifiable manner without a trusted intermediary [3]. The data stored in the blockchain is immutable: it is shared publicly or privately between the parties, but cannot be changed by anyone. Blockchain ensures that users can trust the data [4, 5]. This inherent trust enables the creation of applications that eliminate risks from human intervention and technological threats [6–8]. Blockchain technology offers two important concepts that can be used to improve the proposal management process: decentralized applications (dApps) and Decentralized Autonomous Organizations (DAOs).

A dApp is a computer application running on the blockchain. dApps basically consist of two components: a front-end user interface, which is usually a web page, and back-end code that runs on the blockchain in the form of a smart contract. Smart contracts consist of a set of data and functions located at a specific address on the blockchain. Smart contracts contain the business logic of dApps. They enable writing applications where users can define rules to be executed when pre-defined conditions are met [9]. Consequently, when using smart contracts, the user can trust not only the data but also the process that deals with the data. Smart contracts can be used in many areas: decentralized finance, property registration, asset issuance, voting, identification, and much more. Their main advantage is the ability to eliminate intermediaries and to complete fast transactions. Smart contract functions can be public or private, automated or executable after launch. Smart contracts are based on algorithms and work according to clear sequences of actions. Although smart contracts are complex in their structure, they greatly simplify and speed up the verification and transaction process [10].

A DAO is a self-governing and decentralized business organization running on the blockchain [11]. The organization's laws are embedded in the code of a smart contract, using complex token governance rules. DAOs are fully autonomous and transparent. Smart contracts not only define the ground rules of DAOs, but also execute agreed-upon decisions. Transactions, voting, and even the code itself can be audited if desired [12]. DAOs provide a way to organize people and teams around the world who do not know each other and to make corporate decisions autonomously, all of which is coded on a blockchain.

In this paper, we present the possible contributions of blockchain technology to solving problems in proposal management. To the best of our knowledge, there is no other published work on the possible benefits of using blockchain technology in proposal management. Firstly, we demonstrate the proposal management process and show how blockchain can add value to this process. Secondly, we present a DAO that we designed and deployed as part of this study. This DAO serves as an example of how a blockchain platform can be used to solve organizational challenges in proposal management by eliminating trust-related issues and providing a well-defined sequence of activities with the participation and support of all parties. Finally, the third contribution of our work is the design and deployment of a dApp that solves an important documentation problem in proposal management.
This dApp registers the ownership and confirms the
authenticity of a document without revealing the content of the document. The DAO and dApp we propose in this study are two examples of how blockchain-based platforms can be used to improve the proposal management process. The remainder of our paper is organized as follows. Section 2 provides a brief overview of the proposal management process. Section 3 illustrates the challenges in this process and presents how blockchain technology can help solve these problems. Section 4 presents a DAO and a dApp that we designed and deployed on the blockchain. Our conclusions are given in Sect. 5.
2 Proposal Management Process

The proposal management process includes the tasks required to respond to a complex RFP. These tasks require inputs from multiple parties. The process starts with deciding to respond to an RFP, includes many planning activities and decision gates, and is concluded by submitting the proposal [13]. Large companies have dedicated proposal teams made up of a proposal manager, coordinators, financial analysts, technical advisors, sales professionals, writers, and graphic designers. As a result of insufficient human resources, in smaller companies these roles are assumed by people from various departments. It should be noted that proposal development is part of the business development lifecycle that includes pre-RFP and capture planning activities [2]. Figure 1 shows the six steps involved in the proposal management process.
Fig. 1. Proposal management and business development processes
2.1 Go/No-Go Decision

The first step is determining whether the opportunity could be suitable for the company. In this step, the feasibility of delivering a solution for the opportunity is assessed. Additionally, the competition is analyzed and the rationality of placing effort on developing a proposal is evaluated. At the end of this step, it is decided whether or not to prepare a proposal. It is important to define the standards that may be utilized in making a final decision at this step.

2.2 Proposal Planning

After the decision to submit a proposal, the following step is to plan the development effort. In this step, a winning strategy is developed, tasks and related responsibilities are identified, a development calendar is drawn up, and the proposal is outlined. In this step, the different departments in the company, subcontractors, and consultants must be coordinated to produce a consistent budget of time, resources, and price for proposal development.

2.3 Kickoff Meeting

After the planning phase, everyone involved in proposal development should get together and be briefed on the tasks, expectations, roles, and responsibilities within the proposal work.

2.4 Proposal Development

In this step, the proposal plan developed in the previous steps is activated. The proposal is written, the document is designed, and graphics are prepared. There are also review and editing tasks performed by different departments.

2.5 Sending Out

This step is about ensuring that the proposal is properly packed and sent out to the correct place, through an appropriate method (email, hand delivery, postage, etc.), before the submission deadline.

2.6 Closing Out

The proposal work does not end with submission. The proposal is archived with the RFP and other supporting documents. Lessons learned from the proposal work are recorded in the company's knowledge repository. The proposal team is released.
3 Value of Blockchain for Proposal Management

Where proposal management is concerned, blockchain technology provides a secure method to organize and coordinate the process flow from start to finish. New data blocks are built to contain the records and completed tasks throughout the proposal development process. As a result of the encryption inherent in the blockchain, data is immune to tampering. Additionally, there is no central authority that controls and regulates the process, which simplifies interactions between people and businesses [14]. Many processes in proposal management can be instituted using blockchain technology. Some of the possible applications and benefits of using blockchain in proposal management include:

• Transparency and monitoring: Transparent record keeping facilitates project management by allowing everyone to monitor the progress of development work. This creates more data collection points. Blockchain technology helps users ensure the authenticity of documents generated during proposal preparation with the help of digital signatures. Additionally, the blockchain allows tracking the delivery and approval status throughout the proposal process. The transparency provided by blockchain technology ensures that critical credentials such as authorizations, certificates, and authenticity are not compromised.
• Embedded trust: Today's business environment often includes interactions between people from different remote teams, companies, or geographical locations who do not know each other. Many steps in the proposal development process require mitigating risk between unknown parties. A decentralized activity ledger that logs all proposal development-related tasks builds a platform of trust for the project that helps develop better business relationships. Blockchain technology helps to improve cooperation between people, because such cooperation relies on trust.
• Streamlining processes: Blockchain technology automates tasks through smart contracts. This automation provides benefits for regulating the flow of documents and organizing the sequence of tasks with a visible ledger. In many companies, decisions, documentation, and the organization of sub-tasks are still managed manually. Blockchain technology can be used to streamline the proposal development process. As the workflow of proposal management involves many pipelined processes that require the completion of one process or the publication of a document before the following process can begin, smart contracts can help organize and facilitate the management of these processes automatically. The terms and requirements of the smart contracts are fully integrated and visible to all parties, and each stage of the process is recorded on the blockchain when the contract conditions are executed.
• Security and settlement: Every transaction is permanently added to the ledger. If any changes are made, they are also recorded. Consequently, there is a lower probability of deception.
• Decentralization: There is no single point of failure in decentralized platforms. Therefore, they are resistant to security attacks.
4 Design and Implementation of Decentralized Applications for Proposal Management

To demonstrate how blockchain technology can help build a better proposal management process, two sample platforms have been designed and implemented on the Ethereum Ropsten [15] and Polygon [16] blockchains. The smart contracts for both projects are written in the Solidity programming language [17] using the Remix Integrated Development Environment [18]. In order to test the smart contracts, test conditions are written in JavaScript and unit tests are performed with the Truffle Suite [19]. Before deploying contracts to the Ethereum Ropsten blockchain, behavioral tests are conducted on a local Ganache blockchain [20]. In the user interface, the Metamask browser extension [21] is used to access the blockchain via the Ethereum-compatible JSON RPC Application Programming Interface (API) [22]. Users can interact with the blockchain via a web page using the Web3 JavaScript library [23]. Finally, after all testing, the contracts have been deployed on the Polygon blockchain. Although Polygon was chosen as the production environment, the smart contracts designed in this study can run on all Ethereum Virtual Machine compatible blockchains, such as the Ethereum Mainnet and Testnets [24], Binance Smart Chain [25], Avalanche C-Chain [26], TOMO Chain [27], Tron Chain [28], and many more. The reason for choosing Polygon for deployment is the short transaction times and low fees this blockchain offers.

4.1 A Decentralized Autonomous Organization for Proposal Management

A Decentralized Autonomous Organization (DAO) is an organization that is managed by code and runs on the blockchain [11]. All the rules are written in code; no managers, coordinators, or boards of directors are required to enforce the governance rules. As a blockchain platform that can contribute to the proposal management process, a DAO is designed and coded in a smart contract. The implemented contract has the following features:

• The user who deployed the contract on the blockchain is the owner of the contract. After deployment, ownership can be transferred to another user or renounced by transferring it to a burn address.
• The owner can add or remove DAO members using a smart contract function.
• The owner can distribute the native tokens of the DAO to the members.
• Any member can add a decision gate in the form of an Ethereum transaction. When a decision gate is added to the DAO, the task following the decision gate is also coded as a smart contract and added to the blockchain.
• All members can vote for or against the decision gate for a certain period of time.
• Members' voting power is proportional to the number of native DAO tokens they hold.
• If a certain minimum number of members have voted and the deadline has passed, the smart contract counts the votes. If there is a sufficient margin for the winner, the decision gate is marked as passed.
• If the decision gate is passed, it is announced on the dApp web page and the next task is executed.

The following use case of this DAO may help explain a possible application scenario in proposal management (a minimal sketch of the decision-gate voting logic is given at the end of this subsection):

• The business opportunity is posted on the dApp web page, and representatives of different departments such as marketing, business development, and sales are defined as DAO members. Members receive native tokens of the DAO representing their voting power.
• A go/no-go decision gate is submitted to the DAO contract. At the same time, the task following the decision gate is defined as "start proposal development" and is coded as a smart contract. These tasks require the submission of specific documents from different departments, such as the proposal body, technical appendices, and budgetary documents.
• DAO members vote on the decision gate. If accepted, the "start proposal development" smart contract is automatically executed.
• When authorized members upload documents before the deadline, the smart contract marks the task as completed.
• At this point, a new task can be defined, triggered by the completion of the previous task. This simplifies the management of sequential tasks.

In this way, all activities in the proposal management process are recorded on a blockchain, which provides a secure and at the same time transparent platform without a central authority. This approach helps resolve organizational challenges in proposal management by eliminating trust-related issues and providing a well-defined sequence of activities with the involvement and support of all parties.
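The contract itself is written in Solidity and is not listed in the paper; the following Python sketch only mirrors the token-weighted decision-gate rules described above (deadline, quorum, and vote margin). All names and numeric values are our own illustrative assumptions, not the deployed contract.

```python
# Minimal Python sketch of the DAO's token-weighted decision-gate logic.
# The actual back end is a Solidity contract; this model only mirrors its
# rules: voting power proportional to token holdings, a quorum, and a margin.
import time

class DecisionGate:
    def __init__(self, description, deadline, quorum, margin):
        self.description = description
        self.deadline = deadline          # Unix timestamp after which voting closes
        self.quorum = quorum              # minimum number of members that must vote
        self.margin = margin              # required (for - against) token margin
        self.votes = {}                   # member -> (in_favor, token weight)

    def vote(self, member, tokens, in_favor):
        # One recorded vote per member, weighted by native-token balance.
        if time.time() < self.deadline:
            self.votes[member] = (in_favor, tokens)

    def tally(self):
        # Only count after the deadline and only if the quorum was reached.
        if time.time() < self.deadline or len(self.votes) < self.quorum:
            return None                   # undecided
        for_w = sum(w for ok, w in self.votes.values() if ok)
        against_w = sum(w for ok, w in self.votes.values() if not ok)
        return (for_w - against_w) >= self.margin

# Example: a go/no-go gate that, once passed, would trigger the
# "start proposal development" task coded in the next smart contract.
gate = DecisionGate("go/no-go", deadline=time.time() + 1, quorum=2, margin=10)
gate.vote("marketing", tokens=40, in_favor=True)
gate.vote("sales", tokens=25, in_favor=False)
time.sleep(1.1)                           # wait until the voting window closes
print("gate passed:", gate.tally())      # 40 - 25 = 15 >= 10 -> True
```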
4.2 A Decentralized Application for Document Registration

Document registration is an important component of the proposal management process. Often, different parts of documents are generated by different users, departments, or even companies. Therefore, tracking the ownership, versions, and authenticity of documents is essential for a healthy proposal management workflow. Basically, there are three functions associated with document registration:

• Providing proof of the authenticity of the document
• Recording the submission time and the submitter's identity
• Managing ownership of the document

A document registration dApp has been developed to meet these requirements. The dApp consists of a smart contract and a Web3-enabled web page to interact with the blockchain. The main features of the developed dApp are:

• When a user submits a document to the dApp's web page, the web page calculates the hash of the document. After that, the web page sends the hash, the sender's Ethereum address, and the timestamp to the smart contract for recording on the blockchain. It should be noted here that the document hash is calculated in the user's browser, without the need to send the document out of the user's computer. Only the computed hash of the document is sent to the blockchain, which increases security.
• The user who submitted the document is registered as the owner of the document. The owner has the right to transfer ownership to any other user after submission.
• The owner and submission time of a document can be queried by submitting the document on the dApp web page. Again, the hash of the document is calculated and sent to the smart contract to query the owner of the document without revealing the document to the Internet.
• The authenticity of a document can be verified by submitting it on the dApp web page. The smart contract compares the hash of the submitted document with the previous hash records to see if there is a match.

The developed dApp solves a major problem in proposal management by registering ownership of a document and confirming its authenticity without disclosing the content of the document.
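As a rough illustration of this hash-only registration flow, the sketch below models the registry state and the browser-side hashing step in Python; the real back end is the Solidity contract described above, and the names used here (document_hash, DocumentRegistry, the example address) are hypothetical.

```python
# Rough Python illustration of the hash-only registration flow. Only the
# digest ever leaves the "browser"; the document content never does.
import hashlib
import time

def document_hash(data: bytes) -> str:
    # Stands in for the hash computed in the user's browser.
    return hashlib.sha256(data).hexdigest()

class DocumentRegistry:
    def __init__(self):
        self.records = {}  # digest -> {"owner": address, "timestamp": float}

    def register(self, digest, sender):
        # First submitter becomes the owner; re-registration is rejected.
        if digest not in self.records:
            self.records[digest] = {"owner": sender, "timestamp": time.time()}
            return True
        return False

    def transfer(self, digest, owner, new_owner):
        # Only the current owner may hand ownership to another user.
        rec = self.records.get(digest)
        if rec and rec["owner"] == owner:
            rec["owner"] = new_owner

    def verify(self, data: bytes):
        # Authenticity check: does any prior record match this content?
        return self.records.get(document_hash(data))

registry = DocumentRegistry()
proposal = b"technical appendix v1"
registry.register(document_hash(proposal), sender="0xA1...")  # hypothetical address
print(registry.verify(proposal))      # matching record -> owner and timestamp
print(registry.verify(b"tampered"))   # no record -> None
```

Because only the digest is stored, any single-bit change to the document produces a different hash, so the verify step simultaneously proves authenticity and preserves confidentiality.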
5 Conclusion

Blockchain technology provides a wide range of benefits for the proposal management process. Transparency, traceability, embedded trust, streamlined processes, decentralization, security, and settlement are some of these benefits. Blockchain technology offers powerful tools, such as DAOs and dApps, that can be used to achieve these benefits. In this study, a DAO is designed to coordinate the tasks in the proposal management process. In addition, a dApp is developed to manage document registration activities. Both the DAO and the dApp are deployed on the Polygon blockchain. These two use cases illustrate the possible contributions of using blockchain technology in proposal management.
References

1. Kalin, C.S.: The Ultimate Bid and Proposal Compendium. CSK Management, Herrliberg (2019)
2. Blakney, B., Divine, C.: APMP Foundation Study Guide. APMP (2016)
3. Lantz, L., Cawrey, D.: Mastering Blockchain. O'Reilly Media, Sebastopol (2020)
4. Tapscott, D., Tapscott, A.: Blockchain Revolution. Penguin, New York (2016)
5. Gao, W., Hatcher, W.G., Yu, W.: A survey of blockchain: techniques, applications and challenges. In: International Conference on Computer Communication and Networks (ICCCN). IEEE (2018)
6. Kaushik, A., Choudhary, A., Ektare, C., Thomas, D., Akram, S.: Blockchain - literature survey. In: International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT). IEEE (2017)
7. Tasatanattakool, P., Techapanupreeda, C.: Blockchain: challenges and applications. In: International Conference on Information Networking (ICOIN). IEEE (2018)
8. Abbas, E., Sung-Bong, J.: A survey of blockchain and its applications. In: International Conference on Artificial Intelligence in Information and Communication (ICAIIC). IEEE (2019)
9. Szabo, N.: The idea of smart contracts. Nick Szabo's Papers and Concise Tutorials, vol. 6 (1997)
10. Swan, M.: Blockchain: Blueprint for a New Economy. O'Reilly Media, Sebastopol (2015)
11. Antonopoulos, A.M., Wood, G.: Mastering Ethereum. O'Reilly Media, Sebastopol (2019)
12. Wood, G.: Ethereum: a secure decentralised generalised transaction ledger. Ethereum Project Yellow Paper (2014)
13. Williams, J., Lownie, B.: Proposal Essentials. Strategic Proposals, Great Britain (2013)
14. Shermin, V.: Disrupting governance with blockchains and smart contracts. Strategic Change, Wiley (2017)
15. Ropsten Blockchain. http://ropsten.etherscan.io. Accessed 1 May 2022
16. Polygon Blockchain. http://polygon.technology. Accessed 1 May 2022
17. Solidity. http://soliditylang.org. Accessed 1 May 2022
18. Remix IDE. http://remix.ethereum.org. Accessed 1 May 2022
19. Truffle Suite. http://trufflesuite.com. Accessed 1 May 2022
20. Ganache Blockchain. http://trufflesuite.com/ganache. Accessed 1 May 2022
21. Metamask. http://metamask.io. Accessed 1 May 2022
22. JSON RPC API. http://ethereum.org/en/developers/docs/apis/json-rpc. Accessed 1 May 2022
23. Web3 Javascript Library. http://ethereum.org/en/developers/docs/apis/javascript/. Accessed 1 May 2022
24. Ethereum Blockchain. http://ethereum.org. Accessed 1 May 2022
25. Binance Smart Chain. http://www.binance.org/smartChain. Accessed 1 May 2022
26. Avalanche Blockchain. http://www.avalanche.network. Accessed 1 May 2022
27. Tomo Blockchain. http://tomochain.com. Accessed 1 May 2022
28. Tron Blockchain. http://tron.network. Accessed 1 May 2022
Machine and Deep Learning
One-Shot Federated Learning-based Model-Free Reinforcement Learning

Gaith Rjoub1, Jamal Bentahar1(B), Omar Abdel Wahab2, and Nagat Drawel1

1 Concordia Institute for Information Systems Engineering, Concordia University, Montreal, Canada
{g_rjoub,n_drawe}@encs.concordia.ca, [email protected]
2 Department of Computer Science and Engineering, Université du Québec en Outaouais, Quebec, Canada
[email protected]
Abstract. The Federated Learning (FL) paradigm is emerging as a way to train machine learning (ML) models in distributed systems. A large population of interconnected devices (i.e., Internet of Things (IoT) devices) acting as local learners optimize the model parameters collectively (e.g., neural networks' weights), rather than sharing and disclosing the training data set with the server. FL approaches assume each participant has enough training data for the tasks of interest. Realistically, data collected by IoT devices may be insufficient and often unlabeled. In particular, each IoT device may only contain one or a few samples of every relevant data category, and may not have the time or interest to label them. This severely limits FL's practicality and usability in realistic applications. In this paper, we propose a One-Shot Federated Learning (OSFL) framework considering an FL scenario wherein the local training is carried out on IoT devices and the global aggregation is done at the level of an edge server. Moreover, we combine model-free reinforcement learning with OSFL to design a more intelligent IoT device that infers whether to label a sample automatically or request the true label for the one-shot learning set-up. We validate our system on the SODA10M dataset. Experiments show that our solution achieves better performance than the DQN and RS benchmark approaches.

Keywords: Federated learning · Internet of things · One-shot learning · Model-free reinforcement learning

1 Introduction
Federated Learning (FL) has gained increasing attention recently due to its ability to improve the quality of machine learning (ML) and Artificial Intelligence (AI) predictions by building models in a distributed manner with the aim of leveraging the strengths of each individual model [29]. Indeed, FL follows the meta-learning approach, which can be defined as learning to learn from other learners. It improves the generalization ability of models and avoids overfitting.
In FL, each model is trained on a subset of the training data over nodes (i.e., Internet of Things (IoT) devices). Then, an aggregated model is produced from all the models on the centralized server (i.e., an edge server). Finally, the model predictions are outputted to the client applications without going through the centralized server. This drastically reduces the amount of data that needs to be transferred to the centralized server, improving the overall efficiency and performance of the system. However, FL approaches currently assume that each client has adequate training data to perform the relevant tasks. Even though some client nodes (e.g., IoT devices) collect large amounts of data, these data are usually inadequate and unlabeled. This severely limits the applicability and scalability of FL in many real-world applications [13]. One-shot learning is a recent method developed to address the problem of insufficient training data in the field of artificial intelligence [6,26]. One-shot learning is a machine learning technique that uses a limited amount of labeled data to train a device and then uses the labeled data to train all the other devices in the system. Technically speaking, one-shot learning aims to learn information about object categories from only a handful of labelled examples per category. In this paper, we propose a One-Shot Federated Learning (OSFL) paradigm, which enables one-shot classification models to be efficiently trained from multiple sources. First, OSFL performs local updates with one-shot tasks based on the local data on the IoT devices. Then, it sends the local models to a central server for model aggregation and coordination. While this strategy can be quite effective, it disregards the fact that there are many irrelevant examples which could be classified as belonging to the object category in question. After training the model, it can be used to predict which objects will be misclassified. To overcome this problem, we utilize a model-free reinforcement learning technique. Learning from experience is a valuable capability that can be applied to all aspects of IoT. Despite this, interacting with the world can be an expensive undertaking due to factors such as power consumption and human supervision. Therefore, any IoT learning system has as a priority minimizing the amount of interaction with the outside world required to learn a task. Several model-free reinforcement learning systems have already been demonstrated to be effective in solving complex problems modeled as Markov Decision Processes (MDPs) in challenging domains [7,17,22,23]. In this work, we integrate model-free reinforcement learning into OSFL. The objective is to design a more intelligent IoT device system that is able to (1) perform object detection and reduce the amount of data transferred during training by using object detection models trained on different local datasets; and (2) select a new set of IoT devices for the one-shot learning set-up.
1.1 Contributions
The main contributions of the paper can be summarized as follows:
– We propose a novel One-Shot Federated Learning (OSFL) framework that enables one-shot classification models to be efficiently trained from multiple sources.
– In order to determine an optimal strategy, we integrate a model-free deep reinforcement learning technique into our solution. This model-free technique requires no prior knowledge of the system model.
– We study the performance of the proposed solution experimentally on the SODA10M dataset (https://soda-2d.github.io/). The experimental results suggest that our proposed method achieves better performance compared to the Deep Q-learning (DQL) and Random Scheduling (RS) benchmark models for object detection.
1.2 Organization
The rest of the paper is organized as follows. Section 2 describes the existing literature on one-shot FL models and FL scheduling techniques. Section 3 provides a detailed description of the proposed methodology. Section 4 provides the details of the experimental setup, the conducted experiments, and the results. Finally, Sect. 5 provides some concluding remarks and future research directions.
2 Related Work
In this section, we first survey the main approaches that employ FL with one-shot learning. Then, we study the approaches that address FL scheduling over IoT systems and give an overview of the Deep Learning (DL) approaches that are used for FL model scheduling over IoT systems.
2.1 One-Shot Federated Learning
Fusion Learning is an approach for distributed learning presented in [9] that is able to achieve accuracy similar to a federated setup using only one communication round. In addition to the client's local model parameters, the distribution parameters of the client's data are sent to the server. From these distribution parameters, the server regenerates the data and fuses the data from multiple devices. A global model is then built from this combined dataset and transmitted back to the individual devices. Research in distributed learning could take a new direction with this approach, and it has its own set of challenges that need to be addressed in more detail. In [10], the authors propose FedKT, a one-shot federated learning algorithm for cross-silo settings, motivated by the rigid multi-round training of current federated learning algorithms. According to the authors, FedKT can be applied to any classification model and can achieve differential privacy guarantees by utilizing the knowledge transfer technique. In [25], the authors propose a one-shot FL framework, called XorMixFL, based on
a privacy-preserving XOR-based mixup data augmentation technique. It consists of collecting other devices' encoded data samples and decoding them using only the data samples of each device. The decoding provides synthetic-but-realistic samples until an IID dataset is induced, which is then used for model training. In order to maintain data privacy, both the encoding and decoding procedures use bit-wise XOR operations that distort the raw samples.
2.2 FL Scheduling Models
To maximize the model accuracy within a given total training time budget for latency-constrained wireless FL, the researchers in [24] propose a joint device scheduling and resource allocation policy. They derive a lower bound on the reciprocal of the training performance loss in terms of the number of training rounds and the number of scheduled devices per round. In order to solve the accuracy maximization problem, the bound is decoupled into two subproblems. In the first step, the optimal bandwidth allocation suggests allocating more bandwidth to devices with less reliable channels or fewer computation capabilities. A greedy device scheduling algorithm is then presented. In each step, the algorithm selects the device consuming the least updating time under the optimal bandwidth allocation, until the lower bound starts to rise, indicating that scheduling more devices would reduce the model accuracy. Without knowing the wireless channel state information and statistical client characteristics, the authors in [31] present a multi-armed bandit-based framework for online client scheduling (CS) in FL to reduce training latency. First, they provide a CS algorithm based on the upper confidence bound policy (CS-UCB) under ideal circumstances in which the local datasets of the clients are balanced and independent and identically distributed (i.i.d.). The proposed CS-UCB method is given with an upper bound on the expected performance regret, which shows that the regret grows logarithmically over communication rounds. Then, to handle non-ideal cases with non-i.i.d. and imbalanced local datasets, as well as variable client availability, they suggest a CS algorithm based on the UCB policy and a virtual queue technique (CS-UCB-Q). Under certain conditions, an upper bound is also derived, indicating that the expected performance regret of the proposed CS-UCB-Q algorithm can exhibit sub-linear growth over communication rounds. Furthermore, the convergence performance of FL training is investigated. In [1], where federated learning is studied at the wireless edge, a remote parameter server (PS) aids power-limited devices in training a joint model by collaboratively using local datasets. Several devices are assumed to be connected to the PS through a shared wireless channel with limited bandwidth. At each iteration of FL, each participating device must compress its model update to accommodate transmission to the PS over orthogonal channel resources within the link capacity. The policies determine which subset of the devices transmits at each round and how resources are allocated according to channel conditions and local model updates. Wireless FL algorithms are then combined with device scheduling where devices can only transmit messages to a limited
number of recipients. As the authors argue in [15], trust should be a critical component of the decision-making process. Because of this, they design a mechanism for establishing trust between edge servers and IoT devices. This trust mechanism identifies those IoT devices that over- or under-utilize their resources during local training. In order to make appropriate scheduling decisions for the IoT devices, they design a DDQN-based scheduling algorithm that takes into account trust scores and energy levels. In [32], the researchers enable asynchronous distributed computing through an asynchronous federated learning (AFL) framework that allows model building within a multi-UAV network without transmitting raw sensitive data to UAV servers. The AFL framework also includes device selection strategies in order to prevent low-quality devices from adversely affecting learning efficiency and accuracy. To enhance the speed and accuracy of federated convergence, the authors propose an asynchronous advantage actor-critic (A3C) based joint device placement, UAV placement, and resource allocation algorithm. Existing object identification techniques for IoT and AVs are still far from flawless. The primary deficiency of these algorithms is that they cannot deal with the massive amount of unlabeled data faced in the real world and can only operate with limited data [27]. In this paper, we propose a multifaceted strategy that blends state-of-the-art techniques, including federated learning, one-shot learning, and DQL, to enhance object detection across IoT devices. To the best of our knowledge, no current strategy has yet examined the integration and connectivity of these technologies for object detection in IoT contexts.
3 Problem Formulation

3.1 Deep Q-Network
Deep Q-Network (DQN) was an important step in the development of artificial intelligence. It allowed computers to learn and adapt like the human mind. With DQN, the system learns to predict the next move in an action space by observing what happened in the past. The system is hence capable of learning from millions of examples without any human intervention. In the following, we present a brief introduction to both Q-Learning (QL) and the DQN algorithm.

Q-Learning: QL is a model-free reinforcement learning method that learns to maximize some fitness function in a given environment. Because it is model-free, QL requires very little supervision, which in turn makes it well-suited to situations wherein human supervision is not available or practicable, such as in autonomous vehicles, robotics, and manufacturing. QL has been successfully applied to a variety of challenging domains, including text classification, image classification, mission planning, and more. The goal of the QL agent is to select actions in a fashion that maximizes the cumulative future reward $\sum_{t=1}^{T} \gamma^{t-1} r_t$, where $\gamma \in (0, 1]$ is a discounting factor of future rewards. Specifically, the Q-function is defined as per Eq. (1):
$$Q^{\pi}(s_t, a) = \mathbb{E}_{\pi}\left[\sum_{t=1}^{\infty} \gamma^{t-1} r_t \,\middle|\, s_t = s,\ a_t = a,\ \pi\right] = \mathbb{E}_{s_{t+1}, a}\left[r_t + \gamma Q^{\pi}(s_{t+1}, a) \mid s_t, a_t\right], \quad (1)$$

where r_t is the reward obtained at time step t and π is the behaviour policy mapping from states to probabilities of selecting possible actions.

Deep Q-Network: Although QL methods perform well for small-sized models, their performance deteriorates as the scale of the model increases. Deep Q-Learning (DQN) [12], which combines QL with a class of artificial neural networks known as deep neural networks (DNNs), overcomes this issue by bridging the gap between high-dimensional sensory inputs and actions, producing the first artificial agent capable of learning to excel at a diverse array of challenging tasks. In contrast to QL, DQL uses a DNN to seek the optimal action-value function Q*(s_t, a), which is the maximum expected cumulative reward obtainable from state s and is given in Eq. (2):

$$Q^{*}(s_t, a) = \mathbb{E}_{s_{t+1}}\left[r_t + \gamma \max_{a'} Q^{*}(s_{t+1}, a') \,\middle|\, s_t, a\right]. \quad (2)$$
The DNN's input is one of the model owner's states, and the output comprises Q-values Q(s, a; θ) for all potential actions, where θ denotes the DNN's weights. In order to obtain approximate values of Q*(s, a), the DNN is trained on experiences ⟨s, a, r, s′⟩. According to the DQL algorithm, the weights θ of the DQN are updated so as to minimize the loss function defined by:

$$L(\theta) = \mathbb{E}\left[\left(y_{DQN}(t) - Q(s, a; \theta)\right)^2\right], \quad (3)$$

where y_DQN is the target value given by

$$y_{DQN} = r_t + \gamma \max_{a'} Q(s', a'; \theta^{-}), \quad (4)$$

where θ⁻ denotes the weights of the DNN from the previous iteration, and the action a is selected according to the ε-greedy policy [19].
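To make Eqs. (3) and (4) concrete, the following minimal NumPy sketch (our illustration, not the authors' code; the array shapes and names such as q_online and q_next are assumptions) computes the DQN target and the squared loss for a small batch of transitions:

import numpy as np

def dqn_targets(rewards, q_next, gamma=0.99):
    # Eq. (4): y_DQN = r_t + gamma * max_a' Q(s', a'; theta^-)
    # (terminal-state masking is omitted for brevity)
    return rewards + gamma * q_next.max(axis=1)

def dqn_loss(q_online, actions, targets):
    # Eq. (3): mean squared error between the target and Q(s, a; theta)
    q_sa = q_online[np.arange(len(actions)), actions]
    return np.mean((targets - q_sa) ** 2)

# Toy batch: 4 transitions, 3 possible actions.
rng = np.random.default_rng(0)
q_online = rng.random((4, 3))   # Q(s, a; theta) from the online network
q_next = rng.random((4, 3))     # Q(s', a'; theta^-) from the previous iteration
rewards = np.array([1.0, -1.0, 0.0, 1.0])
actions = np.array([0, 2, 1, 0])
print(dqn_loss(q_online, actions, dqn_targets(rewards, q_next)))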
3.2 One-Shot Federated Learning
We are interested in designing a framework that integrates meta-learning procedures into the FL framework. The objective is to enable distributed devices to learn models for one-shot tasks. Therefore, we describe in this section OSFL, the federated one-shot learning model. OSFL strives to find an optimal model that can best perform one-shot tasks when learned from distributed data sources. Initially, let $n_u$ be the number of samples of client u, $n = \sum_u n_u$ be the total
number of samples across the devices, and w be the learning model. The average loss over all data samples is considered the local objective of client u:

$$L_u(w) = \frac{1}{n_u} \sum_{i=1}^{n_u} f(x_i, y_i; w), \quad (5)$$
where f is a loss function that evaluates the prediction of model w on a data sample (x_i, y_i). All clients know the type of loss function f that is used for each task; as an example, f is often selected as the cross-entropy loss applied to the models' probabilistic outputs in classification tasks using DNNs. The global objective is a weighted combination of the local objectives:

$$\min_{w} L(w) = \sum_{u=1}^{U} P_u L_u(w), \quad (6)$$
where $P_u = n_u / n$. Because FL prohibits direct data sharing, optimizing the global objective (Eq. 6) directly would require performing full-batch gradient descent on all data held by each client and aggregating the model after every client update. As a result, clients and the server would need to exchange models frequently and hence use a lot of memory. To address this issue, many algorithms approximate the global objective, such as FedAvg [3,21,30], which optimizes each local objective individually, and FedProx [14,20], which solves local objectives with proximal terms to regularize training. A one-shot federated learning method that incorporates DRL optimization techniques may be an ideal answer to real-world AI-based classification problems because of the vast state space and partially observed nature of one-shot classification under time and system resource limitations. Technically speaking, the samples might come from one-shot classes, i.e., relatively new classes with few or no training instances. The number of classes can change over time t, up to C classes at any given time. In order to classify events correctly, an edge server (ES) assists in the classification process and manually labels samples when requested. Let the action space be A = {c_1, ..., c_N, S, D}, where a(t) = c_i when the system classifies the event under category i automatically without the help of the ES at time t, a(t) = S when sending the sample to the ES, and a(t) = D when delaying the classification. Every time the classifier performs an action c_i, i = 1, ..., N, a reward R_c is earned: in the case of a correct classification, R_c = R_Right = R_R, where R_R > 0 is a positive constant; otherwise, R_c = R_Wrong = R_W, with R_W < 0 being a negative constant. When the system transmits the event sample to the ES at time t in order to request a label through a more thorough examination, action a(t) = S is taken. When the classification decision for the current sample is delayed, action a(t) = D is taken. Intuitively, a good policy would delay the classification of a sample while the ES is still labeling a similar, previously observed event; the algorithm can then judiciously manage system resources by classifying the delayed sample with high reliability once the label for the similar sample is received.
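As a concrete illustration of the weighted aggregation in Eq. (6) with P_u = n_u/n, here is a minimal FedAvg-style sketch (our illustration under stated assumptions, not the authors' implementation; models are represented as flat NumPy parameter vectors):

import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    # Weighted average of client models with P_u = n_u / n (Eq. 6).
    n = sum(client_sizes)
    agg = np.zeros_like(client_weights[0])
    for w_u, n_u in zip(client_weights, client_sizes):
        agg += (n_u / n) * w_u
    return agg

# Three clients with different dataset sizes.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 300, 600]
print(fedavg_aggregate(clients, sizes))  # weighted toward larger clients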
Let η_S(t) ∈ {1, ..., X} denote the number of ESs currently working on labeling samples, ρ(t) the number of pending samples, and δ_D(t) ∈ {1, ..., Z} the number of delayed samples at time t. Following these definitions, the ESs' load ℓ_S(t) and the delayed-sample load ℓ_D(t) can be defined as follows:

$$\ell_S(t) = \frac{\eta_S(t) + \rho(t)}{X}, \quad (7)$$

$$\ell_D(t) = \frac{\delta_D(t)}{Z}, \quad (8)$$

where 0 ≤ ℓ_D(t) ≤ 1, and 0 ≤ ℓ_S(t) < 1 when there is at least one available ES, while ℓ_S(t) ≥ 1 when all ESs are active. As described in Sect. 3.1, the reward is determined by a complex, partially observed environment with temporal correlations among actions. The accumulated discounted reward over the classification process is

$$R = \sum_{t=1}^{T} \gamma^{t-1} r_t, \quad (9)$$

where T is the time horizon of the entire classification process over the total examined samples. The objective is to find a policy θ that maximizes the expected accumulated discounted reward:

$$\max_{\theta} \Xi[R(\theta)], \quad (10)$$

where Ξ[R(θ)] denotes the expected accumulated discounted reward when the system follows strategy θ.
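The following minimal sketch (our illustration; constants such as X, Z and the reward values are assumptions) pulls Eqs. (7)-(10) together by computing the loads and the discounted return of a toy action trace:

import numpy as np

X, Z = 4, 10          # number of ESs and max delayed samples (assumed)
R_R, R_W = 1.0, -1.0  # rewards for right/wrong classification (assumed values)
gamma = 0.95

def loads(eta_s, rho, delta_d):
    # Eqs. (7)-(8): ES load and delayed-sample load.
    return (eta_s + rho) / X, delta_d / Z

def discounted_return(rewards, gamma=gamma):
    # Eq. (9): R = sum_t gamma^(t-1) r_t.
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A toy trace: two correct classifications, one wrong, one correct.
trace = [R_R, R_R, R_W, R_R]
print(loads(eta_s=2, rho=3, delta_d=4))   # (1.25, 0.4): all ESs are busy
print(discounted_return(trace))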
4 Implementation and Experiments

4.1 Experimental Setup
To carry out our experiments, we capitalize on the large-scale 2D Self/semi-supervised Object Detection Dataset for Autonomous driving (SODA10M) [8]. The dataset consists of 10 million unlabeled images and 20,000 labeled images covering 6 object categories. The images are taken at one frame every 10 s over the course of 27,833 driving hours, under various weather conditions, times, and location scenarios in 32 distinct cities to increase diversity. We train a Convolutional Neural Network (CNN) in a federated fashion. To perform the machine learning computations on decentralized data, we use the TensorFlow Federated (TFF) framework. TFF facilitates open experimentation and research with a variety of collaborative learning scenarios over heterogeneous devices with different resource characteristics. Our data was partitioned across 1000 IoT devices, which are Autonomous Vehicles (AVs) in our case. To create the test set, 25% of each device's own data was selected; the rest was used for the training set. We compare the performance of our solution with two existing approaches: the traditional Deep Q Network (DQN) and Random Scheduling-based Federated Learning (FL-RS).
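As a rough sketch of how such a partition can be produced (our illustration; the actual TFF pipeline and file layout are not specified here), the following splits sample indices across 1000 clients and holds out 25% per client for testing:

import numpy as np

NUM_CLIENTS, TEST_FRACTION = 1000, 0.25
rng = np.random.default_rng(42)

def partition(num_samples, num_clients=NUM_CLIENTS):
    # Randomly shard sample indices across clients (IoT devices / AVs).
    idx = rng.permutation(num_samples)
    return np.array_split(idx, num_clients)

def train_test_split(shard, test_fraction=TEST_FRACTION):
    # Hold out 25% of each device's own data for the test set.
    cut = int(len(shard) * (1 - test_fraction))
    return shard[:cut], shard[cut:]

shards = partition(num_samples=20000)   # e.g., the labeled SODA10M images
train0, test0 = train_test_split(shards[0])
print(len(shards), len(train0), len(test0))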
4.2 Experimental Results
In Fig. 1, we study the accuracy of our solution against the classic FL-RS and the FL-DQN approach that does not include our one-shot algorithm. In order to examine the scalability of the different studied solutions, we ran the experiment over 1000 iterations. The first observation that can be drawn from the figure is that our OSFL solution achieves higher accuracy compared to the approaches that do not include the one-shot learning component in their design, i.e., FL-DQN and FL-RS. In particular, the accuracy level obtained by our solution is 96%, whereas the accuracy levels obtained by the FL-DQN and FL-RS approaches are 82% and 73%, respectively. The second observation that can be drawn from Fig. 1 is that our solution converges faster to a stable accuracy level compared to the FL-DQN and FL-RS approaches. The improvements brought by the one-shot-based approach mainly stem from the ability to use the results of one shot to generate an ensemble of many shots, and from the use of labeling for improved efficiency.
Fig. 1. Comparison of accuracy of final global model
In Fig. 2, we measure the learning time entailed by our solution, compared to the FL-DQN and FL-RS approaches on the SODA10M dataset, while varying the number of AVs from 10 to 100. The main conclusion that can be drawn from this figure is that augmenting the number of AVs leads to a modest increase in the learning time under the different studied solutions. The figure also reveals that our proposed model achieves the lowest learning time. This is justified by the
fact that our one-shot learning component classifies objects from one, or only a few, samples, thus decreasing the time needed to learn from the data.
Fig. 2. Learning time versus the number of AVs
In Fig. 3, we measure the execution time of the different studied approaches while varying the number of AVs from 10 to 1000 and the percentage of unlabeled data from 25% to 75%. The main observation that can be drawn from this simulation is that increasing the number of AVs leads to a modest increase in the execution time of our solution (i.e., OSFL) compared to the other models. This is because our solution employs deep Q-learning to select the AVs that achieve the best combinations in terms of label availability. On the other hand, increasing the percentage of unlabeled data results in a slight increase in the execution time of our model compared to the large increase in all the other studied approaches. In particular, the execution times obtained by our model, the FL-DQN, and the FL-RS approaches are, respectively, 76-15498 s, 88-19017 s, and 89-22981 s with 25% of unlabeled data; 144-19371 s, 176-26218 s, and 191-30176 s with 50% of unlabeled data; and 366-33659 s, 423-41847 s, and 475-49737 s with 75% of unlabeled data. We compare in Fig. 4 the test accuracy of our solution against the FL-DQN and FL-RS approaches while varying the number of AVs from 100 to 1000 and the percentage of unlabeled data from 25% to 75%. The test accuracy quantifies the accuracy obtained by each AV after using the global model trained in a federated fashion to make predictions on its own data.
Fig. 3. Execution time versus the number of AVs
Fig. 4. Comparison of prediction accuracy of local model
We observe from Fig. 4 that the test accuracy obtained by our model is much higher than those obtained by the FL-DQN and FL-RS approaches. On the other hand, increasing the percentage of unlabeled data results in a slight decrease in the accuracy of our model compared to the large decrease for the FL-DQN and FL-RS approaches. In particular, the average test accuracies obtained by our model, the FL-DQN, and the FL-RS approaches are, respectively, 96%, 91%, and 72% with 25% of unlabeled data; 94%, 84%, and 61% with 50% of unlabeled data; and 93%, 76%, and 58% with 75% of unlabeled data. This means that our solution enables the AVs to better learn and predict objects with unlabeled data, thus improving the accuracy and efficiency of the whole autonomous driving process.
5 Conclusion
In this work, we designed OSFL, a novel framework that considers a federated learning scenario wherein the local training is carried out on IoT devices and the global aggregation is done at the level of an edge server. We integrate model-free reinforcement learning into OSFL to design an intelligent decision-making component that enables the IoT devices to infer whether to label a sample automatically or to request the true label for the one-shot learning component. Simulations conducted on the SODA10M dataset show that our solution, OSFL, outperforms two existing benchmark federated learning scheduling approaches under different scenarios and settings in terms of global model accuracy, learning time, execution time, and prediction accuracy of the local model. In the future, we will extend this work to real-world scenarios employing edge cloud computing [2,16] and examine federated transfer learning scheduling by combining reinforcement learning and trust models [4,5,11,18,28] to further reduce the cost and training time by avoiding unnecessary computations on untrusted IoT devices. Additionally, we will explore the effectiveness of zero-shot and few-shot learning as compared to one-shot learning for our solution.
References
1. Amiri, M.M., Gündüz, D., Kulkarni, S.R., Poor, H.V.: Convergence of update aware device scheduling for federated learning at the wireless edge. IEEE Trans. Wirel. Commun. 20(6), 3643-3658 (2021)
2. Bataineh, A.S., Bentahar, J., Abdel Wahab, O., Mizouni, R., Rjoub, G.: A game-based secure trading of big data and IoT services: blockchain as a two-sided market. In: Kafeza, E., Benatallah, B., Martinelli, F., Hacid, H., Bouguettaya, A., Motahari, H. (eds.) ICSOC 2020. LNCS, vol. 12571, pp. 85-100. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65310-1_7
3. Chen, H., Huang, S., Zhang, D., Xiao, M., Skoglund, M., Poor, H.V.: Federated learning over wireless IoT networks with optimized communication and resources. IEEE Internet Things J. (2022). https://doi.org/10.1109/JIOT.2022.3151193
4. Drawel, N., Bentahar, J., Laarej, A., Rjoub, G.: Formalizing group and propagated trust in multi-agent systems. In: Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, pp. 60-66 (2021)
5. Drawel, N., Bentahar, J., Laarej, A., Rjoub, G.: Formal verification of group and propagated trust in multi-agent systems. Auton. Agent. Multi-Agent Syst. 36(1), 1-31 (2022)
6. Fei-Fei, L., Fergus, R., Perona, P.: One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 594-611 (2006)
7. Gronauer, S., Diepold, K.: Multi-agent deep reinforcement learning: a survey. Artif. Intell. Rev. 55(2), 895-943 (2022)
8. Han, J., et al.: SODA10M: a large-scale 2D self/semi-supervised object detection dataset for autonomous driving (2021)
9. Kasturi, A., Ellore, A.R., Hota, C.: Fusion learning: a one shot federated learning. In: Krzhizhanovskaya, V.V., et al. (eds.) ICCS 2020. LNCS, vol. 12139, pp. 424-436. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-50420-5_31
10. Li, Q., He, B., Song, D.: Practical one-shot federated learning for cross-silo setting. arXiv preprint arXiv:2010.01017 (2020)
11. Mehdi, M., Bouguila, N., Bentahar, J.: Probabilistic approach for QoS-aware recommender system for trustworthy web service selection. Appl. Intell. 41(2), 503-524 (2014). https://doi.org/10.1007/s10489-014-0537-x
12. Mnih, V., Kavukcuoglu, K., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529-533 (2015)
13. Nguyen, D.C., Ding, M., Pathirana, P.N., Seneviratne, A., Li, J., Poor, H.V.: Federated learning for internet of things: a comprehensive survey. IEEE Commun. Surv. Tutorials 23(3), 1622-1658 (2021)
14. Rjoub, G.: Artificial Intelligence Models for Scheduling Big Data Services on the Cloud. Ph.D. thesis, Concordia University, September 2021. https://spectrum.library.concordia.ca/id/eprint/989143/
15. Rjoub, G., Abdel Wahab, O., Bentahar, J., Bataineh, A.: A trust and energy-aware double deep reinforcement learning scheduling strategy for federated learning on IoT devices. In: Kafeza, E., Benatallah, B., Martinelli, F., Hacid, H., Bouguettaya, A., Motahari, H. (eds.) ICSOC 2020. LNCS, vol. 12571, pp. 319-333. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65310-1_23
16. Rjoub, G., Bentahar, J.: Cloud task scheduling based on swarm intelligence and machine learning. In: 2017 IEEE 5th International Conference on Future Internet of Things and Cloud (FiCloud), pp. 272-279. IEEE (2017)
17. Rjoub, G., Bentahar, J., Abdel Wahab, O., Saleh Bataineh, A.: Deep and reinforcement learning for automated task scheduling in large-scale cloud computing systems. Concurrency Comput. Pract. Experience 33(23), e5919 (2021)
18. Rjoub, G., Bentahar, J., Wahab, O.A.: Bigtrustscheduling: trust-aware big data task scheduling approach in cloud computing environments. Future Gener. Comput. Syst. 110, 1079-1097 (2020)
19. Rjoub, G., Bentahar, J., Wahab, O.A., Bataineh, A.: Deep smart scheduling: a deep learning approach for automated big data scheduling over the cloud. In: 2019 7th International Conference on Future Internet of Things and Cloud (FiCloud), pp. 189-196. IEEE (2019)
20. Rjoub, G., Wahab, O.A., Bentahar, J., Bataineh, A.: Trust-driven reinforcement selection strategy for federated learning on IoT devices. Computing 1-23 (2022). https://doi.org/10.1007/s00607-022-01078-1
21. Rjoub, G., Wahab, O.A., Bentahar, J., Bataineh, A.S.: Improving autonomous vehicles safety in snow weather using federated YOLO CNN learning. In: Bentahar, J., Awan, I., Younas, M., Grønli, T.-M. (eds.) MobiWIS 2021. LNCS, vol. 12814, pp. 121-134. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-83164-6_10
22. Sami, H., Bentahar, J., Mourad, A., Otrok, H., Damiani, E.: Graph convolutional recurrent networks for reward shaping in reinforcement learning. Inf. Sci. 608, 63-80 (2022)
23. Sami, H., Otrok, H., Bentahar, J., Mourad, A.: AI-based resource provisioning of IoE services in 6G: a deep reinforcement learning approach. IEEE Trans. Netw. Serv. Manag. 18(3), 3527-3540 (2021)
24. Shi, W., Zhou, S., Niu, Z., Jiang, M., Geng, L.: Joint device scheduling and resource allocation for latency constrained wireless federated learning. IEEE Trans. Wirel. Commun. 20(1), 453-467 (2020)
25. Shin, M., Hwang, C., Kim, J., Park, J., Bennis, M., Kim, S.L.: XOR mixup: privacy-preserving data augmentation for one-shot federated learning. arXiv preprint arXiv:2006.05148 (2020)
26. Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. In: Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, 5-10 December 2016, Barcelona, Spain, pp. 3630-3638 (2016)
27. Wahab, O.A.: Intrusion detection in the IoT under data and concept drifts: online deep learning approach. IEEE Internet Things J. (2022). https://doi.org/10.1109/JIOT.2022.3167005
28. Wahab, O.A., Cohen, R., Bentahar, J., Otrok, H., Mourad, A., Rjoub, G.: An endorsement-based trust bootstrapping approach for newcomer cloud services. Inf. Sci. 527, 159-175 (2020)
29. Wahab, O.A., Mourad, A., Otrok, H., Taleb, T.: Federated machine learning: survey, multi-level classification, desirable criteria and future directions in communication and networking systems. IEEE Commun. Surv. Tutorials 23(2), 1342-1397 (2021)
30. Wahab, O.A., Rjoub, G., Bentahar, J., Cohen, R.: Federated against the cold: a trust-based federated learning approach to counter the cold start problem in recommendation systems. Inf. Sci. 601, 189-206 (2022)
31. Xia, W., Quek, T.Q., Guo, K., Wen, W., Yang, H.H., Zhu, H.: Multi-armed bandit-based client scheduling for federated learning. IEEE Trans. Wirel. Commun. 19(11), 7108-7123 (2020)
32. Yang, H., Zhao, J., Xiong, Z., Lam, K.Y., Sun, S., Xiao, L.: Privacy-preserving federated learning for UAV-enabled networks: learning-based joint scheduling and resource management. IEEE J. Sel. Areas Commun. 39(10), 3144-3159 (2021)
A New Approach for Selecting Features in Cancer Classification Using Grey Wolf Optimizer

Halah AlMazrua and Hala AlShamlan(B)

Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
[email protected], [email protected]
Abstract. The need to detect cancer in early stages is essential for cancer treatment. One of the best ways to classify cancer is with feature (gene) selection to choose the genes that hold the most promise. This step contributes significantly to the classification performance of microarrays and solves the issue of high dimensionality in microarray data. This paper proposes two novel feature selection approaches, both based on the Grey Wolf Optimizer (GWO), to compare and determine the best classifier, whether k-nearest neighbors (KNN) or support vector machine (SVM), with a leave-one-out cross-validation (LOOCV) classifier to classify high dimensional cancer microarray data and solve the feature selection problem. The experiments were implemented on six public cancer microarray data sets to show the remarkable results of the proposed methods. In addition, we compared the proposed algorithms with other recently published algorithms to demonstrate the proposed algorithms' effectiveness.

Keywords: Bio-inspired algorithms · Bioinformatics · Cancer classification · Evolutionary algorithm · Feature selection · Gene expression · Grey Wolf Optimizer · k-Nearest neighbor · Support vector machine
1 Introduction
In order to achieve successful treatment for cancer and save lives, early detection is critical. Therefore, the need for cancer classification is tremendous. As machine learning and biological sciences have grown rapidly, gene microarray technology has produced large data sets of gene profiles that are widely used for cancer diagnosis. The use of microarray data sets has gained popularity in research for cancer classification because it has shown great results, but microarray data have a high dimensionality problem. The high dimensions of gene expression data pose a significant disadvantage due to their greater number of genes, which can outnumber the samples. Multiple approaches and techniques for cancer classification
have been developed for identifying informative genes for cancer prediction and diagnosis by utilizing microarray gene expression data. Significant developments in recent years have made it possible to identify cancer from among thousands of simultaneously measured noisy genes in a single experiment. Gene expression profiling microarrays have proven useful for the diagnosis and classification of cancer. By removing redundant and irrelevant features, feature selection techniques can boost classification accuracy. Feature selection in cancer classification is mainly done using evolutionary and nature-inspired optimization techniques, which are known to improve the accuracy and efficiency of classification. In this paper, we present two new feature selection methods, Grey Wolf Optimizer (GWO) with support vector machine (SVM) and GWO with k-nearest neighbor (KNN), both using leave-one-out cross-validation (LOOCV), to select relevant genes (features) for cancer classification. We also compared the two methods to determine which performs best, achieving better accuracy while selecting fewer genes. The remainder of this paper is organized as follows: Sect. 2 explains the background of the GWO algorithm, the inspiration for it, the social behavior of gray wolves, and the mathematical model of GWO. In Sect. 3, we describe the proposed framework, and Sect. 4 presents the experimental setup and the results obtained. Section 5 concludes the paper with recommendations for future researchers.
2 Background

2.1 Grey Wolf Optimizer (GWO)
The inspiration for the GWO method is described first in this section. Following that is a discussion of the social behavior of gray wolves. Finally, the mathematical model is explained.

2.2 Inspiration
The GWO algorithm is a swarm intelligence optimization wrapper feature-selection algorithm that imitates gray wolves' coordinated behavior while finding prey and hunting in the wild. The GWO algorithm was proposed by [1] in 2014; it relies on a purely nature-inspired optimization technique to design the best solution for a given problem. As apex predators, gray wolves are at the top of the food chain. Gray wolves usually live in packs averaging 5-12 members. What is particularly interesting about gray wolves is that there is a very strict social hierarchy among them, as shown in Fig. 1. GWO is based on the idea that there are four types of wolves in the wild, categorized according to the wolf's role in hunting: Alpha (α), Beta (β), Omega (ω), and Delta (δ). GWO facilitates the selection of features because it provides effective means of increasing accuracy. As an added benefit, unlike other methods, it does
not require any threshold parameter to remove useless features. It is worth noting that, although the complexity of selecting the best subset of features increases exponentially as more variables are added, wrapper feature selection with GWO remains less time-consuming than other methods, and GWO is able to avoid the local minima that multiple other bio-inspired methods face.

2.3 Social Behavior of Gray Wolves
As stated previously, gray wolves have a well-defined social hierarchy. Alpha (α) is the name given to the wolves who lead the pack. The alpha is primarily in charge of hunting, sleeping arrangements, and waking times, among other things. The pack is governed by the alpha's judgments, and the alpha wolf is also known as the dominant wolf because the pack must obey its commands. Surprisingly, the alpha is not always the strongest member of the pack, but rather the best at managing it; this demonstrates that a pack's organization and discipline are far more crucial than its size. Beta (β) is the second rank in the gray wolf hierarchy. In addition to helping the alpha make decisions, betas assist with other pack duties, and they are the most likely candidate to become the next alpha in the event of the death or aging of the current one. The beta also acts as an advisor to the alpha and as pack disciplinarian, enforcing the pack's rules. Omega (ω) is the lowest rank in the gray wolf hierarchy. Often, the omega serves as a scapegoat: omega wolves have to submit to everyone else and are the last allowed to eat. However, if the omega is lost, the whole pack faces internal conflict and problems. Omegas can also act as babysitters in some instances. Delta (δ) wolves are those that are not alpha, beta, or omega wolves. While delta wolves are subordinate to alphas and betas, they dominate omegas. Delta wolves are responsible for keeping watch over the boundaries of the territory and providing warnings of any danger to the pack; they also provide safety and protection to the pack in a number of ways. In addition to a social hierarchy, gray wolves also exhibit group hunting behavior. According to [2], gray wolf hunting involves each of the following behaviors:

- Continually tracking and chasing the prey, closing in as they approach
- Encircling and harassing the prey until it eventually stops moving
- Attacking the prey

2.4 Mathematical Modeling
When creating a mathematical model of gray wolves' social hierarchy in order to design GWO, Mirjalili et al. choose the fittest solution as the Alpha (α). Consequently, the second and third best solutions are known as Beta (β) and Delta (δ), and all of the remaining candidate solutions are presumed to be Omega (ω). The GWO algorithm uses α, β, and δ to lead the search; these three wolves are followed by the ω wolves.
Fig. 1. Leadership hierarchy of gray wolves.
As already noted, wolf packs prey on other animals through three stages: encircling, hunting, and attacking.

Encircling. Gray wolves encircle prey during the hunt; in GWO, Eqs. (1)-(4) model the encircling behavior:

$$D = |C \cdot X_p(t) - X(t)|, \quad (1)$$

$$X(t+1) = X_p(t) - A \cdot D, \quad (2)$$

where t is the iteration index, X(t) is the position of one gray wolf, X(t+1) is the next position it lands at, and X_p(t) corresponds to one of α, β, δ. The coefficient vectors A and C are computed as follows:

$$A = 2a r_1 - a, \quad (3)$$

$$C = 2 r_2, \quad (4)$$

where r_1 and r_2 are random vectors in the range [0, 1], and a is a vector whose components decrease linearly within [0, 2] over the iterations, a common choice being a = 2 - 2t/MaxIteration.

Hunting. Gray wolves can spot and encircle their prey as a group. Hunting can occasionally be done by the beta or delta, but it is usually led by the alpha. However, in an abstract search space, we have no idea
where the optimum (the prey) is located. To mathematically imitate the hunting behavior of gray wolves, we assume that the alpha (which we consider the best candidate solution), the beta, and the delta have superior knowledge of the probable location of the prey. Therefore, we save the three best solutions obtained so far and require the other search agents to update their positions according to the positions of the best search agents. This is illustrated by Eqs. (5), (6), and (7):

$$D_{\alpha} = |C_1 X_{\alpha} - X(t)|, \quad D_{\beta} = |C_2 X_{\beta} - X(t)|, \quad D_{\delta} = |C_3 X_{\delta} - X(t)|, \quad (5)$$

$$X_1 = X_{\alpha}(t) - A_1 D_{\alpha}, \quad X_2 = X_{\beta}(t) - A_2 D_{\beta}, \quad X_3 = X_{\delta}(t) - A_3 D_{\delta}, \quad (6)$$

$$X(t+1) = \frac{X_1 + X_2 + X_3}{3}. \quad (7)$$
Attacking. Finally, the gray wolves converge on the encircled prey to catch it (convergence to the final result). This operation is modeled by lowering a in Eq. (3), since A ∈ [-2a, 2a]. When |A| ≥ 1, the gray wolves move away from the prey, which achieves global search (exploration); when |A| < 1, the gray wolf pack approaches the prey and finally finishes it (exploitation).
3 Proposed Algorithm
In this section, we describe the two proposed approaches, GWO-SVM and GWO-KNN, for the classification of different types of cancer. In both approaches, the aim is to identify which genes are most significant for improving SVM and KNN classifier performance and then determine which classifier performs best. To address the issue of overfitting, GWO is used to reduce the dimension of the microarray data. The steps of the GWO-SVM and GWO-KNN algorithms are shown in Fig. 2. Because of the small size of the data, the classification accuracy is obtained using LOOCV. In the LOOCV technique, a single observation from the original sample is treated as the validation data, and the remaining observations become the training data; by repeating this process, each observation in the sample is used as validation data exactly once. LOOCV is chosen because it is almost unbiased. In Algorithm 1, we present pseudo-code for the proposed algorithm. The fitness function, shown in Eq. (8), allows the algorithm to exclude as many features as feasible while maintaining high levels of accuracy:

$$F = w \cdot accuracy + (1 - w) \cdot \frac{1}{N}, \quad (8)$$

where N is the number of selected features, and w = 0.9.
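As an illustration of how such a fitness value might be computed for one candidate gene subset (our sketch, not the authors' code; the data arrays are placeholders), using scikit-learn's LOOCV with a KNN classifier:

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(X, y, selected, w=0.9, k=7):
    # Eq. (8): F = w * accuracy + (1 - w) * 1/N, with LOOCV accuracy.
    X_sub = X[:, selected]                      # keep only the selected genes
    clf = KNeighborsClassifier(n_neighbors=k)
    acc = cross_val_score(clf, X_sub, y, cv=LeaveOneOut()).mean()
    return w * acc + (1 - w) / len(selected)

# Toy data standing in for a microarray matrix (samples x genes).
rng = np.random.default_rng(0)
X = rng.random((30, 50))
y = rng.integers(0, 2, size=30)
print(fitness(X, y, selected=[3, 7, 19]))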
Fig. 2. GWO-SVM and GWO-KNN flowchart
Algorithm 1. Pseudo-code of the GWO algorithm

Initialize maximum number of iterations (MaxIteration)
Initialize grey wolf population (searchAgent)
Initialize a, A, and C
Calculate the fitness of each searchAgent by Eq. (8)
Alpha (α) = the best search agent
Beta (β) = the second best search agent
Delta (δ) = the third best search agent
Omega (ω) = the rest of the candidate solutions
for MaxIteration = 1, 2, ..., N do
    for searchAgent = 1, 2, ..., N do
        Update the position of the current search agent by Eq. (7)
    end for
    Update a, A, and C
    Calculate the fitness of all search agents
    Update α, β, and δ
end for
return α
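A minimal continuous-space sketch of Algorithm 1 and Eqs. (1)-(7) follows (our illustration; a real gene-selection variant would binarize the positions and plug in the fitness of Eq. (8)):

import numpy as np

def gwo(fitness, dim, n_wolves=5, max_iter=100, lb=0.0, ub=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))           # wolf positions
    scores = np.array([fitness(x) for x in X])
    for t in range(max_iter):
        order = np.argsort(scores)[::-1]               # maximize fitness
        alpha, beta, delta = (X[j].copy() for j in order[:3])
        a = 2 - 2 * t / max_iter                       # decreases from 2 to 0
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2          # Eqs. (3)-(4)
                D = np.abs(C * leader - X[i])          # Eq. (5)
                X_new += leader - A * D                # Eq. (6)
            X[i] = np.clip(X_new / 3, lb, ub)          # Eq. (7)
            scores[i] = fitness(X[i])
    return X[np.argmax(scores)]

# Toy usage: maximize -||x - 0.5||^2 (optimum at 0.5 everywhere).
best = gwo(lambda x: -np.sum((x - 0.5) ** 2), dim=10)
print(best.round(2))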
4 Experimental Results and Discussions
In this section, we describe the experimental setup, the results of applying the proposed approaches to microarray cancer data sets, and the gene expression data sets used in the study.

4.1 Microarray Data Set
Six publicly available microarray benchmark data sets representing different types of cancer, extensively used in the microarray data analysis field for evaluating gene selection methods, were used in the present study to test the proposed hybrid approach. Two types of microarray data sets were used: binary and multiclass. The three binary data sets were Colon Tumor [3], Lung Cancer [4], and Leukemia1 [3]; the three multiclass data sets were Leukemia3 [5], Lymphoma [6], and SRBCT [6]. Table 1 provides an overview of these six data sets.
4.2 Parameter Settings
The best solutions were evaluated using the KNN method with k = 7 and the SVM. In the experiments, the value of k = 7 was chosen because it performed best across all data sets. According to [7], the number of iterations (Max iter) and the dimension (dim) are two important criteria that determine the practicality of a given method. The values of k, dim, up, and lp are among the parameters that were chosen by running the algorithm and checking which parameter value gives the best accuracy. Table 2 outlines these parameters’ settings.
Table 1. Description of microarray data sets

Data set          No. total genes   No. samples   No. classes
Colon Tumor [3]   2000              62            2
Lung Cancer [4]   7129              96            2
Leukemia2 [3]     7129              72            3
Leukemia3 [5]     7129              72            2
SRBCT [6]         2308              83            4
Lymphoma [6]      4026              66            3

Table 2. Parameter settings for GWO

Parameter                       Value
Dimension (dim)                 No. genes in data set
No. iterations (Max iter)       100
Lower bound (lp)                0
Upper bound (up)                1
No. wolves (SearchAgents no)    5
No. runs (m)                    30
w                               0.9
k                               7

4.3 Results and Analysis
The goal of feature selection is to improve classification accuracy while reducing the number of features used. We ran the algorithm on a different number of features for each data set; for example, we applied the proposed techniques to the Colon Tumor data set using 1 to 30 genes (features). The experimental results for all of the cancer data sets are presented in this section. For the Colon data set, Table 3 shows the best, worst, and average classification accuracy using the GWO-KNN and GWO-SVM algorithms. Interestingly, the highest classification accuracy was obtained when the KNN classifier was applied with 23 selected genes; other numbers of selected genes did not improve classification accuracy. For the Lymphoma data set, Table 4 shows the best, worst, and average classification accuracy of the GWO-KNN and GWO-SVM algorithms. Among the tested gene numbers, 100% accuracy was achieved in most cases; based on the KNN classifier, the highest average accuracy, 99.95%, was achieved with 20 genes. Looking at Table 5, we can see that both GWO-KNN and GWO-SVM gave the same best accuracy, 98.61%, by selecting the same number of genes (10 genes), but with KNN the average and worst accuracy for 10 selected genes were higher than with SVM.
Table 3. Colon data set results

Method    No. genes   Best     Average   Worst
GWO-KNN   30          90.32%   87.37%    83.87%
GWO-KNN   23          95.16%   87.26%    83.87%
GWO-KNN   22          93.55%   87.74%    83.87%
GWO-KNN   18          91.94%   87.53%    83.87%
GWO-KNN   11          91.94%   86.56%    82.26%
GWO-KNN    6          88.71%   85.05%    82.26%
GWO-SVM   30          87.10%   82.96%    67.74%
GWO-SVM   23          87.10%   83.06%    72.58%
GWO-SVM   22          90.32%   83.28%    74.19%
GWO-SVM   18          91.94%   81.61%    74.19%
GWO-SVM   11          88.71%   83.28%    75.81%
GWO-SVM    6          88.71%   80.91%    67.74%
Table 4. Lymphoma data set results

Method    No. genes   Best     Average   Worst
GWO-KNN   20          100%     99.95%    98.48%
GWO-KNN   10          100%     99.09%    96.97%
GWO-KNN    5          100%     96.72%    93.94%
GWO-KNN    4          100%     96.06%    92.42%
GWO-KNN    3          98.48%   95.05%    92.42%
GWO-SVM   20          100%     98.74%    93.94%
GWO-SVM   10          100%     97.83%    95.45%
GWO-SVM    5          100%     96.06%    90.91%
GWO-SVM    4          98.48%   95.25%    86.36%
GWO-SVM    3          98.48%   94.39%    87.88%
The results of implementing the GWO-SVM and GWO-KNN algorithms on the Leukemia3 data set are shown in Table 6. When using KNN, the best classification accuracy was achieved when 28 genes were selected, and the classification accuracy exceeded 90% when more than 10 genes were selected. Based on the implementation of the GWO-SVM and GWO-KNN algorithms on the SRBCT data set, Table 7 compares the average, best, and worst accuracy performance. It also shows the highest accuracy rate, 98.80%, obtained when selecting 28 genes for both KNN and SVM. Interestingly, unlike with the other data sets, SVM gave the best accuracy on the SRBCT data set. The accuracy performance (average, best, and worst) of the GWO-SVM and GWO-KNN algorithms on the Lung data set is presented in Table 8.
Table 5. Leukemia1 data set results

Method    No. genes   Best     Average   Worst
GWO-KNN   30          98.61%   95.97%    91.67%
GWO-KNN   24          97.22%   94.81%    90.28%
GWO-KNN   10          98.61%   94.03%    88.89%
GWO-KNN    5          98.61%   92.31%    83.33%
GWO-KNN    3          97.22%   91.44%    87.50%
GWO-SVM   30          97.22%   94.12%    87.50%
GWO-SVM   24          98.61%   92.59%    80.56%
GWO-SVM   10          98.61%   92.73%    86.11%
GWO-SVM    5          94.44%   90.37%    80.56%
GWO-SVM    3          95.83%   89.03%    70.83%
Table 6. Leukemia3 data set results

Method    No. genes   Best     Average   Worst
GWO-KNN   30          97.22%   90.00%    81.94%
GWO-KNN   28          98.61%   90.79%    84.72%
GWO-KNN   15          94.44%   88.80%    81.94%
GWO-KNN   10          95.83%   86.81%    80.56%
GWO-KNN    5          90.28%   83.56%    77.78%
GWO-SVM   30          95.83%   84.77%    72.22%
GWO-SVM   28          97.22%   88.24%    76.39%
GWO-SVM   15          94.44%   84.91%    76.39%
GWO-SVM   10          95.83%   82.82%    70.83%
GWO-SVM    5          87.50%   80.23%    72.22%
Table 7. SRBCT data set results

Method    No. genes   Best     Average   Worst
GWO-KNN   30          96.39%   92.21%    85.54%
GWO-KNN   28          98.80%   91.89%    87.95%
GWO-KNN   15          93.98%   87.27%    79.52%
GWO-KNN   10          89.16%   83.53%    75.90%
GWO-KNN    5          86.75%   79.28%    72.29%
GWO-SVM   30          98.80%   91.77%    85.54%
GWO-SVM   28          98.80%   92.01%    85.54%
GWO-SVM   15          93.98%   86.18%    74.70%
GWO-SVM   10          93.98%   81.73%    66.27%
GWO-SVM    5          89.16%   74.38%    55.42%
Table 8. Lung data set results

Method    No. genes   Best      Average   Worst
GWO-KNN    6          100.00%   99.55%    97.92%
GWO-KNN    4          100.00%   98.92%    97.92%
GWO-KNN    3          99.55%    98.82%    97.92%
GWO-SVM    6          100.00%   95.94%    89.58%
GWO-SVM    4          98.96%    96.77%    89.58%
GWO-SVM    3          98.68%    96.67%    89.58%
In addition, Table 8 shows the highest accuracy of 100% when selecting 4 genes for KNN and 6 genes for SVM.

4.4 Comparative Evaluations
To validate and compare the effectiveness of the GWO algorithm, we compared it to previous meta-heuristic bio-inspired gene selection methods. Table 9 shows a comparison of the results in terms of classification accuracy and the number of selected genes (in parentheses).

Table 9. Comparison between the proposed selection methods and previous methods in terms of the number of selected genes and accuracy

Algorithms       Colon         Lung          Leukemia2     Leukemia3     Lymphoma      SRBCT
GWO-KNN          95.16% (23)   100% (4)      98.61% (10)   98.61% (28)   100% (4)      98.80% (28)
GWO-SVM          91.94% (18)   100% (6)      98.61% (10)   97.22% (28)   100% (5)      98.80% (28)
HS-GA [8]        95.9% (20)    -             97.5% (20)    -             -             -
FF-SVM [9]       92.7% (22)    100% (2)      99.5% (11)    -             92.6% (19)    97.5% (14)
GBC [10]         98.38% (20)   -             100% (5)      -             -             -
MIM-mMFA [11]    100% (20)     100% (20)     100% (6)      100% (15)     100% (4)      100% (23)
QMFOA [12]       100% (27)     100% (20)     100% (32)     100% (30)     -             100% (23)
BQPSO [13]       83.59% (46)   100% (46)     93.1% (48)    -             100% (49)     -
PCC-GA [14]      91.94% (29)   97.54% (42)   100% (35)     -             100% (39)     100% (20)
5 Conclusion
Two novel feature selection methods based on GWO were presented and compared in this paper for gene selection and classification of high-dimensional microarray data sets. The experimental results are illustrated on well-known gene expression data sets, and across the data sets we can see that we reached 100% accuracy
only for the Lung and Lymphoma data sets. Moreover, the overall accuracy for all data sets is higher than 90%. Lastly, GWO-KNN outperformed GWO-SVM, with the exception of the SRBCT data set, where GWO-SVM outperformed GWO-KNN. For future work, we advocate combining GWO with another wrapper bio-inspired feature selection approach to create a hybrid method that improves GWO's accuracy while selecting fewer genes, as we noticed significant potential for GWO even when employed alone.
References
1. Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46-61 (2014)
2. Muro, C., Escobedo, R., Spector, L., Coppinger, R.P.: Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behav. Proc. 88(3), 192-197 (2011)
3. Golub, T.R., et al.: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286(5439), 531-537 (1999)
4. Beer, D.G., et al.: Gene-expression profiles predict survival of patients with lung adenocarcinoma. Nat. Med. 8(8), 816-824 (2002)
5. Armstrong, S.A., et al.: MLL translocations specify a distinct gene expression profile that distinguishes a unique leukemia. Nat. Genet. 30(1), 41-47 (2002)
6. Khan, J., et al.: Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat. Med. 7(6), 673-679 (2001)
7. Al-wajih, R., Abdulakaddir, S.J., Aziz, N.B.A., Al-tashi, Q.: Binary grey wolf optimizer with k-nearest neighbor classifier for feature selection. In: 2020 International Conference on Computational Intelligence (ICCI), pp. 130-136. IEEE (2020)
8. Vijay, S.A.A., GaneshKumar, P.: Fuzzy expert system based on a novel hybrid stem cell (HSC) algorithm for classification of micro array data. J. Med. Syst. 42(4), 1-12 (2018). https://doi.org/10.1007/s10916-018-0910-0
9. Almugren, N., Alshamlan, H.: FF-SVM: new firefly-based gene selection algorithm for microarray cancer classification. In: 2019 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), pp. 1-6. IEEE (2019)
10. Alshamlan, H.M., Badr, G.H., Alohali, Y.A.: Genetic bee colony (GBC) algorithm: a new gene selection method for microarray cancer classification. Comput. Biol. Chem. 56, 49-60 (2015)
11. Dabba, A., Tari, A., Meftali, S., Mokhtari, R.: Gene selection and classification of microarray data method based on mutual information and moth flame algorithm. Expert Syst. Appl. 166, 114012 (2021)
12. Dabba, A., Tari, A., Meftali, S.: Hybridization of moth flame optimization algorithm and quantum computing for gene selection in microarray data. J. Ambient. Intell. Human. Comput. 12(2), 2731-2750 (2020). https://doi.org/10.1007/s12652-020-02434-9
13. Xi, M., Sun, J., Liu, L., Fan, F., Wu, X.: Cancer feature selection and classification using a binary quantum-behaved particle swarm optimization and support vector machine. Comput. Math. Methods Med. 2016 (2016)
14. Hameed, S.S., Muhammad, F.F., Hassan, R., Saeed, F.: Gene selection and classification in microarray datasets using a hybrid approach of PCC-BPSO/GA with multi classifiers. J. Comput. Sci. 14(6), 868-880 (2018)
A Smart Video Surveillance System for Helping Law Enforcement Agencies in Detecting Knife Related Crimes

Raed Abdallah1, Salima Benbernou1, Yehia Taher2(B), Muhammad Younas3, and Rafiqul Haque4

1 LIPADE, Université de Paris, Paris, France
{raed.abdallah,salima.benbernou}@u-paris.fr
2 DAVID Lab, UVSQ - Université Paris-Saclay, Versailles, France
[email protected]
3 Oxford Brookes University, Oxford, UK
[email protected]
4 Intelligencia, R&D Department, Paris, France
[email protected]
Abstract. With recent technological developments, criminal investigation has witnessed a revolutionary change in identifying crimes. This has empowered Law Enforcement Agencies (LEAs) to take advantage of such a revolution and build a smart criminal investigation ecosystem. Generally, LEAs collect data through surveillance systems (e.g., cameras), which are deployed in public places in order to recognize people's behaviors and visually identify those who may pose any danger or risk. In this paper, we focus on knife-related crimes and attacks, which have increased in recent years. In order to ensure public safety, it is crucial to detect such attacks accurately and efficiently so as to help LEAs reduce their potential consequences. We propose a smart video surveillance system (SVSS), which is based on a modified Single Shot Detector (SSD) combined with the InceptionV2 and MobileNetV2 models. The proposed system is designed to enable LEAs to analyze, in real time, big data collected from sensor cameras and to accurately detect knife-based attacks. Experimental results show that SVSS achieves better results in real-life scenarios in terms of obtaining rapid and accurate attack warnings.
Keywords: Deep learning · Neural network · Computer vision · Video analytics

1 Introduction
Due to an increase in the number of crimes across towns and cities, governmental organizations give more attention and resources to criminal investigation and law enforcement agencies (LEAs). Criminals commit various types of crimes.
However, in some cities, knife-based attacks have emerged as a serious kind of homicide and have become a crucial challenge for LEAs to tackle [2]. Owing to technological developments, LEAs take advantage of the emergence of cutting-edge technologies in order to digitize investigation activities and to enhance the efficiency of criminal investigation. For instance, they combine surveillance systems with computer vision and big data technologies in order to effectively detect knife-based attacks. However, such a combination faces a main challenge: the trade-off between accuracy and speed inherent in most computer vision models, especially Convolutional Neural Networks (CNNs). Consider the two detection approaches of CNNs: the one-stage approach performs better in speed, while the two-stage approach performs better in accuracy. Therefore, designing efficient models that take the benefit of both approaches has become essential, as it can help optimize the trade-off between accuracy and speed when detecting knife-based attacks.
2 Related Work
A number of existing approaches have focused on integrating computer vision and big data analytics into criminal investigation in order to enable LEAs to perform investigation operations. Existing works on detecting knives or sharp objects are mainly based on machine and deep learning models. Machine learning techniques have been used to detect (sharp) objects in baggage imagery, where X-ray and millimetric images are used [5,12], as well as in video surveillance, where RGB images are used [6]. The authors in [6] proposed an algorithm to detect knives in RGB images using a combination of the sliding-window search approach and a support vector machine (SVM) classifier.
Similarly, the authors in [12] introduced an efficient approach to detect firearms in X-ray images that uses both SVM and random forest classifiers. In [5], active appearance models (AAMs) have been proposed to detect knives in images; the efficiency of the AAMs is based on the fact that the knife blade has a very specific interest point, its tip, which can be easily detected as a corner. Lastly, the authors in [11] introduced an SVM-based dominant edge directions model to extract information on the spatial distribution of edges and thus detect knives. The potential knife-related images are then verified through a pre-trained Histogram of Oriented Gradients (HOG) detector. Furthermore, some research works exploit deep learning models with CNN architectures in order to detect knife-based attacks. Such architectures were used either standalone or combined with machine learning models when detecting sharp objects. In [4], the authors developed a Faster R-CNN model using two neural architectures, GoogleNet and SqueezNet; the first architecture is shown to give more accurate knife detection results than the second. In [15], Faster R-CNN was used to draw bounding boxes over objects, and a performance evaluation among different pre-trained architectures (e.g., GoogleNet and Inception) was performed; results showed that the best accuracy was reached with the VGG-19 architecture. The authors in [16] introduced an extended version of Faster R-CNN following a three-step process to alert upon knife-wielding threats. First, a classification model based on MobileNet is tested and deployed, followed by a pre-trained detection algorithm called Mask R-CNN, which extracts the relative location of the detected knife. The last step uses a PoseNet architecture-based model in order to reduce misunderstood intentions. Although this extended version of Faster R-CNN was superior in terms of precision, real-time knife detection was not possible due to its computational complexity. In [7], the authors replaced the base network of SSD (VGGNet) by ResNet101 to increase its speed. Subsequently, they used a multi-scale feature fusion method to allow the shallow feature map to integrate deeper features.
3 Smart Video Surveillance System
In this section, we propose a less complex mechanism called Smart Video Surveillance System (SVSS). The goal is to boost both speed and accuracy and to produce adequate results that can help in detecting knife attacks in real time. The proposed architecture of SVSS comprises three phases and is built on a big data ecosystem and deep learning models (Fig. 1). The potential benefit is to enable LEAs to automatically detect attackers without much manual intervention. In the following subsections, we illustrate the various tools and techniques used in each phase of the proposed architecture.

3.1 Data Collection Phase
In this phase, data are collected through surveillance systems that are commonly deployed in zones susceptible to attack such as public places (city centers, parks,
etc.), commercial shops (malls, restaurants, etc.), or leisure sites (cinemas, casinos, sport halls, etc.). Consequently, LEAs can benefit from a wide range of surveillance devices that can be used to detect knife attacks, such as cameras, smart glasses, sensors, police patrol car cameras, and mobile phones. After collection, data is sent to the second phase for storage and analysis purposes.

Fig. 1. SVSS architecture.

3.2 Data Storage and Processing Phase
In this phase, we take advantage of a big data technology-driven ecosystem to enable LEAs to collect data from a number of surveillance devices at any size and speed. Data can be stored in a highly scalable data lake, and processed and analyzed using a massively parallelized computational runtime environment. More specifically, we propose to use the Spark framework in order to implement the storage and processing operations in the second phase of our architecture. Indeed, the Spark framework allows agencies to build a highly scalable infrastructure and ensures high data-processing throughput compared to other distributed systems. According to this solution, video data collected by the surveillance devices are loaded into the storage cluster and, in parallel, into the processing engine for real-time detection of knife attacks. Furthermore, the processed data are stored in the analytics storage cluster that is accessed by the analytics applications used in the third phase.
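As a rough sketch of how frame batches could be dispatched to the processing cluster (our illustration; the paths, the detect_knife placeholder, and the deployment details are assumptions, not the paper's code):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SVSS").getOrCreate()

def detect_knife(frame_paths):
    # Placeholder: a real job would load the SSD model once per partition
    # and score each frame; here we only emit (frame, confidence) stubs.
    for path in frame_paths:
        yield (path, 0.0)

# Distribute extracted frame paths across the cluster and run detection.
frames = spark.sparkContext.parallelize(
    [f"/data/frames/{i}.jpg" for i in range(1000)])
alerts = frames.mapPartitions(detect_knife).filter(lambda t: t[1] > 0.5)
print(alerts.count())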
3.3 Data Analytics Phase
The last phase in our mechanism, i.e., data analytics, takes the video frames collected by the surveillance devices and determines the existence or non-existence of an eventual knife attack in each frame. We use the Single Shot MultiBox Detector accompanied with the Inception and MobileNet models, described in the following sections. Single Shot MultiBox Detector (SSD): SSD is a single deep neural network method for detecting objects in images [14]. Unlike models that rely on an additional object proposal stage, such as Faster R-CNN [4], the experimental
results show that SSD achieves competitive accuracy and is much faster [7]. Basically, the SSD architecture is a convolutional neural network (CNN) composed of a base network (mainly VGG16), a standard image classification architecture used for feature extraction, truncated before any classification layers. Figure 2 shows two examples of 8 × 8 and 4 × 4 multi-scale feature maps for object detection in SSD.
Fig. 2. Multi-scale feature maps for object detection in SSD: (a) 4 × 4 feature map; (b) 8 × 8 feature map.
Additionally, by using a set of 3 × 3 convolutional filters, the feature layers added to the SSD architecture generate a fixed set of detection predictions, each being either a confidence score for an object class or a shape offset relative to the default bounding box coordinates. Figure 3 shows the SSD network architecture with its additional filters.
Fig. 3. SSD network architecture.
Unlike YOLO, which uses fixed bounding boxes on a single-scale feature map, SSD associates a set of fixed default bounding boxes with different aspect ratios with each multi-scale feature map cell. As mentioned before, the detection predictions generated by the filters are produced for each of those default boxes. Since
each default bounding box has 4 shape offsets relative to its coordinates, i.e. offset values (cx, cy, w, h), and (N + 1) confidence scores, i.e. c, for each object class, where N is the number of classes to be predicted with an extra class for no object (i.e. background), a feature map of size m × n with k default boxes yields m × n × k × (c + 4) predictions in a layer of SSD (Fig. 4(a)). After calculating the offsets relative to the default boxes, the model picks the class with the highest confidence score (Fig. 4(b)).
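The per-layer prediction count m × n × k × (c + 4) can be tallied as in the following sketch (our illustration; the feature map sizes follow the standard SSD300 configuration of Fig. 3, and c = 2 classes, e.g. knife plus background, is an assumption):

def ssd_layer_predictions(m, n, k, c):
    # m x n feature map, k default boxes per cell, c class scores + 4 offsets.
    return m * n * k * (c + 4)

# (m, n, k) for the SSD300 detection layers of Fig. 3.
layers = [(38, 38, 4), (19, 19, 6), (10, 10, 6), (5, 5, 6), (3, 3, 4), (1, 1, 4)]
total = sum(ssd_layer_predictions(m, n, k, c=2) for m, n, k in layers)
print(total)  # per-image prediction values before non-maximum suppression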
C (x , y)
w
1. 2. 3.
Knife Gun Background
h
(a) Default bounding boxes with different aspect ratios
(b) Output bounding box
Fig. 4. Bounding box in SSD.
Base Network of SSD: Originally, the VGG16 network was used as the base network for feature extraction in the SSD object detection model. The base network is a deep CNN for image classification with its classification layers removed; these layers are replaced with convolutional detection layers dedicated to the SSD model. To seek better results in our proposed knife detection SSD model, we replace the VGG16 base network with two networks, InceptionV2 and MobileNetV2. Adapted InceptionV2 to SSD Model: The basic building blocks of CNNs are convolutional layers, pooling layers, and fully connected ones; thus, we have to choose which filter to use when designing these layers. Filters can be 1 × 1, 3 × 3, or 5 × 5, or we may simply apply pooling. Instead of struggling to choose between filter sizes and between convolutional or pooling layers, the Inception network proposed by Google researchers [19] can be used as an alternative. Initially, the Google researchers applied convolution filters of size 1 × 1, 3 × 3, and 5 × 5 on the same input volume, also applied max pooling with same padding, and concatenated all the outputs together in a single vector given as input to the next layer. With this combination, they found that
there was a high computational cost. Consequently, they used 1 × 1 convolutions to reduce input dimensions and computational cost without hurting performance: a 1 × 1 bottleneck convolution layer is added before the 3 × 3 and 5 × 5 filters. This led to the initial version of the Inception network architecture, i.e. InceptionV1 (Fig. 5(a)). Indeed, the high computational cost of InceptionV1 was due to the use of large convolution filters such as 5 × 5. Using these filters also requires reducing the input dimension, which causes information loss and therefore a decrease in accuracy. To overcome these challenges, a new version of the Inception network was introduced, i.e. InceptionV2. Besides improving accuracy and computation, InceptionV2 scaled up the size of the network through factorized convolutions. More specifically, convolution filters of size n × n were factorized into 1 × n and n × 1 convolutions, resulting in a two-layer convolution instead of a single-layer one. Furthermore, the 5 × 5 convolution layer was replaced with 3 × 3 ones. As a result, the model was configured to go wider instead of deeper. Figure 5(b) presents the architecture of InceptionV2 [20].
Fig. 5. Inception network architecture: (a) InceptionV1, with parallel 1 × 1, 3 × 3 (after a 1 × 1 bottleneck), 5 × 5 (after a 1 × 1 bottleneck), and max-pooling (3 × 3, stride 1, same padding, followed by 1 × 1) branches concatenated over the previous activation; (b) InceptionV2, where the larger filters are factorized into 1 × 3 and 3 × 1 convolutions before the filter concatenation.
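To illustrate the bottleneck idea, the following Keras sketch (a simplified example of ours, with arbitrary filter counts, not the authors' implementation) builds an InceptionV1-style module in which 1 × 1 convolutions reduce the input depth before the larger filters:

import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1=64, f3r=96, f3=128, f5r=16, f5=32, fp=32):
    # Branch 1: plain 1x1 convolution.
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    # Branch 2: 1x1 bottleneck reduces depth before the 3x3 filter.
    b2 = layers.Conv2D(f3r, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b2)
    # Branch 3: 1x1 bottleneck before the expensive 5x5 filter.
    b3 = layers.Conv2D(f5r, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b3)
    # Branch 4: 3x3 max pooling (stride 1, same padding) followed by 1x1.
    b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(fp, 1, padding="same", activation="relu")(b4)
    # All branch outputs are concatenated into a single volume.
    return layers.Concatenate()([b1, b2, b3, b4])

In the InceptionV2 variant described above, the 3 × 3 and larger filters would additionally be factorized into stacked 1 × n and n × 1 convolutions.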
Adapted MobileNetV2 to SSD Model: Despite their strong object detection performance, CNN models suffer from high computational demands, which is a critical concern in real-time applications such as knife detection. This has led researchers at Google to introduce the MobileNet model [18], which can be embedded inside surveillance cameras and mobile devices with limited resources and computation power. In its first version (i.e. MobileNetV1), the filtering is divided into two separate layers. The first layer, a depth-wise convolution, applies a 3 × 3 convolution filter to each input channel separately with a depth of 1. The second layer, a point-wise convolution, combines the output results using a 1 × 1 kernel to create new features while preserving the input image depth. Using these concepts, the number of parameters is reduced 8 to 9 times compared to that needed in a standard convolution network, with only a small reduction in accuracy [17].
Later, a new version of MobileNet, called MobileNetV2, was introduced, where a bottleneck layer was added as a replacement for the point-wise convolution. The new layer further reduced the number of parameters compared to the first version. Indeed, two major changes are introduced into the existing MobileNet architecture in order to avoid weakening the convolutional network: the expansion-linear bottleneck layer and the inverted residual block. First, a 1 × 1 expansion layer is added before the depth-wise convolution layer in order to increase the number of channels by an expansion factor, so that the depth-wise convolution can be performed on a richer representation. Then, the expansion layer followed by the depth-wise convolution and the linear bottleneck layer constitute the MobileNetV2 building block (Fig. 6). Subsequently, MobileNetV2 starts by applying a standard convolution of 32 filters on the input image, followed by 19 blocks of residual bottleneck layers with a fixed expansion factor of 6 [17,18].
Fig. 6. MobileNetV2 network architecture.
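For concreteness, a minimal Keras sketch of the MobileNetV2 building block described above (our own simplified rendering of [17], not the authors' code):

import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, out_channels, stride=1, expansion=6):
    in_channels = x.shape[-1]
    # 1x1 expansion layer: increase the number of channels by the expansion factor.
    h = layers.Conv2D(expansion * in_channels, 1, padding="same", use_bias=False)(x)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    # 3x3 depth-wise convolution, applied to each channel separately.
    h = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(6.0)(h)
    # 1x1 linear bottleneck: project back down, with no activation.
    h = layers.Conv2D(out_channels, 1, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    # Inverted residual connection when the input and output shapes match.
    if stride == 1 and in_channels == out_channels:
        h = layers.Add()([x, h])
    return h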
4 Experiments and Results

4.1 Knives Dataset and Data Preprocessing
In our experimentation, we used the dataset of [1], which covers different types, shapes, colors, and sizes of cold steel weapons. Furthermore, it takes into account different distances of knives from the camera, partially occluded knives, and both indoor and outdoor scenarios. The dataset contains 2078 images annotated in PascalVOC format, where each image is accompanied by an Extensible Markup Language (XML) file containing the object label and bounding box location. We randomly split the data into 80% for training and 20% for testing. In the preprocessing phase, we converted the XML annotation files into TFRecords, the format recommended by TensorFlow for better performance with low memory usage.
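As a sketch of this preprocessing step (illustrative Python of ours; tag names follow the PascalVOC convention, and the TFRecord serialization itself is omitted), each annotation file can be read as follows:

import xml.etree.ElementTree as ET

def parse_pascal_voc(xml_path):
    # Read image size and bounding boxes from a PascalVOC annotation file.
    root = ET.parse(xml_path).getroot()
    size = root.find("size")
    width = int(size.find("width").text)
    height = int(size.find("height").text)
    boxes = []
    for obj in root.findall("object"):
        bb = obj.find("bndbox")
        boxes.append({
            "label": obj.find("name").text,   # e.g. "knife"
            "xmin": int(bb.find("xmin").text), "ymin": int(bb.find("ymin").text),
            "xmax": int(bb.find("xmax").text), "ymax": int(bb.find("ymax").text),
        })
    return width, height, boxes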
4.2 Experimental Setup
In this work, we take advantage of transfer learning and fine-tuning: we used two pre-trained SSD models with different base networks, InceptionV2 and MobileNetV2. Both models are pre-trained on the COCO dataset and can be found in the TensorFlow Detection Model Zoo [8]. Accordingly, dataset images
are resized to a fixed size of 300 × 300 pixels, and data augmentation is applied to the dataset. The augmentation consists of a random horizontal flip, a random contrast adjustment, and an SSD random crop. For simplicity and comparison purposes, we used the same configuration parameters for both base network variations. We set the batch size to 16, the initial learning rate to 0.003, the momentum to 0.9, the weight to 0.001, and the number of steps to 200,000, and the RMSprop optimizer is used. Additionally, we used dropout with a probability of 0.8 to counter overfitting. The maximum number of detections per class is set to 16, while the other SSD properties are kept unchanged. Since we are interested in detecting knives only, we defined one class label, i.e. knife, considering everything else as background. Table 1 shows a summary of the parameter configuration adopted in our experiments. Furthermore, both models were trained on a server with an NVIDIA Tesla K80 GPU with 12 GB of GDDR5 VRAM and 2496 CUDA cores, which groups GPU cores into one vector and helps decrease processing time. Additionally, the machine allocates 12.6 GB of RAM and 33 GB of disk space with a single-core Intel Xeon CPU at 2.20 GHz.
Table 1. Hyperparameters configuration

Parameter | Value
Image size | 300 × 300
Batch size | 16
Initial learning rate | 0.003
Momentum | 0.9
Weight | 0.001
Step | 200,000
Dropout probability | 0.8
Maximum number of detections per class | 16
Optimizer | RMSprop
Data augmentation | Random horizontal flip, random image scale, random adjust contrast, SSD random crop
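As a hedged illustration of these optimizer settings (the actual training used the TensorFlow Object Detection API's configuration files; this Keras equivalent is our own approximation, and the mapping of the table's momentum/decay values is an assumption):

import tensorflow as tf

# RMSprop configured with the hyperparameters of Table 1 (illustrative only).
optimizer = tf.keras.optimizers.RMSprop(
    learning_rate=0.003,   # initial learning rate
    rho=0.9,               # decay of the squared-gradient moving average
    momentum=0.9,          # momentum term
)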
4.3 Performance Analysis Study
In this section, we test the performance of the SSD model with both implemented feature extractors, InceptionV2 and MobileNetV2. We trained both models to detect knives and cold weapons. Results showed a significant improvement after 200,000 steps. We evaluated the performance based on the total loss metric for
each model, as well as the precision of object detection in correspondence with the recall values. The total loss in object detection is calculated as a weighted sum of the classification loss and the localization loss [10]; the localization loss trains the model to determine the bounding box offsets, while the classification loss trains the model to determine the type of the target object. Table 2 shows the performance results of SSD with the InceptionV2 and MobileNetV2 models. Both models produce strong results in detecting knife attacks with small total loss values. After completing training, we observe that the total loss of SSD with the MobileNetV2-based feature extractor was 5.434, while that of the SSD model with the InceptionV2 base network reached 6.733. Meanwhile, at the last step, the total loss reached 2.472 for SSD with MobileNetV2 and 2.654 for SSD with InceptionV2. Figure 7 shows the total loss obtained at every step of the training phase for both models. We observe that the localization loss decreases along a smooth curve, while the classification loss curve is uneven for both models. This is mainly because knives are small and of varied shapes, making them harder to detect than objects such as cars, faces, or animals. Furthermore, we calculated the mAP metric [3] in order to compare the performance of the model variations; mAP uses the intersection over union (IoU) measure, which calculates the overlap between the ground truth bounding box and the predicted bounding box according to a certain threshold. In our experimentation, we set the IoU threshold to 0.5 and 0.75, respectively, to evaluate both model variations on the cold steel images test set.
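For reference, the total loss described above follows the standard SSD objective of [14]:

L(x, c, l, g) = \frac{1}{N} \left( L_{conf}(x, c) + \alpha L_{loc}(x, l, g) \right),

where N is the number of matched default boxes, L_{conf} is the classification (confidence) loss, L_{loc} is the localization loss between the predicted boxes l and the ground-truth boxes g, and \alpha weights the two terms.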
Table 2. Performance evaluation of total loss

Model | Final step loss | Total loss (Step = 200K) | mAP (@IoU = 0.5) | mAP (@IoU = 0.75)
SSD with InceptionV2 | 2.654 | 6.733 | 0.776 | 0.494
SSD with MobileNetV2 | 2.472 | 6.434 | 0.835 | 0.479
Fig. 7. Total loss evaluation at different steps.
The obtained mAP results for both models are shown in Fig. 8. For IoU = 0.5, we observe that SSD with MobileNetV2 outperforms SSD with InceptionV2; MobileNetV2 achieved a maximum accuracy of 83.5% and reached 82.9% at the final step, while InceptionV2 recorded a maximum accuracy of 77.6% and 76.1% at the final step. Conversely, for IoU = 0.75, the results show that SSD with InceptionV2 surpasses SSD with MobileNetV2, with maximum accuracies of 49.4% and 47.9%, respectively.
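As an illustration of the IoU measure used above (a generic Python sketch of ours, not the authors' evaluation code), for axis-aligned boxes given as (xmin, ymin, xmax, ymax):

def iou(box_a, box_b):
    # Intersection over Union of two axis-aligned boxes (xmin, ymin, xmax, ymax).
    ix1 = max(box_a[0], box_b[0]); iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2]); iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction counts as correct when iou(predicted, ground_truth) >= threshold
# (0.5 or 0.75 in our evaluation).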
Fig. 8. mAP evaluation at different steps.
Subsequently, Fig. 9 confirms the behavior of SSD with both models in detecting knife-based attacks in real-time scenarios under various conditions such as knife size, location in the image, distance to the camera, occluded knives, etc. Moreover, the results show that both models can reliably detect knife attacks under different circumstances, thus helping LEAs in criminal investigation tasks.
Fig. 9. Samples of knife detection using SSD with InceptionV2 (Scenarios #1 to #4, panels a-d) and MobileNetV2 (Scenarios #1 to #4, panels e-h) models.
5 Conclusion
Integrating computer vision techniques into surveillance systems is a promising approach for criminal investigation. In this paper, we proposed a smart video surveillance system (SVSS) that allows LEAs to detect knife-based attacks, which have increased significantly in recent years. After video data is collected through surveillance devices, SVSS applies a modified Single Shot
Detector (SSD) model that uses InceptionV2 and MobileNetV2 in its base architecture. The experimentation showed that SVSS can achieve remarkable results in real-life scenarios in terms of obtaining rapid and accurate attack warnings. In future work, we seek to integrate other data sources, such as telecommunications and social media, into the SVSS architecture in order to improve the accuracy of the decision-making process used by LEAs.
References
1. Castillo, A., Tabik, S., Pérez, F., Olmos, R., Herrera, F.: Brightness guided preprocessing for automatic cold steel weapon detection in surveillance videos with deep learning. Neurocomputing 330, 151–161 (2019)
2. The United Nations Office on Drugs and Crime: Global study on homicide 2019: executive summary (2019)
3. Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The PASCAL Visual Object Classes (VOC) challenge. Int. J. Comput. Vision 88(2), 303–338 (2010)
4. Fernandez-Carrobles, M.M., Deniz, O., Maroto, F.: Gun and knife detection based on faster R-CNN for video surveillance. In: Morales, A., Fierrez, J., Sánchez, J.S., Ribeiro, B. (eds.) IbPRIA 2019. LNCS, vol. 11868, pp. 441–452. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31321-0_38
5. Glowacz, A., Kmieć, M., Dziech, A.: Visual detection of knives in security applications using active appearance models. Multimedia Tools Appl. 74(12), 4253–4267 (2015)
6. Grega, M., Matiolański, A., Guzik, P., Leszczuk, M.: Automated detection of firearms and knives in a CCTV image. Sensors 16(1), 47 (2016)
7. Guo, R., Zhang, L., Ying, Y., Sun, H., Han, Y., Tan, H.: Automatic detection and identification of controlled knives based on improved SSD model. In: 2019 Chinese Automation Congress (CAC), pp. 5120–5125. IEEE (2019)
8. Huang, J., et al.: TensorFlow object detection API. Code: https://github.com/tensorflow/models.git. Documentation: https://modelzoo.co/model/objectdetection
9. Huang, J., et al.: Speed/accuracy trade-offs for modern convolutional object detectors. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7310–7311 (2017)
10. Jiang, S., Qin, H., Zhang, B., Zheng, J.: Optimized loss functions for object detection and application on nighttime vehicle detection. arXiv preprint arXiv:2011.05523 (2020)
11. Kmieć, M., Glowacz, A.: Object detection in security applications using dominant edge directions. Pattern Recogn. Lett. 52, 72–79 (2015)
12. Kundegorski, M.E., Akçay, S., Devereux, M., Mouton, A., Breckon, T.P.: On using feature descriptors as visual words for object detection within X-ray baggage security screening. In: 7th International Conference on Imaging for Crime Detection and Prevention (ICDP 2016), pp. 1–6. IEEE (2016)
13. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48
14. Liu, W., et al.: SSD: single shot MultiBox detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9905, pp. 21–37. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46448-0_2
15. Navalgund, U.V., Priyadharshini, K.: Crime intention detection system using deep learning. In: 2018 International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET), pp. 1–6. IEEE (2018)
16. Noever, D.A., Noever, S.E.M.: Knife and threat detectors. arXiv preprint arXiv:2004.03366, pp. 1–8 (2020)
17. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520 (2018)
18. Sarkar, D., Bali, R., Ghosh, T.: Hands-On Transfer Learning with Python: Implement Advanced Deep Learning and Neural Network Models Using TensorFlow and Keras. Packt Publishing Ltd. (2018)
19. Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
20. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826 (2016)
Biologically Inspired Variational Auto-Encoders for Adversarial Robustness

Sameerah Talafha1, Banafsheh Rekabdar2, Christos Mousas3, and Chinwe Ekenna4

1 Southern Illinois University Carbondale, Carbondale, IL, USA. [email protected]
2 Portland State University, Portland, Oregon, USA. [email protected]
3 Purdue University, West Lafayette, Indiana, USA. [email protected]
4 University at Albany, Albany, NY, USA. [email protected]
Abstract. Deep Neural Networks (DNNs) have recently become the standard tools for solving problems that are prohibitive for manual or classical statistical approaches, such as classification problems. Nevertheless, DNNs are vulnerable to small adversarial perturbations that cause misclassification of legitimate images. Adversarial attacks pose a security risk to deployed DNNs and indicate a divergence between how DNNs and humans perform classification. It has been shown that sleep improves knowledge generalization and robustness against noise in animals and humans. This paper proposes a defense algorithm that uses a biologically inspired sleep phase in a Variational Auto-Encoder (Defense–VAE–Sleep) to purge adversarial perturbations from contaminated images. We demonstrate the benefit of sleep in improving the generalization performance of the traditional VAE when the testing data differ in specific ways, even by a small amount, from the training data. We conduct extensive experiments, including comparisons with the state-of-the-art, on three datasets: CelebA, MNIST, and Fashion-MNIST. Overall, our results demonstrate the robustness of our proposed model in defending against adversarial attacks and improving classification robustness compared with the other models, Defense–VAE and Defense–GAN.

Keywords: Defense mechanism · VAE · Sleep algorithm · Adversarial robustness
1 Introduction

Although DNNs have demonstrated success on various classification tasks, adversarial attacks [1], formulated as minute perturbations of the inputs, drive DNNs to classify incorrectly. For images, these perturbations can drastically impact DNN-based classifiers, even though the perturbations are imperceptible to humans. Attacks on DNNs fall into two general categories, (a) white-box and (b) black-box attacks, and both have been used against DNNs, posing a severe threat to different safety-critical applications such as
autonomous driving, healthcare, and education [2]. The usage of Spiking Neural Networks (SNNs), third-generation neural networks and recent members of the neural network family, for combating adversarial attacks is still new and limited compared to what has been achieved with DNNs. However, recent research shows that SNNs are more robust than DNNs under some types of attacks [1]. The sleep mechanism is essential to brain function in animals and humans, including how their neurons communicate with each other. During sleep, neurons involved in previously learned activity are reactivated, likely replaying spatiotemporal patterns similar to those observed during training in the awake phase. Sleep-inspired algorithms built using SNNs have been shown to decrease catastrophic forgetting by reactivating previous tasks, increasing the network's ability to generalize to noisy or alternative variants of the training dataset and letting the network perform forward transfer learning. In this paper, we introduce the Defense–VAE–Sleep model, which combines Defense–VAE [3] with the sleep mechanism [1] by utilizing a sleep phase in Defense–VAE to increase generalization performance, i.e., to decrease the impact that imperceptible input changes might have on the task output. During the sleep phase, Mirrored Spike-Timing Dependent Plasticity (mSTDP, or mirrored STDP) is used as the learning rule of the sleep function [4]; it increases the ability of neurons to form meaningful associations between memories and hence can reduce the VAE loss function, resulting in less dispersed latent codes and increased output interpretability. The downstream CNN target classifiers in our proposed model are fed a clean version of the contaminated input images, obtained by removing the adversarial perturbations. First, adversarial images are generated based on an attack. Then, these adversarial images are fed into Defense–VAE–Sleep for reconstruction. Defense–VAE–Sleep can generate images reconstructed from the underlying clean image by removing most adversarial perturbations. This paper introduces a framework, as well as our initial findings, on mimicking the biological sleep phase to defend deep generative models such as VAE against adversarial attacks. The contributions of our work include:
1. We show that our proposed model defends well against white-box and black-box attacks by reporting its performance on three different datasets (MNIST, Fashion–MNIST, and CelebA). For most results, after the sleep phase is applied in Defense–VAE, the reconstructed images resemble the underlying clean images more closely than those generated by Defense–VAE [3] (without sleep).
2. We demonstrate that the Defense–VAE–Sleep algorithm leads to information-rich latent codes relative to those generated by Defense–VAE [3] and Defense–GAN [5].
3. We show that the Defense–VAE–Sleep architecture is more robust than Defense–VAE [3] and Defense–GAN [5], creating decision boundaries that more closely resemble the actual classes.
4. We demonstrate that the Defense–VAE–Sleep algorithm is better suited for multi-label classification problems (through experiments on the CelebA dataset) than Defense–VAE [3] and Defense–GAN [5].
The rest of the paper is organized as follows: a review of the literature providing background on adversarial attacks and defense mechanisms is given in Sect. 2, followed
by our model’s description in Sect. 3. Section 4 delineates our experimental protocol and results. Finally, the paper is concluded with remarks on the results in Sect. 5.
2 Related Work

Defense algorithms and adversarial attacks are widely researched machine learning fields, with many defense models developed over the last decade. The research focuses on the design of robust models that perform well on training sets with perturbed adversarial augmentations. Defense models such as [6] have improved classification accuracy scores and, on some image datasets, even enhanced accuracy on clean images. A two-step defense model is proposed in [7], where adversarial input is detected and then reformed using the divergence between the manifolds of clean and adversarial examples. The use of various filters (Gaussian low-pass, median, and averaging, among others) is suggested in [8] as a mechanism to counter the effects of adversarial noise. [9] preprocessed images with JPEG compression, a standard, widely used image encoding and compression method, reducing the impact of adversarial noise by using JPEG compression as a defense mechanism. [10] used a saturating network whose performance is robust in the presence of adversarial noise, by modifying the loss function such that activations are in their saturating regime. Samangouei et al. proposed the Defense–GAN model, which uses the generative model GAN [5] for adversarial defense. Initially, the authors trained a GAN model with clean images, so that the model learned the distribution of clean images. Then, backpropagation was used to identify the optimal latent code of the clean image when provided with an adversarial image. Eventually, the GAN reconstructed an image from this optimal latent code, which is expected to resemble the clean image. Xiang Li and Shihao Ji proposed Defense–VAE [3], which removes adversarial perturbations, leading to reconstructed images that closely resemble the underlying clean image. Compared to Defense–GAN [5], Defense–VAE can directly identify an optimal latent code by forward-propagating an adversarial image through its encoder network, rendering the whole process much faster than Defense–GAN. Subsequently, Defense–VAE's decoder network reconstructs the clean image using that latent code. Recently, biologically inspired learning with sleep algorithms has been mimicked in DNNs [1] to increase their adversarial robustness and generalization, using a model that augments the backpropagation training phase with an STDP-based unsupervised learning phase in the spiking domain, modeled after how the biological brain utilizes sleep to improve learning. The sleep algorithm improves generalization accuracy for noise and blur in the testing dataset, especially when training uses clean images; as mentioned before, security-sensitive applications of DNNs are vulnerable to adversarial attacks, and exploiting such attacks can be a severe threat. To the best of our knowledge, Defense–VAE [3] is the state-of-the-art for defense against adversarial attacks, outperforming general DNN defense mechanisms [1] and Defense–GAN [5]. In this work, we propose increasing Defense–VAE's robustness to both adversarial attacks and general image distortions, with high generalization performance, by utilizing a biologically inspired sleep phase in the VAE.
3 Proposed Model
Fig. 1. Top left: the training pipeline used for Defense–VAE–Sleep; top right: the classifier; bottom: the testing pipeline of Defense–VAE–Sleep.
3.1 Variational Auto-Encoder (VAE)
VAE is a probabilistic generative architecture [11] consisting of two independent cooperating networks: (a) an encoder q_\phi(z|x) and (b) a decoder p_\theta(x|z). These two networks are cooperating agents that perform space transformations: given an input (an image in our case) x, the encoder transforms it from the feature space into what is referred to as a latent space, producing a latent variable z. The decoder's role in the network is the opposite. During training, VAE regularizes its encoding distribution such that the transformation of the input from feature space to latent space captures the properties necessary to facilitate the reverse transformation. Assuming the parameters \theta and \phi represent the weights and biases of the VAE, its loss function (a lower bound on the log-likelihood \log p_\theta(x)), also referred to as the Evidence Lower BOund (ELBO) [11], can be derived using Jensen's inequality as follows:

\mathcal{L}_{\theta,\phi;x} = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - D_{KL}\big(q_\phi(z|x)\,\|\,p_\theta(z)\big), \quad (1)

where

\log p_\theta(x) = \log \int q_\phi(z|x)\, \frac{p_\theta(x|z)\, p(z)}{q_\phi(z|x)}\, dz \;\geq\; \mathcal{L}_{\theta,\phi;x}. \quad (2)
The ELBO consists of two terms: (1) the reconstruction term, which corresponds to maximizing the expected log-likelihood under the q_\phi distribution, and (2) the Kullback-Leibler (KL) term,
which compares the learned distribution q_\phi with the prior distribution p_\theta. When the approximate and true posterior distributions are equal, i.e. q_\phi(z|x) = p_\theta(z|x), the bound is tight, with the gap given by the KL divergence between q_\phi(z|x) and p_\theta(z|x). When used with images, CNN architectures can be used to build the VAE's encoder and decoder models for enhanced performance on the transformations, such that the encoder better captures distinct perceptual features (e.g., spatial correlation) of the input image [12]. However, the model's reconstruction fidelity and naturalness can still be improved, as the outputs of VAEs are generally blurry and unrealistic.

3.2 Spiking Variational Auto-Encoder (SVAE)

SVAE [12] implements the probabilistic mappings of the encoder and decoder as SNNs based on Leaky Integrate-and-Fire (LIF) neurons. Hidden spiking signals X_H (or latent spike trains), distributed according to a parameterized distribution, are created by the encoder q_\phi(X_H|X_V) given the visible spiking signal X_V, while the decoder p_\theta(X_V|X_H) reconstructs the spiking signal X_V given the hidden spike signal X_H. Usually, Bernoulli or Poisson filters are used to encode input images into spike trains. Learning in SVAE depends on changing the synaptic strengths between neurons. During training, the encoder should learn the simpler distribution q_\phi(X_H|X_V) such that it is as close as possible to the target distribution p_\theta(X_H|X_V) under the KL divergence [5], which is defined as:

KL\big(q_\phi(X_H|X_V)\,\|\,p_\theta(X_H|X_V)\big) = \int \mathcal{D}X_H\, q_\phi(X_H|X_V) \log \frac{q_\phi(X_H|X_V)}{p_\theta(X_H|X_V)}, \quad (3)
where \int \mathcal{D}X_H denotes integration over the latent spike trains. The distribution considered the "best fit" for the data is obtained by minimizing the KL divergence. Maximizing the marginal log-likelihood optimizes the KL divergence, which can be formulated in terms of KL as:

\log p_\theta(X_V) = \log \int \mathcal{D}X_H\, p_\theta(X_V, X_H) = \int \mathcal{D}X_H\, q_\phi(X_H|X_V) \log \frac{q_\phi(X_H|X_V)}{p_\theta(X_H|X_V)} + \int \mathcal{D}X_H\, q_\phi(X_H|X_V) \log \frac{p_\theta(X_V, X_H)}{q_\phi(X_H|X_V)} = KL\big(q_\phi(X_H|X_V)\,\|\,p_\theta(X_H|X_V)\big) + \mathcal{L}_{(\theta,\phi;X_V)}, \quad (4)
(4)
where L(θ,φ;XV ) is the loss function (or ELBO) for the SVAE . The gradient of Eq. 4 ∇wij log p(XV ) of both the spike observed (or the visible spiking signal) XV and the latent spike neurons XH between two neurons i and j w.r.t to the synaptic efficacy wij , is given by; ∇wij log pθ (XV ) p
θ (XH |XV )
=
k∈(V∪H)
T 0
dτ
∂log ρk (τ ) [Xk (τ ) − ρk (τ )], ∂wij
(5)
where ρk (τ ) is firing rate function of LIF neuron guided by mSTDP. The gradient ∇wij log p(XV ) can be calculated by updating the weight according to gradient descent Δwij ∝ ∇wij log p(XV ) yields mSTDP learning rules.
3.3 VAE–Sleep
It has been hypothesized that the sleep mechanism is crucial to brain function in humans and animals, including (but not limited to) how neurons communicate with each other by connecting recently learned memories with older ones, leading to improved memory, learning, attention, and robustness against noise [13]. Limitations of traditional VAEs include the creation of non-interpretable latent codes when the inputs are noisy/disturbed and were not observed during training, which makes them ineffective for classification [14]. This lack of generalization presents issues when VAEs are deployed in the real world. Compared to DNNs, SNNs simulate the behavior of natural neural networks more closely [15], and they have proven to be more robust to deterioration from noisy inputs than their DNN counterparts. Moreover, the computing elements used in SNNs leverage spike-domain processing that operates on spikes, making SNNs energy-efficient compared to DNNs [11]. The operation of VAE–Sleep [12], which consists of two distinct phases (awake and sleep), can be summarized as follows:
1. In the awake phase, train a regular VAE using the re-parameterization trick.
2. After training, convert the VAE network into an SVAE network (ConvertVAEToSVAE) by mapping the weights as in [16]. For the conversion, it is assumed that ReLU is used as the activation function and that there are no biases.
3. This conversion facilitates the sleep phase, where the inputs (pixel intensities) are converted to Bernoulli spike trains X_V that drive the mSTDP sleep updates from the network's visible to hidden layers.
4. Reconvert the SVAE back to a VAE (ConvertSVAEToVAE).
We refer the readers to [12] for more details about VAE–Sleep. A schematic sketch of this procedure is given below.
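In outline, the procedure can be sketched as follows (schematic Python of ours; all function names are placeholders, not the authors' API):

def vae_sleep(vae, train_data, sleep_inputs):
    # 1. Awake phase: standard VAE training with the reparameterization trick.
    train_vae(vae, train_data)
    # 2. Map weights into a spiking network (assumes ReLU activations, no biases).
    svae, scales = convert_vae_to_svae(vae)
    # 3. Sleep phase: present Bernoulli spike trains and apply mSTDP updates.
    for x in sleep_inputs:
        spikes = bernoulli_encode(x)          # pixel intensities -> spike trains X_V
        run_mstdp(svae, spikes, scales)
    # 4. Map the adapted weights back into the conventional VAE.
    return convert_svae_to_vae(svae)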
3.4 Mirrored STDP (mSTDP)
In mSTDP, STDP [17] and anti-Hebbian STDP (aSTDP) [18] together guide the feedforward and feedback connections. Under mSTDP, the synaptic plasticity generated for both feedforward and feedback connections is identical (up to a scaling factor) for both visible and hidden neurons, hence improving the performance of SVAE for input reconstruction. STDP considerably enhances synchrony in the feedforward network. Precise spike timing between pre- and postsynaptic neurons induces plasticity in the standard STDP paradigm, where the strengthening (pre before post) or weakening (post before pre) of a synapse is determined by the spike timing. On the other hand, aSTDP is the opposite of Hebbian STDP: there, (pre before post) leads to a weakened synapse. Maintaining this symmetry throughout the learning process requires the changes in feedforward and feedback synaptic strengths to be constrained by the following rule: any weakening of the feedforward synaptic strength should result in an equivalent weakening of the feedback synaptic strength, and vice versa. Feedforward and feedback synapses in real neurons are two separate physical structures; a model mimicking them should explain how these structures experience almost identical plasticity. In mSTDP, visible neuron spikes are substituted in place of the presynaptic
neurons of STDP, and hidden neuron spikes are substituted in place of the presynaptic neurons of aSTDP, with the postsynaptic neurons substituted accordingly. The mSTDP rules are governed by Eqs. 6 and 7:

\delta_{i\in S_{vis},\, j\in S_{hid}} = \begin{cases} \alpha\phi_r a_r^{+} + \beta\phi_p a_p^{+} & t_j - t_i \le 0, \\ \alpha\phi_r a_r^{-} + \beta\phi_p a_p^{-} & t_j - t_i > 0, \end{cases} \quad (6)

\Delta w_{ij} = \delta_{ij}\, w_{ij}\, (1 - w_{ij}), \quad (7)
where S_vis and S_hid are the sets of visible and hidden neuron spikes, respectively; t_i and t_j are the spike times of neurons i and j, respectively; a_r^+, a_r^-, a_p^+, and a_p^- scale the magnitude of the weight change and set its direction; \alpha, \beta, \phi_r, and \phi_p are controlling factors; \delta_{ij} denotes the mSTDP function (also called the learning window); and \Delta w_{ij} is the synaptic weight modification, whose multiplicative form keeps the weights within the range [0, 1], ensuring the stability of the weight changes until convergence. For more information regarding the mSTDP rules and the input preprocessing strategy in mSTDP, we refer readers to [4]. During the learning process, k-Winner-Take-All (k-WTA) ensures that the neurons that fire earliest perform mSTDP and thwart the firing of other neurons: the k neurons with the quickest spike times, i.e. those with the highest internal potentials, are selected. An r × r inhibition window, centered on each winner neuron, is then imposed on the feature maps to prevent the selection of nearby neurons. A sketch of the update rule follows.
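A minimal numpy sketch of the update in Eqs. 6 and 7 (our own illustration; spike times and parameters are placeholders):

import numpy as np

def mstdp_update(w, t_pre, t_post, a_r_pos, a_r_neg, a_p_pos, a_p_neg,
                 alpha=1.0, beta=0.0, phi_r=1.0, phi_p=0.0):
    # w: weight matrix in [0, 1]; t_pre/t_post: spike times of visible/hidden neurons.
    dt = t_post[None, :] - t_pre[:, None]          # t_j - t_i for every pair (i, j)
    # Learning window of Eq. 6: one branch for t_j - t_i <= 0, one for > 0.
    delta = np.where(dt <= 0,
                     alpha * phi_r * a_r_pos + beta * phi_p * a_p_pos,
                     alpha * phi_r * a_r_neg + beta * phi_p * a_p_neg)
    # Soft-bounded update of Eq. 7 keeps the weights within [0, 1].
    w += delta * w * (1.0 - w)
    return w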
3.5 Defense–VAE–Sleep

Algorithm 1. Defense–VAE–Sleep
procedure MAIN
  x* = adv-attack(x, y, θ, ε)
  Initialize(vae)
  Train vae(x*, x): minimize L_{θ,φ;x,x*} in the VAE,
    E_{q_φ(z|x*)}[log p_θ(x|z)] − D_KL(q_φ(z|x*) || p_θ(z))
  svae, scales = ConvertVAEToSVAE(vae)
  svae = Sleep(svae, x*, scales): minimize L_{θ,φ;X_V,X_V*} in the SVAE,
    log p_θ(X_V) − KL(q_φ(X_H|X_V*) || p_θ(X_H|X_V))
  vae = ConvertSVAEToVAE(svae)
  Reconstruct x_rec
  Minimize the binary cross-entropy
    C = −(1/n) Σ_{x_rec} ( y ln Q(x_rec) + (1 − y) ln(1 − Q(x_rec)) )
end procedure
We propose a new defense model that uses our proposed Defense–VAE–Sleep algorithm and a target deep learning classifier, trained in an End-to-End (E2E) fashion with random weight initialization, taking into consideration the loss functions of both the Defense–VAE–Sleep algorithm and the classifier. Defense–VAE–Sleep can create latent codes that correctly reconstruct (denoise) adversarial examples so that they are classified into their true classes. The pseudo-code of our Defense–VAE–Sleep algorithm is shown in Algorithm 1. Given a clean image x_m, we use different adversarial attack methods to generate multiple adversarial images x*_{mk}, resulting in a many-to-one mapping between adversarial samples and their clean counterpart, rendering Defense–VAE–Sleep a robust yet generic defense algorithm for different types of attacks. After that, we apply Defense–VAE–Sleep, which consists of two phases, the awake and sleep phases (see Fig. 1 (left)). In the awake phase, we initialize the VAE's parameters and then train the VAE to update the weights based on the VAE's loss function using the reparameterization trick [19]. Then, we apply ConvertVAEToSVAE to transfer the weights from the VAE to the SVAE. In the sleep phase, we train the SVAE using mSTDP, wherein the loss function is optimized with respect to the parameters of the encoder (φ) and decoder (θ); mSTDP lets us optimize based on the ELBO. After that, ConvertSVAEToVAE is applied to transfer the weights from the SVAE back to the VAE. The encoder and decoder in Defense–VAE–Sleep are defined as follows:

z \sim Enc(x^*) = q_\phi(z|x^*), \quad x \sim Dec(z) = p_\theta(x|z) \quad \text{(awake phase)}, \quad (8)

X_H \sim Enc(X_V^*) = q_\phi(X_H|X_V^*), \quad X_V \sim Dec(X_H) = p_\theta(X_V|X_H) \quad \text{(sleep phase)}. \quad (9)
Finally, the reconstructed image x_rec from Defense–VAE–Sleep is used to train a downstream target classifier Q (see Fig. 1 (right)) by minimizing the cross-entropy loss between the target label y and the prediction Q(x_rec) [20]. The loss function for the E2E training pipeline (Defense–VAE–Sleep and the target classifier) is given as follows:

L_{E2E} = L_{VAE} + L_{SVAE} + L_{Cross-Entropy}. \quad (10)
In the test stage (see Fig. 1 (bottom)), we investigate threats and remedies beyond plain classification by: (1) testing the classification accuracy of an adversarial-perturbation-elimination framework that removes the perturbation of adversarial examples before feeding them into the original target classifier trained with clean images before the attack (a defense model); (2) testing the classification accuracy of a defense model with E2E learning (a defense model E2E); (3) testing the classification accuracy of an adversarial image on the original target classifier (No Defense); and (4) testing the classification accuracy of a clean image on the original target classifier (No Attack). In our model, we combine ensemble methods with our defense mechanism [21]; the following ensemble methods are used (a sketch of the voting step follows the list):
1. We train each target classifier (C1, C2, C3, and C4) (see Table 1) multiple times, initialized with different random initial weights, which leads to quite diverse classifiers with different final weights. Once an ensemble of classifiers is trained, it predicts by allowing each classifier in the ensemble to vote for a label; the predicted value is the label with the maximum, or the average, of the softmax probability distributions output by the classifiers.
2. We train Defense–VAE–Sleep with multiple adversarial attacks (white-box and black-box) to obtain a set of adversarial training samples, increasing the model's capability and therefore formulating a defense algorithm that is generic and more robust across a spectrum of perturbations.
3. We train our model using Gaussian-distorted samples in addition to multiple attack algorithms, to make each target classifier more robust against noise/perturbations.
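The voting step of the first ensemble method can be sketched as follows (illustrative Python of ours; each classifier is assumed to return a softmax probability vector):

import numpy as np

def ensemble_predict(classifiers, x, mode="average"):
    # Combine softmax outputs of independently initialized classifiers.
    probs = np.stack([clf.predict(x) for clf in classifiers])   # (n_clf, n_classes)
    if mode == "average":
        return int(np.argmax(probs.mean(axis=0)))               # average softmax vote
    return int(np.argmax(probs.max(axis=0)))                    # maximum softmax vote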
4 Experiments

MNIST and Fashion–MNIST were used to evaluate the performance of our defense model; additionally, CelebA was used to classify celebrity gender (male, female) based on face images.

4.1 Network Architectures and Training Parameters
Table 1. Description of the substitute models and classifiers for white-box and black-box attacks.

C1: Dropout 0.2 | 8 × 8 conv. 64 ReLU, stride 2, padding 5 | 6 × 6 conv. 128 ReLU, stride 2, padding 0 | 5 × 5 conv. 128 ReLU, stride 1, padding 0 | Dropout 0.5 | FC1.10 + Softmax
C2: 3 × 3 conv. 128 ReLU, stride 1, padding 1 | 5 × 5 conv. 128 ReLU, stride 2, padding 0 | Dropout 0.25 | FC1.128 ReLU | Dropout 0.5 | FC2.10 + Softmax
C3: FC1.200 ReLU | Dropout 0.5 | FC2.200 ReLU | Dropout 0.25 | FC3.10 + Softmax
C4: FC1.200 ReLU | FC2.200 ReLU | FC3.10 + Softmax
The details of the VAE and SVAE architectures used to build our Defense–VAE–Sleep are given in Table 2 and Table 3, respectively. In the awake phase, a convolutional VAE model is used. The encoder's outputs (the mean and log standard deviation) parameterize a factorized Gaussian, while the decoder's outputs are parameterized by Bernoulli distributions over the pixels. For the sleep phase, our SVAE model is constructed by mapping the convolution layers of the awake-phase VAE model into spiking convolution layers with LIF neurons. To apply mSTDP to an SVAE layer, the controlling factors are φ_r = 1, φ_p = 0, α = 1, and β = 0. The CNN target classifiers (C1, C2, and C3) and the substitute models [5] of the black-box CNN models (C1, C4) referred to in this section are depicted in Table 1.
Table 2. Details of the encoder and decoder architectures of Defense–VAE–Sleep in the awake phase used in the experiments.

Encoder: Input 64 × 64 × 1 image | 5 × 5 conv. 64 ReLU, stride 1, padding 2, +BN | 4 × 4 conv. 64 ReLU, stride 2, padding 3, +BN | 4 × 4 conv. 128 ReLU, stride 2, padding 1, +BN | 4 × 4 conv. 256 ReLU, stride 2, padding 1, +BN | FC1. 4096, FC2. 4096
Decoder: FC. 128 ReLU | 4 × 4 deconv. 256 ReLU, stride 2, padding 1, +BN | 4 × 4 deconv. 128 ReLU, stride 2, padding 1, +BN | 4 × 4 deconv. 64 ReLU, stride 2, padding 3, +BN | 5 × 5 deconv. 64 ReLU, stride 1, padding 2, +BN
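For concreteness, the encoder in Table 2 corresponds roughly to the following Keras sketch (our own approximate rendering, not the authors' code; padding arguments are adapted to Keras' "same" convention, and the two fully connected heads output the mean and log standard deviation of the latent Gaussian):

import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input((64, 64, 1))
h = layers.Conv2D(64, 5, strides=1, padding="same", activation="relu")(inp)
h = layers.BatchNormalization()(h)
h = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(h)
h = layers.BatchNormalization()(h)
h = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(h)
h = layers.BatchNormalization()(h)
h = layers.Conv2D(256, 4, strides=2, padding="same", activation="relu")(h)
h = layers.BatchNormalization()(h)
h = layers.Flatten()(h)
z_mean = layers.Dense(4096)(h)     # FC1: mean of q(z|x)
z_logstd = layers.Dense(4096)(h)   # FC2: log standard deviation of q(z|x)
encoder = Model(inp, [z_mean, z_logstd])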
Table 3. Values of the spiking-layer parameters of Defense–VAE–Sleep in the sleep phase used in the experiments. Threshold: neuronal firing threshold for each layer of neurons; α_r^+, α_p^+: upper bounds of the weight change; α_r^-, α_p^-: lower bounds of the weight change; k: number of winners used in the k-WTA mechanism [22]; r: radius of lateral inhibition used in the k-WTA mechanism [22].

Layer | Feature maps | Input window | Stride | Padding | Threshold | α_r^+ | α_r^- | α_p^+ | α_p^- | k | r
S1 | 64 | 5 × 5 | 1 | 2 | 36 | 0.004 | −0.003 | 0 | 0 | 5 | 3
S2 | 64 | 4 × 4 | 2 | 3 | 23 | 0.004 | −0.003 | 0 | 0 | 5 | 2
S3 | 128 | 4 × 4 | 2 | 1 | 23 | 0.004 | −0.003 | 0 | 0 | 8 | 1
S4 | 256 | 4 × 4 | 2 | 1 | 36 | 0.004 | −0.003 | 0 | 0 | 1 | 0
Fig. 2. Randomly chosen reconstructions of the latent variables using defense–VAE, defense– GAN, and defense–VAE–sleep (our model) on input perturbed with different white-box attacks on the MNIST dataset. The white–box attacks used are FGSM, RAND+FGSM, and CW.
4.2 Results of White-box Attacks

For white-box attacks, complete knowledge of the network architecture, training data, and saved weights is assumed, and the attacker can leverage any of these to perform an adversarial attack. Such attacks range from using full information, as in gradient-based attacks, to score-based attacks that only utilize the predicted scores of the model [5]. Experiments for three white-box attacks, FGSM [23], RAND+FGSM [21], and CW [24], were
performed. Table 4 depicts the performance of the different classifier models (C1, C2, and C3) across the three datasets under the different attack and defense strategies. Nine adversarial sample sets were generated: 3 different configurations × 3 attack scenarios. For a performance comparison between our model and Defense–VAE [3] and Defense–GAN [5], we include the results of those models (under the same configurations used to evaluate our model) as well. As shown, our model achieves superior performance over the other two models for most attacks and recovers almost all of the accuracy lost as a result of the adversarial attacks. We show examples of adversaries created by white-box attacks in Fig. 2.
Table 4. Classification accuracies of various defense mechanisms under white-box attacks for different datasets: MNIST, Fashion–MNIST, and CelebA. Each cell reports the accuracies for classifiers C1/C2/C3.

MNIST
Attack | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
FGSM (ε = 0.3) | 96.20/99.60/99.20 | 2.20/33.10/3.80 | 96.12/99.01/98.00 | 97.13/99.33/98.60 | 95.92/98.41/97.56 | 95.60/98.90/98.00
RAND+FGSM (ε = 0.3, α = 0.05) | 96.20/99.60/99.20 | 1.70/10.30/5.00 | 96.00/98.70/97.92 | 97.24/99.40/98.45 | 95.83/98.33/97.81 | 94.40/98.50/98.00
CW (l2 norm) | 96.20/99.60/99.20 | 3.20/12.60/3.20 | 92.44/95.67/95.74 | 95.50/97.42/97.68 | 87.66/94.46/83.42 | 91.60/98.90/98.30
Average | 98.33 | 8.34 | 96.62 | 97.86 | 94.38 | 96.91

Fashion–MNIST
Attack | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
FGSM (ε = 0.3) | 74.70/93.30/89.20 | 10.20/13.90/8.20 | 71.09/90.61/87.81 | 79.03/92.02/89.03 | 70.88/85.80/85.36 | 62.90/89.60/87.50
RAND+FGSM (ε = 0.3, α = 0.05) | 74.70/93.30/89.20 | 13.10/10.50/9.10 | 72.55/90.28/88.57 | 79.02/91.07/90.01 | 71.12/86.42/85.77 | 66.10/89.30/86.20
CW (l2 norm) | 74.70/93.30/89.20 | 17.20/6.30/9.00 | 70.11/80.22/82.61 | 74.80/88.89/87.40 | 67.43/78.64/64.38 | 65.60/89.60/87.50
Average | 85.73 | 10.83 | 81.54 | 85.70 | 77.31 | 80.48

CelebA
Attack | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
FGSM (ε = 0.3) | 94.68/94.59/94.76 | 9.95/4.60/6.05 | 91.13/91.83/92.13 | 93.60/93.01/93.13 | 90.05/92.47/90.05 | 90.10/91.45/92.05
RAND+FGSM (ε = 0.3, α = 0.05) | 94.68/94.59/94.76 | 17.85/4.70/6.65 | 91.22/92.08/92.27 | 93.41/94.40/94.01 | 90.55/91.70/91.42 | 89.20/91.80/90.33
CW (l2 norm) | 94.68/94.59/94.76 | 5.75/4.35/6.60 | 92.11/93.53/92.02 | 93.60/94.53/94.20 | 90.65/93.28/91.15 | 73.16/79.15/76.11
Average | 94.68 | 7.39 | 92.04 | 93.77 | 91.26 | 85.93
4.3 Results of Black-box Attacks
Fig. 3. Randomly chosen reconstructions of the latent variables using Defense–VAE for the different datasets (MNIST, Fashion–MNIST, and CelebA) when tested under FGSM black-box attacks.
Black-box attacks are usually carried out in scenarios where the target model is treated as a complete black box and the attacker does not have access to the dataset the model was trained on, although the model can be queried to collect information on certain input/output pairs [25].
A small dataset is created by augmenting samples labeled by the original target model. It is used to train a substitute model, with the hope that the substitute can serve as a surrogate for the actual target model. An adversarial example can then be created by applying an attack to the generated substitute model. We used our Defense–VAE–Sleep model trained under white-box attacks to defend against black-box attacks. Only the FGSM attack, computed based on a substitute model [25], is considered; Table 5 reports the results for the datasets used in our experiments. The performance of the defenses on the Fashion–MNIST dataset is noticeably lower than on MNIST and CelebA. A qualitative analysis of example reconstructions (see Fig. 3) demonstrates that our model (Defense–VAE–Sleep) is more robust than Defense–VAE for black-box attacks. Assigning initial random weights, we also performed E2E training of Defense–VAE–Sleep and a target classifier to improve the target model's accuracy; in some cases depicted in Table 5, it even improves over the original target classifier. In particular, according to the results in Table 5, Defense–VAE–Sleep trained in an E2E configuration outperforms Defense–VAE–Sleep with each network trained independently (without taking the classifier's loss into consideration during training).
Table 5. Classification accuracies of various defense mechanisms under black-box attacks for different datasets: MNIST, Fashion–MNIST, and CelebA (only the FGSM black-box attack was used). Rows are classifier/substitute pairs.

MNIST
Classifier/Substitute | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
C1/C1 | 96.18 | 28.16 | 96.00 | 96.90 | 95.89 | 91.05
C1/C4 | 96.18 | 21.28 | 96.10 | 97.40 | 96.16 | 88.92
C2/C1 | 99.59 | 66.48 | 98.09 | 99.30 | 97.91 | 93.22
C2/C4 | 99.59 | 80.50 | 98.45 | 99.40 | 98.30 | 91.82
C3/C1 | 99.20 | 46.41 | 97.73 | 97.92 | 97.68 | 93.23
C3/C4 | 99.20 | 39.31 | 97.41 | 97.81 | 97.72 | 91.55
Average | 98.32 | 47.02 | 97.30 | 98.12 | 97.23 | 91.63

Fashion–MNIST
Classifier/Substitute | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
C1/C1 | 74.70 | 40.17 | 74.13 | 84.71 | 73.66 | 55.30
C1/C4 | 74.70 | 31.23 | 69.01 | 80.30 | 69.29 | 41.87
C2/C1 | 93.34 | 26.35 | 86.01 | 90.40 | 83.64 | 60.79
C2/C4 | 93.34 | 20.66 | 80.19 | 87.01 | 76.27 | 46.25
C3/C1 | 89.23 | 45.41 | 81.33 | 85.02 | 80.31 | 58.53
C3/C4 | 89.23 | 25.43 | 81.44 | 85.80 | 70.66 | 47.30
Average | 79.54 | 31.54 | 78.69 | 85.54 | 75.64 | 51.67

CelebA
Classifier/Substitute | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
C1/C1 | 93.69 | 20.02 | 89.25 | 90.17 | 88.02 | 70.02
C1/C4 | 93.69 | 18.20 | 86.13 | 88.67 | 85.67 | 74.90
C2/C1 | 95.62 | 22.01 | 92.24 | 94.12 | 91.01 | 60.01
C2/C4 | 95.62 | 10.50 | 88.80 | 90.34 | 87.03 | 68.54
C3/C1 | 94.89 | 19.32 | 90.13 | 92.37 | 89.32 | 74.58
C3/C4 | 94.89 | 7.05 | 87.14 | 89.05 | 86.14 | 60.20
Average | 94.73 | 16.18 | 88.95 | 90.79 | 87.87 | 68.04
4.4 Why is Defense–VAE–Sleep Effective and Efficient?
Why does the sleep phase [1] yield such a leap in the robustness of Defense–VAE [3] to different attacks? It has been shown that the sleep phase based on SVAE [12] tends to promote stronger weights while pruning weaker ones, thus increasing the width of the weight distribution. This consolidates strong relations between neurons at the cost of diminishing weak connections. Strengthening the strong relations between neurons also makes our defense model robust and noise-invariant, producing better generalization while maintaining a high baseline accuracy. Moreover, the VAE–Sleep algorithm [12] uses "weight normalization" [16] as a technique for adjusting the synaptic weights to achieve a lossless transformation from VAE to SVAE. It treats each LIF neuron, without any leak or refractory period, as the activation function owing to its functional resemblance to ReLU. The accuracy reported for training in this way is high compared to a traditional VAE, even for large-scale networks. The results demonstrate that our model is superior to Defense–VAE and Defense–GAN on the benchmark datasets (MNIST, Fashion–MNIST, and CelebA). The CelebA dataset [26] is a face attributes dataset, with 40 attribute annotations per image. To prove the efficacy of Defense–VAE–Sleep over
the other two models tested, multi-label classification was applied to distinguish more than one attribute, such as "attractive", "gray hair", "bald", and "wearing lipstick". Table 6 shows that the images reconstructed by Defense–VAE–Sleep yield better classification accuracy on this multi-label task, supporting our hypothesis that the use of a sleep phase in Defense–VAE results in improved classification accuracy and increased robustness with respect to the number of latent dimensions.
Table 6. Classification accuracies for different numbers of attributes on the CelebA dataset for our model, Defense–VAE, and Defense–GAN under FGSM-based black-box attacks. Rows are classifier/substitute pairs.

2 Attributes
Classifier/Substitute | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
C1/C1 | 92.25 | 18.14 | 90.90 | 91.01 | 90.01 | 65.03
C1/C4 | 92.25 | 16.11 | 85.04 | 89.80 | 78.91 | 55.02
C2/C1 | 94.61 | 20.31 | 91.08 | 94.10 | 83.11 | 62.11
C2/C4 | 94.61 | 8.01 | 89.22 | 92.11 | 75.33 | 69.34
C3/C1 | 93.22 | 17.10 | 90.00 | 93.33 | 82.01 | 69.41
C3/C4 | 93.22 | 5.71 | 87.10 | 90.00 | 78.02 | 55.54
Average | 93.36 | 14.23 | 88.89 | 91.725 | 81.23 | 62.74

4 Attributes
Classifier/Substitute | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
C1/C1 | 89.33 | 15.34 | 84.67 | 89.40 | 76.24 | 63.23
C1/C4 | 89.33 | 14.27 | 82.15 | 87.77 | 74.44 | 52.11
C2/C1 | 91.13 | 18.23 | 86.01 | 91.05 | 78.11 | 55.41
C2/C4 | 91.13 | 6.04 | 85.17 | 91.03 | 69.90 | 55.22
C3/C1 | 90.15 | 15.23 | 86.45 | 90.04 | 77.16 | 58.44
C3/C4 | 90.15 | 4.91 | 83.65 | 88.24 | 74.66 | 55.34
Average | 90.20 | 12.34 | 84.68 | 89.59 | 75.09 | 56.625

6 Attributes
Classifier/Substitute | No attack | No defense | Our model | Our model (E2E) | Defense–VAE | Defense–GAN
C1/C1 | 85.14 | 13.77 | 80.31 | 84.91 | 70.23 | 58.40
C1/C4 | 85.14 | 12.55 | 82.11 | 85.04 | 69.25 | 48.56
C2/C1 | 88.45 | 15.44 | 82.70 | 88.00 | 70.06 | 50.04
C2/C4 | 88.45 | 5.77 | 83.12 | 88.09 | 65.05 | 48.19
C3/C1 | 85.35 | 14.22 | 81.45 | 85.01 | 69.01 | 52.41
C3/C4 | 85.35 | 3.88 | 80.15 | 84.90 | 68.03 | 50.43
Average | 86.31 | 10.99 | 81.64 | 86.00 | 68.61 | 51.34
5 Conclusion and Future Research Directions

The defense strategy inspired by biological processes (the sleep algorithm) presented in this paper yields increased robustness of classification models against adversarial attacks and distortion. Our experiments on standard computer vision datasets demonstrate that Defense–VAE–Sleep provides a better defense against adversarial attacks than the other models compared (Defense–GAN and Defense–VAE). We hypothesize that more realistic feature representations are created because of the sleep phase in Defense–VAE–Sleep, leading to more natural decision boundaries that closely resemble the right classes and thus increasing the robustness of the network. Additionally, a comprehensive analysis of the performance of Defense–VAE–Sleep was presented using qualitative comparisons for different adversarial attacks. Although it is a robust mechanism of defense against adversarial attacks, for some types of attacks classification accuracy deteriorates as robustness increases. Future work includes addressing the deficiencies in our model so that it is robust to more types of attacks, by enhancing the spike encoding of the spiking convolution layer.

Acknowledgement. This work is supported by Google Cloud credits for academic research. We thank the Google Cloud Platform for giving us access to the computing power that will make the next big thing possible.
References
1. Tadros, T., Krishnan, G., Ramyaa, R., Bazhenov, M.: Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks. In: International Conference on Learning Representations (2019)
2. Han, X., et al.: Adversarial attacks and defenses in images, graphs and text: a review. Int. J. Autom. Comput. 17, 151–178 (2020)
3. Li, X., Ji, S.: Defense-VAE: a fast and accurate defense against adversarial attacks. In: Cellier, P., Driessens, K. (eds.) ECML PKDD 2019. CCIS, vol. 1168, pp. 191–207. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-43887-6_15
4. Burbank, K.S.: Mirrored STDP implements autoencoder learning in a network of spiking neurons. PLoS Comput. Biol. 11(12), e1004566 (2015)
5. Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605 (2018)
6. Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
7. Meng, D., Chen, H.: MagNet: a two-pronged defense against adversarial examples. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 135–147 (2017)
8. Osadchy, M., Hernandez-Castro, J., Gibson, S.J., Dunkelman, O., Pérez-Cabo, D.: No bot expects the DeepCAPTCHA! Introducing immutable adversarial examples with applications to CAPTCHA. IACR Cryptol. ePrint Arch., vol. 2016, p. 336 (2016)
9. Gu, S., Rigazio, L.: Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068 (2014)
10. Nayebi, A., Ganguli, S.: Biologically inspired protection of deep networks from adversarial attacks. arXiv preprint arXiv:1703.09202 (2017)
11. Bagheri, A.: Probabilistic spiking neural networks: supervised, unsupervised and adversarial trainings (2019)
12. Talafha, S., Rekabdar, B., Mousas, C., Ekenna, C.: Biologically inspired sleep algorithm for variational auto-encoders. In: Bebis, G., et al. (eds.) ISVC 2020. LNCS, vol. 12509, pp. 54–67. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-64556-4_5
13. Walker, M.P., Stickgold, R.: Sleep-dependent learning and memory consolidation. Neuron 44(1), 121–133 (2004)
14. Roy, S.S., Ahmed, M., Akhand, M.A.H.: Noisy image classification using hybrid deep learning methods. J. Inf. Commun. Technol. 17(2), 233–269 (2018)
15. Ankit, A., Sengupta, A., Panda, P., Roy, K.: RESPARC: a reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks. In: Proceedings of the 54th Annual Design Automation Conference 2017, pp. 1–6 (2017)
16. Rueckauer, B., Liu, S.C.: Conversion of analog to spiking neural networks using sparse temporal coding. In: 2018 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1–5. IEEE (2018)
17. Caporale, N., Dan, Y.: Spike timing-dependent plasticity: a Hebbian learning rule. Annu. Rev. Neurosci. 31, 25–46 (2008)
18. Koch, G., Ponzo, V., Di Lorenzo, F., Caltagirone, C., Veniero, D.: Hebbian and anti-Hebbian spike-timing-dependent plasticity of human cortico-cortical connections. J. Neurosci. 33(23), 9725–9733 (2013)
19. Kim, H., Mnih, A.: Disentangling by factorising. arXiv preprint arXiv:1802.05983 (2018)
20. Zhang, Z., Sabuncu, M.: Generalized cross entropy loss for training deep neural networks with noisy labels. Adv. Neural Inf. Process. Syst. 31, 8778–8788 (2018)
21. Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., McDaniel, P.: Ensemble adversarial training: attacks and defenses. arXiv preprint arXiv:1705.07204 (2017)
22. Maass, W.: Neural computation with winner-take-all as the only nonlinear operation. Adv. Neural Inf. Process. Syst. 12, 293–299 (2000)
23. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
24. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)
25. Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519 (2017)
26. Liu, Z., Luo, P., Wang, X., Tang, X.: Large-scale CelebFaces Attributes (CelebA) dataset. Retrieved August 15, 2018 (2018)
Blockchain Technology and Protocols
Detecting Illicit Ethereum Accounts Based on Their Transaction History and Properties and Using Machine Learning

Amel Bella Baci1, Kei Brousmiche2, Ilias Amal1, Fatma Abdelhédi1, and Lionel Rigaud1

1 CBI2 - TRIMANE, Paris, France. {Amel.Bella-Baci,Fatma.Abdelhedi}@trimane.fr
2 The Blockchain Xdev, Paris, France. [email protected]
Abstract. The Ethereum blockchain has been subject to increasing fraudulent activity in recent years, which hinders its democratization. To detect fraudulent accounts, previous works have exploited supervised machine learning algorithms, following two main approaches. The first approach consists in representing transaction records as a graph in order to apply node embedding algorithms. The second approach consists in calculating statistics based on the amounts and times of the transactions realized. The former approach leads to better results to date. However, transactional data approaches, based on transaction times and amounts only, have not been used to their full potential. This paper adopts a transactional data approach, expanding feature calculation to every transaction property. We study three classification models, XGBoost, SVM Classifier, and Logistic Regression, and apply a feature selection protocol to highlight the most significant features. Our model results in a 26-feature dataset providing an f-score of 0.9654.
1
· Machine learning · Ethereum ·
Introduction
Introduced in 2008 by an anonymous contributor under the alias Satoshi Nakamoto [9], blockchain technology has recently gained a lot of interest from a variety of sectors such as government, finance, industry, health and research. A blockchain can be defined as a continuously growing ledger of transactions, distributed and maintained over a peer-to-peer network [15]. Based on several well-known core technologies, including cryptographic hash functions, cryptographic signatures and distributed consensus, it offers key functionalities such as data persistence, anonymity, fault tolerance, auditability, resilience and execution in a trust-less environment, among others. Its initial purpose consisted in
improving financial flow by removing intermediaries and censorship through an interplanetary network, sustained by a community of anonymous users. While it has been applied in various sectors, its major use case still remains the exchange of monetary and financial value using cryptocurrency, with 927B USD of circulating supply on Bitcoin only and 514B USD on Ethereum (coinmarketcap.com) at the time of writing, without taking into account the market of tokens. A token is represented on the blockchain as a smart contract that embeds a table storing the token balance (i.e. an amount) of each token holder account, and some functions to create, destroy and transfer tokens between accounts while enforcing security checks (e.g. does the sender have enough funds to transfer tokens to the recipient). Any user can initiate his own token, distribute it and start a new economy. As a result, about 200,000 tokens (investopedia.com) that implement the ERC-20 Ethereum token standard have been deployed as of today. Those tokens can be tied to a project, an organization, a service, fiat money funds (e.g. USDT), financial bonds or any other financial instrument. More recently, smart contracts have been used to elaborate complex financial protocols on top of tokens, such as liquidity pools and swap protocols enabling decentralized money exchange, lending protocols, order books and automatic market makers. Those Decentralized Finance (DeFi) protocols, which emerged in mid-2019, hold about 100B USD (statista.com) at the time of writing.

This democratization of cryptocurrencies and tokens has inevitably been followed by an increase in fraudulent activities. In July 2020, the stablecoin Tether blacklisted 39 Ethereum addresses holding millions of dollars worth of USDT (theblockcrypto.com). Among the malicious activities happening on blockchains, we can mention stealing, Ponzi schemes, fake organizations, money laundering and traffic of all kinds. To this day, several hundred million dollars have been stolen over the Ethereum platform (forbes.com). In order to protect citizens and control activities related to the blockchain, new laws and regulations have been proposed around the world. As examples, one can quote Markets in Crypto-Assets (MiCA; europarl.com), a proposal for cryptocurrency regulation in Europe, or the PACTE law (legifrance.com) in France. To proactively enforce the laws and comply with regulations, the blockchain community has to build new tools to automatically detect fraudulent accounts. However, this task is hindered by the anonymity provided by the technology. In the classic financial field, methods to tackle this issue are statistical analysis [10], consortium data (i.e. data shared between different banks) [3] and anomaly monitoring [7]. In the last decade, machine learning has proven to be one of the most efficient tools for financial fraud detection [4]. Following those findings, researchers
started to apply machine learning algorithms to cryptocurrency fraud detection challenges (cf. Sect. 2). In this paper, we present a novel approach based on supervised learning using exhaustive transaction properties on Ethereum. The next section presents an overview of research works that apply machine learning to blockchain data in order to detect fraudulent activity. Section 3 describes our methodology for building the dataset that feeds our model, which is detailed in Sect. 4. We then show, analyze and discuss the results of our model in Sect. 5 before concluding in Sect. 6.
2 Related Work
To tackle the anonymity that the blockchain provides to malicious users, researchers resort to machine learning to detect fraudulent addresses with precision. Lasas et al. (2020) [8] build a dataset of 420 illicit addresses coming from Etherscamdb (github.com/etherscamdb) and 53 licit addresses. Based on their transaction records, they calculate 10 features from the transaction timestamps, the amount of ether and the number of transactions operated. Given those features, they compare three machine learning models: K-Means, Support Vector Classifier and Random Forest. The research concludes that the latter algorithm exhibits the best results, with an F-score of 0.95. Although the results are encouraging, the limited sample size as well as the limited feature space prevent the model from being efficient in real conditions. We assume that working on the dataset processing is the most appropriate way to enhance results at this point.

Yuan et al. [14] and Wu et al. [12] both use richer datasets with 1259 illicit addresses and 1259 licit addresses. They also introduce a graph approach, considering accounts as nodes and transactions as edges. From this representation, one can extract structural information and reveal significant patterns that may distinguish illicit addresses from others. They use node embedding algorithms, node2vec and trans2vec, and extract 64 features. Results show a performance increase when using those techniques compared with a manual method that extracts 8 features, once again related to the amount and timestamp of transactions. The trans2vec algorithm leads to the best results, reaching an F-score of 0.908. [11] also implements a graph approach by using a differentiable graph method [13] and compares it with 10 handcrafted features. These features do not rely on time and amount, but only on structural properties such as the number of nodes, the number of edges or the average degree of nodes. The graph-based works [11,12,14] produce good results; in doing so, they harness the complexity of the structural relations existing in the Ethereum network.

The first common limitation of all previously cited works is that their illicit address blacklists take into account phishing scams only. Unfortunately, scams on the blockchain are not limited to phishing. One can list at least fake
Initial Coin Offerings (ICOs, fundraising on Ethereum), addresses blocked by governments for being related to money laundering activities, etc. At the time of writing, the node embedding approach seems to outperform the time/amount-based feature calculation method. However, we must point out the feature space difference between those two approaches. On one hand, graph methods usually implement 64 features, which transcribe more subtle aspects of account activity. On the other hand, methods based on transactional data never exhibit more than 10 features. We can assume that a dataset with 64 features is more likely to deliver better results than one with 10 features.

Work conducted by [1] maintains the time/amount approach for feature calculation, although, unlike [8,12,14], they calculate 219 features. They take advantage of the graph aspect to implement a cascade feature method. For each node, they first extract features related to the timestamps and amounts of its transactions. In a second step, they extract features based on the amounts of the transactions operated by the node's neighbors. As a result, interesting patterns are brought out, for example the sum of the amounts of all outgoing transactions of the studied node's receivers, which can be interpreted as an indication of the overall financial strength of all victims. Finally, [1] operates the classification with a lightGBM model, combined with a dual-sampling method in order to minimize the impact of the imbalanced dataset (534,820 licit addresses against 323 illicit ones). The use of this dual-sampling method significantly improves results, from a 0.06 F-score to a 0.81 F-score. This approach demonstrates that features calculated from transactional data can lead to satisfying results that are as conclusive as those obtained with a graph approach. Yet, just as in [8,12,14], features are calculated from amounts and timestamps only. A method taking into account a wider set of features, not limited to the notions of time and amount, is yet to be explored. Moreover, data processing includes ether transactions exclusively, leaving token transactions out of scope.

Farrugia, Ellul and Azzopardi [2] implement 44 features that still rely on amount and timestamp, yet ERC20 token transactions are included. Consequently, each feature is calculated for a given transaction direction (i.e. sent or received) and a given asset transferred (i.e. ether or token). The dataset is composed of 2179 illicit addresses and 2502 licit ones, and classification is made with an XGBoost model, reaching a 0.960 F-score. On one side, this outcome stresses the importance of taking into account the entirety of transactions. On the other side, it highlights the need to take precautions when building the dataset. Indeed, a quarter of the features are related to ERC20 token transactions, yet almost two thirds of the non-fraudulent addresses had not performed any ERC20 token transaction. As a result, for the majority of the addresses of one label, a quarter of the features are null, while for the other label (illicit addresses) two thirds of the addresses have made an ERC20 token transaction. By reproducing their approach, an XGBoost model with the same hyperparameters and the exact same data published by [2], it is shown that
misclassification only occurs for addresses that have not made an ERC20 token transaction, potentially indicating the introduction of a bias. Besides, all previously cited works use all Etherscamdb reported addresses indistinctly, mixing Externally Owned Addresses (EOAs) and smart-contract addresses. A smart contract's activity is different from an EOA's, as it usually generates many more transactions. Mixing all types of accounts might also lead to a classification bias.

In this paper, our goal is to deepen the feature calculation approach based on transaction data only. First, all transaction types (i.e. ether, ERC20 token and non-ERC20 token) are observed. Second, features derive not only from timestamp and amount, but also from the gas, gas price and fee parameters that are found in every Ethereum transaction. Moreover, address selection is restricted to EOAs that have an ERC20 token activity. This approach enables us to depict a precise typology of the transactions that characterize a fraudulent account based on its activity. In the next section, we describe our methodology to build the dataset (cf. Sect. 3.1) and to extract the transactional features (cf. Sect. 3.2) from both on-chain and off-chain data sources.
3 Dataset Construction
In this section, we describe each step of the work conducted to construct the dataset. The first step is to determine labelled accounts in order to build a qualified dataset; we achieve this by using certified blacklists. Then, we extract the transaction records of each account. These records enable us to compute a series of features used during the modelling phase. Finally, we proceed to a correlation analysis among our features. Thanks to this analysis, we remove features that are highly correlated and, as a result, unnecessary for our predictive model.

3.1 Labelled Data Collection
Data collection is done in two steps. First, labelled account addresses are determined in order to build a dataset suited to a supervised learning algorithm. Then, transaction records are extracted for every account. We use two publicly available blacklists of Ethereum addresses to constitute our collection of proven fraudulent accounts. The first one, Ethereum Scam Database (github.com/EtherScamDB), is an open-source, collaborative database that lists scams on Ethereum. Ethereum users can report fraudulent addresses by specifying the category of the activity. The frauds reported are most often attempts at faking official websites to hack personal information or at fundraising (called an Initial Coin Offering) to steal money. The second one is a blacklist maintained by Tether and Coinbase, issuers of two fiat-backed stablecoins, that
have been banning addresses since 2020 in collaboration with the US government (cointelegraph.com). For the rest of this document, this blacklist will be referred to as the stablecoins' blacklist. Those addresses were acquired using an SQL request provided by Philippe Castonguay on the website Dune Analytics (dune.xyz). Looking closely into those blacklists, we can see that they contain smart contract addresses as well as personal account addresses (i.e. EOAs). The composition details are described in Table 1. The address type may be related to a certain fraud type; for example, Ponzi schemes are more often executed via smart contracts than via EOAs [6]. In our work, the focus is on EOAs. Since our application field deals with user account addresses, mixing all address types might confuse our model.
Table 1. Blacklisted address types

Address type      Etherscamdb BL   Stablecoin BL
ERC20 token       26               7
Non-ERC20 token   0                47
Smart contract    25               152
EOA               1986             299
Total             2037             505
Finally, we extract all 1,986 blacklisted accounts (EOAs only) of Etherscamdb and the 299 of the stablecoins' blacklist available on 25th October 2021. To obtain the non-fraudulent accounts, we randomly select 20,000 addresses on the Ethereum network that have operated transactions after 2017. We verify that the selected addresses have not made a transaction with either a blacklisted address or a direct correspondent of a blacklisted address. By doing so, we ensure distance between the differently labelled samples. From those two sets of fraudulent and non-fraudulent accounts, we extract the transaction history of each account. This extraction was operated using the Google BigQuery API. For each transaction, we extract: the sender address, the receiver address, the timestamp (date and time at which the block including the transaction was mined), the amount sent, the gas limit (maximum computational power to be consumed to run the transaction), the receipt gas used (the actual computational power used to run the transaction), the gas price (the price the sender decided to pay for the transaction to be included in a block) and the signature method. The latter indicates the action realized by the user through the transaction; the most common method is transfer(), used for sending ether or tokens to someone else. In the case of a token transaction, we also
extract the token address, the token name, the number of decimals and the total supply of the token. A sketch of this extraction step is given below.
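As an illustration, the following sketch shows how such a transaction history could be pulled from the public Ethereum dataset on BigQuery. It is a minimal example, not the authors' actual pipeline: the table and column names follow the bigquery-public-data.crypto_ethereum schema, and the fetch_history helper is a hypothetical name introduced here.

```python
# Hypothetical sketch: pulling the transaction history of labelled addresses
# from the public Ethereum dataset on BigQuery.
from google.cloud import bigquery

client = bigquery.Client()  # assumes local GCP credentials are configured

QUERY = """
SELECT
  from_address,                              -- sender
  to_address,                                -- receiver
  block_timestamp,                           -- when the enclosing block was mined
  value,                                     -- amount sent (in wei)
  gas,                                       -- gas limit set by the sender
  receipt_gas_used,                          -- computational power actually used
  gas_price,                                 -- price offered per unit of gas
  SUBSTR(input, 1, 10) AS method_signature   -- 4-byte method selector, e.g. transfer()
FROM `bigquery-public-data.crypto_ethereum.transactions`
WHERE from_address IN UNNEST(@addresses)
   OR to_address IN UNNEST(@addresses)
"""

def fetch_history(addresses):
    """Return a DataFrame with one row per transaction involving the given addresses."""
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ArrayQueryParameter("addresses", "STRING", addresses)
        ]
    )
    return client.query(QUERY, job_config=job_config).to_dataframe()
```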
Once the labelled accounts are acquired and the transaction records are extracted, the next step is feature extraction, which concludes the dataset construction.

3.2 Feature Extraction
The main purpose of feature extraction is to identify transaction properties that can characterize suspicious behavior and discriminate fraudulent accounts from innocent ones. Our approach includes all the components of a transaction, beyond the amount and the timestamp, in the features. Features are chosen as follows.

Account Data. For a given address, we extract data that characterizes it: its ether balance. The ether balance of an account is the amount of ether owned by that account at the time of the extraction.

Transactional Data. We also extract data that characterize a transaction: the amount, the duration between two successive transactions, the gas, the gas price and the gas limit. Since the gas price is determined by each user, we also include the ratio between the user's gas price and the average (daily and hourly) gas price computed on the whole network. In addition, we apply two kinds of distinction: the direction of the transaction (ingoing or outgoing) and the crypto-asset exchanged (ether, ERC20 token, non-ERC20 token). For numeric values, we compute the minimum, the maximum, the average, the standard deviation and, where relevant, the total sum.

History Data. Finally, we build features from data that characterize the overall activity of the address. We first extract the time between the first and last transactions operated. Then, we extract the most-sent ERC20 token, the most-received ERC20 token and the number of different ERC20 tokens used overall. In addition, we collect the number of different methods used by the user, the number of transactions realized and the number of unique addresses that interact with our dataset sample. Finally, we noticed in our investigation that some transactions may include ether and tokens at the same time; those particular transactions are also implemented as features.

The final step of the feature processing is the dataset reduction based on correlation. Using the Pearson correlation coefficient, we delete features whose correlation with another one is higher than 0.8. Mostly, the variables regarding the ratios between the gas price paid and the average gas price are highly correlated. Deleting highly correlated variables turns our 311-feature dataset into a 153-feature dataset, which has a significant impact on training speed thereafter.
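A minimal sketch of this correlation-based reduction is given below, assuming the features sit in a pandas DataFrame with one row per account; the function name and the greedy keep-first strategy are illustrative choices, not necessarily those of the authors.

```python
# Drop one feature from every pair whose absolute Pearson correlation
# exceeds the threshold (0.8 in the paper).
import pandas as pd

def drop_correlated(features: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
    corr = features.corr(method="pearson").abs()
    to_drop = set()
    cols = list(corr.columns)
    for i, col_i in enumerate(cols):
        if col_i in to_drop:
            continue
        for col_j in cols[i + 1:]:
            if corr.loc[col_i, col_j] > threshold:
                to_drop.add(col_j)  # keep col_i, discard its near-duplicate
    return features.drop(columns=list(to_drop))

# reduced = drop_correlated(features)  # 311 columns reduce to 153 in the paper
```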
4 A Robust and Performant Model
The goal of this section is to build a robust and generalized model that could be applied to any new dataset. This modelling is conducted in three steps. First,
we identify our baseline model by comparing different classification algorithms. Then, we proceed to a hyperparameter optimization as well as a feature selection. Finally, we analyze misclassified samples. As evaluation metrics, we use the recall, the precision and the F-score [5].
4.1 Baseline Model
XGBoost, SVM and Logistic Regression Comparison. We compare three classification models: Logistic Regression, Support Vector Machine (SVM) and XGBoost. Those algorithms were chosen because they are fundamentally different classification models. While Logistic Regression is based on a regression approach, the purpose of the SVM classifier is to find the hyperplane that separates the data when mapped into a higher-dimensional space, and the XGBoost model is based on decision trees. In addition, these models are easy to tune compared to neural networks. The process is as follows: we first train and test the three models without any hyperparameter tuning, and check whether one or two models outperform the others.

Table 2. Classification results

         Accuracy   F1-score   Precision   Recall
XGB      0.99       0.94       0.90        1
SVM      0.98       0.86       0.78        0.96
Log reg  0.98       0.86       0.86        0.86
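A sketch of how this untuned comparison might be run is shown below; X and y stand for the feature matrix and labels built in Sect. 3, and the train/test split parameters are illustrative assumptions rather than values reported in the paper.

```python
# Baseline comparison of the three untuned classifiers.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "XGB": XGBClassifier(),
    "SVM": SVC(),
    "Log reg": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test), digits=2))
```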
XGBoost Optimization. In our experiments, XGBoost outperforms the SVM model and the Logistic Regression on every evaluation metric. As a result, the remaining work is based on this model. The optimization process is operated using the GridSearchCV function of the scikit-learn package (scikit-learn.org). This function takes as argument several candidate values for each XGBoost hyperparameter; it then fits the model with each possible combination of hyperparameters and returns the optimal one. We operate three rounds of optimization, using 5-fold cross-validation. Finally, we end up with the following hyperparameters: n_estimators = 10, learning_rate = 0.7, max_depth = 5, min_child_weight = 3, subsample = 0.8, colsample_bytree = 0.8, gamma = 2.
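One round of this search might look like the following sketch; the candidate grids are illustrative assumptions, since the paper reports only the final values.

```python
# Grid search with 5-fold cross-validation around the retained values.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_grid = {
    "n_estimators": [10, 50, 100],
    "learning_rate": [0.3, 0.5, 0.7],
    "max_depth": [3, 5, 7],
    "min_child_weight": [1, 3, 5],
    "subsample": [0.8, 1.0],
    "colsample_bytree": [0.8, 1.0],
    "gamma": [0, 1, 2],
}
search = GridSearchCV(XGBClassifier(), param_grid, scoring="f1", cv=5)
search.fit(X_train, y_train)
print(search.best_params_)  # the paper retains n_estimators=10, learning_rate=0.7, ...
```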
4.2 Feature Selection
In this section, we detail the method used to highlight the most meaningful features, and the most meaningful type of transaction (which asset and which direction). 14
In order to determine the features that are the most meaningful for the model, we undertake the following protocol: first, we train the model a hundred times with random seeds. For each training run, we save the features that obtained a feature importance greater than 0, meaning that they were included in the XGBoost trees and thus played a role in the classification, and calculate their average importance. This value converges after 40 runs, so we can set a final feature ranking after one hundred runs. We use the feature importance available in the XGBoost package, which attributes to each feature the impact it had on the classification. In the XGBoost model, feature importance can be calculated in three different ways, resulting in three different metrics: weight, gain and cover. The weight is the number of times a feature appears in a tree. The gain is the average gain of the splits which use the feature; in other words, the gain expresses how significant a feature is in the classification process. Finally, the cover is the average coverage of the splits which use the feature, where coverage is defined as the number of samples affected by the split. Gain is chosen as the comparative metric since it clearly shows the contribution of each feature.

Finally, 141 features stand out of the 153 initial ones. Gain-wise, around 26 features stand out. Those features, along with their average gain, are displayed in Fig. 1. Red features are those implemented for the first time in this study; out of the 26 features, 15 are new. Finally, the same XGBoost obtains an F-score of 0.9631 with 153 features and of 0.9654 with 26 features. By identifying the most relevant features, we not only improve the model performance but also significantly reduce the training time.
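The selection protocol can be sketched as follows, assuming the X_train and y_train of the previous section; averaging gains over seeds in a dictionary is our illustrative reading of the protocol, not the authors' published code.

```python
# Train the tuned model a hundred times with different seeds and average
# the gain-based importance of every feature that appears in at least one tree.
from collections import defaultdict
from xgboost import XGBClassifier

gains = defaultdict(list)
for seed in range(100):
    model = XGBClassifier(n_estimators=10, learning_rate=0.7, max_depth=5,
                          min_child_weight=3, subsample=0.8,
                          colsample_bytree=0.8, gamma=2, random_state=seed)
    model.fit(X_train, y_train)
    # get_score() only returns features with importance > 0
    scores = model.get_booster().get_score(importance_type="gain")
    for feature, gain in scores.items():
        gains[feature].append(gain)

ranking = sorted(gains, key=lambda f: sum(gains[f]) / len(gains[f]), reverse=True)
print(ranking[:26])  # the paper's final model keeps the top 26 features
```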
5 Results and Discussion

5.1 Feature Cartography and Fraudster Profiling
We want to identify which aspect of a transaction is the most significant for our model. There are three categories of aspects: the direction of the transaction, the asset transferred and the transaction parameter, such as the amount or the gas, as described in Sect. 3.2. To highlight the most impactful aspect, we count the occurrences of each aspect in our top-141 feature list. For the direction, features are equally distributed. In contrast, there is a large majority of ERC20-transaction features in the top 141, with 81 features related to ERC20 transactions. Results therefore show that the direction of the transaction does not make a significant difference, but that ERC20 token transactions are particularly meaningful.

[Fig. 1. Average gain for each feature in the top 26. The bar chart ranks the 26 features by average gain; the highest-ranked ones include the number of non-ERC20 transactions sent, the minimum gas of non-ERC20 transactions sent and the number of interlocutors of sent ERC20 transactions.]

Comparing feature categories is not straightforward: some categories are made of many features while others contain a single feature. Therefore, it is necessary to compare not only the number of appearances but also the total gain, as well as the relative gain (total gain divided by the number of appearances). As shown in Fig. ??, some categories that are very present in the top 141 do not have a significant total gain, and vice versa. As a demonstration, the number of unique methods is in our case the most impactful category, since it has the biggest relative gain. Crossing those three metrics, we can conclude that the five most significant categories are: the number of transactions, the gas limit, the number of different methods used, the number of different interlocutors and the fees.
5.2 Misclassified Samples
The final model, XGBoost with 26 features, delivers an F-score of 0.9654, and its confusion matrix indicates 26 false positives. In order to build an intuition about why those accounts have been misclassified, we observe the distribution of samples according to two different features at a time. Figure 2 shows three interesting distributions. In these scatterplots, non-fraudulent accounts are represented with beige dots, while fraudulent accounts are represented in purple (dark and light for well classified and misclassified respectively). The graphs show that in those three cases fraudulent accounts are gathered in the same area, and that false positives (in light purple) are usually nearer to innocent accounts than to fraudulent ones. These three graphs also give us information about how the samples form clusters. The major difference between fraudulent and non-fraudulent accounts is that the former rarely send ERC20 or non-ERC20 tokens; as seen in Fig. 2, the majority of purple dots are gathered in the left area. It appears that fraudulent accounts that have sent at least one token transaction are more likely to be misclassified. The second difference is the maximum duration between two transactions: once the 20 million second threshold is exceeded, the model struggles to detect fraudulent accounts.
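A sketch of how such a pairwise inspection could be produced is given below; the DataFrame columns label and pred are assumed to hold the true class and the model's prediction, and the color choices merely approximate those of Fig. 2.

```python
# Plot samples along two features at a time, distinguishing innocent accounts,
# well-classified fraudulent accounts and misclassified fraudulent accounts.
import matplotlib.pyplot as plt

def scatter_two_features(df, fx, fy):
    innocent = df[df["label"] == 0]
    caught = df[(df["label"] == 1) & (df["pred"] == 1)]
    missed = df[(df["label"] == 1) & (df["pred"] == 0)]
    plt.scatter(innocent[fx], innocent[fy], c="wheat", s=8, label="innocent")
    plt.scatter(caught[fx], caught[fy], c="indigo", s=8, label="fraud, classified")
    plt.scatter(missed[fx], missed[fy], c="violet", s=8, label="fraud, misclassified")
    plt.xlabel(fx)
    plt.ylabel(fy)
    plt.legend()
    plt.show()

# e.g. scatter_two_features(df, "Nb trnx sent nonerc20", "max duration all dir all assets")
```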
Finally, the model delivers great results, and some improvement levers are easily identified. However, these improvements must be carried out carefully. In our case, we could simply add trees or leaves to our XGBoost model, but this would lead to overfitting issues.
Fig. 2. Scatterplots of samples: innocents in beige and fraudulent accounts in purple (dark and light for well classified and misclassified respectively) (Color figure online)
6 Conclusion and Future Work
To conclude, with respect to the state of the art, we can assert that the transactional data approach leads to results as good as the graph approach. With a highly imbalanced dataset of 19,000 samples and 26 features, we obtain strong results, meaning that it is possible to implement highly discriminating features based only on the transaction records. First, this study confirms that the XGBoost model remains one of the most performant supervised models for fraud detection on Ethereum. Second, our feature extraction method not only leads to good results but also facilitates model interpretability. The major benefit of this method is its partitioned way of calculating features: we take an aspect of the transaction record (the number of transactions, for example) and we calculate it for every direction and every type of asset. By doing so, significant differences between fraudulent and non-fraudulent accounts arise. In our case, most fraudulent accounts have not sent any non-ERC20 token transaction, giving our model a very highly discriminating feature. The incorporation of new transactional information, such as gas or fees, reveals that time and amount are far from being the most significant information, although the duration between transactions remains impactful for our model. Finally, we identified five main feature categories: the number of transactions, the gas limit, the number of different methods used, the number of different interlocutors and the fees. Fees, gas limit and the number of different methods used prove to be even more revealing than the classic time- and amount-based features. As for the number of transactions and the number of different interlocutors, those two types of features are very close to the ones calculated with a node embedding approach. Indeed, they both characterize the relations that a graph representation would directly
reveal. This result encourages the implementation of a model combining features resulting from node embedding with features resulting from transaction records in order to further enhance precision.
References

1. Chen, W., et al.: Phishing scam detection on Ethereum: towards financial security for blockchain ecosystem. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pp. 4506–4512 (2020)
2. Farrugia, S., Ellul, J., Azzopardi, G.: Detection of illicit accounts over the Ethereum blockchain. Expert Syst. Appl. 150, 113318 (2020)
3. Greene, M.N., et al.: Divided we fall: fighting payments fraud together. Econ. Perspect. 33(1), 37–42 (2009)
4. Hilal, W., Gadsden, S.A., Yawney, J.: Financial fraud: a review of anomaly detection techniques and recent advances. Expert Systems with Applications (2022)
5. Hossin, M., Sulaiman, M.N.: A review on evaluation metrics for data classification evaluations. Int. J. Data Min. Knowl. Manage. Process 5(2), 1 (2015)
6. Jung, E., Le Tilly, M., Gehani, A., Ge, Y.: Data mining-based Ethereum fraud detection. In: 2019 IEEE International Conference on Blockchain (Blockchain), pp. 266–273. IEEE (2019)
7. Kim, Y., Kogan, A.: Development of an anomaly detection model for a bank's transitory account system. J. Inf. Syst. 28(1), 145–165 (2014)
8. Lasas, K., et al.: Fraudulent behaviour identification in Ethereum blockchain (2020)
9. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system. Decentralized Bus. Rev. 21260 (2008)
10. Perols, J.: Financial statement fraud detection: an analysis of statistical and machine learning algorithms. Auditing: J. Pract. Theor. 30(2), 19–50 (2011)
11. Wang, J., Chen, P., Yu, S., Xuan, Q.: TSGN: transaction subgraph networks for identifying Ethereum phishing accounts. In: Dai, H.-N., Liu, X., Luo, D.X., Xiao, J., Chen, X. (eds.) BlockSys 2021. CCIS, vol. 1490, pp. 187–200. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-7993-3_15
12. Wu, J., et al.: Who are the phishers? Phishing scam detection on Ethereum via network embedding. IEEE Trans. Syst. Man Cybern. Syst. (2020)
13. Ying, Z., et al.: Hierarchical graph representation learning with differentiable pooling. arXiv preprint arXiv:1806.08804 (2018)
14. Yuan, Q., et al.: Detecting phishing scams on Ethereum based on transaction records (2020)
15. Zheng, Z., et al.: Blockchain challenges and opportunities: a survey (2016)
Identifying Incentives for Extortion in Proof of Stake Consensus Protocols

Alpesh Bhudia1(B), Anna Cartwright2, Edward Cartwright3, Julio Hernandez-Castro4, and Darren Hurley-Smith1
1 Royal Holloway, University of London, Egham, UK
[email protected]
2 Oxford Brookes University, Oxford, UK
3 De Montfort University, Leicester, UK
4 University of Kent, Canterbury, UK
Abstract. A distributed consensus algorithm is at the core of what makes cryptocurrencies a decentralised ledger; such algorithms are the tools that facilitate agreement between millions of users worldwide on what the playing rules are going to be, as well as the punishments and rewards for (dis)obeying them. The first cryptocurrency, Bitcoin, popularised the proof-of-work puzzle-solving algorithm, in the form of block mining, to process and validate transactions on the blockchain. However, several limitations of proof-of-work, such as its enormous energy demand, significant (and increasing) computational power requirements, and lack of scalability, led blockchain enthusiasts and researchers to construct alternatives. One prominent alternative is proof-of-stake: a mechanism that relies not on mining power but on the amount of stake owned by a node, allowing randomly selected validators to create blocks and verify blocks created by other validators. A proof-of-stake mechanism naturally results in the formation of staking pools, whereby multiple stakeholders can pool their resources to earn rewards. In this paper we explore the likely evolution of a competitive staking pool market. We pay particular attention to the importance of security. Staking pools could be subject to a range of attacks by malicious actors, so secure staking pools are essential for a well-functioning proof-of-stake currency.
Keywords: Ransom · Proof-of-stake · Cryptocurrencies · Blockchain

Research partly supported by Ethereum Foundation Grant #FY21-0378 'Game theoretic modelling of a ransomware attack on validators in Ethereum 2.0'.

1 Introduction
Since the introduction of Bitcoin (a peer-to-peer version of electronic cash) in 2008 [25], cryptocurrencies have gained widespread popularity and adoption, both as payments and as a lucrative asset among investors looking to diversify
their investment portfolios. In January 2021, the market capitalisation of cryptocurrencies reached $1 trillion for the first time, while the latest valuation has exceeded $2 trillion [28]. Although Bitcoin (the largest cryptocurrency by market capitalisation) is the best known and most widely adopted cryptocurrency, there exist over 6000 other cryptocurrencies [10], referred to as altcoins. The technology at the core of many cryptocurrencies relies on a decentralised ledger known as the blockchain, which allows all transactions across a peer-to-peer network to be secured by cryptography [22]. In order for a blockchain to function on a global scale, the public ledger needs efficient security along with the ability to achieve common agreement on a single data value among distributed processes or systems. This is achieved through the implementation of a consensus algorithm, which defines a set of rules that all nodes in the network must follow to add new blocks to the chain.

Proof-of-work (PoW) is one such blockchain consensus algorithm, first implemented in Bitcoin. In the PoW algorithm, miners (participating nodes in the network) are required to perform work, using scarce physical resources in the form of electricity and dedicated mining equipment, to solve computationally rigorous puzzles in order to validate and create new blocks of transactions in the cryptocurrency's blockchain [4]. Miners compete through a brute-force mechanism to be the first to find a valid solution that matches the given threshold or set level of difficulty. Once it is found, the miner broadcasts it to the network and the block is added to the blockchain. Since miners around the world compete to solve the same difficult puzzle at the same time, and only one of them (the winner of the race) can update the blockchain with the latest verified block of transactions, this process results in a tremendous amount of wasted electrical energy. A recent study shows that Bitcoin alone consumes more energy than the country of Argentina [3]. As a result, many new and existing cryptocurrencies, such as Ethereum, are migrating from Bitcoin's legacy PoW system to proof-of-stake (or a hybrid approach, such as Decred [16]), as it addresses scalability and sustainability challenges [14,18,20].

In proof-of-stake (PoS) consensus-forming, owners of a cryptocurrency can stake, i.e. risk, their coins (the amount of coins required varies across cryptocurrencies) as locked collateral for a chance to participate in the block creation and validation process of the blockchain and, in return, receive crypto rewards for their contribution. Those that choose to stake currency in this way become validators. Validators perform tasks similar to those of miners in a PoW system, adding blocks to the chain. Validation, however, requires minimal computational power and so is more efficient than PoW in this regard. Validation is also more distributed, in the sense that anyone with sufficient stake can become a validator and earn rewards. PoW, by contrast, has a tendency to become dominated by the mining pools that have the greatest computational power [11]. In a long-run equilibrium, the rewards from being a validator in a PoS system should approximate the market return on the staked currency [17], e.g. a return of 5–15% per annum.
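To make the brute-force mechanism concrete, the toy sketch below searches for a nonce whose hash falls below a difficulty target; this is a didactic simplification of real mining, not any specific client's implementation.

```python
# Toy PoW puzzle: find a nonce such that sha256(data || nonce) < target.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # valid solution; the miner broadcasts it to the network
        nonce += 1

# mine(b"block header") needs about 2**difficulty_bits attempts on average,
# which is the source of PoW's energy cost.
```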
In a PoW system there is relative separation between the users and the miners of the currency. In a PoS system, by contrast, there will be much more overlap between users and validators. In particular, small stakers, who have insufficient currency to operate a validator node, and/or investors, who are not interested in operating validator nodes themselves, can put their stake into a staking pool [13,15]. A staking pool allows a combined stake sufficient to operate, potentially, multiple validators. Third parties can operate staking pools as a business, running validators on behalf of clients for a share of the staking rewards. Existing staking pools on Ethereum 2.0, for instance, include RocketPool, StakeWise, LidoFinance and Stkr. Staking pools compete with each other for investors, creating a competitive market [15]. In this paper we explore the likely evolution of that market, with a particular focus on security.

While a PoS system has many advantages, it also creates new opportunities for malicious actors. PoW has proved relatively robust to malicious activity, with the opportunities for criminals to profit primarily restricted to the theft of private keys or the creation of fraudulent exchanges that mislead investors. A PoS system, by its very nature, is more vulnerable to attack because currency is staked as part of validation. Staking pools, in particular, offer various routes for malicious activity. For instance, a staking pool can be set up by a malicious actor wanting to disrupt the currency [13]. Staking pools could also be subject to extortion attacks by actors who have infiltrated a pool and obtained validator keys. Access to the keys would allow an attacker to perform actions that would cause penalties or slashing, and hence loss of stake, for the staking pool. Security should, therefore, be critical to any PoS staking pool market.

We proceed as follows. In Sect. 2 we provide a background to PoS systems. In Sect. 3 we present an economic model of the staking pool market and discuss the role of security. In Sect. 4 we conclude.
2 Background
Sunny King and Scott Nadal created Peercoin (PPC) in 2012, which became the first cryptocurrency to adopt the PoS mechanism [19]. Since then, many cryptocurrencies have followed this route in an attempt to solve the problem of PoW mining's high energy consumption [2,24]. For instance, Ethereum is slowly transitioning from a PoW to a PoS consensus mechanism, improving the network's security and scalability. In PoS, validators stake their coins, which serves as an economic incentive to act in the network's best interests. On the basis that a validator would not devalue its own assets, stakeholders accept the responsibility of maintaining the security of the blockchain. To become a validator, a coin owner must stake a specific amount of coins. The exact amount varies depending on the cryptocurrency: in Ethereum 2.0, a minimum of 32 ETH is required (approximately $58,000 in June 2022), whereas in Algorand it is 1 or more ALGO coins (approximately $0.5). There are several variations of PoS systems currently in existence (a few prominent ones are shown in Table 1), each with its own solution to achieve effective,
resource-efficient network governance. Some of these include Pure PoS (PPoS), Delegated PoS (DPoS), Hybrid Proof of Stake (HPoS), Nominated PoS (NPoS) and Liquid PoS (LPoS). Tezos, for instance, is a self-amending blockchain with features similar to Ethereum's, such as smart contracts. It uses the liquid proof-of-stake approach, which allows coin owners to withdraw their stake from a validator (baker) at any time with no lock-up period, unlike Ethereum. In addition, the delegated funds never leave the owner's wallet; owners rather delegate their rights to a validator to participate in the blockchain on their behalf and collect rewards [5].

Table 1. Summary of key features in prominent PoS blockchains

               Type   Min Stake   Slashing   Penalty*   Rewards†
Algorand       PPoS   ✓           ✗          ✗          10.05%
Cardano        DPoS   ✗           ✗          ✗          5.07%
Cosmos         PoS    ✓           ✓          ✓          14.22%
Ethereum 2.0   HPoS   ✓           ✓          ✓          4.81%
Polkadot       NPoS   ✓           ✓          ✓          14.02%
Solana         DPoS   ✓           ✓          ✓          5.79%
Tezos          LPoS   ✓           ✓          ✓          5.3%

* Penalty incurred for inactivity or incorrect attestations.
† Estimated staking rewards a validator could earn per year, based on [27].
As Table 1 shows, many PoS systems use a range of penalties, including the slashing of validator balances, as a mechanism to prevent malicious action against the blockchain and maintain its integrity [6]. Penalties and slashing can promote security, honest network participation, and the availability of validators to perform their duties [12]. In particular, large slashing penalties, which in Ethereum can equal the entire stake, (i) incentivise the validator to put in effort so as to avoid 'honest' mistakes, and (ii) make a malicious attack on the network expensive and unattractive. Two common cases in which a validator can be penalised are downtime (the validator is absent from signing transactions) and double signing (the validator signs two or more blocks at the same height). A criminal who has breached a staking pool in, say, Ethereum and obtained the relevant signing keys could deliberately perform actions that would result in slashing, and a consequent loss of stake, for the compromised validators. The potential loss of stake provides a medium through which criminals could extort a staking pool: in short, they could threaten to force slashing unless a ransom is paid. In PoS systems such as Ethereum 2.0, the validator keys, used continuously online to perform validation activities, are typically less secure than the withdrawal key, which is used only to withdraw stake. An attacker may, thus, be able to obtain the validator key and disrupt validation, but not directly steal stake. A compromised validator would have little chance of avoiding forced slashing.
Not all systems implement penalty or slashing strategies in their design; prominent examples are Cardano, Algorand and Avalanche. Algorand implements a unique variation of proof of stake based on a new Byzantine Agreement protocol. It is known to be a highly democratised form of PoS, with a low minimum staking requirement of 1 ALGO coin to participate in and secure the network. The project was launched in 2019 with the aim of accelerating transaction speed and reducing the time it takes to process and finalise transactions on its network. The system uses algorithmic randomness to select, from the set of validators, a validator (verifier) responsible for constructing the next block of valid transactions. The most notable feature of Algorand is that all block rewards are distributed proportionally to all coin owners (who must hold a minimum of 1 coin, the reward being based on the amount staked) rather than only to the validators. The protocol lacks a mechanism to punish dishonest validators on the network, i.e. slashing, which is not deemed a requirement since it is impossible to fork the blockchain due to how the Algorand consensus process works [9,14].

Cardano implements a delegated proof-of-stake (DPoS) based on Ouroboros [18], allowing ADA coin owners who have no desire to run their own validator nodes and participate in the network to transfer all or some of their stake to a stake pool and be rewarded for the amount staked [26]. The DPoS system is one of the fastest blockchain consensus mechanisms, and it can handle a higher number of transactions than a PoW system. In addition, the system allows all coin holders to play a role in influencing network decisions. Similarly to Algorand, Cardano does not have any mechanism to punish dishonest validators; instead, its security is based on the opportunity cost of losing rewards.

While currencies like Algorand and Cardano are not open to the extortion threat of slashing, a successful attack on stake pool operators could still impact their ability to participate in the blockchain. This would negatively affect rewards [1,7,8]. In a competitive market, this would likely damage the reputation of the staking pool (or indeed the currency) sufficiently to reduce investment in the pool. A criminal could, again, leverage this threat to extort the staking pool: in short, an attacker could threaten to disrupt the pool's reputation unless a ransom is paid.
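As a toy illustration of the stake-weighted random selection underlying validator choice in systems like Algorand, consider the sketch below; real protocols use verifiable random functions rather than a library RNG, so this is purely didactic.

```python
# Pick a block proposer with probability proportional to stake.
import random

def select_validator(stakes: dict) -> str:
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

# select_validator({"A": 32.0, "B": 64.0, "C": 4.0}) picks B twice as
# often as A; a validator's expected reward scales with its stake.
```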
3 Economic Analysis of the Staking Pool Market
In this section we describe a simple model with which to explore an attack in the staking pool market of a PoS system. We assume that time runs as t = 0, 1, .... There is a set of investors N = {1, ..., n} who want to use PoS as a way to earn financial returns; their sole objective is to maximise the expected return on their investment. Let m_i(t) denote the amount of currency investor i has to invest at time t. Let M(t) = Σ_i m_i(t) denote the total balance of the currency. We assume that investors have no interest in operating validator nodes themselves but do want to earn the rewards that come from staking currency. They are, thus, interested in investing in a staking pool.

The costs to operate a staking pool are relatively minimal for existing PoS currencies. This means the market is characterised by relatively free entry and
exit. We take as given a set of staking pools K = {1, ..., k}. A staking pool offers a service to investors whereby it takes their money, performs validation activities and gives a financial return. Let y_k(t) ⊂ N denote the set of investors staking with pool k at time t. Let I_k(t) = Σ_{i ∈ y_k(t)} m_i(t) denote the total amount invested in staking pool k at time t. We assume that if operating 'normally' the staking pool will accumulate a return of α in each period from staking rewards; in other words, ceteris paribus, the investment grows from I_k(t) to (1 + α)I_k(t). We assume that if it performs maliciously the staking pool will receive a penalty of β; in other words, ceteris paribus, the investment falls from I_k(t) to (1 − β)I_k(t). We do not rule out β = 0, to capture currencies with no punishment mechanism. When β > 0, the penalty effectively 'burns' part of the stake and currency, in the sense that it is irretrievable. Slashing penalties may depend, as they do in Ethereum, on the amount of recently slashed balances, but modelling that is not critical to our analysis here.
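These dynamics are simple enough to simulate; the sketch below is a minimal illustration rather than part of the paper's analysis, and all the parameter values in the example are assumptions.

```python
# Evolve I_k(t): growth factor (1 + alpha) in normal periods,
# penalty factor (1 - beta) in malicious periods.
def evolve_stake(I0: float, alpha: float, beta: float,
                 malicious_periods: set, horizon: int) -> list:
    I, path = I0, [I0]
    for t in range(horizon):
        I *= (1 - beta) if t in malicious_periods else (1 + alpha)
        path.append(I)
    return path

# e.g. a $1M pool earning alpha = 0.1% per period, slashed by half at t = 10:
# evolve_stake(1e6, alpha=0.001, beta=0.5, malicious_periods={10}, horizon=20)
```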
3.1 Malicious Activity Against Staking Pools
A malicious attacker could disrupt a staking pool in various ways. First, and most obviously, an attacker could infiltrate the staking pool to such an extent that they can simply withdraw and steal the stake. Clearly this is highly damaging financially for the investors. Given that the amount invested in a staking pool, I_k(t), may run into millions of dollars, the incentives for an attacker to infiltrate a staking pool are considerable. There is, for instance, a considerable insider threat in a sector with relatively little experience of managing such risk [21]. We cannot, therefore, rule out highly sophisticated and targeted attacks.

Even if the outright theft of the stake is not possible, there are other threats a malicious actor could employ. Consider, for instance, an attacker who has obtained the validator signing keys of pool k through a breach, by corrupting an employee, or similar. This would allow the attacker to enact malicious activities that would force a penalty in period t. The size of this penalty would be P(t) = βI_k(t). The attacker could, thus, threaten to force a penalty unless a ransom of P(t) or similar is paid. Given that the investors in the staking pool face losing P(t) in currency, it would be in their collective interest to pay. In practice the attacker may ask for considerably less than P(t) to better incentivize payment of the ransom. This, though, is still highly damaging for the staking pool and the investors involved.

Any breach, whether theft or extortion, would be highly damaging to the reputation of a staking pool. Indeed, in a competitive market it is unlikely a staking pool would be able to survive a damaging high-profile attack. This can also be leveraged by an attacker to extort money: in particular, an attacker can threaten to harm the reputation of a staking pool unless a ransom is paid. Moreover, a staking pool subject to such a threat may simply decide to liquidate and exit the market. This, again, can mean investors lose significant amounts of stake. Our objective here is not to identify all the ways in which a staking pool could be attacked but merely to highlight staking pools' vulnerability to attack. This is
something that investors should take into account when choosing which staking pool to invest in.
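As a worked illustration of the extortion calculus above, the sketch below computes the penalty an attacker can credibly threaten and a demand below it; the 50% discount is purely an assumption made for the example, not a value from the model.

```python
# Maximum credible extortion demand given the forced penalty P(t) = beta * I_k(t).
def max_credible_ransom(I_k: float, beta: float, discount: float = 0.5) -> float:
    penalty = beta * I_k         # stake destroyed if the attacker carries out the threat
    return discount * penalty    # demanding less than the loss makes paying rational

# A pool with I_k = $10M facing a full slash (beta = 1) stands to lose $10M;
# max_credible_ransom(10e6, 1.0) suggests a $5M demand under the assumed discount.
```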
3.2 Competition on Price Versus Security
Staking pools can compete for investors across a number of dimensions, including joining fees, freedom to withdraw money, etc. Here we simplify the analysis and focus on two dimensions: commission and security. Specifically, we assume that pool k charges commission c_k on all rewards. Thus, the pool makes revenue c_k·αI_k(t) in period t, which we assume is withdrawn from the pool, and the investor receives return (1 − c_k)α. Hence, in normal operations, m_i(t + 1) = m_i(t)(1 + (1 − c_k)α) if investor i invests in pool k. Pools can differ in their commission rates [23]. Investors would obviously, everything else the same, prefer a pool with a lower commission.

As we have seen in the previous subsection, the security of a staking pool is critical. Staking pools, thus, need to make a strategic investment in security (or lack of security). Let s_k denote the amount pool k invests in security, which we assume for simplicity is a one-time decision that costs w(s_k). We assume that pool k incurs a fixed cost F from operating a staking pool, a marginal cost c for each unit of currency invested in each period, and the cost w(s_k) from investing in security. This implies economies of scale, meaning that bigger staking pools have a lower average cost of operation. Thus, everything else the same, larger staking pools can either offer a lower commission or invest more in security while still keeping commission rates competitive.

We now consider two scenarios. First, suppose that investors are not at all concerned about security; for instance, they are complacent or over-confident and completely discount the potential threat of losing stake. In this setting we would, in equilibrium, obtain a 'race to the bottom' in which staking pools invest 0 in security. To explain why, consider time 0. Given that investors are solely focused on financial returns, they will invest in the staking pool charging the lowest commission. We, thus, obtain a setting of perfect price competition. Given that there are economies of scale, the larger the staking pool, the lower the average operating cost and, therefore, the lower the commission needed to break even. A monopoly staking pool, therefore, that has all the investment, would have the lowest possible average cost, given by AC_M = (F + w(M(0)))/M(0). In equilibrium we would, therefore, see staking pools charging a commission consistent with AC_M and investing 0 in security. Moreover, economies of scale point to a monopoly staking pool. As we discuss in the conclusion, this is highly risky for investors and the currency.

Consider next a scenario in which investors are very concerned about security and want to invest in the most secure staking pool that will offer a normal rate of return on their investment. We saw in the previous subsection that the profits to a malicious actor from attacking a staking pool are increasing in the amount staked in the pool. It, therefore, seems reasonable to assume that, of two staking pools with the same investment in security, the smaller pool is considered less likely to be the target of an attack. In this setting, at the equilibrium
we would expect to observe all staking pools being equally sized and investing equally in security. The commission would be such as to cover the costs of security and of validation while still offering a normal return to investors. Note that the expected return in this scenario may take into account the probability of an attack that results in a loss of stake for investors; in other words, investors would expect α to be above the normal market return for financial assets to compensate for the threat of loss of stake.

We have considered two extreme scenarios that give two very different equilibrium outcomes. If investors are unconcerned about security, we obtain a monopoly outcome with one large staking pool that does not invest in security. If investors are highly concerned about security, we obtain a large number of small staking pools that all invest in security. In practice, we can expect something in between these two extremes. This, for instance, will follow from variation in the risk attitudes of investors, with some being risk-loving and less concerned about security, while others are risk-averse and more concerned about security. Security, therefore, provides a crucial form of product differentiation around which staking pools can compete. We suspect that over time, particularly as attacks become common knowledge, concerns for security will increase. This, in our model, points towards a large number of small staking pools.
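A small numerical sketch of the economies of scale driving the first scenario is given below; the functional form chosen for w and all parameter values are assumptions made purely for illustration.

```python
# Average cost per unit of stake when total stake M is split across k equal pools,
# with fixed cost F, marginal cost c and security spend w (a function of pool size).
def average_cost(M: float, k: int, F: float, c: float, w) -> float:
    I = M / k                     # stake held by each pool
    return (F + w(I)) / I + c     # per-unit cost of operating one pool

w_none = lambda I: 0.0            # race to the bottom: zero security investment
w_some = lambda I: 0.01 * I**0.5  # assumed concave security cost

print(average_cost(1e9, 1, F=1e5, c=1e-4, w=w_none))    # one monopoly pool
print(average_cost(1e9, 100, F=1e5, c=1e-4, w=w_some))  # many small, secure pools
```

Under these assumed numbers the monopoly pool's average cost is roughly fifty times lower, which is exactly the pressure towards concentration that the model identifies when investors ignore security.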
4 Conclusions
Proof-of-stake systems are likely to become the dominant mechanism for cryptocurrencies, given their many advantages in terms of security and energy consumption. The fact that investors stake money in blockchain validation does, however, create a new extortion risk. Given that most money will be invested through staking pools, a particular risk is that a pool is attacked and compromised in a way that allows criminals to extort the pool operators and members. This extortion could take the form of threats to incur punishments or slashing that reduce stake, and/or threats to harm the reputation of the pool. As PoS systems become more commonplace it is, thus, crucial to investigate and analyse the security consequences for staking pools.

In this paper we have looked at a simple model of competition between staking pools. Investors in currencies like Ethereum and Cardano can already choose between a wide range of potential staking pools. Staking pools can differentiate themselves and compete on many different dimensions; here we focused on the security of stake from attack and the commission rate. We considered two extreme scenarios, one where investors disregard security and one where they highly value it. In the former case, market forces push towards large, insecure staking pools. This is undesirable both for investors, because their stake is at risk, and for the currency, because an attack on a dominant staking pool risks major disruption and loss of confidence in the currency. A 'race to the bottom' in terms of competition is, thus, highly undesirable.

If investors value security, then we predict a larger number of small staking pools, investing in security and offering a market return. This is beneficial for
the investors, because the staking pool is investing in security and so their stake is more secure. It is also beneficial for the currency, because disparate attacks on small staking pools pose less of a systemic threat to its reputation. It is thus in the interests of both investors and cryptocurrencies to encourage a greater focus on staking pool security. Over time we would expect greater consideration of security, resulting in an increasing number of small staking pools. This, however, critically relies on staking pools being able to credibly signal their security. A crucial extension to our model is, thus, to consider a setting in which a staking pool's level of security is only partially observable.

Acknowledgment. We thank Justin Drake from the Ethereum Foundation for his support and feedback throughout this research. The research of Alpesh Bhudia is supported by the EPSRC and the UK government as part of the Centre for Doctoral Training in Cyber Security at Royal Holloway, University of London (EP/P009301/1).
Three-Valued Model Checking Smart Contract Systems with Trust Under Uncertainty

Ghalya Alwhishi¹, Jamal Bentahar¹, and Ahmed Elwhishi²
¹ Concordia University, Montreal, Canada
g [email protected], [email protected]
² University of Doha for Science and Technology, Doha, Qatar
[email protected]
Abstract. Blockchain systems based on smart contracts are critical systems that have to be verified in order to ensure their reliability and efficiency. Verifying these systems is a major challenge that is still an active topic of research in different domains. In this paper, we focus on verifying such systems, which we model using trust protocols under uncertainty. Specifically, we address the problem using an effective verification approach called three-valued model checking. We introduce a new logic by extending the recently proposed Computation Tree Logic of Trust (TCTL) to the three-valued case (3v-TCTL) to reason about trust with uncertainty over smart contract-based systems. We also propose a new transformation approach to reduce the 3v-TCTL model checking problem to the classical case. We apply our approach to a smart contract-based drug traceability system in the healthcare supply chain. The approach is implemented using a Java toolkit that automatically interacts with the NuSMV model checker. We verify this system against a set of specifications and report the results of our experiments.

Keywords: Smart contract · Blockchain · Three-valued model checking · TCTL · Trust · Uncertainty

1 Introduction
The development of Blockchain applications based on smart contracts is rapidly increasing with the development of economic globalization in various domains, especially the healthcare industry. However, these applications still encounter many challenges when verifying the agents’ behaviors. For example, due to a weak verification mechanism, attacks on the Decentralized Autonomous Organization (DAO) and the Bitfinex exchange in 2016 cost millions of dollars. Therefore, to reduce the vulnerability of these applications, they need to be verified to prevent security vulnerabilities and any undesirable behaviors. Trust is regarded as a very important concept in applications of open multi-agent technologies [28], especially Blockchain systems based on smart contracts. The main role of trust in such applications lies in providing social control
that regulates the interactions and relationships among agents. Many formalisms and approaches that provide social modelling for trust in open systems have been introduced. For example, in [16] trust is modeled at a high level of abstraction based on the socially correct behaviors of agents. In fact, trust is seen as a relationship between agents in social settings, where trust is used to control the relationship between truster and trustee to ensure that the latter will perform a certain action. For example, trust between two agents in a healthcare supply chain application can be interpreted as “The distributor trusts the manufacturer to deliver a drug Lot when a hash is sent by the IPFS to the smart contract”. Model checking [1,11,22] is a formal verification technique used to check the correctness of a system by verifying its specifications. Recently, this technique has been widely used in verifying Blockchain systems based on smart contracts [3,5,26,30–32]. In this technique, the system is modeled as a finite state machine and its specifications are described as properties expressed in a propositional temporal logic. The model and its properties are taken as inputs, and the model checker verifies whether the system satisfies its properties or not, generating three possible answers: (1) the property is satisfied; (2) the property is not satisfied; or (3) state explosion, which occurs when the size of the system state space grows exponentially with the number of state variables in the system [12]. Model checking systems with trust protocols has recently been extensively addressed in the literature [14,15,17,18]. However, model checking trust systems under uncertain settings remains the main challenge, as uncertainty increases with the rapid growth of intelligent trust applications in which millions of agents interact within a shared and open environment. Although the classical model checking technique (with only true or false answers) has turned out to be effective in verifying systems with trust protocols, it cannot deal with such systems when their behaviors are uncertain. Thus, three-valued (3v) model checking is necessary to deal with uncertainty. The verification process of this technique is similar to the classical one, but it generates three truth values (T, M, F), where the value “M” refers to the presence of uncertainty or missing information in the system. This technique has been used to reason about uncertainty in several works [7–10,25]; however, to the best of our knowledge, it has not been addressed with trust. Therefore, our motivation in this paper is to propose a 3v-model checking technique to verify intelligent systems (especially those based on smart contracts) with trust protocols under uncertain settings. Specifically, we contribute by proposing a new 3v-logic called 3v-TCTL (3-valued Trust Computation Tree Logic). We extend the modality of the two-valued trust logic TCTL presented in [18] to the three-valued case and include the new three-valued modality as an extension of the three-valued χCTL [10]. We then produce a new transformation algorithm to reduce the 3v model checking trust problem to the classical one and experiment with our approach on a smart contract-based drug traceability system with trust protocols under uncertainty. We implement our
experiments using a Java toolkit produced in [17] that automatically interacts with the NuSMV model checker. Finally, we report and discuss our results, which demonstrate the efficacy and scalability of the proposed approach. The rest of the paper is organized as follows. Section 2 gives a brief background on Blockchain and smart contract technologies and the logic of trust. Section 3 explains the modeling of uncertainty in smart contract systems with trust protocols. Section 4 explains the problem of 3v-model checking with trust and uncertainty and introduces our reduction algorithm to the classical case. We then apply our algorithm to the case study of a Blockchain system based on smart contracts for drug traceability in the healthcare supply chain. At the end of this section, we report and discuss our implementation results. In Sect. 5, we conclude and discuss future work.
2 Background

2.1 Blockchain Technology
Blockchains are tamper-resistant digital ledgers implemented in a distributed way without a central authority such as a company, bank, or government [38]. More specifically, a Blockchain is a sequence of blocks, each of which holds a complete list of transaction records. Each user in a Blockchain has a private key, used to sign transactions, and a public key, a cryptographic code that allows the user to receive cryptocurrencies in their account [39].
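To make the hash-linking concrete, the following is a minimal Python sketch of a two-block chain; the block fields and transaction format are illustrative assumptions, and a real chain additionally carries digital signatures, timestamps, and consensus metadata.

import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over the block's canonical (sorted-key) JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Each block commits to its predecessor's hash, which is what makes the
# ledger tamper-resistant: editing an old record breaks every later link.
genesis = {"index": 0, "prev_hash": "0" * 64, "transactions": []}
block1 = {"index": 1,
          "prev_hash": block_hash(genesis),
          "transactions": [{"from": "alice", "to": "bob", "amount": 5}]}

assert block1["prev_hash"] == block_hash(genesis)    # chain link intact
genesis["transactions"].append({"from": "x", "to": "y", "amount": 1})
assert block1["prev_hash"] != block_hash(genesis)    # tampering detected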
2.2 Smart Contract in Blockchain Technology
The concept of a Smart Contract was introduced by Nick Szabo in 1997 in order to enable the development of different use cases using Blockchain technology [36]. A smart contract is a program that encodes a real-world contractual agreement and is identified by an address stored on a blockchain network. This contract combines different protocols with user interfaces to formalize and secure relationships among computer networks. Embedding smart contracts in blockchain technologies has led to significant developments in blockchain-based applications in terms of: (1) increased security, since Blockchain transaction records are encrypted; (2) speed, accuracy and efficiency, since smart contracts are digital and automated and there is no need for paperwork; (3) trust and transparency, since encrypted records of transactions are shared across participants and there is no need to involve third parties; and (4) savings, since time delays and the associated fees are reduced because no intermediaries are needed to handle transactions. For example, Fig. 1 (b) shows how blockchain with a smart contract facilitates tax processing without the third parties involved in (a).
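To illustrate the idea behind Fig. 1 (b), below is a minimal Python sketch of an automated agreement; it is not an on-chain contract language such as Solidity, and the class name, rate, and account labels are purely illustrative.

class TaxContract:
    """Sketch of a contract that enforces a tax split automatically."""

    def __init__(self, rate: float, treasury: str) -> None:
        self.rate = rate
        self.treasury = treasury
        self.ledger: list[tuple[str, str, float]] = []   # (payer, payee, amount)

    def settle_sale(self, buyer: str, seller: str, amount: float) -> None:
        """Record the sale and route the tax with no intermediary."""
        tax = amount * self.rate
        self.ledger.append((buyer, seller, amount - tax))
        self.ledger.append((buyer, self.treasury, tax))

contract = TaxContract(rate=0.2, treasury="tax_authority")
contract.settle_sale("customer", "shop", 100.0)
print(contract.ledger)  # [('customer', 'shop', 80.0), ('customer', 'tax_authority', 20.0)]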
2.3 Trust Computational Temporal Logic (TCTL)
Definition 1. A smart contract-based TCTL model is a tuple M = (S, R, {∼i→j | (i, j) ∈ A²}, I, V) where:
– S is a non-empty set of reachable global states for the system;
– R ⊆ S × S is a total transition relation, denoted by (s, s′) ∈ R;
– I ⊆ S is a set of initial global states;
– V : S → 2^AP is a valuation function, where AP is a set of atomic propositions;
– for each truster-trustee pair of agents (i, j) ∈ A², ∼i→j ⊆ S × S is the direct trust accessibility relation defined by s ∼i→j s′ iff: (1) l_i(s)(v^i(j)) = l_i(s′)(v^i(j)), which means state s′ is trustful from the vision of agent i with regard to agent j, as the value of the vector v is equal in the two states s and s′; and (2) s′ is reachable from s using R transitions. Here the vector v is used to define the trust accessibility relation between i and j, l_i(s) represents the local state of agent i in the global state s, and v^i(j) represents the jth component of the local vector of agent i in the same global state s.
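As a purely illustrative aid (not part of the formalism), a model tuple of this shape could be held in memory as follows; the Python field names are our own assumptions.

from dataclasses import dataclass

@dataclass
class TCTLModel:
    states: set          # S: reachable global states
    transitions: set     # R ⊆ S × S, stored as pairs (s, s')
    trust_access: dict   # (i, j) ↦ set of pairs (s, s') with s ∼i→j s'
    initial: set         # I ⊆ S
    valuation: dict      # s ↦ set of atomic propositions true in s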
Fig. 1. (a) The Tax process without smart contract; (b) The Tax process with smart contract
Syntax: The syntax of TCTL is defined as follows: φ ::= p | ¬φ | φ ∨ φ | EGφ | EXφ | E(φ U φ) | T(i, j, φ), where p is an atomic proposition and φ is a formula. The formula EXφ, for example, means “there exists a path where the formula φ holds in the next state of the system”. The formula T(i, j, φ) means “agent i trusts agent j to bring about φ”.
Semantics: The semantics of this logic is an extension of the CTL semantics presented in [33]. In this section, we only include the semantics of the trust modality. The satisfaction relation, denoted by (M, s) |= φ, where M is a model, s is a global state and φ is a trust formula, is defined as follows:
• (M, s) |= T(i, j, φ) iff s |= φ and ∀s′ ≠ s such that s ∼i→j s′, we have (M, s′) |= φ.
3 Modeling Uncertainty in a Smart Contract System with Trust
Models of smart contract-based systems are subject to uncertainty due to missing or incomplete information about the actual system. Missing information occurs in the cases of state space partition [6], unexpected system behaviours affected by the environment [24], system abstraction [27], or an incomplete understanding of system properties. Increasing attention has recently been devoted to 3v-model checking, based on a 3v-logic with truth values (T, M, F). This logic is more flexible and expressive than classical 2-valued logic and turns out to be very effective in a number of problems that require reasoning under uncertainty in smart contract-based systems.
3.1 3-Valued Propositional Logic
This logic is known as Kleene’s logic [23] and is an extension of 2-valued logic, as it has three truth values (T, M, F): T stands for True, M stands for Maybe, and F stands for False. The value M represents missing information, which may concern the transitions between states in the system or the formulae.
Fig. 2. (a) The truth table of 3v-lattice based on Kleene’s logic; (b) 3v-lattice
The three-valued lattice logic [34,37] is introduced based on the three-valued lattice shown in Fig. 2 (b). This lattice includes the three truth values and is defined as an algebraic structure (L3, ⊔, ⊓) where every two elements x and y in L3 have a supremum (join), denoted by x ⊔ y, and an infimum (meet), denoted by x ⊓ y. The symbols ⊔ and ⊓ operate like ∨ and ∧, respectively, in Kleene’s logic. The truth table of this lattice is represented in Fig. 2 (a).
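To make the truth table executable, here is a minimal Python sketch of Kleene’s three-valued operations, with the values ordered F < M < T; the encoding as integers is our own illustrative choice.

from enum import IntEnum

class L3(IntEnum):
    """Kleene's truth values, ordered F < M < T as in the lattice of Fig. 2 (b)."""
    F = 0
    M = 1
    T = 2

def join(x: L3, y: L3) -> L3:     # ⊔ behaves like ∨
    return L3(max(x, y))

def meet(x: L3, y: L3) -> L3:     # ⊓ behaves like ∧
    return L3(min(x, y))

def neg(x: L3) -> L3:             # ¬ swaps T and F and fixes M
    return L3(2 - x)

assert join(L3.M, L3.F) == L3.M and meet(L3.T, L3.M) == L3.M
assert neg(L3.M) == L3.M          # M is its own negation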
Join-irreducible Elements. Definition 2. Let L3 be a partial order (L, ≤). An element x ∈ L is called a join-irreducible element iff x ≠ ⊥ and, for any a, b ∈ L, if x = a ⊔ b, then either x = a or x = b. The set of the join-irreducible elements of L3 is denoted by JI(L). In other words, the join-irreducible elements cannot be ⊥ and cannot be decomposed into the join of two other elements. In the lattice L3, the join-irreducible elements are T and M. Every element a ≠ ⊥ of a finite distributive lattice can be uniquely decomposed into the join of all join-irreducible elements in its downward closure [13]. Formally, a = ⊔(JI(L) ∩ ↓a).
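Continuing the L3 sketch above, Definition 2 can be checked mechanically for this lattice; as stated, only T and M turn out to be join-irreducible.

# ⊥ is F in L3; an element x ≠ ⊥ is join-irreducible iff x = a ⊔ b
# forces x = a or x = b for all choices of a and b.
elements = [L3.F, L3.M, L3.T]

def join_irreducible(x: L3) -> bool:
    if x == L3.F:
        return False
    return all(x in (a, b)
               for a in elements for b in elements
               if join(a, b) == x)

assert [x for x in elements if join_irreducible(x)] == [L3.M, L3.T]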
3.2 3-Valued TCTL
In this section, we introduce our new logic for temporal trust, called 3-valued TCTL (3v-TCTL). This logic will be used for reasoning about uncertainty over smart contract-based models with trust protocols.

Definition 3. A smart contract-based model of 3v-TCTL, denoted by Kt, is obtained from the TCTL model defined in Sect. 2.3 by extending it with the lattice structure (L3, ⊔, ⊓). In this model we replace the valuation function V by a total labeling function O : S → (AP → L3) that maps every atomic proposition x ∈ AP in a state s ∈ S to a value in L3. Therefore, (O(s))(x) = l means the atomic proposition x has the value l from L3 in state s, where x ∈ AP.

Syntax. 3v-TCTL is syntactically equivalent to TCTL, but formulae are evaluated over the three-valued lattice.

Semantics. The semantics of this logic is an extension to the three-valued case of the multi-valued logic introduced in [10]. This semantics relies mainly on the concept of multi-valued sets and multi-valued relations, which are well explained in [2–4,10]. Below, we add the 3v-semantics of TCTL to the semantics of χCTL. Given a 3v-model Kt and a 3v-formula, the truth degree of the satisfaction of the formula is defined as follows:
• ⟦a⟧(s) = (O(s))(a), where a ∈ AP and O : S → (AP → L3) is the total labeling function that maps states in S into L3 over the set of atomic propositions AP.
• ⟦ϕ ∨ ψ⟧(s) = ⟦ϕ⟧(s) ⊔ ⟦ψ⟧(s) gives the truth degree to which ϕ or ψ holds in state s.
• ⟦ϕ ∧ ψ⟧(s) = ⟦ϕ⟧(s) ⊓ ⟦ψ⟧(s) gives the truth degree to which ϕ and ψ hold in state s.
• ⟦¬ϕ⟧(s) = ¬⟦ϕ⟧(s) gives the truth degree to which ϕ does not hold in state s.
• ⟦EXϕ⟧(s) = pre^R_∃(⟦ϕ⟧)(s) = ⊔_{t∈S} (⟦ϕ⟧(t) ⊓ R(s, t)), where pre^R_∃(⟦ϕ⟧)(s) stands for the backward image of state s, which determines the value of ϕ in the next state. The semantics expresses the truth degree to which there exists a path in the system where ϕ holds in the next state.
• ⟦AXϕ⟧(s) = pre^R_∀(⟦ϕ⟧)(s) = ⊓_{t∈S} (⟦ϕ⟧(t) ⊔ ¬R(s, t)), where pre^R_∀(⟦ϕ⟧)(s) stands for the backward image of state s, which determines the value of ϕ in the next state along all paths.
• ⟦EGϕ⟧ = νZ. ⟦ϕ⟧ ∩_L ⟦EX Z⟧, where νZ stands for the greatest fixed point of the globally operator G. The semantics expresses the truth degree to which there exists a path in the system where ϕ globally holds.
• ⟦E[ϕ U ψ]⟧ = μZ. ⟦ψ⟧ ∪_L (⟦ϕ⟧ ∩_L ⟦EX Z⟧), where μZ stands for the least fixed point of the until operator. The semantics expresses the truth degree to which there is a path where ϕ holds until ψ holds.
The following is the new semantics, where we define the truth degree of the satisfaction of the trust formula in the system.
• ⟦T(i, j, φ)⟧_Kt(s) = T iff s |= φ and ∀s′ ≠ s such that s ∼i→j s′, we have ⟦φ⟧_Kt(s′) = T. This means: the satisfaction degree of the formula T(i, j, φ) in state s of the 3v-system Kt is “true” iff the truth degree of φ in all the accessible states s′ is T.
• ⟦T(i, j, φ)⟧_Kt(s) = M iff s |= φ and ∀s′ ≠ s such that s ∼i→j s′, we have ⟦φ⟧_Kt(s′) ≠ F, and ∃s′ ∈ S such that s ∼i→j s′ and ⟦φ⟧_Kt(s′) = M. This means: the satisfaction degree of the formula T(i, j, φ) in state s of the system Kt is M iff the truth degree of φ in all the accessible states s′ is not F and there is at least one accessible state s′ that holds φ with the value M.
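As a small executable reading of this semantics (reusing the L3 values above), the sketch below evaluates the trust modality on an explicit accessibility relation; the dictionaries and state names are illustrative, and the precondition on s itself is omitted for brevity.

def trust(i: str, j: str, prop: str, s: str,
          access: dict, label: dict) -> L3:
    """Value of T(i, j, prop) at s, inspecting the ∼i→j-accessible states."""
    values = [label[t][prop] for t in access[(i, j)].get(s, ()) if t != s]
    if not values or all(v == L3.T for v in values):
        return L3.T
    if all(v != L3.F for v in values):   # no F and at least one M
        return L3.M
    return L3.F

access = {("distributor", "manufacturer"): {"s5": ["s6"]}}
label = {"s6": {"DrugDel": L3.M}}
print(trust("distributor", "manufacturer", "DrugDel", "s5", access, label))
# -> L3.M: the delivery information is missing, so trust evaluates to Maybe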
4 Model Checking 3v-TCTL

4.1 Reduction-Based Model Checking 3v-TCTL
Generally, three-valued model checking can be performed by two main kinds of algorithms: (1) direct algorithms [10,25,35], which handle the 3v-model checking problem directly with more expressive syntax and semantics; and (2) reduction-based algorithms. Developing direct algorithms is more complicated than developing reduction-based ones [19]. Reduction algorithms reduce 3-valued model checking problems to classical ones. Using these algorithms is less complicated but sometimes leads to losing some information about the smart contract-based system. However, the reduction approach has two main advantages: the ability to reuse existing model checking tools, and efficiency in dealing with the state explosion problem. In this section, we introduce a reduction-based model checking algorithm that reduces the proposed 3v-TCTL to CTL, as shown in Algorithm 1. Our algorithm mainly relies on the join-irreducible elements of the
quasi-Boolean 3v-lattice defined in Sect. 3.1. The key idea was proposed in [9] for reducing multi-valued model checking problems to classical ones and has been adopted in several proposals [20,21]. The idea behind this technique is to decompose the multi-valued model into several classical models, one per join-irreducible element of the lattice used. Each model considers as True the atomic propositions whose value is mapped to the corresponding join-irreducible element or to a value in its upward closure. These classical models are then checked using one of the known model checkers. The last step is to consider only the results with the value True and take the join of the corresponding join-irreducible elements. In our work, we combine this method with the TCTL reduction algorithm introduced in [17] and produce reduction algorithms for both the 3v-TCTL model and formula. The trust formula is translated as in [17], except that, to deal with the negation of the value M, every formula is transformed into its negation normal form by pushing the negation to the level of atomic propositions, i.e., ¬(φ ∨ ψ) ⇒ ¬φ ∧ ¬ψ. To evaluate the effectiveness of the proposed work, we apply the proposed verification approach to a smart contract-based system for drug traceability in the healthcare supply chain. We modeled this system and assigned a set of 10 properties to be checked against it. The implementation was conducted in two main steps: we transformed the 3v-model and trust formulae to the classical case (TCTL), and we then called a Java toolkit that automatically interacts with NuSMV. Although this procedure gave us accurate results, the first step was conducted manually. Therefore, we propose our algorithm to fully implement and automate the verification process.
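The cut-and-recombine idea behind Algorithm 1 can be sketched in a few lines of Python (reusing L3 from Sect. 3.1); the labelling and helper names below are illustrative, and in practice the two classical models are dispatched to NuSMV.

def cut(labelling: dict, threshold: L3) -> dict:
    """Classical labelling: a proposition is true iff its value ≥ threshold."""
    return {s: {p: v >= threshold for p, v in props.items()}
            for s, props in labelling.items()}

def recombine(sat_T: bool, sat_M: bool) -> L3:
    """Join of the join-irreducible elements whose cut satisfied the formula."""
    if sat_T:          # holds even when every M is read as False
        return L3.T
    if sat_M:          # holds only when M is read as True
        return L3.M
    return L3.F

labelling = {"s2": {"ReqApprov": L3.M}, "s3": {"MProcInit": L3.T}}
k_T = cut(labelling, L3.T)    # the model K(T): M is read as False
k_M = cut(labelling, L3.M)    # the model K(M): M is read as True
print(k_T["s2"]["ReqApprov"], k_M["s2"]["ReqApprov"])   # False True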
Case Study: A Smart Contract-based System for Drug Traceability in Healthcare Supply Chain
A smart contract-based system for drug traceability [29] is shown in Fig. 3. The system includes seven main interacted agents, Food and Drug Administration (FDA), manufacturers, distributors, pharmacies, InterPlanetary File System (IPFS), smart contracts and patients. The system works in consecutive steps as follows. (1) a manufacturer send a request for approval from the FDA to initiate the manufacturing process of a drug Lot. (2) Once the FDA approves the request, the manufacturer initiates the manufacturing process. (3) The manufacturer uploads images of the drug Lot to the IPFS. (4) IPFS sends a hash to the smart contract to give access to the images by authorized participants. (5) The drug delivered to the distributor for packaging. (6) The distributor initiates distribution process. (7) The distributor uploads image of the package to the IPFS. (8) IPFS sends a hash to the smart contract. (9) The drug Lot packages delivered to pharmacies. (10) The pharmacy initiates the sale of drug Lot box and this event will be declared to the all participants of the supply chain. (11) The pharmacy uploads an image of the sold drug package to the IPFS. (12) the IPFS sends a hash to the smart contract. (13) The pateint request drugs (14) The drug Lot box will be sold to the patient.
Algorithm 1. Transform Kt = (St, Rt, {∼i→j | (i, j) ∈ A²}, It, Ot) into KT = (ST, RT, IT, VT) and KM = (SM, RM, IM, VM)

1: Input: the model Kt
2: Output: the model KT
3: Output: the model KM
4: ST := St
5: SM := St
6: IT := ∅
7: IM := ∅
8: Initialize RT := ∅
9: Initialize RM := ∅
10: (Ot(st))(x) = M ⇒ (Ot(st))(x) := F
11: Initialize VT(sT) := Ot(st) for each sT ∈ ST and st ∈ St such that sT = st
12: (Ot(st))(x) = M ⇒ (Ot(st))(x) := T
13: Initialize VM(sM) := Ot(st) for each sM ∈ SM and st ∈ St such that sM = st
14: for each (st, s′t) ∈ St² do
15:   if (st, s′t) ∈ Rt then
16:     RT := RT ∪ {(st, s′t)}
17:     if st ∼i→j s′t for all (i, j) ∈ A² and st ≠ s′t then
18:       if ∃s′′t such that (st, s′′t), (s′′t, s′t) ∈ RT and χ ∈ VT(s′′t) then
19:         VT(s′′t) := VT(s′′t) ∪ {α^ij}
20:       else
21:         ST := ST ∪ {s′′t}
22:         RT := RT ∪ {(st, s′′t), (s′′t, s′t)} and VT(s′′t) := {χ, α}
23:       end if
24:     end if
25:   end if
26: end for
27: for each (st, s′t) ∈ St² do
28:   if (st, s′t) ∈ Rt then
29:     RM := RM ∪ {(st, s′t)}
30:     if st ∼i→j s′t for all (i, j) ∈ A² and st ≠ s′t then
31:       if ∃s′′t such that (st, s′′t), (s′′t, s′t) ∈ RM and χ ∈ VM(s′′t) then
32:         VM(s′′t) := VM(s′′t) ∪ {α^ij}
33:       else
34:         SM := SM ∪ {s′′t}
35:         RM := RM ∪ {(st, s′′t), (s′′t, s′t)} and VM(s′′t) := {χ, α}
36:       end if
37:     end if
38:   end if
39: end for
We can note that the atomic propositions that take the truth value T are: ApReqSent in state S1, which stands for “The manufacturer sends a request for approval”; MProcInit and ScCtUpload in S3, which stand for “The manufacturer initiates the manufacturing process” and “The smart contract is updated”, respectively;
DrugImUp in S4, which stands for “The manufacturer uploads the drug image to the IPFS”; HashSentMan in state S5, which stands for “The IPFS sends a hash to the smart contract”; DisProInit in S7, which stands for “The distributor initiates the distribution process”; PackegImUp in S8, which stands for “The distributor uploads the drug package image to the IPFS”; HashSentDis in S9, which stands for “The distributor sends a hash to the smart contract”; PharmProcInit in S11, which stands for “The pharmacy initiates the sale process”; SoldDrugIm in S12, which stands for “The sold drug image is uploaded to the IPFS”; HashSent in S13, which stands for “The IPFS sent a hash to the smart contract”; DreqSent in S14, which stands for “The patient requests a drug”; and DrugSold in state S15, which stands for “The pharmacy sells the drug to the patient”. The atomic propositions that take the truth value M are: ReqApprov in S2, which stands for “The FDA approves the request from the manufacturer”; DrugDel in S6, which stands for “The drug is delivered to the distributor from the manufacturer”; and PackageDel in S10, which stands for “The distributor delivers the drug package to the pharmacy”. This means we have missing information about these atomic propositions.
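For concreteness, this 3-valued labelling could be written down as follows (reusing L3 from Sect. 3.1); the dictionary is our own illustrative encoding of Fig. 3, not the input format of the toolkit.

labelling = {
    "S1":  {"ApReqSent": L3.T},
    "S2":  {"ReqApprov": L3.M},      # FDA approval: missing information
    "S3":  {"MProcInit": L3.T, "ScCtUpload": L3.T},
    "S4":  {"DrugImUp": L3.T},
    "S5":  {"HashSentMan": L3.T},
    "S6":  {"DrugDel": L3.M},        # delivery to the distributor: missing
    "S7":  {"DisProInit": L3.T},
    "S8":  {"PackegImUp": L3.T},
    "S9":  {"HashSentDis": L3.T},
    "S10": {"PackageDel": L3.M},     # delivery to the pharmacy: missing
    "S11": {"PharmProcInit": L3.T},
    "S12": {"SoldDrugIm": L3.T},
    "S13": {"HashSent": L3.T},
    "S14": {"DreqSent": L3.T},
    "S15": {"DrugSold": L3.T},
}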
Fig. 3. A smart contract-based system for drug traceability in the healthcare supply chain (Kt)
4.3 System Properties
In this section, we emphasize the properties related to the concept of temporal trust. For this system, we verified 10 properties classified into three categories:
safety, expressing that nothing bad will happen during the system execution; liveness, expressing that good events will eventually happen; and reachability, expressing that a predetermined state will be reached in the system. Using the approach proposed in Sect. 4.1, we reduce the 3v-model to two classical models. By doing so, we take advantage of reusing the existing model checking tool NuSMV. The classification of the system specifications is as follows:
– Safety properties
(1) “It is not the case that the manufacturer trusts the FDA to approve the request of initiating the manufacturing process, and the FDA doesn’t respond”: ϕ1 = ¬EF T(Manufacturer, FDA, ¬ReqApprov)
(2) “It is not the case that the pharmacy trusts the distributor to deliver the drug packages and the latter doesn’t make the delivery”: ϕ2 = ¬EF T(Pharmacy, Distributor, ¬PackageDel)
– Liveness properties
(3) “In all possible executions, the smart contract trusts the IPFS to send a hash after each upload of a drug package image”: ϕ3 = AF T(SmartContract, IPFS, HashSent)
(4) “In all possible executions, the IPFS trusts the manufacturer to upload an image of the drug package”: ϕ4 = AF T(IPFS, Manufacturer, DrugImUp)
(5) “There is an execution in which the distributor trusts the IPFS to send a hash to the smart contract when the former uploads the image of the drug package”: ϕ5 = EF T(Distributor, IPFS, HashSent)
(6) “There is an execution in which the distributor trusts the manufacturer to send a notification if the drug package will be delayed”: ϕ6 = EF T(Distributor, Manufacturer, SendDelNote)
– Reachability properties
(7) “The patient trusts the pharmacy that it will sell the drug when the former places an order for a drug”: ϕ7 = EF T(Patient, Pharmacy, DrugSold)
(8) “The distributor trusts the manufacturer to deliver the drug Lot when a hash is sent by the IPFS”: ϕ8 = EF T(Distributor, Manufacturer, DrugDel)
(9) “The IPFS trusts the pharmacy to upload an image of the sold drug when the pharmacy initiates a sale of the drug box”: ϕ9 = EF T(IPFS, Pharmacy, SoldDrugIm)
(10) “The patient trusts the pharmacy to send a notification if the drug is not available”: ϕ10 = EF T(Patient, Pharmacy, SendNote)
4.4 Verification Results
Our approach runs a Java toolkit that automatically interacts with the NuSMV model checker on a machine with the following specifications: Intel(R) Core(TM) i5-8250U with 1.60 GHz processor and 1.80 GHz RAM.

Table 1. The smart contract system verification results

Pro.  K(T)  K(M)  Result  T(ms.)  M(ms.)
ϕ1    F     T     M       6.021   6.024
ϕ2    F     T     M       6.023   6.026
ϕ3    T     T     T       8.043   8.045
ϕ4    T     T     T       5.011   5.015
ϕ5    T     T     T       6.014   6.016
ϕ6    F     F     F       9.018   9.020
ϕ7    T     T     T       9.021   9.023
ϕ8    F     T     M       12.019  12.022
ϕ9    T     T     T       8.020   8.024
ϕ10   F     F     F       8.015   8.018

In Table 1, we report the property number (Pro.), the model in which only the value “T” is read as true (K(T)), the model in which the values “T” and “M” are read as true (K(M)), the result, and the verification time (in milliseconds) for both K(T) and K(M). The results show that we have missing information about the system behaviour for the properties ϕ1, ϕ2 and ϕ8, while the properties ϕ6 and ϕ10 are verified as False in the system and the properties ϕ3, ϕ4, ϕ5, ϕ7 and ϕ9 are verified as True. Based on these results, our system needs to be refined with regard to the properties with values F and M. To check the scalability and effectiveness of the applied technique, we conducted 6 experiments, starting with 7 agents and ending with 42 agents. Table 2 shows the experiment number (Exp.#), the number of agents (Age.#), the number of reachable states (St.#), and the execution times of the K(T) and K(M) models in milliseconds (T(ms) and M(ms), respectively). The reachable-state counts in the table reflect that the state space increases exponentially with the number of agents, while the execution time for each model increases logarithmically. These results indicate that our approach is highly scalable.

Table 2. The scalability results of running the models K(T) and K(M) 6 times

Exp.#  Age.#  St.#         T(ms)   M(ms)
1      7      16           10.07   11.06
2      14     256          1011    1055
3      21     4352         15344   15600
4      28     73984        30623   30715
5      35     1.31882E+06  110000  110005
6      42     2.31265E+07  150000  150033
5 Conclusion and Future Work
In this paper, we have applied a practical and scalable verification approach to a smart contract-based application with social trust communications under uncertain behaviours. We introduced a new logic called 3v-TCTL by extending the multi-valued CTL (χCTL) with a new modality for trust extracted from TCTL. The new logic is used for reasoning about uncertainty in blockchain systems based on smart contracts with trust. We introduced a reduction algorithm that transforms the 3v-model and formulae to the classical case so that the NuSMV model checker can be reused. To check the reliability of our system, we implemented our framework, checked safety, liveness, and reachability properties, and discussed the obtained results together with scalability. For future work, we identify the following directions: (1) we aim to implement our algorithm in a new tool that takes the 3v-TCTL model and formulae and directly generates CTL ones; (2) we aim to extend our logic by investigating additional trust modalities, such as pre-conditional trust; (3) our present work addressed uncertainty in the properties (i.e., in states), and we plan to enhance our reduction algorithm so that it can handle uncertainty in transitions; and (4) we plan to extend our semantics over arbitrary lattices that can take more than three values, in order to reason about both uncertainty and inconsistency in smart contract-based systems.
References
1. Al-Saqqar, F., Bentahar, J., Sultan, K., Wan, W., Asl, E.K.: Model checking temporal knowledge and commitments in multi-agent systems using reduction. Simul. Model. Pract. Theory 51, 45–68 (2015)
2. Alwhishi, G., Bentahar, J., Drawel, N.: Reasoning about uncertainty over IoT systems. In: International Wireless Communications and Mobile Computing Conference (IWCMC) (2022)
3. Alwhishi, G., Bentahar, J., Elwhishi, A.: Verifying timed commitment specifications for IoT-cloud systems with uncertainty. In: The 9th International Conference on Future Internet of Things and Cloud (FiCloud) (2022)
4. Alwhishi, G., Drawel, N., Bentahar, J.: Model checking intelligent information systems with 3-valued timed commitments. In: The 18th International Conference on Mobile Web and Intelligent Information Systems (MobiWis) (2022)
5. Bai, X., Cheng, Z., Duan, Z., Hu, K.: Formal modeling and verification of smart contracts. In: Proceedings of the 2018 7th International Conference on Software and Computer Applications, pp. 322–326 (2018)
6. Bernasconi, A., Menghi, C., Spoletini, P., Zuck, L.D., Ghezzi, C.: From model checking to a temporal proof for partial models. In: Cimatti, A., Sirjani, M. (eds.) SEFM 2017. LNCS, vol. 10469, pp. 54–69. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66197-1_4
7. Bruns, G., Godefroid, P.: Model checking partial state spaces with 3-valued temporal logics. In: Halbwachs, N., Peled, D. (eds.) CAV 1999. LNCS, vol. 1633, pp. 274–287. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48683-6_25
8. Bruns, G., Godefroid, P.: Generalized model checking: reasoning about partial state spaces. In: Palamidessi, C. (ed.) CONCUR 2000. LNCS, vol. 1877, pp. 168–182. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44618-4_14
9. Bruns, G., Godefroid, P.: Model checking with multi-valued logics. In: Díaz, J., Karhumäki, J., Lepistö, A., Sannella, D. (eds.) ICALP 2004. LNCS, vol. 3142, pp. 281–293. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-27836-8_26
10. Chechik, M., Devereux, B., Easterbrook, S., Gurfinkel, A.: Multi-valued symbolic model-checking. ACM Trans. Softw. Eng. Methodol. (TOSEM) 12(4), 371–408 (2003)
11. Clarke, E.M., Emerson, E.A., Sifakis, J.: Model checking: algorithmic verification and debugging. Commun. ACM 52(11), 74–84 (2009)
12. Clarke, E.M., Henzinger, T.A., Veith, H., Bloem, R., et al.: Handbook of Model Checking, vol. 10. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-10575-8
13. Davey, B.A., Priestley, H.A.: Introduction to Lattices and Order. Cambridge University Press, Cambridge (2002)
14. Drawel, N., Bentahar, J., Laarej, A., Rjoub, G.: Formalizing group and propagated trust in multi-agent systems. In: Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI, pp. 60–66 (2020)
15. Drawel, N., Bentahar, J., Laarej, A., Rjoub, G.: Formal verification of group and propagated trust in multi-agent systems. Auton. Agents Multi Agent Syst. 36(1), 19 (2022). https://doi.org/10.1007/s10458-021-09542-6
16. Drawel, N., Bentahar, J., Shakshuki, E.: Reasoning about trust and time in a system of agents. Procedia Comput. Sci. 109, 632–639 (2017)
17. Drawel, N., Laarej, A., Bentahar, J., El Menshawy, M.: Transformation-based model checking temporal trust in multi-agent systems. J. Syst. Softw. 192, 111383 (2022)
18. Drawel, N., Qu, H., Bentahar, J., Shakshuki, E.: Specification and automatic verification of trust-based multi-agent systems. Future Gener. Comput. Syst. 107, 1047–1060 (2020)
19. El-Menshawy, M., Bentahar, J., Dssouli, R.: Symbolic model checking commitment protocols using reduction. In: Omicini, A., Sardina, S., Vasconcelos, W. (eds.) DALT 2010. LNCS (LNAI), vol. 6619, pp. 185–203. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-20715-0_11
20. Gurfinkel, A., Chechik, M.: Multi-valued model checking via classical model checking. In: Amadio, R., Lugiez, D. (eds.) CONCUR 2003. LNCS, vol. 2761, pp. 266–280. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45187-7_18
21. Jamroga, W., Konikowska, B., Penczek, W.: Multi-valued verification of strategic ability. In: Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 1180–1189 (2016)
22. Kholy, W.E., Bentahar, J., El-Menshawy, M., Qu, H., Dssouli, R.: Modeling and verifying choreographed multi-agent-based web service compositions regulated by commitment protocols. Expert Syst. Appl. 41(16), 7478–7494 (2014)
23. Kleene, S.C.: Introduction to Metamathematics, vol. 1. North-Holland Publishing Company, Amsterdam (1964)
24. Konikowska, B., Penczek, W.: Model checking for multivalued logic of knowledge and time. In: Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 169–176 (2006)
25. Li, Y., Lei, L., Li, S.: Computation tree logic model checking based on multi-valued possibility measures. Inf. Sci. 485, 87–113 (2019)
26. Liu, Y., Zhou, Z., Yang, Y., Ma, Y.: Verifying the smart contracts of the port supply chain system based on probabilistic model checking. Systems 10(1), 19 (2022)
27. Lomuscio, A., Qu, H., Raimondi, F.: MCMAS: a model checker for the verification of multi-agent systems. In: Bouajjani, A., Maler, O. (eds.) CAV 2009. LNCS, vol. 5643, pp. 682–688. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-02658-4_55
28. Mehdi, M., Bouguila, N., Bentahar, J.: Probabilistic approach for QoS-aware recommender system for trustworthy web service selection. Appl. Intell. 41(2), 503–524 (2014)
29. Musamih, A., Salah, K., Jayaraman, R., Arshad, J., Debe, M., Al-Hammadi, Y., Ellahham, S.: A blockchain-based approach for drug traceability in healthcare supply chain. IEEE Access 9, 9728–9743 (2021)
30. Nam, W., Kil, H.: Formal verification of blockchain smart contracts via ATL model checking. IEEE Access 10, 8151–8162 (2022)
31. Nehai, Z., Piriou, P.Y., Daumas, F.: Model-checking of smart contracts. In: 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), pp. 980–987. IEEE (2018)
32. Osterland, T., Rose, T.: Model checking smart contracts for Ethereum. Pervasive Mob. Comput. 63, 101129 (2020)
33. Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. Cyber Physical Systems Series, MIT Press, Cambridge (1999)
34. Roman, S.: Lattices and Ordered Sets. Springer, New York (2008). https://doi.org/10.1007/978-0-387-78901-9
35. Shoham, S., Grumberg, O.: Multi-valued model checking games. In: Peled, D.A., Tsay, Y.-K. (eds.) ATVA 2005. LNCS, vol. 3707, pp. 354–369. Springer, Heidelberg (2005). https://doi.org/10.1007/11562948_27
36. Szabo, N.: Formalizing and securing relationships on public networks. First Monday (1997)
37. Xu, Y., Ruan, D., Qin, K., Liu, J.: Lattice-Valued Logic: An Alternative Approach to Treat Fuzziness and Incomparability. Studies in Fuzziness and Soft Computing, vol. 132. Springer (2003). https://doi.org/10.1007/978-3-540-44847-1
38. Yaga, D., Mell, P., Roby, N., Scarfone, K.: Blockchain technology overview. arXiv preprint arXiv:1906.11078 (2019)
39. Zheng, Z., Xie, S., Dai, H., Chen, X., Wang, H.: An overview of blockchain technology: architecture, consensus, and future trends. In: 2017 IEEE International Congress on Big Data (BigData Congress), pp. 557–564. IEEE (2017)
Author Index

A
Abdallah, Raed, 65
Abdelhédi, Fatma, 97
AlMazrua, Halah, 53
AlShamlan, Hala, 53
Alwhishi, Ghalya, 119
Amal, Ilias, 97

B
Bayraktar, Suha, 15
Bella Baci, Amel, 97
Benbernou, Salima, 65
Bentahar, Jamal, 39, 119
Bhudia, Alpesh, 109
Brousmiche, Kei, 97

C
Cartwright, Anna, 109
Cartwright, Edward, 109

D
Drawel, Nagat, 39

E
Ekenna, Chinwe, 79
Elwhishi, Ahmed, 119

G
Gören, Sezer, 15

H
Haque, Rafiqul, 65
Hernandez-Castro, Julio, 109
Hurley-Smith, Darren, 109

L
Liu, Chao, 3

M
Mousas, Christos, 79

N
Nguyen, Phuong, 3

R
Rekabdar, Banafsheh, 79
Rigaud, Lionel, 97
Rjoub, Gaith, 39

S
Sanli, Mustafa, 27
Sebald, Lawrence, 3

T
Taher, Yehia, 65
Talafha, Sameerah, 79

W
Wahab, Omar Abdel, 39
Wu, Yusen, 3

Y
Yesha, Yelena, 3
Younas, Muhammad, 65