Editorial

This is the final issue [November 2004]∗ put together by me as Editor-in-Chief of TISSEC. This issue completes the seventh volume of TISSEC and is the twenty-fifth issue. TISSEC has grown and flourished since the first issue was published in November 1998. I would like to personally thank the editors, referees, ACM staff, and most of all the authors for the sustained effort and interest that has brought TISSEC to its current standing as the premier forum for high-quality original research of archival significance.

The information security discipline has undergone tremendous change since TISSEC was created. On the practical side, the importance of security is recognized at the highest levels of management, even as confusion and uncertainty remain regarding what security means and how it can be achieved. The average Internet user has been hammered by viruses, worms, spyware, spam, and phishing attacks to the point where claims of security in existing systems look extremely hollow. Nevertheless, web commerce, on-line banking, and similar consumer applications continue to grow and thrive as the benefits outweigh the security risks. Designers of these systems must be doing something right for these applications to sustain their popularity and increasing penetration among consumers. Our understanding of how to engineer the appropriate mix of security, convenience, and cost remains rudimentary at best, but the real practical success of these systems gives us strong empirical evidence of what works. Many security purists continue to blast the security of these systems, but perhaps our profession needs to learn from real-world success instead of berating it.

On the research front, the amount of activity and the size of our community have increased substantially. Security has also become a topic of interest in other established research circles. There is hardly any computer science or engineering conference without some discussion of security. Security has become a mainstream concern of leading hardware and software vendors. At the same time, security has become such a big topic that it has slowly begun to spin off as a separate discipline. A truly broad and deep education in information security requires multiple advanced courses, and we are beginning to see a few departments worldwide that provide this kind of education.

I am personally optimistic about the prospects for achieving effective information security. We all recognize that absolute security is never possible, so we cannot declare failure if our systems fall short of this impossible goal. Instead, our criterion for success has to be realistic. Indeed, much of our challenge is to formulate success criteria that are achievable and useful in practice.

In closing, I am confident that TISSEC will continue to be a comprehensive and broad forum for presenting the best security research results. It is a pleasure to hand over to Michael Reiter, the new Editor-in-Chief, who I am confident will lead TISSEC to continued success.

RAVI SANDHU

∗ [Editor's note: This editorial should have appeared in the November 2004 issue of ACM Transactions on Information and System Security.]
Preface

This issue of the ACM Transactions on Information and System Security comprises four papers presented at the 10th ACM Conference on Computer and Communications Security, held in Washington, DC, USA, in October 2003. The primary objective of this annual conference is to disseminate high-quality original research results that have practical relevance to the construction, evaluation, application, or operation of secure systems. The four papers in this special issue were invited submissions that were substantially extended for journal publication and were reviewed through the normal review process of this journal. These four papers address different aspects of computer security: protection against code injection attacks, sensor network security, game-theoretic solutions to model attackers, and protection of data residing on untrusted servers.

The first paper, "Randomized Instruction Set Emulation," by Elena Gabriela Barrantes, David H. Ackley, Stephanie Forrest, and Darko Stefanović, proposes a randomized instruction set emulator that prevents binary code injection attacks against a running program. The main idea is to encrypt the binary code of the original program with a random key as it is loaded into the emulator memory, and then to decrypt it during the emulated instruction fetch cycle. Even if an adversary manages to inject code, the attack code will not execute as intended: because it was never encrypted, the decryption process effectively randomizes it.

The second paper, "Establishing Pairwise Keys in Distributed Sensor Networks," by Donggang Liu, Peng Ning, and Rongfang Li, develops a general framework for establishing pairwise keys between sensor nodes, building on the technique of bivariate polynomials. It proposes three key distribution schemes that are instantiations of this framework to reduce the computation overhead at sensor nodes.

The third paper, "Incentive-Based Modeling and Inference of Attacker Intent, Objectives, and Strategies," by Peng Liu, Wanyu Zang, and Meng Yu, proposes an interesting application of game theory to computer security by modeling the incentives of attackers to enable better inference of attacker intent, and by modeling attacker and defender objectives and strategies.

The last paper, "Modeling and Assessing Inference Exposure in Encrypted Databases," by Alberto Ceselli, Ernesto Damiani, Sabrina De Capitani di Vimercati, Sushil Jajodia, Stefano Paraboschi, and Pierangela Samarati, addresses the problem of ensuring the confidentiality of outsourced data residing on untrusted servers. The key idea is to encrypt the database before storing it, and to employ a trusted front end that translates each query into an equivalent query on the encrypted database and decrypts the query results before presenting them to the user. The approach exploits indexing information associated with the encrypted database to execute a query without having to decrypt the data.

VIJAY ATLURI
Randomized Instruction Set Emulation

ELENA GABRIELA BARRANTES, DAVID H. ACKLEY, STEPHANIE FORREST, and DARKO STEFANOVIĆ
University of New Mexico
Injecting binary code into a running program is a common form of attack. Most defenses employ a "guard the doors" approach, blocking known mechanisms of code injection. Randomized instruction set emulation (RISE) is a complementary method of defense, one that performs a hidden randomization of an application's machine code. If foreign binary code is injected into a program running under RISE, it will not be executable because it will not have been scrambled with the proper randomization. This paper describes and analyzes RISE, including a proof-of-concept implementation built on the open-source Valgrind IA32-to-IA32 translator. The prototype effectively disrupts binary code injection attacks, without requiring recompilation, linking, or access to application source code. Under RISE, injected code (attacks) essentially executes random code sequences. Empirical studies and a theoretical model are reported that treat the effects of executing random code on two different architectures (IA32 and PowerPC). The paper discusses possible extensions and applications of the RISE technique in other contexts.

Categories and Subject Descriptors: D.4.6 [Operating Systems]: Security and Protection—Invasive software; D.3.4 [Programming Languages]: Processors—Interpreters, runtime environments

General Terms: Security

Additional Key Words and Phrases: Automated diversity, randomized instruction sets, software diversity
An earlier version of this paper was published as Barrantes, E. G., Ackley, D. H., Forrest, S., Palmer, T. S., Stefanović, D., and Dai Zovi, D. 2003. Randomized instruction set emulation to disrupt binary code injection attacks. In Proceedings of the 10th ACM Conference on Computer and Communications Security, pp. 281–289. This version adds a detailed model and analysis of the safety of random bit execution, and presents additional empirical results on the prototype's effectiveness and performance.

The authors gratefully acknowledge the partial support of the National Science Foundation (grants ANIR-9986555, CCR-0219587, CCR-0085792, CCR-0311686, EIA-0218262, EIA-0238027, and EIA-0324845), the Office of Naval Research (grant N00014-99-1-0417), Defense Advanced Research Projects Agency (grants AGR F30602-00-2-0584 and F30602-02-1-0146), Sandia National Laboratories, Hewlett-Packard gift 88425.1, Microsoft Research, and Intel Corporation. Any opinions, findings, conclusions, or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors.

Authors' address: Department of Computer Science, University of New Mexico, MSC01 1130, Albuquerque, NM 87131-1386. Stephanie Forrest is also with the Santa Fe Institute, 1399 Hyde Park Rd, Santa Fe, NM 87501.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 1515 Broadway, New York, NY 10036 USA, fax: +1 (212) 869-0481, or [email protected].

© 2005 ACM 1094-9224/05/0200-0003 $5.00

ACM Transactions on Information and System Security, Vol. 8, No. 1, February 2005, Pages 3–40.
1. INTRODUCTION

Standardized machine instruction sets provide consistent interfaces between software and hardware, but they are a double-edged sword. Although they yield great productivity gains by enabling independent development of hardware and software, the ubiquity of well-known instruction sets also allows a single attack designed around an exploitable software flaw to gain control of thousands or millions of systems. Such attacks could be stopped or greatly hindered if each protected system could be economically destandardized, so that a different attack would have to be created specifically for each new target, using information that was difficult or impossible for an outsider to obtain. The automatic diversification we explore in this paper is one such destandardization technique.

Many existing defenses against machine code injection attacks block the known routes by which foreign code is placed into a program's execution path. For example, stack defense mechanisms [Chiueh and Hsu 2001; Cowan et al. 1998; Etoh and Yoda 2000, 2001; Forrest et al. 1997; Frantzen and Shuey 2001; Nebenzahl and Wool 2004; Prasad and Chiueh 2003; Vendicator 2000; Xu et al. 2002] protect return addresses and defeat large classes of buffer overflow attacks. Other mechanisms defend against buffer overflows elsewhere in program address space [PaX Team 2003], against alternative overwriting methods [Cowan et al. 2001], or guard against known vulnerabilities exposed through shared interfaces [Avijit et al. 2004; Baratloo et al. 2000; Lhee and Chapin 2002; Tsai and Singh 2001]. Our approach is functionally similar to the PAGEEXEC feature of PaX [PaX Team 2003], an issue we discuss in Section 6.

Rather than focusing on any particular code injection pathway, a complementary approach is to disrupt the operation of the injected code itself. In this paper we describe randomized instruction set emulation (RISE), which uses a machine emulator to produce automatically diversified instruction sets. With such instruction set diversification, each protected program has a different and secret instruction set, so that even if foreign attack code manages to enter the execution stream, with very high probability the injected code will fail to execute properly. In general, if there are many possible instruction sets compared to the number of protected systems, and the chosen instruction set in each case is externally unobservable, different attacks must be crafted for each protected system and the cost of developing attacks is greatly increased.

In RISE, each byte of protected program code is scrambled using pseudorandom numbers seeded with a random key that is unique to each program execution. Using the scrambling constants it is trivial to recover normal instructions executable on the physical machine, but without the key it is infeasible to produce even a short code sequence that implements any given behavior. Foreign binary code that manages to reach the emulated execution path will be descrambled without ever having been correctly scrambled, foiling the attack and producing pseudorandom code that will usually crash the protected program.

1.1 Threat Model

The set of attacks that RISE can handle is slightly different from that of many defense mechanisms, so it is important to identify the RISE threat model clearly.
Our specific threat model is binary code injection from the network into an executing program. This includes many real-world attack mechanisms, but explicitly excludes several others, including the category of attacks loosely grouped under the name "return into libc" [Nergal 2001], which modify data and addresses so that code already existing in the program is subverted to execute the attack. These attacks might or might not use code injection as part of the attack. Most defenses against code injection perform poorly against this category because it operates at a different level of abstraction; complementary defense techniques are needed, and have been proposed, such as address obfuscation [Bhatkar et al. 2003; Chew and Song 2002; PaX Team 2003], which hides and/or randomizes existing code locations or interface access points.

The restriction to code injection attacks excludes data-only attacks such as nonhybrid versions of the "return into libc" class mentioned above, while the focus on binary code excludes attacks such as macro viruses that inject code written in a higher-level language. Finally, we consider only attacks that arrive via network communications, and therefore we treat the contents of local disks as trustworthy before an attack has occurred.

In exchange for these limitations, RISE protects against all binary code injection attacks, regardless of the method by which the machine code is injected. By defending the code itself, rather than any particular access route into the code, RISE offers the potential of blocking attacks based on injection mechanisms that have yet to be discovered or revealed.

This threat model is related to, but distinct from, other models used to characterize buffer overflow attacks [Cowan et al. 2000, 2001]. It includes any attack in which native code is injected into a running binary, even by means that are not obviously buffer overflows, such as misallocated malloc headers and footer tags [Security Focus 2003; Xu et al. 2003], and format string attacks that write a byte to arbitrary memory locations [Gera and Riq 2002; Newsham 2000]. RISE protects against injected code arriving by any of these methods. On the other hand, other defense mechanisms, such as the address obfuscation mentioned above, can prevent attacks that are specifically excluded from our code injection threat model. We envision the relatively general code-based mechanism of RISE being used in conjunction with data and address diversification-based mechanisms to provide deeper, more principled, and more robust defenses against both known and unknown attacks.

1.2 Overview

This paper describes a proof-of-concept RISE system, which builds randomized instruction set support into a version of the Valgrind IA32-to-IA32 binary translator [Nethercote and Seward 2003; Seward and Nethercote 2004]. Section 2 describes a randomizing loader for Valgrind that scrambles code sequences loaded into emulator memory from the local disk using a hidden random key. Then, during Valgrind's emulated instruction fetch cycle, fetched instructions are unscrambled, yielding the unaltered IA32 machine code sequences of the protected application. The RISE design makes few demands on the supporting emulator and could easily be ported to any binary-to-binary translator for which source code is available.
Section 3 reports empirical tests of the prototype and confirms that RISE successfully disrupts a range of actual code injection attacks against otherwise vulnerable applications. In addition, it highlights the extreme fragility of typical attacks and comments on performance issues.

A basic property of the RISE defense mechanism is that if an attack manages to inject code by any means, essentially random machine instructions will be executed. Section 4 investigates the likely effects of such an execution in several different execution contexts. Experimental results are reported and theoretical analyses are given for two different architectures. There is always a possibility that random bits could create valid instructions and instruction sequences. We present empirical data suggesting that the majority of random code sequences will produce an address fault or illegal instruction quickly, causing the program to abort. Most of the remaining cases throw the program into a loop, effectively stopping the attack. Either way, an attempted takeover is downgraded into a denial-of-service attack against the exploitable program.

Unlike compiled binary code, which uses only a well-defined and often relatively small selection of instructions, random code is unconstrained. The behavior of random code execution on the IA32 architecture can involve the effects of undocumented instructions and whatever instruction set extensions (e.g., MMX, SSE, and SSE2) are present, as well as the effects of random branch offsets combined with multibyte, variable-length instructions. Although these characteristics complicate a tight theoretical analysis of random bit executions on the IA32, models for more constrained instruction set architectures, such as the PowerPC, lead to a closer fit to the observed data.

Section 6 summarizes related work, Section 7 discusses some of the implications and potential vulnerabilities of the RISE approach, and Section 8 concludes the paper.

2. TECHNICAL APPROACH AND IMPLEMENTATION

This section describes the prototype implementation of RISE using Valgrind [Nethercote and Seward 2003; Seward and Nethercote 2004] for the Intel IA32 architecture. Our strategy is to provide each program copy with its own unique and private instruction set. To do this, we consider what is the most appropriate machine abstraction level, how to scramble and descramble instructions, when to apply the randomization and when to descramble, and how to protect interpreter data. We also describe idiosyncrasies of Valgrind that affected the implementation.

2.1 Machine Abstraction Level

The native instruction set of a machine is a promising computational level for automated diversification because all computer functionality can be expressed in machine code. This makes the machine-code level desirable both to attack and to protect. However, automated diversification is also feasible at higher levels of abstraction, although there are important constraints on suitable candidates.
Language diversification seems most promising for languages that are interpreted or executed directly by a virtual machine; randomizing the source code of a compiled language would protect only against injections at compile time. An additional constraint is the possibility of crafting attacks at the selected language level. Although it is difficult to evaluate this criterion in the abstract, we can simply choose languages for which such attacks have already been shown to exist, such as Java, Perl, and SQL [Harper 2002]; indeed, proposals for diversifying these higher levels have been made [Boyd and Keromytis 2004; Kc et al. 2003]. Macro languages provide another example of a level that could be diversified to defeat macro viruses.

Finally, it is necessary to have a clear trust boundary between internal and external programs so that it is easy to decide which programs should be randomized. The majority of programs should be internal to the trust boundary, or the overhead of deciding what is trusted and untrusted will become too high. This requirement eliminates most web-client scripting languages such as Javascript, because a user decision about trust would be needed every time a Javascript program was about to be executed on a client. A native instruction set, with a network-based threat model, provides a clear trust boundary, as all legitimately executing machine code is stored on a local disk.

An obvious drawback of native instruction sets is that they are traditionally physically encoded and not readily modifiable. RISE therefore operates at an intermediate level, using software that performs binary-to-binary code translation. The performance impact of such tools can be minimal [Bala et al. 2000; Bruening et al. 2001]; indeed, binary-to-binary translators sometimes improve performance compared to running the programs directly on the native hardware [Bala et al. 2000]. For ease of research and dissemination, we selected the open-source emulator Valgrind for our prototype. Although Valgrind is described primarily as a tool for detecting memory leaks and other program errors, it contains a complete IA32-to-IA32 binary translator. The primary drawback of Valgrind is that it is very slow, largely owing to its approach of translating the IA32 code into an intermediate representation and its extensive error checking. However, the additional slowdown imposed by adding RISE to Valgrind is modest, and we are optimistic that porting RISE to a more performance-oriented emulator would yield a fully practical code defense.

2.2 Instruction Set Randomization

Instruction set randomization could be as radical as developing a new set of opcodes, instruction layouts, and a key-based toolchain capable of generating the randomized binary code, and it could take place at many points in the compilation-to-execution spectrum. Although performing randomization early could help distinguish code from data, it would require a full compilation environment on every machine, and recompiled randomized programs would likely keep one fixed key indefinitely. RISE randomizes as late as possible in the process, scrambling each byte of the trusted code as it is loaded into the emulator and unscrambling it before execution. Deferring the randomization to load time makes it possible to scramble and load existing files in the executable and linking format (ELF) [Tool Interface Standards Committee 1995] directly, without recompilation or source code, provided we can reliably distinguish code from data in the ELF file format.

The unscrambling process needs to be fast, and the scrambling process must be as hard as possible for an outsider to deduce. Our current default approach is to generate at load time a pseudorandom sequence the length of the overall program text using the Linux /dev/urandom device [Tso 1998], which uses a secret pool of true randomness to seed a pseudorandom stream generated by feedback through SHA1 hashing. The resulting bytes are simply XORed with the instruction bytes to scramble and unscramble them. In addition, it is possible to specify the length of the key, in which case a smaller key is tiled over the process code. If the underlying truly random key is long enough, and as long as it is infeasible to invert SHA1 [Schneier 1996], we can be confident that an attacker cannot break the entire sequence. The security of this encoding is discussed further in Section 7.
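To make the mechanism concrete, the following is a minimal C sketch of this XOR-based scheme (our illustration with hypothetical helper names, not the actual RISE loader code). Because XOR is its own inverse, the same routine serves for both load-time scrambling and fetch-time unscrambling.

/*
 * Minimal sketch of RISE-style masking (illustrative only).  A mask as
 * long as the program text is drawn from /dev/urandom at load time;
 * code bytes are XORed with the mask when loaded, and XORed again
 * during the emulated fetch to recover the original instructions.
 */
#include <stdio.h>
#include <stddef.h>

/* Fill `mask` with `len` bytes of OS-supplied randomness; 0 on success. */
static int make_mask(unsigned char *mask, size_t len)
{
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL)
        return -1;
    size_t got = fread(mask, 1, len, f);
    fclose(f);
    return (got == len) ? 0 : -1;
}

/* Apply the mask.  A mask shorter than the code is tiled over it,
 * mirroring the optional shorter-key mode described above. */
static void xor_code(unsigned char *code, size_t code_len,
                     const unsigned char *mask, size_t mask_len)
{
    for (size_t i = 0; i < code_len; i++)
        code[i] ^= mask[i % mask_len];
}

Calling xor_code once over a loaded code section scrambles it; applying the same mask to fetched bytes restores them, so the plaintext never needs to be stored in process memory.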
2.3 Design Decisions

Two important aspects of the RISE implementation are how it handles shared libraries and how it protects the plaintext executable.

Much of the code executed by modern programs resides in shared libraries. This form of code sharing can significantly reduce the effect of the diversification, as processes must use the same instruction set as the libraries they require. When our load-time randomization mechanism writes to memory that belongs to shared objects, the operating system does a copy-on-write, and a private copy of the scrambled code is stored in the virtual memory of the process. This significantly increases memory requirements, but it also increases interprocess diversity and avoids having the plaintext code mapped into the protected process's memory. This is strictly a design decision, however: if the designer is willing to sacrifice some security, it can be arranged that processes using RISE share library keys, so that library duplication is avoided.

Protecting the plaintext instructions inside Valgrind is a second concern. As Valgrind simulates the operation of the CPU, during the fetch cycle when the next byte(s) are read from program memory, RISE intercepts the bytes and unscrambles them; the scrambled code in memory is never modified. Eventually, however, a plaintext piece of the program (semantically equivalent to the block of code just read) is written to Valgrind's cache. From a security point of view, it would be best to separate the RISE address space completely from the protected program's address space, so that the plaintext is inaccessible from the program, but as a practical matter this would slow down emulator data accesses to an extreme and unacceptable degree. For efficiency, the interpreter is best located in the same address space as the target binary, but of course this introduces some security concerns. A RISE-aware attacker could aim to inject code into a RISE data area, rather than that of the vulnerable program. This is a problem because the cache cannot be encrypted. To protect the cache, its pages are kept as read-and-execute only. When a new translated basic block is ready to be written to the cache, we mark the affected pages as writable, execute the write, and restore the pages to their original nonwritable permissions.
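In POSIX terms, that write discipline can be sketched as follows (a simplified illustration with a hypothetical function name; the real Valgrind cache management differs in detail):

/*
 * Sketch of the cache write window: pages normally stay read+execute,
 * and are writable only while one translated block is copied in.
 */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static int write_to_cache(void *slot, const void *block, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    uintptr_t lo = (uintptr_t)slot & ~((uintptr_t)page - 1);
    uintptr_t hi = ((uintptr_t)slot + len + page - 1) & ~((uintptr_t)page - 1);

    /* Open the window: make the affected pages writable. */
    if (mprotect((void *)lo, hi - lo, PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        return -1;
    memcpy(slot, block, len);

    /* Close the window: restore read-and-execute only. */
    return mprotect((void *)lo, hi - lo, PROT_READ | PROT_EXEC);
}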
A more principled solution would be to randomize the location of the cache and the fragments inside it, a possibility for future implementations of RISE.

2.4 Implementation Issues

Our current implementation does not handle self-modifying code, but it has a primitive implementation of an interface to support dynamically generated code. We consider arbitrary self-modifying code an undesirable programming practice and agree with Valgrind's model of not allowing it. However, it is desirable to support legitimate dynamically generated code, and eventually we intend to provide a complete interface for this purpose.

An emulator needs to create a clear boundary between itself and the process to be emulated. In particular, the emulator should not use the same shared libraries as the process being emulated. Valgrind deals with this issue by adding its own implementation of all library functions it uses, with a locally modified name (for example, vgPlain_printf instead of printf). However, we discovered that Valgrind occasionally jumped into the target binary to execute low-level functions (e.g., umoddi and udivdi). When that happened, the processor attempted to execute instructions that had been scrambled for the emulated process, causing Valgrind to abort. Although this was irritating, it did demonstrate the robustness of the RISE approach in that these latent boundary crossings were immediately detected. We worked around these dangling unresolved references by adding more local functions to Valgrind and renaming the affected symbols with local names (e.g., rise_umoddi instead of umoddi, which implements "%", the modulo operator).

A more subtle problem arises because the IA32 does not impose any separation between data and code, and some compilers insert dispatch tables directly in the code. In those cases, the addresses in such internal tables are scrambled at load time (because they are in a code section), but are not descrambled at execution time because they are read as data. Although this does not cause an illegal operation, it causes the emulated code to jump to a random address and fail inappropriately. At interpretation time, RISE looks for code sequences that are typical of jump-table referencing and adds machine code to check for in-code references into the block written to the cache. If an in-code reference is detected when the block is executing, our instrumentation descrambles the data that was retrieved and passes it in the clear to the next (real) instruction in the block. This scheme could be extended to deal with the general case of using code as data by instrumenting every dereference to check for in-code references. However, this would be computationally expensive, so we have not implemented it in the current prototype. Code is rarely used as data in legitimate programs, except in the case of virtual machines, which we address separately.

An additional difficulty was discovered with Valgrind itself. The thread support implementation and the memory inspection capabilities require Valgrind to emulate itself at certain moments. To avoid infinite emulation regress, it has a special workaround in its code to execute some of its own functions natively during this self-emulation. We handled this by detecting Valgrind's own address ranges and treating them as special cases. This issue is specific to Valgrind, and we expect not to encounter it in other emulators.
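The jump-table handling described above can be pictured with the following C sketch (hypothetical names and structure; RISE actually emits the equivalent check as machine code inside the translated block):

/*
 * If a data load falls inside the scrambled code section (e.g., a
 * compiler-emitted dispatch table), the fetched word must be
 * descrambled before it is used as an address.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

extern const uint8_t *text_start, *text_end;  /* scrambled code section */
extern const uint8_t *code_mask;              /* per-process XOR mask   */

static uint32_t load_possible_jump_table_entry(const uint8_t *addr)
{
    uint32_t word;
    memcpy(&word, addr, sizeof word);          /* ordinary data load */
    if (addr >= text_start && addr < text_end) {
        /* In-code data: undo the mask, byte by byte. */
        size_t off = (size_t)(addr - text_start);
        uint8_t *p = (uint8_t *)&word;
        for (size_t i = 0; i < sizeof word; i++)
            p[i] ^= code_mask[off + i];
    }
    return word;
}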
3. EFFICACY AND PERFORMANCE OF RISE

The results reported in this section were obtained using the RISE prototype, available under the GPL from http://cs.unm.edu/~immsec. We have tested RISE's ability to run programs successfully under normal conditions and its ability to disrupt a variety of machine code injection attacks.

The attack set contained 20 synthetic and 15 real attacks. The synthetic attacks were obtained from two sources. Two attacks, published by Fayolle and Glaume [2002], create a vulnerable buffer (in one case on the heap and in the other on the stack) and inject shellcode into it. The remaining 18 attacks were executed with the attack toolkit provided by Wilander and Kamkar, and correspond to their classification of possible buffer overflow attacks [Wilander and Kamkar 2003] according to technique (direct or pointer redirection), type of location (stack, heap, BSS, or data segment), and attack target (return address, old base pointer, function pointer, and longjump buffer). Without RISE, whether directly on the processor or using Valgrind, all of these attacks successfully spawn a shell. Using RISE, the attacks are stopped.

The real attacks were launched from the CORE Impact attack toolkit [CORE Security 2004]. We selected 15 attacks that satisfied the following requirements of our threat model and the chosen emulator: the attack is launched from a remote site; the attack injects binary code at some point in its execution; and the attack succeeds on a Linux OS. Because Valgrind runs under Linux, we focused on Linux distributions, reporting data from Mandrake 7.2 and versions of RedHat from 6.2 to 9.

3.1 Results

All real (nonsynthetic) attacks were tested on the vulnerable applications before retesting with RISE. All of them were successful against the vulnerable services without RISE, and they were all defeated by RISE (Table I).

Based on the advisories issued by CERT in the period between 1999 and 2003, Xu et al. [2003] classify vulnerabilities that can inject binary code into a running process according to the method used to modify the execution flow: buffer overflows, format string vulnerabilities, malloc/free errors, and integer manipulation errors. Additionally, the injected code can be placed in different sections of the process (stack, heap, data, BSS). The main value of RISE is its imperviousness to the entry method and/or location of the attack code, as long as the attack itself is expressed as binary code. This is illustrated by the diversity of vulnerability types and shellcode locations used in the real attacks (columns 3 and 4 of Table I).
Table I. Results of Attacks Against Real Applications Executed under RISE

Attack                                 Linux Distribution   Vulnerability                     Location of Injected Code   Stopped by RISE
Apache OpenSSL SSLv2                   RedHat 7.0 & 7.2     Buffer overflow and malloc/free   Heap                        √
Apache mod_php                         RedHat 7.2           Buffer overflow                   Heap                        √
Bind NXT                               RedHat 6.2           Buffer overflow                   Stack                       √
Bind TSIG                              RedHat 6.2           Buffer overflow                   Stack                       √
CVS flag insertion heap exploit        RedHat 7.2 & 7.3     Malloc/free                       Heap                        √
CVS pserver double free                RedHat 7.3           Malloc/free                       Heap                        √
PoPToP Negative Read                   RedHat 9             Integer error                     Heap                        √
ProFTPD xlate_ascii_write off-by-two   RedHat 9             Buffer overflow                   Heap                        √
rpc.statd format string                RedHat 6.2           Format string                     GOT                         √
SAMBA nttrans                          RedHat 7.2           Buffer overflow                   Heap                        √
SAMBA trans2                           RedHat 7.2           Buffer overflow                   Stack                       √
SSH integer overflow                   Mandrake 7.2         Integer error                     Stack                       √
sendmail crackaddr                     RedHat 7.3           Buffer overflow                   Heap                        √
wuftpd format string                   RedHat 6.2–7.3       Format string                     Stack                       √
wuftpd glob "~{"                       RedHat 6.2–7.3       Buffer overflow                   Heap                        √

Column 1 gives the exploit name (and implicitly the service against which it was targeted). The vulnerability type and attack code (shellcode) locations are given in columns 3 and 4, respectively. The result of the attack is given in column 5.
The available synthetic attacks are less diverse in terms of vulnerability type: they are all buffer overflows. However, they do have attack code location variety (stack, heap, and data) and, more importantly, controlled diversity of corrupted code address types (return address, old base pointer, function pointer, and longjump buffer as either local variable or parameter), and offer either direct or indirect execution flow hijacking (see Wilander and Kamkar [2003]). All of Wilander's attacks have the shellcode located in the data section. Both of Fayolle and Glaume's exploits use direct return address pointer corruption; the stack overflow injects the shellcode on the stack, and the heap overflow locates the attack code on the heap. All synthetic attacks are successful (spawn a shell) when running natively on the processor or over unmodified Valgrind. All of them are stopped by RISE (column 5 of Table II).

When we originally tested real attacks and analyzed the logs generated by RISE, we were surprised to find that nine of them failed without ever executing the injected attack code. Further examination revealed that this was due to various issues with Valgrind itself, which have been remedied in later versions; the current RISE implementation in Valgrind 2.0.0 does not show this behavior. All attacks (real and synthetic) are able to succeed when the attacked program runs over Valgrind, just as they do when running natively on the processor.

These results confirm that we successfully implemented RISE and that a randomized instruction set prevents injected machine code from executing, without the need for any knowledge about how or where the code was inserted in process space.

3.2 Performance

Being emulation based, RISE introduces execution costs that affect application performance. For a proof-of-concept prototype, correctness and defensive power were our primary concerns, rather than minimizing resource overhead.
Table II. Results of the Execution of Synthetic Attacks under RISE

Type of Overflow   Shellcode Location   Exploit Origin               Number of Pointer Types   Stopped by RISE
Stack direct       Data                 Wilander and Kamkar [2003]   6                         6 (100%)
Data direct        Data                 Wilander and Kamkar [2003]   2                         2 (100%)
Stack indirect     Data                 Wilander and Kamkar [2003]   6                         6 (100%)
Data indirect      Data                 Wilander and Kamkar [2003]   4                         4 (100%)
Stack direct       Stack                Fayolle and Glaume [2002]    1                         1 (100%)
Stack direct       Heap                 Fayolle and Glaume [2002]    1                         1 (100%)

Type of overflow (column 1) denotes the location of the overflowed buffer (stack, heap, or data) and the type of corruption executed: direct modifies a code pointer during the overflow (such as the return address), and indirect modifies a data pointer that is eventually used to modify a code pointer. Shellcode location (column 2) indicates the segment where the actual malicious code was stored. Exploit origin (column 3) gives the paper from which the attacks were taken. The number of pointer types (column 4) gives the number of different attacks that were tried by varying the type of pointer that was overflowed. Column 5 gives the number of different attacks in each class that were stopped by RISE.
In this section, we describe the principal performance costs of the RISE approach, which include a one-time cost for code randomization during loading, time for derandomization while the process executes, and space overheads. Although in the following we assume an all-software implementation, RISE could also be implemented with hardware support, in which case we would expect much better performance because the encoding and decoding could be performed directly in registers rather than requiring two different memory accesses for each fetch.

The size of each RISE-protected process is increased because it must have its own copy of any library it uses. Moreover, the larger size is as much as doubled to provide space for the randomization mask [1].

A software RISE uses dynamic binary translation and pays a runtime penalty for this translation. Valgrind amortizes interpretation cost by storing translations in a cache, which allows native-speed execution of previously interpreted blocks. Valgrind is much slower than other binary translators [Bala et al. 2000; Bruening et al. 2001] because it converts the IA32 instruction stream into an intermediate representation before creating the code fragment. However, we will give some evidence that long-running, server-class processes can execute at reasonable speeds, and these are precisely the ones for which RISE is most needed.

As an example of this effect, Table III provides one data point about the long-term runtime costs of using RISE, using the Apache web server in the face of a variety of nonattack workloads. Classes 0 to 3, as defined by SPEC Inc. [1999], refer to the size of the files that are used in the workload mix. Class 0 is the least I/O intensive (files are less than 1 KB long), and class 3 is the most I/O intensive (files up to 1000 KB long). As expected, on I/O-bound mixes the throughput of Apache running over RISE is closer to that of Apache running directly on the processor [2].

[1] A RISE command-line switch controls the length of the mask, which is then tiled to cover the program. A 1000-byte mask, for example, would have a negligible cost in mask space and would very probably provide adequate defense. In principle, however, it might open a within-run vulnerability owing to key reuse.

[2] The large standard deviations are typical of SPECweb99, as web server benchmarks have to model long-tailed distributions of request sizes [Nahum 2002; SPEC Inc. 1999].
Table III. Comparison of the Average Time per Operation between Native Execution of Apache and Apache over RISE

Mix Type   Native Mean (ms)   Native Std. Dev.   RISE Mean (ms)   RISE Std. Dev.   RISE/Native
Class 0    177.32             422.22             511.73           1,067.79         2.88
Class 1    308.76             482.31             597.11           1,047.23         1.93
Class 2    1,230.75           624.58             1,535.24         1,173.57         1.25
Class 3    10,517.26          3,966.24           11,015.74        4,380.26         1.05
Total      493.80             1,233.56           802.63           1,581.50         1.62

Presented times were obtained from the second iteration in a standard SPECweb99 configuration (300 s warm-up and 1200 s execution).
Table III shows that the RISE prototype slows down by a factor of no more than 3, and sometimes by as little as 5%, compared with native execution, as observed by the client. These results should not be taken as a characterization of RISE's performance, but as evidence that cache-driven amortization and large I/O and network overheads make the CPU performance hit of emulation just one (and possibly not the main) factor in evaluating the performance of this scheme.

By contrast, short interactive jobs are more challenging for RISE performance, as there is little time to amortize mask generation and cache filling. For example, we measured a slowdown factor of about 16 end-to-end when RISE protected all the processes invoked to build this paper from LaTeX source. Results of the Dynamo project suggest that a custom-built dynamic binary translator can have much lower overheads than Valgrind, suggesting that a commercial-grade RISE would be fast enough for widespread use; in long-running contexts where performance is less critical, even our proof-of-concept prototype might be practical.

4. RISE SAFETY: EXPERIMENTS

Code diversification techniques such as RISE rely on the assumption that random bytes of code are unlikely to execute successfully. When binary code is injected by an attacker and executes, it is first derandomized by RISE. Because the attack code was never prerandomized, the effect of derandomizing is to transform the attack code into a random byte string. This is invisible to the interpretation engine, which will attempt to translate, and possibly execute, the string. If the code executes at all, it clearly will not have the effect intended by the attacker. However, there is some chance that the random bytes might correspond to an executable sequence, and an even smaller chance that the executed sequence of random bytes could cause damage. In this section, we measure the likelihood of these events under several different assumptions, and in the following section we develop theoretical estimates.

Our approach is to identify the possible actions that randomly formed instructions in a sequence could perform and then to calculate the probabilities for these different events.
There are several broad classes of events that we consider: illegal instructions that lead to an error signal, valid execution sequences that lead to an infinite loop or a branch into valid code, and other kinds of errors. There are several subtle complications involved in the calculations, and in some cases we make simplifying assumptions. The simplifications lead to a conservative estimate of the risk of executing random byte sequences.

4.1 Possible Behaviors of Random Byte Sequences

First, we characterize the possible events associated with a generic processor or emulator attempting to execute a random symbol. We use the term symbol to refer to a potential execution unit, because a symbol's length in bytes varies across different architectures. For example, instruction length in the PowerPC architecture is exactly 4 bytes, while in the IA32 it can vary between 1 and 17 bytes. Thus, we adopt the following definitions:

(1) A symbol is a string of l bytes, which may or may not belong to the instruction set. In a RISC architecture, the string will always be of the same length, while for CISC it will be of variable length.
(2) An instruction is a symbol that belongs to the instruction set.

In RISE there is no explicit recognition of an attack, and success is measured by how quickly and safely the attacked process is terminated. Process termination occurs when an error condition is generated by the execution of random symbols. Thus, we are interested in the following questions:

(1) How soon will the process crash after it begins executing random symbols? (Ideally, at the first symbol.)
(2) What is the probability that an execution of random bytes will branch to valid code or enter an infinite loop (escape)? (Ideally, 0.)

Figure 1 illustrates the possible outcomes of executing a single random symbol. There are three classes of outcome: an error that generates a signal, a branch into executable memory in the process space that does not terminate in an error signal (which we call an escape), and the simple execution of the symbol with the program pointer moving to the next symbol in the sequence. Graph traversal always begins in the start state and proceeds until a terminating node is reached (memory error signal, instruction-specific error signal, escape, or start).

The term crash refers to any error signal (the states labeled invalid opcode, specific error signal, and memory error signal in Figure 1). Error signals do not necessarily cause process termination, because the process could have defined handlers for some of the error signals. We assume, however, that protected processes have reasonable signal handlers that terminate the process after receiving such a signal, and we include this outcome in the event crash.

The term escape describes a branch from the sequential flow of execution inside the random code sequence to any executable memory location. This event occurs when the instruction pointer (IP) is modified by random instructions to point either to a location inside the executable code of the process, or to a location in a data section marked as executable even if it does not typically contain code.
Fig. 1. State diagram for random code execution. The graph depicts the possible outcomes of executing a single random symbol. For variable-length instruction sets, the start state represents the reading of bytes until an unambiguous decision about the identity of the symbol can be made.
An error signal is generated when the processor attempts to decode or execute a random symbol in the following cases:

(1) Illegal instruction: The symbol has no further ambiguity and does not correspond to a defined instruction. The per-symbol probability of this event depends solely on the density of the instruction set. An illegal instruction is signaled for undefined opcodes, illegal combinations of opcode and operand specifications, reserved opcodes, and opcodes undefined for a particular configuration (e.g., a 64-bit instruction on a 32-bit implementation of the PowerPC architecture).
(2) Illegal read/write: The instruction is legal, but it attempts to access a memory page for which it does not have the required operation privileges, or which lies outside the process's virtual memory.
(3) Operation error: Execution fails because the process state has not been properly prepared for the instruction; for example, division by 0, memory errors during a string operation, accessing an invalid port, or invoking a nonexistent interrupt.
(4) Illegal branch: The instruction is of the control transfer type and attempts to branch into a nonexecutable or nonallocated area.
(5) Operation not permitted: A legal instruction fails because the rights of the owner process do not allow its execution, for example, an attempt to use a privileged instruction in user mode.

There are several complications associated with branch instructions, depending on the target address of the branch. We assume that the only dangerous class of branch is a correctly invoked system call.
The probability of randomly invoking a system call in Linux is 1/256 × 1/256 ≈ 1.52 × 10⁻⁵ for the IA32, and at most 1/2³² ≈ 2.33 × 10⁻¹⁰ for the 32-bit PowerPC, and this is without adding the restriction that the arguments be reasonable.

Alternatively, a process failure could remain hidden from an external observer, and we will see that this event is more likely. A branch into the executable code of the process (ignoring alignment issues) will likely result in the execution of at least some instructions, and will perhaps lead to an infinite loop. This is an undesirable event because it hides the attack attempt even if it does not damage permanent data structures. We model successful branches into executable areas (random or nonrandom) as always leading to the escape state in Figure 1. This conservative assumption allows us to estimate how many attack instances will not be immediately detected. These "escapes" do not execute hostile code; they are simply attack instances that are likely not to be immediately observed by an external process monitor.

The probability of a branch resulting in a crash or an escape depends at least in part on the size of the executing process, and this quantity is a parameter in our calculations. Different types of branches have different probabilities of reaching valid code. For example, if a branch specifies its destination as a full address constant (immediate) in the instruction itself, the constant will have been randomized, and the probability of landing in valid code will depend only on the density of valid code in the total address space, which tends to be low. A return, by contrast, takes the branch address from the current stack pointer, which has a high probability of pointing to a real-process return address. We model these many possibilities by dividing memory accesses, for both branch and nonbranch instructions, into two broad classes:
(1) Process-state dominated: When the randomized exploit begins executing, the only part of the process that has been altered is the memory that holds the attack code. Most of the process state (e.g., the contents of the registers, data memory, and the stack) remains intact and consistent. However, we do not have good estimates of the probability that using these values from registers and memory will cause an error, so we arbitrarily assign probabilities for these values and explore the sensitivity of the system to different choices. Experimentally we know that most memory accesses fail (see Figure 2).

(2) Immediate dominated: If a branch calculates the target address from a full-address-size immediate, we can assume that the probability of successful execution depends on the memory occupancy of the process, because the immediate is just another random number generated by the application of the mask to the attack code.

We use this classification in empirical studies of random code execution (Section 4.2). These experiments provide evidence that most processes terminate quickly when random code sequences are inserted. We then describe a theoretical model for the execution of random IA32 and PowerPC instructions (Section 5), which allows us to validate the experiments and provides a framework for future analysis of other architectures.
Fig. 2. Executing random blocks on native processors. The plots show the distribution of runs by type of outcome for (a) IA32 and (b) PowerPC. Each color corresponds to a different random block size (rb): 4, 16, 28, 40, and 52 bytes. The filler is set such that the total process density is 5% of the possible 2³² address space. The experiment was run under the Linux operating system.
4.2 Empirical Testing

We performed two kinds of experiments: (1) execution of random blocks of bytes on native processors, and (2) execution of real attacks in RISE on the IA32.

4.2.1 Executing Blocks of Random Code. We wrote a simple C program that executes blocks of random bytes; the block of random bytes simulates a randomized exploit running under RISE. We then tested the program for different block sizes (the "exploit") and different degrees of process space occupancy. The program allocates a prespecified amount of memory (determined by the filler size parameter) and fills it with the machine code for no operation (NOP). The block of random bytes is positioned in the middle of the filler memory.

Figure 2 depicts the observed frequency of the events defined in Section 4.1. There is a preponderance of memory access errors in both architectures, although the less dense PowerPC has an almost equal frequency of illegal instructions; illegal instructions occur infrequently in the IA32 case. In both architectures, about one-third of legal branch instructions fail because of an invalid memory address, and two-thirds manage to execute the branch. Conditional branches form the majority of branch instructions in most architectures, and these branches have a high probability of executing because of their very short relative offsets.

Because execution probabilities could be affected by the memory occupancy of the process, we tested different process memory sizes, expressed as fractions of the total possible 2³² address space (Table IV). Each execution takes place inside GDB (the GNU debugger), single-stepping until either a signal occurs or more than 100 instructions have been executed. We collect information about the type of instruction, addresses, and types of signals during the run. We ran this scenario with 10,000 different seeds, five random block sizes (4, 8, 24, 40, and 56 bytes), and five total process densities (see Table IV), both for the PowerPC and the IA32.
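The harness we used is not reproduced here, but its core can be sketched along the following lines (our reconstruction under stated assumptions: IA32, Linux, and in-process signal handlers standing in for GDB's supervision):

/*
 * Sketch of a random-block execution harness: surround a block of
 * random bytes with NOP filler in an executable buffer, jump into it,
 * and record which signal terminates the run.
 */
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static sigjmp_buf env;
static volatile sig_atomic_t fatal_signal;

static void on_signal(int sig)
{
    fatal_signal = sig;
    siglongjmp(env, 1);
}

int main(void)
{
    const size_t filler = 1 << 20;   /* NOP filler size               */
    const size_t rb = 52;            /* random block ("exploit") size */
    unsigned char *buf = mmap(NULL, filler,
                              PROT_READ | PROT_WRITE | PROT_EXEC,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;
    memset(buf, 0x90, filler);       /* 0x90 = IA32 NOP */

    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL || fread(buf + filler / 2, 1, rb, f) != rb)
        return 1;
    fclose(f);

    signal(SIGILL, on_signal);
    signal(SIGSEGV, on_signal);
    signal(SIGBUS, on_signal);
    signal(SIGFPE, on_signal);

    if (sigsetjmp(env, 1) == 0)
        ((void (*)(void))(buf + filler / 2))();   /* enter the random bytes */
    printf("terminated by signal %d\n", (int)fatal_signal);
    return 0;
}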
Table IV. Process Memory Densities (Relative to Process Size)

Process memory density (as a fraction of the 2³² address space): 0.0002956, 0.0036093, 0.0102365, 0.0234910, 0.0500000

Densities are based on observed process memory use in two busy IA32 Linux systems over a period of two days.
Fig. 3. Probability that random code escapes when executed, for different block sizes (the x-axis), for (a) IA32 and (b) PowerPC. Block size is the length of the sequence of random bytes inserted into the process. Each set of connected points represents a different memory density (q). Solid lines represent the fraction of runs that escaped under our definition of escape, and dotted lines show the fraction of "true" escaped executions (those that did not fail after escaping from the exploit area).
Figure 3 plots the fraction of runs that escaped, according to our definition of escape (given in Section 4.1), for different memory densities. An execution was counted as an escape if a jump was executed and did not fail immediately (that is, it jumped to an executable section of the code). In addition, the figure shows the proportion of escapes that did not crash within a few bytes of the exploit area ("true" escapes, for example, when the execution is trapped in an infinite loop). Escapes that continued executing for more than 100 instructions were terminated. The figure shows that for realistic block sizes (over 45 bytes), the proportion of true escapes is under 10% on the IA32. In the PowerPC case, although the fraction of escaped runs is smaller, most of the escapes do not fail afterwards, so the curves overlap.

A second observation (not shown) is that memory density has a negligible effect on the probability of escape, even though we created an environment that maximizes successful escapes. This is likely because the process sizes are still relatively small compared to the total address space and because only a minority of memory accesses are affected by this density (those that are immediate dominated).
Fig. 4. Proportion of runs that fail after exactly n instructions, with memory density 0.05, and random block (simulated attack) size 52 bytes, for (a) IA32 and (b) PowerPC. On the right, the proportion of escaped vs. crashed runs is presented for comparison.
Figure 4 shows the proportion of failed runs that die after executing exactly n instructions. On the right side of the graph, the proportion of escaped versus failed runs is shown for comparison. Each instruction-length bar comprises five subbars, one for each simulated attack size. We plot them all to show that the size of the attack has almost no effect on the number of instructions executed, except for very small sizes. On the IA32, more than 90% of all failed runs died after executing at most 6 instructions, and in no case did execution continue for more than 23 instructions. The effect is even more dramatic on the PowerPC, where 90% of all failed runs executed fewer than 3 instructions, and the longest failed run executed only 10 instructions.

4.2.2 Executing Real Attacks under RISE. We ran several vulnerable applications under RISE and attacked them repeatedly over the network, measuring how long they took to fail. We also tested the two synthetic attacks from Fayolle and Glaume [2002]; in this case the attack and the exploit are in the same program, so we ran each of them under RISE 10,000 times, collecting output from RISE.

Table V summarizes the results of these experiments. The real attacks fail within an average of two to three instructions (column 4). Column 3 shows how many attack instances we ran (each with a different random seed for masking) to compute the average. As column 5 shows, most attack instances crashed instead of escaping. The synthetic attacks averaged just under two instructions before process failure. No execution of any of the attacks was able to spawn a shell.

Within the RISE approach, one could reduce the problem of accidentally viable code by mapping to a larger instruction set, whose size could be tuned to reflect the desired percentage of incorrect unscramblings that will likely lead immediately to an illegal instruction.
Table V. Survival Time in Executed Instructions for Attack Codes in Real Applications Running under RISE

Attack Name                          Application         No. of Attacks   Avg. no. of Insns.   Crashed Before Escape (%)
Named NXT resource record overflow   Bind 8.2.1-7        101              2.24                 85.14
rpc.statd format string              nfs-utils 0.1.6-2   102              2.06                 85.29
Samba trans2 exploit                 smbd 2.2.1a         81               3.13                 73.00
Synthetic heap exploit               N/A                 10,131           1.98                 93.93
Synthetic stack exploit              N/A                 10,017           1.98                 93.30
Column 4 gives the average number of instructions executed before failure (for instances that did not “escape”). Column 5 summarizes the percentage of runs crashing (instead of “escaping”).
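The short survival times in Table V are consistent with a simple illustrative model: if each fetched (incorrectly unscrambled) instruction independently fails with some probability f, the number of instructions executed before failure is roughly geometric with mean 1/f. The sketch below simulates that toy model; f = 0.5 is an assumed value chosen only to show the shape of the distribution, not a measured quantity.

    import random

    # Toy model (assumption): each unscrambled instruction fails
    # independently with probability f, so the number of instructions
    # executed before the crash is roughly geometric with mean 1/f.
    def mean_survival(f: float, trials: int = 10_000) -> float:
        total = 0
        for _ in range(trials):
            n = 1
            while random.random() > f:  # instruction happened to execute
                n += 1
            total += n                  # the n-th instruction crashed
        return total / trials

    print(mean_survival(0.5))           # mean near 2, like Table V's averages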
5. RISE SAFETY: THEORETICAL ANALYSIS

This section develops theoretical estimates of RISE safety and compares them with the experiments reported in the previous section. A theoretical analysis is important for several reasons. Diversified code techniques of various sorts and at various levels are likely to become more common, and we need to understand exactly how much protection they confer. In addition, analysis makes it possible to predict the effect of code diversity on new architectures before they are built; for example, it allows us to predict how much increase in safety could be achieved by expanding the size of the instruction space by a fixed amount. In the case of a variable-length instruction set, such as the IA32, we compute the aggregate probabilities using a Markov chain. In the case of a uniform-length instruction set, such as the PowerPC, we can compute the probabilities directly.

5.1 IA32 Instruction Set

For the IA32 instruction set, which is a CISC architecture, we use the published instruction set specification [Intel Corporation 2004] to build a Markov chain used to calculate the escape probability of a sequence of m random bytes (with byte length b = 8 bits). Our analysis is based on the graph of event categories shown in Figure 1, but it is specialized to include the byte-to-byte symbol recognition transitions. A transition is defined as the reading of a byte by the processor, and the states describe either specific positions within instructions or exceptions. Appendix A provides the specifics of this particular Markov chain encoding. Apart from the complexity of encoding the large and diverse IA32 instruction set, the major difficulty in the model is the decision of what to do when a symbol crosses the boundary of the exploit area. It is conceivable that the result of the interpretation is still ambiguous at the byte that crosses the border.
Fig. 5. Theoretical analysis of IA32 escape probability: The x-axis is the number of bytes in the random sequence, and the y-axis is the probability of escaping from a random string of m bytes. Each connected set of plotted points corresponds to one assumed probability of successfully executing a process-state-dominated memory access (p_s), with either the strict or the loose criterion of escape. The memory density is fixed at 0.05. For comparison with empirical data, the dashed line with triangles marks the observed average frequency of successful jumps (data taken from Figure 3 for the IA32 and memory density q = 0.05).
However, the model needs to decide what happens to the execution at that moment. We model this situation using both extremes: a loose escape declares the execution an escape if the bytes interpreted up to the crossing point have not caused a crash; a strict escape declares that the execution of the partial instruction ends in a crash. A characterization of the states in terms of the Markov chain is presented in Appendix A.

Figure 5 shows the probability of escape as a function of increasing sequence length for both the loose and strict criteria of escape, for a fixed memory density (0.05) and for different probabilities p_s that a process-state-dominated memory access executes. The plot reveals several interesting phenomena. First, the plots converge to a steady state quickly, in less than 20 bytes. This is consistent with the empirical data. Notably, the probability of escape converges to a nonzero value: independently of exploit or process size, there will always be a nonzero probability that a sequence of random code will escape.

A second observation revealed by the plot is the relatively small difference between our loose and strict criteria for escape. The main difference between the two cases is how the last instruction in the sequence is interpreted if the string has not crashed before the exploit border. Not surprisingly, as sequences get longer, the probability of reaching the last symbol diminishes, so the overall effect of an ambiguous last instruction in those few cases is correspondingly smaller.
Table VI. Partition of Symbols into Disjoint Sets

Set Name   Type of Instructions in Set
U          Undefined instructions.
P          Privileged instructions.
B_SR       Small-offset, relative branches.
L_D        Legal instructions with no memory access and no branching. All branches require memory access, so L_D contains only linear instructions.
L_MI       Legal no-branch instructions with immediate-dominated memory access.
B_MI       Legal branch instructions with immediate-dominated memory access.
L_MP       Legal no-branch instructions with process-state-dominated memory access.
B_MP       Legal branch instructions with process-state-dominated memory access.
A third observation (data not shown in the figure) is that for different memory densities, the escape curves are nearly identical. This means that memory size has almost no effect on the probability of escape at typical process memory occupancies. In part, this reflects the fact that most jumps use process-state-dominated memory accesses; immediate-dominated memory accesses constitute a very small proportion of the instructions that use memory (only 4 out of more than 20 types of jumps).

The fourth observation concerns the fact that the first data point in the empirical run (block size of 4 bytes) differs markedly from all the strict and loose predicted curves. Both criteria are extreme cases, and the observed behavior is in fact bounded by them. The divergence is most noticeable during the first 10 bytes, as most IA32 instructions have a length between 4 and 10 bytes. As noted before, the curves for loose and strict converge rapidly as the effect of the last instruction becomes less important, and so we see a much closer fit with the predicted behavior after 10 bytes, as the bounds become tighter.

The final observation is that the parameter p_s varies less than expected. We were expecting the empirical data to have an ever-increasing negative slope, given that in principle the entropy of the process would increase as more instructions were executed. Instead, we get a close fit with p_s = 0.6 after the first 20 bytes. This supports our approximation of the probability of execution for process-state-dominated instructions as a constant that can be determined by system profiling.

5.2 Uniform-Length Instruction Set Model

A uniform-length instruction set is simpler to analyze because it does not require conditional probabilities on instruction length. Therefore, we can estimate the probabilities directly without resorting to a Markov chain. Our analysis generalizes to any RISC instruction set, but we use the PowerPC [IBM 2003] as an example. Let all instructions be of length b bits (usually b = 32). We calculate the probability of escape from a random string of m symbols r = r_1 ... r_m, each of length b bits (assumed to be drawn from a uniform distribution of 2^b possible symbols). We can partition all possible symbols into disjoint sets with different execution characteristics. Table VI lists the partition we chose to use. Figure 7 in Appendix B illustrates the partition in terms of the classification of events given in Section 4.1.
S = U ∪ P ∪ B_SR ∪ L_D ∪ L_MI ∪ B_MI ∪ L_MP ∪ B_MP is the set of all possible symbols that can be formed with b bits, so |S| = 2^b. The probability that a symbol s belongs to a given set I (where I is one of U, P, B_SR, L_D, L_MI, B_MI, L_MP, or B_MP) is P{s ∈ I} = P(I) = |I| / 2^b. Suppose there are a bits for addressing (so the size of the address space is 2^a); let E_I be the event that a symbol belonging to set I executes; let M_t be the total memory space allocated to the process; let M_e be the total executable memory of the process; and let p_s be the probability that a memory access dominated by the processor state succeeds. Then the probabilities of successful execution for instructions in each set are: for illegal and privileged opcodes, P(E_U) = P(E_P) = 0; for the remaining legal opcodes, P(E_LD) = P(E_BSR) = 1, P(E_LMI) = M_t / 2^a, P(E_BMI) = M_e / 2^a, and P(E_LMP) = P(E_BMP) = p_s.
We are interested in the probability of a successful branch (escape) out of a sequence of n random symbols. Let X_n denote the event that an execution escapes at exactly symbol n. This event requires that n − 1 instructions execute without branching and that the nth instruction branches successfully. In consequence, P(X_n) = P(L)^(n−1) P(E), where

P(L) = P(L_D) + P(L_MI) P(E_LMI) + P(L_MP) P(E_LMP)

is the probability that a symbol executes a successful linear instruction, and

P(E) = P(B_SR) + P(B_MI) P(E_BMI) + P(B_MP) P(E_BMP)

is the probability that a symbol executes a valid branch. If X_n* is the event that the execution of a random string r = r_1 ··· r_n escapes, its probability P(X_n*) is given by (Appendix B):

P(X_n*) = P(E) (1 − P(L)^(n+1)) / (1 − P(L)) + P(L)^n.
P(X_n*) is plotted in Figure 6 for different values of p_s, increasing random code sizes, and a given memory density (0.05, as in the IA32 case). The comparable data points from our experiments are shown for comparison. We did not plot results for different memory densities because the difference among the curves is negligible. The figure shows that the theoretical analysis agrees with our experimental results. The parameters were calculated from the published documentation of the PowerPC instruction set [IBM 2003] for the 32-bit case: b = 32, a = 32, P(L_D) ≈ 0.25, P(L_MI) = 0, P(L_MP) ≈ 0.375, P(B_MI) ≈ 0.015, P(B_MP) ≈ 0.030, P(B_SR) ≈ 0.008. It can be seen that the probability of escape converges to a nonzero value. For a uniform-length instruction set, this value can be calculated as

lim_{n→∞} P(X_n*) = P(E) / (1 − P(L)).
The limit value of P(X_n*) is the lower bound on the probability of a sequence of length n escaping. It is independent of n, so larger exploit sizes are no more likely to fail than smaller ones in the long run. It is larger than 0 for any architecture in which the probability of successful execution of a jump to a random location is larger than 0.
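As a concrete check of these formulas, the minimal sketch below evaluates P(X_n*) and its limit with the PowerPC-derived set probabilities quoted above; the process-state success rate p_s = 0.6 and the memory-density value 0.05 (used for M_t/2^a and M_e/2^a) are illustrative assumptions taken from the discussion, not instruction-set constants.

    # Escape probability for a uniform-length instruction set, using the
    # PowerPC-derived parameters quoted above. ps and the memory density
    # are illustrative assumptions.
    P_LD, P_LMI, P_LMP = 0.25, 0.0, 0.375
    P_BSR, P_BMI, P_BMP = 0.008, 0.015, 0.030
    ps = 0.6                      # assumed process-state access success rate
    E_LMI = E_BMI = 0.05          # assumed M_t/2^a and M_e/2^a at density 0.05

    P_L = P_LD + P_LMI * E_LMI + P_LMP * ps   # successful linear instruction
    P_E = P_BSR + P_BMI * E_BMI + P_BMP * ps  # successful branch (escape)

    def p_escape(n: int) -> float:
        """P(X_n*) for a string of n random symbols."""
        return P_E * (1 - P_L ** (n + 1)) / (1 - P_L) + P_L ** n

    for n in (1, 5, 20, 100):
        print(n, round(p_escape(n), 4))
    print("limit:", round(P_E / (1 - P_L), 4))   # P(E) / (1 - P(L)), about 0.051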
Fig. 6. Theoretical probability of escape for a random string of n symbols. Each curve plots a different probability of executing a process-state-dominated memory access (p_s) for the PowerPC uniform-length instruction set. Process memory occupancy is fixed at 0.05. The large triangles are the measured data points for the given memory occupancy (data taken from Figure 3 for the PowerPC and memory density q = 0.05), and the dotted lines are the predicted probabilities of escape.
6. RELATED WORK

Our randomization technique is an example of automated diversity, an idea that has long been used in software engineering to improve fault tolerance [Avizienis 1995; Avizienis and Chen 1977; Randell 1975] and more recently has been proposed as a method for improving security [Cohen 1993; Forrest et al. 1997; Pu et al. 1996]. The RISE approach was introduced in Barrantes et al. [2003], and an approach similar to RISE was proposed in Kc et al. [2003]. Many other approaches have been developed for protecting programs against particular methods of code injection, including static code analysis [Dor et al. 2003; Larochelle and Evans 2001; Wagner et al. 2000] and runtime checks, using either static code transformations [Avijit et al. 2004; Baratloo et al. 2000; Chiueh and Hsu 2001; Cowan et al. 1998, 2001; Etoh and Yoda 2000, 2001; Jones and Kelly 1997; Lhee and Chapin 2002; Nebenzahl and Wool 2004; Prasad and Chiueh 2003; Ruwase and Lam 2004; Tsai and Singh 2001; Vendicator 2000; Xu et al. 2002], dynamic instrumentation [Baratloo et al. 2000; Kiriansky et al. 2002], or hybrid schemes [Jim et al. 2002; Necula et al. 2002]. In addition, some methods focus on protecting an entire system rather than a particular program, resulting in defense mechanisms at the operating system level and
hardware support [Milenković et al. 2004; PaX Team 2003; Xu et al. 2002]. Instruction-set randomization is also related to the hardware code encryption methods explored in Kuhn [1997] and those proposed for TCPA/TCG [TCPA 2004].

6.1 Automated Diversity

Diversity in software engineering is quite different from diversity for security. In software engineering, the basic idea is to generate multiple independent solutions to a problem (e.g., multiple versions of a software program) with the hope that they will fail independently, thus greatly improving the chances that some solution out of the collection will perform correctly in every circumstance. The different solutions may or may not be produced manually, and the number of solutions is typically quite small, around 10. Diversity in security is introduced for a different reason. Here, the goal is to reduce the risk of widely replicated attacks by forcing the attacker to redesign the attack each time it is applied. For example, in the case of a buffer overflow attack, the goal is to force the attacker to rewrite the attack code for each new computer that is attacked. Typically, the number of different diverse solutions is very high, potentially equal to the total number of program copies for any given program. Manual methods are thus infeasible, and the diversity must be produced automatically.

Cowan et al. [2000] introduced a classification of diversity methods applied to security (called "security adaptations") which classifies diversifications based on what is being adapted: either the interface or the implementation. Interface diversity modifies code layout or access controls to interfaces, without changing the underlying implementation to which the interface gives access. Implementation diversity, on the other hand, modifies the underlying implementation of some portion of the system to make it resistant to attacks. RISE can be viewed as a form of interface diversity at the machine-code level.

In 1997, Forrest et al. presented a general view of the possibilities of diversity for security [Forrest et al. 1997], introducing the idea of deliberately diversifying data and code layouts. They used the example of randomly padding stack frames to make exact return address locations less predictable, and thus more difficult for an attacker to locate. Developers of buffer overflow attacks have responded with a variety of workarounds, such as "ramps" and "landing zones" of no-ops and multiple return addresses. Automated diversity via random stack padding forces an attacker to use such techniques; it also requires larger attack codes, in proportion to the size range of the random padding employed. Other work in automated diversity for security has experimented with diversifying data layouts [Cohen 1993; Pu et al. 1996], as well as system calls [Chew and Song 2002] and file systems [Cowan et al. 2000]. In addition, several projects address the code-injection threat model directly, and we describe those projects briefly.

Chew and Song [2002] proposed a method that combines kernel and loader modification at the system level with binary rewriting at the process level
to provide system-call number randomization, random stack relocation, and randomization of standard library calls. To our knowledge, this work has not been completely evaluated.

Address space layout randomization (ASLR) [PaX Team 2003] and transparent runtime randomization (TRR) [Xu et al. 2003] randomize the positions of the stack, shared libraries, and heap. The main difference between the two is the implementation level: ASLR is implemented in the kernel, while TRR modifies the loader program. Consequently, TRR is more oriented toward the end user. Bhatkar et al. [2003] describe a method that randomizes the addresses of data structures internal to the process, in addition to the base addresses of the main segments. Internal data and code blocks are permuted inside the segments, and the guessing range is increased by introducing random gaps between objects. The current implementation instruments object files and ELF binaries to carry out the required randomizations. No access to the source code is necessary, but this makes the transformations extremely conservative. This technique nicely complements that of RISE, and the two could be used together to provide protection against both code-injection and return-into-libc attacks simultaneously. PointGuard [Cowan et al. 2003] uses automated randomization of pointers in the code and is implemented by instrumenting the intermediate code (the AST in GCC).

The automated diversity project closest to RISE is the system described in Kc et al. [2003], which also randomizes machine code. There are several interesting points of comparison with RISE, and we describe two of them: (1) per-system (whole image) versus per-process randomization; and (2) Bochs [Butler 2004] versus Valgrind as emulator. First, in the Kc et al. implementation, a single key is used to randomize the image, all the libraries, and any applications that need to be accessed in the image. The system later boots from this image. This has the advantage that, in theory, kernel code could be randomized using their method, although most code-injection attacks target application code. A drawback of this approach lies in its key management. There is a single key for all applications in the image, and the key cannot be changed during the lifetime of the image. Key guessing is a real possibility in this situation, because the attacker would be likely to know the cleartext of the image. The Kc et al. system is more compact because there is only one copy of the libraries; on the other hand, if the key is guessed for any one application or library, then all the rest are vulnerable. Second, the implementations differ in their choice of emulator. Because Bochs is a pure interpreter, it incurs a significant performance penalty, while emulators such as Valgrind can potentially achieve close-to-native efficiency through the use of optimized and cached code fragments.

A randomization of the SQL language was proposed in Boyd and Keromytis [2004]. This technique is essentially the same one used in the Perl randomizer [Kc et al. 2003], with a random string added to query keywords. It is implemented through a proxy application on the server side. In principle, there could be one server proxy per database connection, thus allowing more key
diversity. The performance impact is minimal, although key capture is theoretically possible in a networked environment.

6.2 Other Defenses Against Code Injection

Other defenses against code injection (sometimes called "restriction methods") can be divided into methods at the program level and at the system level. In turn, approaches at the program level comprise static code analysis and runtime code instrumentation or surveillance. System-level solutions can be implemented in the operating system or directly through hardware modifications. Of these, we focus on the methods most relevant to RISE.

6.2.1 Program-Level Defenses Against Code Injection. Program-level approaches can be seen as defense-in-depth, beginning with suggestions for good coding practices and/or use of type-safe languages, continuing with automated analysis of source code, and finally reaching static or dynamic modification of code to monitor the progress of the process and detect security violations. Comparative studies of program-level defenses against buffer overflows have been presented by Fayolle and Glaume [2002], Wilander and Kamkar [2003], and Simon [2001]. Several relevant defenses are briefly discussed below.

The StackGuard system [Cowan et al. 1998] modifies GCC to interpose a canary word before the return address; the canary's value is checked before the function returns, so an attempt to overwrite the return address via linear stack smashing will change the canary value and thus be detected. StackShield [Vendicator 2000], RAD [Chiueh and Hsu 2001], install-time vaccination [Nebenzahl and Wool 2004], and binary rewriting [Prasad and Chiueh 2003] all use instrumentation to store a copy of the function return address off the stack and check against it before returning, to detect an overwrite. Another variant, ProPolice [Etoh and Yoda 2000, 2001], uses a combination of a canary word and frame-data relocation to prevent sensitive data from being overwritten. The split control and data stack [Xu et al. 2002] divides the stack into a control stack for return addresses and a data stack for all other stack-allocated variables. FormatGuard [Cowan et al. 2001] used the C preprocessor (CPP) to add parameter counting to printf-like C functions and defend programs against format-string vulnerabilities; this implementation was not comprehensive even against this particular type of attack. A slightly different approach uses wrappers around standard library functions, which have proven to be a continuous source of vulnerabilities. Libsafe [Baratloo et al. 2000; Tsai and Singh 2001], TIED and LibsafePlus [Avijit et al. 2004], and the type-assisted bounds checker proposed by Lhee and Chapin [2002] intercept library calls and attempt to ensure that their manipulation of user memory is safe. An additional group of techniques depends on runtime bounds checking of memory objects, such as the Jones and Kelly bounds checker [Jones and Kelly 1997] and the more recent C range error detector (CRED) [Ruwase and Lam 2004]. Their heuristics differ in how they determine whether a reference is still legal. Both can generate false positives, although CRED is less computationally expensive.
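To make the canary mechanism concrete, here is a minimal toy model in Python (the real defenses above operate on the native stack in compiled code); the frame layout and sizes are illustrative assumptions.

    import os

    BUF, CANARY, RET = 8, 4, 8   # assumed layout: buffer | canary | return address

    def call_with_canary(payload: bytes) -> None:
        canary = os.urandom(CANARY)          # fresh random canary per call
        frame = bytearray(BUF) + bytearray(canary) + bytearray(b"R" * RET)
        frame[0:len(payload)] = payload      # unchecked copy: may overflow
        if bytes(frame[BUF:BUF + CANARY]) != canary:
            raise RuntimeError("stack smashing detected")  # canary clobbered

    call_with_canary(b"ok")                  # fits in the buffer: passes
    try:
        call_with_canary(b"A" * 20)          # linear overflow reaches the canary
    except RuntimeError as e:
        print(e)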
The common theme in all these techniques is that they are specific defenses, targeting specific points of entry for the injected code (stack, buffers, format functions, and so on). Therefore, they cannot prevent an injection arriving through a different source or an undiscovered vulnerability type. RISE, on the other hand, is a generic defense that is independent of the method by which binary code is injected.

There is also a collection of dynamic defense methods that do not require access to the original sources or binaries. They operate directly on the process in memory, either by inserting instrumentation as extra code (during the load process or as a library) or by taking complete control, as in the case of native-to-native emulators. Libverify [Baratloo et al. 2000] saves a copy of the return address to compare at the function end, so it is a predecessor to install-time vaccination [Nebenzahl and Wool 2004] and binary rewriting [Prasad and Chiueh 2003], with the difference that it is implemented as a library that performs the rewrite dynamically, so the binaries on disk do not require modification.

Code shepherding [Kiriansky et al. 2002] is a comprehensive, policy-based restriction defense implemented over a binary-to-binary optimizing emulator. The policies concern client code control transfers that are intrinsically detected during the interpretation process. Two of those types of policies are relevant to the RISE approach. Code origin policies grant differential access based on the source of the code. When it is possible to establish whether the instruction to be executed came from a disk binary (modified or unmodified) or from dynamically generated code (original or modified after generation), policy decisions can be made based on that origin information. In our model, we implicitly implement a code origin policy in which only unmodified code from disk is allowed to execute. An advantage of the RISE approach is that the origin check cannot be avoided: only properly sourced code is mapped into the private instruction set, so only it executes successfully. Currently, the only exception we make to the disk-origin policy is for the code deposited on the stack by signals. RISE inherits its signal manipulation from Valgrind [Nethercote and Seward 2003]. More specifically, all client signals are intercepted and treated as special cases. Code left on the stack is executed separately from the regular client-code fetch cycle, so it is not affected by the scrambling. This resembles PaX's special handling of signals, where code left on the stack is separately emulated. Also relevant are restricted control transfers, in which a transfer is allowed or disallowed according to its source, destination, and type. Although we use a restricted version of this policy to allow signal code on the stack, in most other cases we rely on the RISE language barrier to ensure that injected code will fail.

6.2.2 System-Level Defenses Against Code Injection. System-level restriction techniques can be applied in the operating system, in hardware, or in both. We briefly review some of the most important system-level defenses. The nonexecutable stack and heap, as implemented in the PAGEEXEC feature of PaX [PaX Team 2003], is hardware assisted: it divides allocation into data and code TLBs and intercepts all page-fault handlers into the code TLB.
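The origin property exploited here comes from load-time scrambling plus fetch-time unscrambling. The following sketch shows the general shape of such a scheme as a per-address XOR pad; the lazy key table and flat addressing are simplifying assumptions for illustration, not RISE's actual data structures.

    import os

    key: dict[int, int] = {}                 # lazily created mask, one byte per address

    def mask(addr: int) -> int:
        return key.setdefault(addr, os.urandom(1)[0])

    def scramble(base: int, code: bytes) -> bytes:
        """Run once when trusted code is loaded from disk."""
        return bytes(b ^ mask(base + i) for i, b in enumerate(code))

    def fetch(image: bytes, base: int, i: int) -> int:
        """Run on every emulated instruction fetch."""
        return image[i] ^ mask(base + i)

    code = bytes([0x55, 0x89, 0xE5])         # trusted bytes loaded from disk
    image = scramble(0x8048000, code)
    assert [fetch(image, 0x8048000, i) for i in range(3)] == list(code)
    # Injected bytes were never scrambled, so unscrambling turns them into
    # (almost certainly) illegal noise:
    noise = [b ^ mask(0x8048000 + i) for i, b in enumerate(b"\x90\x90\xcc")]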
As with any hardware-assisted technique, it requires changes to the kernel. RISE is functionally similar to these techniques, sharing the ability to randomize ordinary executable files with no special compilation requirements. Our approach differs, however, from nonexecutable stacks and heaps in important ways. First, it does not rely on special hardware support (although RISE pays a performance penalty for its hardware independence). Second, although a system administrator can choose to disable certain PaX features on a per-process basis, RISE can be used by an end user to protect user-level processes without any modification to the overall system.

A third difference between PaX and RISE is in how they handle applications that emit code dynamically. In PaX, a process that emits code requires having the PAGEEXEC feature disabled (at least), so the process remains vulnerable to injected code. If such a process intended to use RISE, it could modify its code-emitting procedures to use an interface provided by RISE, derived from Valgrind's interface for Valgrind-aware applications. The interface uses a validation scheme based on the original randomization of code from disk. In a pure language randomization, a process emitting dynamic code would have to do so in the particular language being used at that moment. In our approximation, the process using the interface scrambles the new code before execution. The interface, a RISE function, treats the fragment of code as a new library and randomizes it accordingly. In contrast to a nonexecutable stack/heap, this does not make the area where the new code is stored any more vulnerable, as code injected into this area will still be expressed in nonrandomized code and will not be able to execute except as random bytes.

Some other points of comparison between RISE and PaX include:

(1) Resistance to return-into-libc: Both RISE and PaX's PAGEEXEC are susceptible to return-into-libc attacks when implemented as an isolated feature. RISE is vulnerable to return-into-libc attacks without an internal data structure randomization, and data structure randomization is vulnerable to injected code without the code randomization. Similarly, as the PaX Team notes, PAGEEXEC is vulnerable to return-into-libc without ASLR (address space layout randomization), and ASLR is vulnerable to injected code without PAGEEXEC [PaX Team 2003]. In both cases, the introduction of data structure randomization (at each corresponding granularity level) makes return-into-libc attacks extremely unlikely.

(2) Signal code on the stack: Both PaX and RISE support signal code on the stack, and both treat it as a special case. RISE in particular is able to detect signal code because it intercepts all signals directed to the emulated process and examines the stack before passing control to the process.

(3) C trampolines: PaX detects trampolines by their specific code pattern and executes them by emulation. The current RISE implementation does not support this, although it would not be difficult to add.

StackGhost [Frantzen and Shuey 2001] is a hardware-assisted defense implemented in OpenBSD for the Sparc architecture. The return address of
functions is stored in registers instead of on the stack, and for large numbers of nested calls StackGhost protects the overflowed return addresses through write protection or encryption. Milenković et al. [2004] propose an alternative architecture where linear blocks of instructions are signed at the last basic block (equivalent to a line of cache). The signatures are calculated at compilation time and loaded with the process into a protected architectural structure. Static libraries are compiled into a single executable with a program, and dynamic libraries have their own signature file, loaded when the library is loaded. Programs are stored unmodified, but their signature files should be stored with strong cryptographic protection: given that the signatures are calculated once, at compile time, if the signature files are broken, the program is vulnerable. Xu et al. [2002] propose a secure return address stack (SRAS) that uses the redundant copy of the return address maintained by the processor's fetch mechanism to validate the return address on the stack.

6.3 Hardware Encryption

Because RISE uses runtime code scrambling to improve security, it resembles some hardware-based code encryption schemes. Hardware components that allow decryption of code and/or data on the fly have been proposed since the late 1970s [Best 1979, 1980] and implemented as microcontrollers for custom systems (for example, the DS5002FP microcontroller [Dallas Semiconductor 1999]). The two main objectives of these cryptoprocessors are to protect code from piracy and data from in-chip eavesdropping. An early proposal for the use of hardware encryption in general-purpose systems was presented by Kuhn [1997] for a very high threat level, where encryption and decryption were performed at the level of cache lines. This proposal adhered to the model of protecting licensed software from users, not users from intruders, so there was no analysis of shared libraries or of how to encrypt (if desired) existing open applications. A more extensive proposal was included as part of TCPA/TCG [TCPA 2004]. Although the published TCPA/TCG specifications provide for encrypted code in memory that is decrypted on the fly, TCPA/TCG is designed as a much larger authentication and verification scheme and has raised controversies about digital rights management (DRM) and end users' losing control of their systems [Anderson 2003; Arbaugh 2002]. RISE contains none of the machinery found in TCPA/TCG for supporting DRM. On the contrary, RISE is designed to maintain control locally, to protect the user from injected code.

7. DISCUSSION

The preceding sections describe a prototype implementation of the RISE approach and evaluate its effectiveness at disrupting attacks. In this section, we address some larger questions about RISE.

7.1 Performance Issues

Although Valgrind has some limitations, discussed in Section 2, we are optimistic that improved designs and implementations of "randomized machines"
would improve performance and reduce resource requirements, potentially expanding the range of attacks the approach can mitigate. We have also observed that even in its current version, the performance RISE offers could be acceptable if the processes are I/O bound and/or use the network extensively.

In the current implementation, RISE safety is somewhat limited by the dense packing of legal IA32 instructions in the space of all possible byte patterns: a random scrambling of bits is likely to produce a different legal instruction. Doubling the size of the instruction encoding would enormously reduce the risk of a processor successfully executing a long enough sequence of unscrambled instructions to do damage. Although our preliminary analysis shows that this risk is low even with the current implementation, we believe that emerging soft-hardware architectures such as Crusoe [Klaiber 2000] will make it possible to reduce the risk even further.

7.2 Is RISE Secure?

A valid concern when evaluating RISE's security is its susceptibility to key discovery, as an attacker with the appropriate scrambling information could inject scrambled code that would be accepted by the emulator. We believe that RISE is highly resistant to this class of attack. RISE is resilient against brute-force attacks because the attacker's work is exponential in the length of the shortest code sequence that will make an externally detectable difference if it is unscrambled properly (each masked byte admits 256 possible encodings, so even a 16-byte sequence implies on the order of 2^128 trials). We can be optimistic because most IA32 attack codes are at least dozens of bytes long; but if a software flaw existed that was exploitable with, say, a single 1-byte opcode, then RISE would be vulnerable, although the process of guessing even a 1-byte representation would cause system crashes easily detectable by an administrator.

An alternative path for an attacker is to try to inject arbitrary address ranges of the process into the network and recover the key from the downloaded information. The download could be part of the key itself (stored in the process address space), scrambled code, or unscrambled data. Unscrambled data does not give the attacker any information about the key. Even if the attacker could obtain scrambled code or pieces of the key (they are equivalent, because we can assume that the attacker has knowledge of the program binary), using the stolen key piece might not be feasible. If the key is created eagerly, with a key for every possible address in the program, past or future, then the attacker would still need to know where the attack code is going to be written in process space to be able to use that information. In our implementation, however, where keys are created lazily for code loaded from disk, the key for the addresses targeted by the attack might not exist, and therefore might not be discoverable. The keys that do exist are for addresses that are usually not used in code-injection attacks, because those addresses are write protected. In summary, it would be extremely difficult to discover or use a particular encoding during the lifetime of a process.

Another potential vulnerability is RISE itself. We believe that RISE would be difficult to attack, for several reasons. First, we are using a network-based threat model (attack code arrives over a network), and RISE does not perform
network reads. In fact, it does not read any input at all after processing the run arguments, so injecting an attack through a flawed RISE read is impossible. Second, if an attack arises inside a vulnerable application and the attacker is aware that the application is being run under RISE, the vulnerable points are the code cache and RISE's stack, as an attacker could deposit code and wait until RISE proceeds to execute something from these locations. Although RISE's code is not randomized, because it has to run natively, the entire area is write protected, so it is not a candidate for injection. The cache is read-only during the time that code blocks are executed, which is precisely when this hypothetical attack would be launched, so injecting into the cache is infeasible.

Another possibility is a jump-into-RISE attack. We consider three ways in which this might happen:³

(1) The injected address of RISE code is in the client execution path cache.
(2) The injected address of RISE code is in the execution path of RISE itself.
(3) The injected address of RISE code is in a code fragment in the cache.

³ We rely on the fact that RISE itself does not receive any external input once it is running.

In case 1, the code from RISE will be interpreted. However, RISE allows only certain of its own functions to be called from client code, so everything else will fail. Even for those limited cases, RISE checks the call origin, disallowing any attempt to modify its own structures. For case 2, the attacker would need to inject the address into a RISE data area, in RISE's stack, or in an executable area. The executable area is covered by case 3. For RISE's data and stack areas we have introduced additional randomizations. The most immediate threat is the stack, so we randomize its start address. For other data structures, the location could be randomized using the techniques proposed in Bhatkar et al. [2003], although this is unimplemented in the current prototype. Such a randomization would make it difficult for the attacker to guess the location correctly. An alternative, although much more expensive, solution would be to monitor all writes and disallow modifications from client code and certain emulator areas. It is worth noting that this form of attack (targeting emulator data structures) would require executing several commands without executing a single machine-language instruction. Although such attacks are theoretically possible via chained system calls with correct arguments, and simple (local) attacks have been shown to work [Nergal 2001], they are not a common technique [Wilander and Kamkar 2003]. In the next version of RISE, we plan to include full data-structure address randomization, which would make these rare attacks extremely difficult to execute.

Case 3 is not easily achieved because fragments are write protected. However, an attacker could conceivably execute an mprotect call to change writing rights and then write the correct address. In such a case, the attack would execute. This is a threat for applications running over emulators, as it undermines all other security policies [Kiriansky et al. 2002]. In the current RISE implementation, we borrow the solution used in Kiriansky et al. [2002], monitoring
all calls to the mprotect system call, checking their source and destination and disallowing executions that violate the protection policy.

7.3 Code/Data Boundaries

An essential requirement for using RISE to improve security is that the distinction between code and data must be carefully maintained. The discovery that code and data can be systematically interchanged was a key advance in early computer design, and this dual interpretation of bits as both numbers and commands is inherent to programmable computing. However, all that flexibility and power turn into security risks if we cannot control how and when data become interpreted as code. Code-injection attacks provide a compelling example, as the easiest way to inject code into a binary is by disguising it as data, for example, as inputs to functions in a victim program. Fortunately, code and data are typically used in very different ways, so advances in computer architecture intended solely to improve performance, such as separate instruction and data caches, have also helped enforce good hygiene in distinguishing machine code from data, helping make the RISE approach feasible. At the same time, of course, the rise of mobile code, such as Javascript in web pages and macros embedded in word-processing documents, tends to blur the code/data distinction and create new risks.

7.4 Generality

Although our paper illustrates the idea of randomizing instruction sets at the machine-code level, the basic concept could be applied wherever it is possible to (1) distinguish code from data, (2) identify all sources of trusted code, and (3) introduce hidden diversity into all and only the trusted code. A RISE for protecting printf format strings, for example, might rely on compile-time detection of legitimate format strings, which might either be randomized upon detection or flagged by the compiler for randomization sometime closer to runtime. Certainly, a running program must interact with external information at some point, or no externally useful computation can be performed. However, the recent SQL attacks illustrate the increasing danger of expressing running programs in externally known languages [Harper 2002]. Randomized instruction set emulators are one step toward reducing that risk.

An attraction of RISE, compared to an approach such as code shepherding, is that injected code is stopped by an inherent property of the system, without requiring any explicit or manually defined checks before execution. Although divorcing policy from mechanism (as in code shepherding) is a valid design principle in general, complex user-specified policies are more error prone than simple mechanisms that hard-code a well-understood policy.

8. CONCLUSIONS

In this paper we introduced the concept of a randomized instruction set emulator as a defense against binary code injection attacks. We demonstrated the feasibility and utility of this concept with a proof-of-concept implementation based
on Valgrind. Our implementation successfully scrambles binary code at load time, unscrambles it instruction-by-instruction during instruction fetch, and executes the unscrambled code correctly. The implementation was successfully tested against several code-injection attacks, some real and some synthesized, which exhibit common injection techniques.

We also addressed the question of RISE safety: how likely are random byte sequences to cause damage if executed? We addressed this question both experimentally and theoretically, and we conclude that there is an extremely low probability that executing a sequence of random bytes would cause real damage (say, by executing a system call), although there is a slight probability that such a random sequence might escape into an infinite loop or valid code. This risk is much lower for the PowerPC instruction set than for the IA32, due to the density of the IA32 instruction set. We thus conclude that a RISE approach would be even more successful on the PowerPC architecture than it is on the IA32.

As the complexity of systems grows, and 100% provable overall system security seems an ever more distant goal, the principle of diversity suggests that having a variety of defensive techniques based on different mechanisms with different properties stands to provide increased robustness, even if the techniques address partially or completely overlapping threats. Exploiting the idea that it is hard to get much done when you do not know the language, RISE is another technique in the defender's arsenal against binary code injection attacks.

APPENDIX

A. ENCODING OF THE IA32 MARKOV CHAIN MODEL

In this appendix, we discuss the details of the construction of the Markov chain representing the state of the processor as each byte is interpreted. If X_t = j is the event of being in state j at time t (in our case, at the reading of byte t), the transition probability P{X_{t+1} = j | X_t = i} is denoted p_ij and is the probability that the system will be in state j at byte t + 1 if it is in state i at byte t. For example, when the random sequence starts (in state start), there is some probability p that the first byte will correspond to an existing 1-byte opcode that requires an additional byte to specify memory addressing (the Mod-Reg-R/M (MRM) byte). Consequently, we create a transition from start to mrm with probability p_start,mrm = p, where p is the number of one-opcode instructions that require the MRM byte divided by the total number of possibilities for the first byte (256). In IA32 there are 41 such instructions, so p_start,mrm = 41/256. If the byte corresponds to the first byte of a 2-byte instruction, we transition to an intermediate state that represents the second byte of that family of instructions, and so on.

There are two exit states: crash and escape. The crash state is reached when an illegal byte is read or there is an attempt to use invalid memory, either for an operation or for a jump. The second exit state, escape, is reached probabilistically when a legitimate jump is executed. This corresponds to the escape event.
Because of the complexity of the IA32 instruction set, we simplified in some places. As far as possible, we adhered to the worst-case principle, overestimating the bad outcomes when uncertainty existed (e.g., finding a legal instruction, executing a privileged instruction, or jumping). The next few paragraphs describe these simplifications.

We made two simplifications related to instructions. First, the IA32 has instruction modifiers called prefixes that can generate complicated behaviors when used with the rest of the instruction set. We simplified by treating all of them as independent instructions of length 1 byte, with no effect on the following instructions. This choice overestimates the probability of executing those instructions, as some combinations of prefixes are not allowed, and others significantly restrict the kind of instructions that can follow or make the addresses or operands smaller. Second, in the case of regular instructions that require longer low-probability pathways, we combined them into similar patterns.

Privileged instructions are assumed to fail with probability 1.0, because we assume that the RISE-protected process is running at user level.

In the case of conditional branches, we assess the probability that the branch will be taken using the combination of flag bits required for the particular instruction. For example, if the branch requires that two flags have a given value (0 or 1), the probability of taking the branch is set to 0.25. A nontaken branch transitions to the start state as a linear instruction. All conditional branches in IA32 use 8- or 16-bit displacements relative to the current instruction pointer. Given that the attack had to be in an executable area to start with, it is likely that the jump will execute. Consequently, for conditional branches we transition to escape with probability 1. This is consistent with the observed behavior of successful jumps.

A.1 Definition of Loose and Strict Criteria of Escape

Given that the definition of escape is relative to the position of the instruction in the exploit area, it is necessary to decide whether to classify an incomplete interpretation as an escape or as a crash. This is the origin of the loose and strict criteria. In terms of the Markov chain, the loose and strict classifications are defined as follows:

(1) Loose escape: Starting from the start state, reach any state except crash in m transitions (reading m bytes).
(2) Strict escape: Reach the escape state in m or fewer transitions from the start state (in m bytes).

If T is the transition matrix representing the IA32 Markov chain, then to find the probability of escape from a sequence of m random bytes, we need to determine whether the chain is in state start or escape (the strict criterion) or not in state crash (the loose criterion) after advancing m bytes. These probabilities are
Fig. 7. Partition of symbols into disjoint sets based on the possible outcome paths of interest in the decoding and execution of a symbol. Each path defines a set. Each shaded leaf represents one (disjoint) set, with the set name noted in the box.
given by T^m(start, start) + T^m(start, escape) and 1 − T^m(start, crash), respectively, where T(i, j) is the probability of a transition from state i to state j.

B. ENCODING OF A UNIFORM-LENGTH INSTRUCTION SET

This appendix contains intermediate derivations for the uniform-length instruction set model.

B.1 Partition Graph

Figure 7 illustrates the partition of the symbols into disjoint sets using the execution model given in Section 4.1.

B.2 Encoding Conventions

The set of branches that are relative to the current instruction pointer with a small offset (defined as less than or equal to 2^(b−1)) is separated from the rest of the branches because their likelihood of execution is very high. In the analysis we set their execution probability to 1, which is consistent with observed behavior.

A fraction of the conditional branches is artificially separated into L_MI and L_MP from their original B_MI and B_MP sets. This fraction corresponds to the probability of taking the branch, which we assume is 0.5. This is similar to the IA32 case, where we assumed that a non-branch-taking instruction could be treated as a linear instruction.

To determine the probability that a symbol falls into one of the partitions, we need to enumerate all symbols in the instruction set. For accounting purposes, when parts of addresses and/or immediate (constant) operands are encoded inside the instruction, each possible instantiation of these data fields is counted as a different instruction. For example, if the instruction "XYZ" has 2 bits specifying one of four registers, we count four different XYZ instructions, one for each register encoding.
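For readers who want to experiment with the matrix formulation, the sketch below builds a deliberately tiny four-state stand-in for the full IA32 chain and evaluates both criteria. Only the transition probability p_start,mrm = 41/256 comes from the text; every other entry is an illustrative assumption, and the real chain has many more states.

    import numpy as np

    # States: 0 = start, 1 = mrm, 2 = crash (absorbing), 3 = escape (absorbing).
    p_sm = 41 / 256                      # start -> mrm, from the instruction tables
    T = np.array([
        [1 - p_sm - 0.25 - 0.04, p_sm, 0.25, 0.04],  # start (other entries assumed)
        [0.80, 0.00, 0.15, 0.05],                    # mrm: addressing byte (assumed)
        [0.00, 0.00, 1.00, 0.00],                    # crash
        [0.00, 0.00, 0.00, 1.00],                    # escape
    ])

    m = 20                               # length of the random byte sequence
    Tm = np.linalg.matrix_power(T, m)
    strict = Tm[0, 0] + Tm[0, 3]         # T^m(start,start) + T^m(start,escape)
    loose = 1 - Tm[0, 2]                 # 1 - T^m(start,crash)
    print(f"strict={strict:.4f} loose={loose:.4f}")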
B.3 Derivation of the Probability of a Successful Branch (Escape) Out of a Sequence of n Random Symbols

P(X_n*) = Σ_{i=1,...,n} P(X_i) + P(L)^n
        = Σ_{i=1,...,n} P(L)^i P(E) + P(L)^n
        = P(E) Σ_{i=1,...,n} P(L)^i + P(L)^n
        = P(E) (1 − P(L)^(n+1)) / (1 − P(L)) + P(L)^n.    (1)
B.4 Derivation of the Lower Limit for the Probability of Escape

lim_{n→∞} P(X_n*) = lim_{n→∞} [ P(E) (1 − P(L)^(n+1)) / (1 − P(L)) + P(L)^n ]
                  = P(E) / (1 − P(L)).    (2)
REFERENCES
ANDERSON, R. 2003. "Trusted Computing" and competition policy—Issues for computing professionals. Upgrade IV, 3 (June), 35–41.
ARBAUGH, W. A. 2002. Improving the TCPA specification. IEEE Comput. 35, 8 (Aug.), 77–79.
AVIJIT, K., GUPTA, P., AND GUPTA, D. 2004. TIED, LibsafePlus: Tools for dynamic buffer overflow protection. In Proceedings of the 13th USENIX Security Symposium, San Diego, CA.
AVIZIENIS, A. 1995. The methodology of N-version programming. In Software Fault Tolerance, M. Lyu, Ed. Wiley, New York, 23–46.
AVIZIENIS, A. AND CHEN, L. 1977. On the implementation of N-version programming for software fault tolerance during execution. In Proceedings of IEEE COMPSAC 77. 149–155.
BALA, V., DUESTERWALD, E., AND BANERJIA, S. 2000. Dynamo: A transparent dynamic optimization system. In Proceedings of the ACM SIGPLAN '00 Conference on Programming Language Design and Implementation. ACM Press, Vancouver, British Columbia, Canada, 1–12.
BARATLOO, A., SINGH, N., AND TSAI, T. 2000. Transparent run-time defense against stack smashing attacks. In Proceedings of the 2000 USENIX Annual Technical Conference (USENIX-00), Berkeley, CA. 251–262.
BARRANTES, E. G., ACKLEY, D., FORREST, S., PALMER, T., STEFANOVIC, D., AND ZOVI, D. D. 2003. Randomized instruction set emulation to disrupt binary code injection attacks. In Proceedings of the 10th ACM Conference on Computer and Communications Security, Washington, DC. 272–280.
BEST, R. M. 1979. Microprocessor for executing enciphered programs. U.S. Patent no. 4,168,396.
BEST, R. M. 1980. Preventing software piracy with crypto-microprocessors. In Proceedings of the IEEE Spring COMPCON '80, San Francisco, CA. 466–469.
BHATKAR, S., DUVARNEY, D., AND SEKAR, R. 2003. Address obfuscation: An approach to combat buffer overflows, format-string attacks and more. In Proceedings of the 12th USENIX Security Symposium, Washington, DC. 105–120.
BOYD, S. W. AND KEROMYTIS, A. D. 2004. SQLrand: Preventing SQL injection attacks. In Proceedings of the 2nd Applied Cryptography and Network Security (ACNS) Conference, Yellow Mountain, China. 292–302.
BRUENING, D., AMARASINGHE, S., AND DUESTERWALD, E. 2001. Design and implementation of a dynamic optimization framework for Windows. In 4th ACM Workshop on Feedback-Directed and Dynamic Optimization (FDDO-4).
BUTLER, T. R. 2004. Bochs. http://bochs.sourceforge.net/.
CHEW, M. AND SONG, D. 2002. Mitigating Buffer Overflows by Operating System Randomization. Tech. Rep. CMU-CS-02-197, Department of Computer Science, Carnegie Mellon University.
CHIUEH, T. AND HSU, F.-H. 2001. RAD: A compile-time solution to buffer overflow attacks. In Proceedings of the 21st International Conference on Distributed Computing Systems (ICDCS), Phoenix, AZ. 409–420.
COHEN, F. 1993. Operating system protection through program evolution. Computers and Security 12, 6 (Oct.), 565–584.
CORE SECURITY. 2004. CORE security technologies. http://www1.corest.com/home/home.php.
COWAN, C., BARRINGER, M., BEATTIE, S., AND KROAH-HARTMAN, G. 2001. FormatGuard: Automatic protection from printf format string vulnerabilities. In Proceedings of the 10th USENIX Security Symposium, Washington, DC. 191–199.
COWAN, C., BEATTIE, S., JOHANSEN, J., AND WAGLE, P. 2003. PointGuard: Protecting pointers from buffer overflow vulnerabilities. In Proceedings of the 12th USENIX Security Symposium, Washington, DC. 91–104.
COWAN, C., HINTON, H., PU, C., AND WALPOLE, J. 2000. A cracker patch choice: An analysis of post hoc security techniques. In National Information Systems Security Conference (NISSC), Baltimore, MD.
COWAN, C., PU, C., MAIER, D., HINTON, H., BAKKE, P., BEATTIE, S., GRIER, A., WAGLE, P., AND ZHANG, Q. 1998. Automatic detection and prevention of buffer-overflow attacks. In Proceedings of the 7th USENIX Security Symposium, San Antonio, TX.
COWAN, C., WAGLE, P., PU, C., BEATTIE, S., AND WALPOLE, J. 2000b. Buffer overflows: Attacks and defenses for the vulnerability of the decade. In DARPA Information Survivability Conference and Exposition (DISCEX 2000). 119–129.
DALLAS SEMICONDUCTOR. 1999. DS5002FP secure microprocessor chip. http://pdfserv.maxim-ic.com/en/ds/DS5002FP.pdf.
DOR, N., RODEH, M., AND SAGIV, M. 2003. CSSV: Towards a realistic tool for statically detecting all buffer overflows in C. In Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation. 155–167.
ETOH, H. AND YODA, K. 2000. Protecting from stack-smashing attacks. Web publishing, IBM Research Division, Tokyo Research Laboratory, http://www.trl.ibm.com/projects/security/ssp/main.html. June 19.
ETOH, H. AND YODA, K. 2001. ProPolice: Improved stack smashing attack detection. IPSJ SIGNotes Computer Security (CSEC) 14 (Oct. 26).
FAYOLLE, P.-A. AND GLAUME, V. 2002. A buffer overflow study, attacks & defenses. Web publishing, ENSEIRB, http://www.wntrmute.com/docs/bufferoverflow/report.html.
FORREST, S., SOMAYAJI, A., AND ACKLEY, D. 1997. Building diverse computer systems. In Proceedings of the 6th Workshop on Hot Topics in Operating Systems. 67–72.
FRANTZEN, M. AND SHUEY, M. 2001. StackGhost: Hardware facilitated stack protection. In Proceedings of the 10th USENIX Security Symposium, Washington, DC.
GERA AND RIQ. 2002. Smashing the stack for fun and profit. Phrack 59, 11 (July 28).
HARPER, M. 2002. SQL injection attacks—Are you safe? In Sitepoint, http://www.sitepoint.com/article/794.
IBM. 2003. PowerPC Microprocessor Family: Programming Environments Manual for 64 and 32-Bit Microprocessors. Version 2.0.
INTEL CORPORATION. 2004. The IA-32 Intel Architecture Software Developer's Manual. Order nos. 253665, 253666, 253667, 253668.
JIM, T., MORRISETT, G., GROSSMAN, D., HICKS, M., CHENEY, J., AND WANG, Y. 2002. Cyclone: A safe dialect of C. In Proceedings of the USENIX Annual Technical Conference, Monterey, CA. 275–288.
JONES, R. W. M. AND KELLY, P. H. 1997. Backwards-compatible bounds checking for arrays and pointers in C programs. In 3rd International Workshop on Automated Debugging. 13–26.
KC, G. S., KEROMYTIS, A. D., AND PREVELAKIS, V. 2003. Countering code-injection attacks with instruction-set randomization. In Proceedings of the 10th ACM Conference on Computer and Communications Security. ACM Press, Washington, DC. 272–280.
KIRIANSKY, V., BRUENING, D., AND AMARASINGHE, S. 2002. Secure execution via program shepherding. In Proceedings of the 11th USENIX Security Symposium, San Francisco, CA.
KLAIBER, A. 2000. The technology behind the Crusoe processors. White Paper, http://www.transmeta.com/pdf/white_papers/paper_aklaiber_19jan00.pdf. January.
Randomized Instruction Set Emulation
•
39
KUHN, M. 1997. The TrustNo 1 Cryptoprocessor Concept. Tech. Rep. CS555 Report, Purdue University. April 04. LAROCHELLE, D. AND EVANS, D. 2001. Statically detecting likely buffer overflow vulnerabilities. In Proceedings of the 10th USENIX Security Symposium, Washington, DC. 177–190. LHEE, K. AND CHAPIN, S. J. 2002. Type-assisted dynamic buffer overflow detection. In Proceeding of the 11th USENIX Security Symposium, San Francisco, CA. 81–88. MILENKOVIC´ , M., MILENCOVIC´ , A., AND JOVANOV, E. 2004. A framework for trusted instruction execution via basic block signature verification. In Proceedings of the 42nd Annual Southeast Regional Conference (ACM SE’04). ACM Press, Huntsville, AL. 191–196. NAHUM, E. M. 2002. Deconstructing specweb99. In Proceedings of 7th International Workshop on Web Content Caching and Distribution, Boulder, CO. NEBENZAHL, D. AND WOOL, A. 2004. Install-time vaccination of Windows executables to defend against stack smashing attacks. In Proceedings of the 19th IFIP International Information Security Conference. Kluwer, Toulouse, France, 225–240. NECULA, G. C., MCPEAK, S., AND WEIMER, W. 2002. Ccured: Type-safe retrofitting of legacy code. In Proceedings of the Symposium on Principles of Programming Languages. 128–139. NERGAL. 2001. The advanced return-into-lib(c) exploits. Phrack 58, 4 (Dec.). NETHERCOTE, N. AND SEWARD, J. 2003. Valgrind: A program supervision framework. In Electronic Notes in Theoretical Computer Science, O. Sokolsky and M. Viswanathan, Eds. Vol. 89. Elsevier, Amsterdam. NEWSHAM, T. 2000. Format string attacks. http://www.securityfocus.com/archive/1/81565. PAX TEAM. 2003. Documentation for the PaX project. See Homepage of The PaX Team. http://pax.grsecurity.net/docs/index.html. PRASAD, M. AND CHIUEH, T. 2003. A binary rewriting defense against stack based overflow attacks. In Proceedings of the USENIX 2003 Annual Technical Conference, San Antonio, TX. PU, C., BLACK, A., COWAN, C., AND WALPOLE, J. 1996. A specialization toolkit to increase the diversity of operating systems. In Proceedings of the 1996 ICMAS Workshop on Immunity-Based Systems, Nara, Japan. RANDELL, B. 1975. System structure for software fault tolerance. IEEE Trans. Software Eng. 1, 2, 220–232. RUWASE, O. AND LAM, M. S. 2004. A practical dynamic buffer overflow detector. In Proceedings of the 11th Annual Network and Distributed System Security Symposium. SCHNEIER, B. 1996. Applied Cryptography. Wiley, New York. SECURITY FOCUS. 2003. CVS directory request double free heap corruption vulnerability. http://www.securityfocus.com/bid/6650. SEWARD, J. AND NETHERCOTE, N. 2004. Valgrind, an open-source memory debugger for x86GNU/Linux. http://valgrind.kde.org/. SIMON, I. 2001. A comparative analysis of methods of defense against buffer overflow attacks. Web publishing, California State University, Hayward, http://www.mcs.csuhayward.edu/ simon/security/boflo.html. January 31. SPEC INC. 1999. Specweb99. Tech. Rep. SPECweb99 Design 062999.html, SPEC Inc. June 29. TCPA 2004. TCPA trusted computing platform alliance. http://www.trustedcomputing.org/home. TOOL INTERFACE STANDARDS COMMITTEE. 1995. Executable and Linking Format (ELF). Tool Interface Standards Committee. TSAI, T. AND SINGH, N. 2001. Libsafe 2.0: Detection of format string vulnerability exploits. White Paper Version 3-21-01, Avaya Labs, Avaya Inc. February 6. TSO, T. 1998. random.C: A strong random number generator. http://www.linuxsecurity.com/ feature stories/random.c. VENDICATOR. 2000. 
StackShield: A stack smashing technique protection tool for Linux. http://angelfire.com/sk/stackshield. WAGNER, D., FOSTER, J. S., BREWER, E. A., AND AIKEN, A. 2000. A first step towards automated detection of buffer overrun vulnerabilities. In Network and Distributed System Security Symposium, San Diego, CA. 3–17. WILANDER, J. AND KAMKAR, M. 2003. A comparison of publicly available tools for dynamic buffer overflow prevention. In Proceedings of the 10th Network and Distributed System Security Symposium, San Diego, CA. 149–162. ACM Transactions on Information and System Security, Vol. 8, No. 1, February 2005.
40
•
E. G. Barrantes et al.
XU, J., KALBARCZYK, Z., AND IYER, R. K. 2003. Transparent runtime randomization for security. In Proceeding of the 22nd International Symposium on Reliable Distributed Systems (SRDS’03), Florence, Italy. 26–272. XU, J., KALBARCZYK, Z., PATEL, S., AND IYER, R. K. 2002. Architecture support for defending against buffer overflow attacks. In 2nd Workshop on Evaluating and Architecting System dependabilitY (EASY), San Jose, CA. http://www.crhc.uiuc.edu/EASY/. Received May 2004; revised September 2004; accepted September 2004
ACM Transactions on Information and System Security, Vol. 8, No. 1, February 2005.
Establishing Pairwise Keys in Distributed Sensor Networks

DONGGANG LIU, PENG NING, and RONGFANG LI
North Carolina State University
Pairwise key establishment is a fundamental security service in sensor networks; it enables sensor nodes to communicate securely with each other using cryptographic techniques. However, due to the resource constraints on sensor nodes, it is not feasible to use traditional key management techniques such as public key cryptography and a key distribution center (KDC). A number of key predistribution techniques have been proposed for pairwise key establishment in sensor networks recently. To facilitate the study of novel pairwise key predistribution techniques, this paper develops a general framework for establishing pairwise keys between sensor nodes using bivariate polynomials. This paper then proposes two efficient instantiations of the general framework: a random subset assignment key predistribution scheme and a hypercube-based key predistribution scheme. The analysis shows that both schemes have a number of nice properties, including a high probability or a guarantee of establishing pairwise keys, tolerance of node capture, and low storage, communication, and computation overhead. To further reduce the computation at sensor nodes, this paper presents an optimization technique for polynomial evaluation, which is used to compute pairwise keys. This paper also reports the implementation and the performance of the proposed schemes on MICA2 motes running TinyOS, an operating system for networked sensors. The results indicate that the proposed techniques can be applied efficiently in resource-constrained sensor networks.

Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General—Security and protection; C.2.1 [Computer-Communication Networks]: Network Architecture and Design—Wireless communication; D.4.6 [Operating Systems]: Security and Protection—Cryptographic controls; K.6.5 [Management of Computing and Information Systems]: Security and Protection

General Terms: Security, Design, Algorithms

Additional Key Words and Phrases: Sensor networks, key management, key predistribution, pairwise key
This work is partially supported by the National Science Foundation (NSF) under grant CNS-0430223. The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements, either expressed or implied, of NSF or the U.S. Government. A preliminary version of this paper appeared in Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS '03), pp. 52–61, Oct. 2003.

Authors' address: Department of Computer Science, North Carolina State University, Raleigh, NC 27695-8207; email: {dliu,pning}@ncsu.edu, [email protected].

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 1515 Broadway, New York, NY 10036 USA, fax: +1 (212) 869-0481, or [email protected].

© 2005 ACM 1094-9224/05/0200-0041 $5.00

ACM Transactions on Information and System Security, Vol. 8, No. 1, February 2005, Pages 41–77.
1. INTRODUCTION

Distributed sensor networks have received a lot of attention recently due to their wide applications in military as well as civilian operations. Example applications include target tracking, scientific exploration, and data acquisition in hazardous environments. The sensor nodes are typically small, low-cost, battery-powered, and highly resource constrained. They usually communicate with each other through wireless links. Security services such as authentication and key management are critical to secure the communication between sensor nodes in hostile environments.

As one of the most fundamental security services, pairwise key establishment enables the sensor nodes to communicate securely with each other using cryptographic techniques. However, due to the resource constraints on sensor nodes, it is not feasible for them to use traditional pairwise key establishment techniques such as public key cryptography and a key distribution center (KDC).

Instead of the above two techniques, sensor nodes may establish keys between each other through key predistribution, where keying materials are predistributed to sensor nodes before deployment. As two extreme cases, one may set up a global key in the network so that two sensor nodes can establish a key based on this global key, or assign each sensor node a unique random key shared with each of the other nodes. However, the former is vulnerable to the compromise of a single node, and the latter introduces huge storage overhead on sensor nodes.

Eschenauer and Gligor [2002] recently proposed a probabilistic key predistribution scheme for pairwise key establishment. The main idea is to let each sensor node randomly pick a set of keys from a key pool before deployment, so that any two sensor nodes have a certain probability of sharing at least one common key. Chan et al. [2003] further extended this idea and developed two key predistribution techniques: a q-composite key predistribution scheme and a random pairwise keys scheme. The q-composite key predistribution scheme also uses a key pool, but requires that two nodes compute a pairwise key from at least q predistributed keys that they share. The random pairwise keys scheme randomly picks pairs of sensor nodes and assigns each pair a unique random key. Both schemes improve the security over the basic probabilistic key predistribution scheme.

However, the pairwise key establishment problem is still not fully solved. For the basic probabilistic and the q-composite key predistribution schemes, as the number of compromised nodes increases, the fraction of affected pairwise keys increases quickly. As a result, a small number of compromised nodes may affect a large fraction of pairwise keys. Though the random pairwise keys scheme does not suffer from the above security problem, given a memory constraint, the network size is strictly limited by the desired probability that two sensor nodes share a pairwise key, the memory available for keys on sensor nodes, and the number of neighbor nodes that a sensor node can communicate with.

In this paper, we develop a number of key predistribution techniques to deal with the above problems. We first develop a general framework for pairwise key establishment based on the polynomial-based key predistribution protocol
in [Blundo et al. 1993] and the probabilistic key distribution in [Eschenauer and Gligor 2002] and [Chan et al. 2003]. This framework is called polynomial pool-based key predistribution, which uses a polynomial pool instead of a key pool as in [Eschenauer and Gligor 2002] and [Chan et al. 2003]. The secrets on each sensor node are generated from a subset of polynomials in the pool. If two sensor nodes have secrets generated from the same polynomial, they can establish a pairwise key based on the polynomial-based key predistribution scheme. All the previous schemes in [Blundo et al. 1993], [Eschenauer and Gligor 2002], and [Chan et al. 2003] can be considered as special instances of this framework.

By instantiating the components in this framework, we further develop two novel pairwise key predistribution schemes: a random subset assignment scheme and a hypercube-based scheme. The random subset assignment scheme assigns each sensor node the secrets generated from a random subset of polynomials in the polynomial pool. The hypercube-based scheme arranges polynomials in a hypercube space, assigns each sensor node to a unique coordinate in the space, and gives the node the secrets generated from the polynomials related to the corresponding coordinate. Based on this hypercube, each sensor node can then identify whether it can directly establish a pairwise key with another node, and if not, what intermediate nodes it can contact to indirectly establish the pairwise key.

Our analysis indicates that our new schemes have some nice features compared with the previous methods. In particular, when the fraction of compromised secure links is less than 60%, given the same storage constraint, the random subset assignment scheme provides a significantly higher probability of establishing secure communication between noncompromised nodes than the previous methods. Moreover, unless the number of compromised nodes sharing a common polynomial exceeds a threshold, compromise of sensor nodes does not lead to the disclosure of keys established between noncompromised nodes using this polynomial. Similarly, the hypercube-based scheme also has a number of attractive properties. First, it guarantees that any two nodes can establish a pairwise key when there are no compromised nodes, provided that the sensor nodes can communicate with each other. Second, it is resilient to node compromise. Even if some sensor nodes are compromised, there is still a high probability of re-establishing a pairwise key between noncompromised nodes. Third, a sensor node can directly determine whether it can establish a pairwise key with another node and how to compute the pairwise key if it can. As a result, there is no communication overhead during the discovery of directly shared keys.

Evaluation of polynomials is essential to the proposed schemes, since it affects the performance of computing a pairwise key. To reduce the computation at sensor nodes, we provide an optimization technique for polynomial evaluation. The basic idea is to compute multiple pieces of key fragments over some special finite fields such as $F_{2^8+1}$ and $F_{2^{16}+1}$, and concatenate these fragments into a regular key. A nice property provided by such finite fields is that no division is necessary for modular multiplication. As a result, evaluation of polynomials can be performed efficiently on low-cost processors on sensor nodes that do
not have division instructions. Our analysis indicates that such a method only slightly decreases the uncertainty of the keys. We have implemented the aforementioned algorithm on MICA2 motes [Crossbow Technology Inc. 2004] running TinyOS [Hill et al. 2000]. The implementation occupies only a small amount of memory (e.g., 416 bytes in ROM and 20 bytes in RAM for one of our implementations, excluding the memory for polynomial coefficients). The evaluation indicates that computing a 64-bit key using this technique can be faster than generating a 64-bit MAC (message authentication code) using RC5 [Rivest 1994] or SkipJack [NIST 1998] for a polynomial of a reasonable degree. These results show that our schemes are practical for resource-constrained sensor networks.

The rest of this paper is organized as follows. Section 2 gives an overview of the polynomial-based key predistribution technique. Section 3 presents our general framework for polynomial pool-based key predistribution. Sections 4 and 5 describe the random subset assignment scheme and the hypercube-based scheme, respectively. Section 6 presents the technique to reduce the computation at sensor nodes, and reports our implementation and performance results. Section 7 discusses the related work. Section 8 concludes this paper and points out some future research directions.

2. POLYNOMIAL-BASED KEY PREDISTRIBUTION FOR SENSOR NETWORKS

In this section, we briefly review the basic polynomial-based key predistribution protocol in [Blundo et al. 1993], which is the basis of our new techniques. The protocol in [Blundo et al. 1993] was developed for group key predistribution. Since our goal is to establish pairwise keys, for simplicity, we only discuss the special case of pairwise key establishment in the context of sensor networks.

To predistribute pairwise keys, the (key) setup server randomly generates a bivariate t-degree polynomial $f(x, y) = \sum_{i,j=0}^{t} a_{ij} x^i y^j$ over a finite field $F_q$, where q is a prime number that is large enough to accommodate a cryptographic key, such that it has the property $f(x, y) = f(y, x)$. (In the following, we assume all the bivariate polynomials have this property without explicit statement.) It is assumed that each sensor node has a unique ID. For each node i, the setup server computes a polynomial share of $f(x, y)$, that is, $f(i, y)$. This polynomial share is predistributed to node i. Thus, for any two sensor nodes i and j, node i can compute the key $f(i, j)$ by evaluating $f(i, y)$ at point j, and node j can compute the same key $f(j, i) = f(i, j)$ by evaluating $f(j, y)$ at point i. As a result, nodes i and j can establish a common key $f(i, j)$.

In this approach, each sensor node i needs to store a t-degree polynomial $f(i, x)$, which occupies $(t + 1)\log q$ storage space. To establish a pairwise key, both sensor nodes need to evaluate the polynomial at the ID of the other sensor node. (In Section 6, we will present techniques to reduce the computation required to evaluate polynomials.) There is no communication overhead during the pairwise key establishment process.

The security proof in [Blundo et al. 1993] ensures that this scheme is unconditionally secure and t-collusion resistant. That is, the coalition of no more than t compromised sensor nodes knows nothing about the pairwise key between any two noncompromised nodes.
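The protocol is straightforward to prototype. The following is a minimal illustrative sketch in Python, not the TinyOS implementation evaluated in Section 6; the field size q, degree t, and node IDs are arbitrary toy values. It generates a symmetric bivariate polynomial, derives the shares f(i, y) and f(j, y) as coefficient lists, and verifies that both nodes compute the same key f(i, j).

```python
import random

q = 2**31 - 1  # toy prime field; a real deployment sizes q to the key length
t = 3          # polynomial degree (t-collusion resistance)

def symmetric_bivariate_poly(t, q):
    """Random coefficients with a_ij = a_ji, so that f(x, y) = f(y, x)."""
    a = [[0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(i, t + 1):
            a[i][j] = a[j][i] = random.randrange(q)
    return a

def share(a, node_id):
    """Polynomial share f(node_id, y): coefficients of a univariate polynomial in y."""
    return [sum(a[i][j] * pow(node_id, i, q) for i in range(len(a))) % q
            for j in range(len(a))]

def eval_share(coeffs, other_id):
    """Evaluate the share at the other node's ID (Horner's rule) to get the key."""
    k = 0
    for c in reversed(coeffs):
        k = (k * other_id + c) % q
    return k

a = symmetric_bivariate_poly(t, q)
i, j = 17, 42  # toy node IDs
assert eval_share(share(a, i), j) == eval_share(share(a, j), i)  # both get f(i, j)
```

Because f is symmetric, the two evaluations necessarily agree, which is exactly why no message exchange is needed to agree on the key.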
It is theoretically possible to use the general group key distribution protocol in [Blundo et al. 1993] in sensor networks. However, the storage cost for a polynomial share is exponential in terms of the group size, making it prohibitive in sensor networks. In this paper, we will focus on the problem of pairwise key establishment.

3. POLYNOMIAL POOL-BASED KEY PREDISTRIBUTION

The polynomial-based key predistribution scheme discussed in Section 2 has some limitations. In particular, it can only tolerate the collusion of no more than t compromised nodes, where the value of t is limited by the available memory space and the computation capability on sensor nodes. Indeed, the larger a sensor network is, the more likely it is that an adversary compromises more than t sensor nodes and thus the entire network.

To have secure and practical key establishment techniques, we develop a general framework for key predistribution based on the scheme presented in Section 2. We call it polynomial pool-based key predistribution, since a pool of random bivariate polynomials is used in this framework. In this section, we focus on the discussion of this general framework. In the next two sections, we will present two efficient instantiations of this framework.

The polynomial pool-based key predistribution is inspired by the studies in [Eschenauer and Gligor 2002] and [Chan et al. 2003]. The basic idea can be considered as the combination of the polynomial-based key predistribution and the key pool idea used in [Eschenauer and Gligor 2002] and [Chan et al. 2003]. However, our framework is more general in that it allows different choices to be instantiated within it, including those in [Eschenauer and Gligor 2002] and [Chan et al. 2003] and our later instantiations in Sections 4 and 5.

Intuitively, this general framework generates a pool of random bivariate polynomials and assigns to each sensor node shares on a subset of bivariate polynomials in the pool. The polynomial pool has two special cases. When it has only one polynomial, the general framework degenerates into the polynomial-based key predistribution. When all the polynomials are 0-degree, the polynomial pool degenerates into a key pool as used in [Eschenauer and Gligor 2002] and [Chan et al. 2003].

Pairwise key establishment in this framework has three phases: setup, direct key establishment, and path key establishment. The setup phase is performed to initialize the nodes by distributing polynomial shares to them. After being deployed, if two sensor nodes need to establish a pairwise key, they first attempt to do so through direct key establishment. If they can successfully establish a common key, there is no need to start path key establishment; otherwise, these two nodes start path key establishment, trying to establish a pairwise key with the help of other sensor nodes.

3.1 Phase 1: Setup

The setup server randomly generates a set F of bivariate t-degree polynomials over the finite field $F_q$. To identify different polynomials, the setup server may assign each polynomial a unique ID.
For each sensor node i, the setup server picks a subset of polynomials $F_i \subseteq F$ and assigns the shares of these polynomials to node i. The main issue in this phase is the subset assignment problem, which specifies how to pick a subset of polynomials from F for each sensor node. Here we identify two ways to perform subset assignment: random assignment and predetermined assignment.

3.1.1 Random Assignment. With random assignment, the setup server randomly picks a subset of F for each sensor node. This random selection should be evenly distributed over F for security reasons; otherwise, some polynomials may have a higher probability of being selected and a higher frequency of being used in key establishment than the others, and thus become the primary targets of attacks. Several parameters may be used to control this process, including the number of polynomial shares assigned to a node and the size of F. In the simplest case, the setup server assigns the same number of randomly selected polynomial shares to each sensor node.

3.1.2 Predetermined Assignment. When predetermined assignment is used, the setup server follows a certain scheme to assign subsets of F to sensor nodes. A predetermined assignment should bring some nice properties that can be used to improve direct and path key establishment.

3.2 Phase 2: Direct Key Establishment

A sensor node starts phase 2 if it needs to establish a pairwise key with another node. If both sensor nodes have shares on the same bivariate polynomial, they can establish the pairwise key directly using the polynomial-based key predistribution discussed in Section 2. The main issue in this phase is the polynomial share discovery problem, which specifies how to find a common bivariate polynomial of which both nodes have polynomial shares. For convenience, we say two sensor nodes have a secure link if they can establish a pairwise key through direct key establishment. A pairwise key established in this phase is called a direct key. Here we identify two types of techniques to solve this problem: predistribution and real-time discovery.

3.2.1 Predistribution. The setup server predistributes certain information to the sensor nodes, so that given the ID of another sensor node, a sensor node can determine whether it can establish a direct key with the other node. A naive method is to let each sensor node store the IDs of all the sensor nodes with which it can set up direct keys. However, this naive method has difficulties in dealing with sensor nodes that join the network on the fly, because the setup server has to inform some existing nodes about the addition of new sensor nodes. Alternatively, the setup server may map the ID of each sensor node to the IDs of the polynomial shares it has, so that given the ID of a sensor node, anybody can derive the IDs of its polynomial shares. Thus, any sensor node can determine immediately whether it can establish a direct key with a given sensor node by only knowing its ID.
Note that this method requires the predetermined assignment strategy in the setup phase. The drawback of predistribution methods is that an attacker may also know the distribution of the polynomial shares. As a result, the attacker may precisely target certain sensor nodes, attempting to learn the shares of a particular bivariate polynomial. The following alternative may avoid this problem.

3.2.2 Real-Time Discovery. Intuitively, real-time discovery requires two sensor nodes to discover on the fly whether they have shares on a common bivariate polynomial. As one possible way, two nodes may first exchange the IDs of the polynomials of which they both have shares, and then try to identify the common polynomial. To protect the IDs of the polynomials, a sensor node may challenge the other party to solve puzzles instead of disclosing the IDs of the polynomials directly. Similar to the method in [Eschenauer and Gligor 2002], when node i needs to establish a pairwise key with node j, it sends node j an encryption list $\alpha, E_{K_v}(\alpha), v = 1, \ldots, |F_i|$, where $K_v$ is computed by evaluating the vth polynomial share in $F_i$ at point j (a potential pairwise key node j may have). When node j receives this encryption list, it first computes $\{K'_v\}_{v=1,\ldots,|F_j|}$, where $K'_v$ is computed by evaluating the vth polynomial share in $F_j$ at point i (a potential pairwise key node i may have). Node j then generates another encryption list $\{E_{K'_v}(\alpha)\}_{v=1,\ldots,|F_j|}$. If there exists a common encryption value included in both encryption lists, node i and node j can establish a common key, which is the key used to generate this common value.

The drawback of real-time discovery is that it introduces additional communication overhead that does not appear in the predistribution approaches. If the polynomial IDs are exchanged in clear text, an attacker may gradually learn the distribution of polynomials among sensor nodes and selectively capture and compromise sensor nodes based on this information. However, it is more difficult for an adversary to collect the polynomial distribution information in the real-time discovery method than in the predistribution method, since the adversary has to monitor the communication among sensor nodes. In addition, when the encryption list is used to protect the IDs of the polynomial shares in a sensor node, an adversary has no way to learn the polynomial distribution among sensor nodes and thus cannot launch selective node capture attacks.
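The encryption-list comparison can be illustrated with a short sketch. The code below is an assumption-laden toy, not the protocol as specified: the key values are arbitrary stand-ins for polynomial-share evaluations, and E is a hash-based placeholder for a real cipher such as RC5.

```python
import hashlib

def E(key: int, alpha: bytes) -> bytes:
    """Toy stand-in for encrypting the public challenge alpha under a key
    candidate; a real node would use a block cipher (e.g., RC5)."""
    return hashlib.sha256(key.to_bytes(16, "big") + alpha).digest()

def encryption_list(candidate_keys, alpha):
    """Map each ciphertext back to the key that produced it."""
    return {E(k, alpha): k for k in candidate_keys}

alpha = b"challenge"             # challenge value sent along with the list
keys_i = [1111, 2222, 3333]      # K_v  = (vth share of node i) evaluated at j
keys_j = [4444, 2222]            # K'_v = (vth share of node j) evaluated at i

list_i = encryption_list(keys_i, alpha)
list_j = encryption_list(keys_j, alpha)
common = set(list_i) & set(list_j)   # matching ciphertext => shared polynomial
if common:
    pairwise_key = list_i[common.pop()]   # both sides recover the same key (2222)
```

A matching ciphertext reveals to the two endpoints that they hold shares of the same polynomial, without ever transmitting the polynomial IDs in the clear.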
3.3 Phase 3: Path Key Establishment

If direct key establishment fails, two sensor nodes need to start phase 3 to establish a pairwise key with the help of other sensor nodes. To establish a pairwise key with node j, a sensor node i needs to find a sequence of nodes between itself and node j such that any two adjacent nodes in this sequence can establish a direct key. For the sake of presentation, we call such a sequence of nodes a key path (or simply a path), since the purpose of such a path is to establish a pairwise key. Then either node i or j initiates a key establishment request with the other node through the intermediate nodes along the path. A pairwise key established in this phase is called an indirect key. A subtle issue is that two adjacent nodes in the path may not be able to communicate with each other directly. In this paper, we assume that they can always discover a route between themselves so that the messages from one node can reach the other.

The main issue in this phase is the path discovery problem, which specifies how to find a path between two sensor nodes. Similar to phase 2, there are two types of techniques to address this problem.

3.3.1 Predistribution. Similar to the predistribution technique in phase 2, the setup server predistributes certain information to each sensor node, so that given the ID of another node, each node can directly determine at least one key path to the other node. The resulting key path is called a predetermined key path. For convenience, we call the process of computing predetermined key paths key path predetermination. The drawback is that an attacker may also take advantage of the predistributed information to attack the network. Moreover, it is possible that none of the predetermined key paths is available to establish an indirect key between two nodes, due to compromised (intermediate) nodes or communication failures. To deal with this problem, the source node needs to dynamically find other key paths to establish an indirect key with the destination node. For convenience, we call such a process dynamic key path discovery. For example, the source node may contact a number of other nodes with which it can establish direct keys using noncompromised polynomials, attempting to find a node that has a path to the destination node and can help establish an indirect key.

3.3.2 Real-Time Discovery. Real-time discovery techniques have the sensor nodes discover key paths on the fly without any predetermined information. The sensor nodes may take advantage of the direct keys established through direct key establishment. For example, to discover a key path to another sensor node, a sensor node picks a set of intermediate nodes with which it has established direct keys. The source node may send a request to all these intermediate nodes. If one of the intermediate nodes can establish a direct key with the destination node, a key path is discovered. Otherwise, this process may continue with the intermediate nodes forwarding the request. Such a process is similar to a route discovery process used to establish a route between two nodes. The drawback is that such methods may introduce substantial communication overhead.

4. KEY PREDISTRIBUTION USING RANDOM SUBSET ASSIGNMENT

In this section, we present the first instantiation of the general framework, using a random strategy for subset assignment during the setup phase. That is, for each sensor node, the setup server selects a random subset of polynomials in F and assigns their polynomial shares to the node.

4.1 The Random Subset Assignment Scheme

The random subset assignment scheme can be considered as an extension to the basic probabilistic scheme in [Eschenauer and Gligor 2002]. Instead of randomly selecting keys from a large key pool and assigning them to sensor nodes, our method randomly chooses polynomials from a polynomial pool and
assigns their polynomial shares to sensor nodes. However, our scheme also differs from the scheme in [Eschenauer and Gligor 2002]. In [Eschenauer and Gligor 2002], the same key may be shared by multiple sensor nodes. In contrast, in our scheme, there is a unique key for each pair of sensor nodes. If no more than t shares on the same polynomial are disclosed, none of the pairwise keys constructed using this polynomial between two noncompromised sensor nodes will be disclosed.

Now let us describe this scheme by instantiating the three components in the general framework. (A small code sketch of these components follows this list.)

(1) Subset assignment: The setup server randomly generates a set F of s bivariate t-degree polynomials over the finite field $F_q$. For each sensor node, the setup server randomly picks a subset of s′ polynomials from F and assigns the shares as well as the IDs of these s′ polynomials to the sensor node.

(2) Polynomial share discovery: Since the setup server does not predistribute enough information to the sensor nodes for polynomial share discovery, sensor nodes that need to establish a pairwise key have to find a common polynomial with real-time discovery techniques. To discover a common bivariate polynomial, the source node discloses a list of polynomial IDs to the destination node. If the destination node finds that they have shares on the same polynomial, it informs the source node of the ID of this polynomial; otherwise, it replies with a message that contains a list of its own polynomial IDs, which also indicates that direct key establishment has failed.

(3) Path discovery: If two sensor nodes fail to establish a direct key, they need to start the path key establishment phase. During this phase, the source node tries to find another node that can help it set up a pairwise key with the destination node. Basically, the source node broadcasts two lists of polynomial IDs: one includes the polynomial IDs at the source node, and the other includes the polynomial IDs at the destination node. These two lists are available at both the source and the destination nodes after polynomial share discovery. If one of the nodes receiving this request is able to establish direct keys with both the source and the destination nodes, it replies with a message that contains two encrypted copies of a randomly generated key: one encrypted by the direct key with the source node, and the other encrypted by the direct key with the destination node. Both the source and the destination nodes can then get the new pairwise key from this message. (Note that the intermediate node acts as an ad hoc KDC in this case.) In practice, we may restrict a sensor node to contacting only its neighbors within a certain range.
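The sketch below illustrates all three components under toy assumptions: the pool is represented only by polynomial IDs (the shares themselves work as in the Section 2 sketch), the pool and subset sizes are arbitrary examples, and the XOR keystream stands in for a real cipher.

```python
import os, random, hashlib

s, s_prime = 50, 5            # pool size and per-node subset size (toy values)
pool_ids = list(range(s))

def assign_subset():
    """Setup server: pick s' distinct polynomial IDs for one node."""
    return set(random.sample(pool_ids, s_prime))

def share_discovery(ids_src, ids_dst):
    """Real-time discovery: destination looks for a common polynomial ID."""
    common = ids_src & ids_dst
    return min(common) if common else None   # any agreed tie-break works

def path_key_via_helper(key_with_src: bytes, key_with_dst: bytes):
    """A node sharing direct keys with both endpoints acts as an ad hoc KDC."""
    new_key = os.urandom(8)                  # fresh 64-bit pairwise key
    enc = lambda k, m: bytes(x ^ y for x, y in
                             zip(m, hashlib.sha256(k).digest()[:len(m)]))
    return enc(key_with_src, new_key), enc(key_with_dst, new_key)
```

The helper never learns anything it did not already know: it generates the new key itself, and each endpoint decrypts its copy with an existing direct key.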
4.2 Performance

Similar to the analysis in [Eschenauer and Gligor 2002], the probability that two sensor nodes share the same bivariate polynomial, which is the probability that two sensor nodes can establish a direct key, can be estimated by

$$p = 1 - \prod_{i=0}^{s'-1} \frac{s - s' - i}{s - i}. \qquad (1)$$
Fig. 1. Probabilities about pairwise key establishment. (a) The probability p that two nodes share a polynomial versus the size s of the polynomial pool. (b) The probability Ps of establishing a pairwise key versus the probability p that two nodes share a polynomial.
Figure 1(a) shows the relationship between p and the combinations of s and s′. It is easy to see that the closer s and s′ are, the more likely it is that two sensor nodes can establish a direct key. Our later security analysis (in Section 4.4) shows that small s and s′ can provide high security performance. This differs from the basic probabilistic scheme [Eschenauer and Gligor 2002] and the q-composite scheme [Chan et al. 2003], where the key pool size has to be very large to meet certain security requirements. The reason is that there is another parameter (i.e., the degree t of the polynomials) that affects the security performance of the random subset assignment scheme. In Eq. (1), the value of s′ is constrained by the storage overhead and the degree t of the polynomials. In fact, we have $t = C/s' - 1$, where C is the number of keys a sensor node can store.

Now let us consider the probability that two nodes can establish a key through either polynomial share discovery or path discovery. Let d denote the average number of neighbor nodes that each sensor node contacts. Consider any one of these d nodes. The probability that it shares direct keys with both the source and the destination nodes is $p^2$, where p is computed by Eq. (1). As long as one of these d nodes can act as an intermediate node, the source and the destination nodes can establish a common key. It follows that the probability of two nodes establishing a pairwise key (directly or indirectly) is

$$P_s = 1 - (1 - p)(1 - p^2)^d.$$

Figure 1(b) shows that the probability $P_s$ of establishing a pairwise key between two sensor nodes increases quickly as the probability p of establishing direct keys or the number d of contacted neighbor nodes increases.
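A quick calculation makes these relationships concrete. The snippet below is an illustrative implementation of Eq. (1) and of $P_s$; the sample values of s, s′, and d are arbitrary, not the specific combinations plotted in Figure 1.

```python
def direct_key_prob(s: int, s_prime: int) -> float:
    """Eq. (1): probability that two nodes share at least one polynomial."""
    p = 1.0
    for i in range(s_prime):
        p *= (s - s_prime - i) / (s - i)
    return 1.0 - p

def pairwise_key_prob(s: int, s_prime: int, d: int) -> float:
    """P_s = 1 - (1 - p)(1 - p^2)^d: a direct key, or an indirect key
    through one of d contacted neighbors."""
    p = direct_key_prob(s, s_prime)
    return 1.0 - (1.0 - p) * (1.0 - p * p) ** d

print(direct_key_prob(50, 5))        # p for a pool of s = 50, subsets of s' = 5
print(pairwise_key_prob(50, 5, 30))  # P_s when contacting d = 30 neighbors
```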
4.3 Overheads

Each node has to store s′ t-degree polynomials over $F_q$, which introduces $s'(t + 1)\log q$ bits of storage overhead. In addition, each node needs to remember the IDs of the revoked nodes with which it can establish direct keys. Assume the IDs of sensor nodes are chosen from a finite field $F_{q'}$. The storage overhead introduced by the revoked IDs is at most $s' t \log q'$ bits, since if t + 1 shares of one bivariate polynomial are revoked, this polynomial is compromised and can be discarded. Thus, the overall storage overhead is at most $s'(t + 1)\log q + s' t \log q'$ bits.

In terms of communication overhead, during polynomial share discovery, the source node needs to disclose a list of s′ IDs to the destination node. The communication overhead is mainly due to the transmission of such lists. During path discovery, the source node broadcasts a request message that consists of two lists of polynomial IDs. This introduces one broadcast message at the source node and possibly several broadcast messages at other nodes that receive and further forward this request. However, due to the small values of s and s′ in our scheme, all the broadcast messages are small, and can be handled efficiently in resource-constrained sensor networks. If a node receiving this message shares common polynomials with both the source and the destination nodes, it only needs to reply with a message consisting of two encrypted copies of a randomly generated key.

In terms of computational overhead, polynomial share discovery requires one polynomial evaluation at each node if they share a common polynomial. During path discovery, if a node receiving the request shares common polynomials with both the source and the destination nodes, it only needs to perform two polynomial evaluations and two encryptions. If there exists at least one intermediate node that can be used in the establishment of an indirect key, both the source and the destination nodes need only one decryption each.

4.4 Security Analysis

It follows from the security analysis in [Blundo et al. 1993] that an attacker cannot determine any key established with a polynomial between noncompromised nodes if he/she has compromised no more than t sensor nodes that have shares of this polynomial. Assume an attacker randomly compromises $N_c$ sensor nodes, where $N_c > t$. Consider any polynomial f in F. The probability of f being chosen for a sensor node is $s'/s$, and the probability of this polynomial being chosen exactly i times among the $N_c$ compromised sensor nodes is

$$P[i \text{ compromised shares}] = \frac{N_c!}{(N_c - i)!\, i!} \left(\frac{s'}{s}\right)^i \left(1 - \frac{s'}{s}\right)^{N_c - i}.$$
Fig. 2. Performance of the random subset assignment scheme under attacks. RS refers to the random subset assignment scheme. Assume each node has available storage for 200 keys and p = 0.33. (a) Fraction of compromised links between noncompromised nodes versus number of compromised nodes. (b) Fraction of compromised keys (direct or indirect) between noncompromised nodes versus number of compromised nodes. Assume N = 20,000.
Thus, the probability of a particular bivariate polynomial being compromised is $P_{cd} = 1 - \sum_{i=0}^{t} P[i \text{ compromised shares}]$. Since f is any polynomial in F, the fraction of compromised links between noncompromised nodes can be estimated as $P_{cd}$. Figure 2(a) includes the relationship between the fraction of compromised links between noncompromised nodes and the number of compromised nodes for some combinations of s and s′. We can see that the random subset scheme provides a high security guarantee in terms of the fraction of compromised links between noncompromised nodes when the number of compromised nodes does not exceed a certain threshold. (To save space, Figure 2 also includes the performance of the basic probabilistic scheme [Eschenauer and Gligor 2002] and the q-composite scheme [Chan et al. 2003], which will be used for the comparison in Section 4.5.)
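The compromise probability is easy to evaluate numerically. The following is an illustrative computation of $P_{cd}$ from the binomial expression above; the parameter values are examples only (they loosely echo the figure's assumption of storage for 200 keys, since s' = 4 and t = 49 give C = s'(t + 1) = 200).

```python
from math import comb

def frac_compromised_links(s, s_prime, t, n_c):
    """P_cd = 1 - sum_{i=0}^{t} C(N_c, i) (s'/s)^i (1 - s'/s)^(N_c - i)."""
    r = s_prime / s
    safe = sum(comb(n_c, i) * r**i * (1 - r) ** (n_c - i) for i in range(t + 1))
    return 1.0 - safe

# e.g., pool of s = 50, s' = 4 shares per node, degree t = 49, N_c = 200 captures
print(frac_compromised_links(50, 4, 49, 200))
```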
If an attacker knows the distribution of polynomials over sensor nodes, he/she may target specific sensor nodes in order to compromise the keys derived from a particular polynomial. In this case, the attacker only needs to compromise t + 1 sensor nodes. However, this is generally more difficult than randomly compromising sensor nodes, since the attacker has to compromise the selected nodes. An easy fix to remove the above threat is to restrict each polynomial to being used at most t + 1 times. As a result, an attacker cannot recover a polynomial unless he/she compromises all related sensor nodes. Though effective at improving the security, this method also puts a limit on the maximum number of sensor nodes for a given combination of s and s′. Indeed, given the above constraint, the total number of sensor nodes cannot exceed $\frac{(t+1) \cdot s}{s'}$.

To estimate the probability of any (direct or indirect) key between two noncompromised nodes being compromised, we assume that the network is fully connected and each pair of nodes can establish a direct or indirect key. Thus, among all pairwise keys, there are on average a fraction p of direct keys and a fraction 1 − p of indirect keys. Each direct key has probability $P_{cd}$ of being compromised. For each indirect key, if the intermediate node and the two polynomials used in the establishment of this key are not compromised, the key is still secure; otherwise, it cannot be trusted. Thus, the probability of an indirect key being compromised can be estimated by $1 - (1 - p_c)(1 - P_{cd})^2$, where $p_c = N_c/N$. Therefore, the probability of any (direct or indirect) key between two noncompromised nodes being compromised can be estimated by

$$P_c = p \times P_{cd} + (1 - p)\left[1 - (1 - p_c)(1 - P_{cd})^2\right].$$
Figure 2(b) includes the relationship between the fraction of compromised (direct or indirect) keys between noncompromised nodes and the number of compromised nodes for some combinations of s and s′. We can see that the random subset scheme also provides a high security guarantee in terms of the fraction of compromised (direct or indirect) keys between noncompromised nodes when the number of compromised nodes does not exceed a certain threshold.

Two noncompromised sensor nodes may need to re-establish an indirect key when the current pairwise key is compromised. Determining the identities of compromised nodes is generally a difficult problem that needs deeper investigation; however, when such a detection mechanism is available and node compromises are detected, it is always desirable to re-establish the pairwise key. Thus, we assume that the detection of compromised nodes is done through other techniques, and do not consider it in this paper. Assume the source node contacts d neighbor nodes to re-establish an indirect key with the destination node. Among these d nodes, the average number of noncompromised nodes can, for simplicity, be estimated by $\frac{d(N - N_c)}{N}$. If one of these noncompromised nodes shares common noncompromised polynomials with both the source and the destination nodes, a new pairwise key can be established. Thus, the probability of re-establishing an indirect key between two noncompromised nodes can be approximately estimated by

$$P_{re} = 1 - \left[1 - p^2(1 - P_{cd})^2\right]^{\frac{d(N - N_c)}{N}}.$$
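For completeness, here is an illustrative computation of $P_c$ and $P_{re}$; it assumes the helpers sketched earlier (direct_key_prob, frac_compromised_links) are in scope, and is not the evaluation code behind the figures.

```python
def compromised_key_prob(p, p_cd, n_c, n):
    """P_c: estimated fraction of all (direct or indirect) keys compromised."""
    p_c = n_c / n
    return p * p_cd + (1 - p) * (1 - (1 - p_c) * (1 - p_cd) ** 2)

def reestablish_prob(p, p_cd, d, n_c, n):
    """P_re: probability of re-establishing an indirect key via d neighbors."""
    good_neighbors = d * (n - n_c) / n
    return 1 - (1 - p**2 * (1 - p_cd) ** 2) ** good_neighbors
```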
Figure 3 includes the relationship between the probability of re-establishing an indirect key between noncompromised nodes and the number of compromised nodes in the network. It shows that there is still a high probability of re-establishing a pairwise key between two noncompromised nodes when the current
Fig. 3. The probability of re-establishing a pairwise key using path discovery. Assume each node has available storage equivalent to 200 keys and contacts d = 30 neighbor nodes. Assume N = 20,000.
key is compromised, given that the network still provides certain security performance (e.g., less than 60% compromised links).

4.5 Comparison with Previous Schemes

The random subset assignment scheme has a number of advantages compared with the basic probabilistic scheme [Eschenauer and Gligor 2002], the q-composite scheme [Chan et al. 2003], and the random pairwise keys scheme [Chan et al. 2003]. In this analysis, we first compare the communication and computational overheads introduced by these schemes under a given storage constraint, and then compare their security performance under attacks.

We do not compare the random subset assignment scheme with the multiple-space key predistribution scheme in [Du et al. 2003], since these two schemes are actually equivalent to each other. In fact, in the multiple-space key predistribution scheme, the elements in the second row of matrix G can be considered as the IDs of sensor nodes in the random subset assignment scheme; each matrix $D_i$ can be considered as the coefficients of a bivariate polynomial; each row in a matrix $A_i$ can be considered as a polynomial share; and computing a key through $A_c(i) \times G(j)$ can be considered as evaluating a polynomial share.

After direct key establishment, the basic idea of path key establishment is to find an intermediate node that shares direct keys with both the source and the destination nodes. This is similar in all the previous schemes and the random subset assignment scheme. For simplicity, we focus on the overheads in direct key establishment. Note that each coefficient in our scheme takes about the same amount of space as a cryptographic key, since $F_q$ is a finite field that can just accommodate the keys. We assume that each sensor node can store up to C keys or polynomial coefficients.

The communication and computational overheads for the different schemes are summarized in Table I. The communication overhead is calculated using the size of the list of key or polynomial IDs; the computational overhead is calculated using the number of comparisons needed to identify the common key or polynomial, and the number of polynomial evaluations, assuming that the IDs of keys or polynomials are stored in ascending order at each node and binary search is used to locate the ID of the common key or polynomial.
Table I. Communication and Computational Overheads for Direct Key Establishment in Different Schemes

  Scheme                                                   | Communication | Computation
  Basic probabilistic scheme [Eschenauer and Gligor 2002]  | $C \log s_k$  | $\frac{2C + p - pC}{2}$ comparisons
  q-composite scheme [Chan et al. 2003]                    | $C \log s_k$  | $C \log C$ comparisons
  Random pairwise keys scheme [Chan et al. 2003]           | 0             | $\log C$ comparisons
  Random subset assignment scheme                          | $s' \log s$   | $\frac{2s' + p - ps'}{2} \log s'$ comparisons + 1 polynomial evaluation
  Grid-based scheme                                        | 0             | 1 polynomial evaluation

  $s_k$ is the key pool size in the basic probabilistic scheme and the q-composite scheme; $s' = \frac{C}{t+1}$. The last row will be discussed in Section 5.
4.5.1 Comparison with the Basic Probabilistic and the q-Composite Schemes. According to Table I, the random subset assignment scheme is usually much more efficient than the basic probabilistic scheme [Eschenauer and Gligor 2002] and the q-composite scheme [Chan et al. 2003] in terms of communication overhead, due to the small s and s′; indeed, this overhead is reduced by a factor of at least t + 1. However, the computational overhead is higher in the random subset assignment scheme, since it has to evaluate a t-degree polynomial.

Figures 2(a) and (b) compare the security performance of the random subset assignment scheme with the basic probabilistic scheme [Eschenauer and Gligor 2002] and the q-composite scheme [Chan et al. 2003]. These figures clearly show that before the number of compromised sensor nodes reaches a certain point, the random subset assignment scheme performs much better than both of the other schemes. When the number of compromised nodes exceeds a certain point, the other schemes have fewer compromised links or keys than the random subset assignment scheme. Nevertheless, under such circumstances, none of these schemes provides sufficient security, due to the large fraction of compromised links (over 60%) or the large fraction of compromised (direct or indirect) keys (over 80%). Thus, the random subset assignment scheme clearly has advantages over the basic probabilistic scheme [Eschenauer and Gligor 2002] and the q-composite scheme [Chan et al. 2003].

4.5.2 Comparison with the Random Pairwise Keys Scheme. As shown in Table I, the random pairwise keys scheme [Chan et al. 2003] does not have any communication or computational overhead in direct key establishment, since each node stores the IDs of all the other nodes with which it can establish direct keys. In terms of security performance, the random pairwise keys scheme does not allow reuse of the same key by multiple pairs of sensor nodes [Chan et al. 2003]. Thus, the compromise of some sensor nodes does not lead to the compromise of direct keys shared between noncompromised nodes. As we discussed earlier,
Fig. 4. The relationship between the probability of establishing a common key and the maximum supported network size in order to be resilient against node compromise.
with the restriction that no polynomial be used more than t + 1 times, our scheme can ensure the same property.

Now we compare the performance of the random subset assignment scheme under the above restriction with that of the random pairwise keys scheme. The maximum number of nodes that the random subset assignment scheme supports can be estimated as $N = \frac{s(t+1)}{s'}$. Assuming the storage overhead at each sensor node is $C = s'(t + 1)$, we have $s = \frac{N \cdot s'^2}{C}$. Together with Eq. (1), we can derive the probability of establishing a direct key between two nodes under a given storage constraint. Figure 4 plots the probability of two sensor nodes sharing a direct key against the maximum network size for the random pairwise keys scheme [Chan et al. 2003] and the random subset assignment scheme. We can easily see that the random subset assignment scheme has slightly lower but almost the same performance as the random pairwise keys scheme.
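The trade-off is simple to reproduce numerically. The following sketch, under the stated assumptions C = s'(t + 1) and N = s(t + 1)/s', derives p from Eq. (1) for a given maximum network size; the sample values are illustrative, not those behind Figure 4.

```python
def direct_key_prob_at_max_size(n_max: int, s_prime: int, storage: int) -> float:
    """p from Eq. (1) when each polynomial is used at most t + 1 times,
    which forces the pool size s = N * s'^2 / C."""
    s = n_max * s_prime**2 / storage    # from N = s(t+1)/s' and C = s'(t+1)
    p = 1.0
    for i in range(s_prime):
        p *= (s - s_prime - i) / (s - i)
    return 1.0 - p

print(direct_key_prob_at_max_size(10_000, 2, 200))  # e.g., C = 200, s' = 2
```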
The random subset assignment scheme has several advantages over the random pairwise keys scheme [Chan et al. 2003]. In particular, in the random subset assignment scheme, sensor nodes can be added dynamically without having to contact the previously deployed sensor nodes. In contrast, in the random pairwise keys scheme, if it is necessary to dynamically deploy sensor nodes, the setup server has to either reserve space for sensor nodes that may never be deployed, which reduces the probability that two deployed nodes share a common key, or inform some previously deployed nodes of additional pairwise keys, which introduces additional communication overhead. Moreover, under a given storage constraint, the random subset assignment scheme (without the restriction on the reuse of polynomials) allows the network to grow, while the random pairwise keys scheme has an upper limit on the network size. Thus, the random subset assignment scheme would be a more attractive choice than the random pairwise keys scheme in certain applications.

5. HYPERCUBE-BASED KEY PREDISTRIBUTION

In this section, we give another instantiation of the framework, which we call hypercube-based key predistribution.

Fig. 5. Hypercube-based key predistribution when n = 2. (a) An example of a hypercube when n = 2. (b) An example order of node assignment.
This scheme has a number of attractive properties. First, it guarantees that any two sensor nodes can establish a pairwise key when there are no compromised sensor nodes, assuming that the nodes can communicate with each other. Second, this scheme is resilient to node compromise. Even if some nodes are compromised, there is still a high probability of re-establishing a pairwise key between two noncompromised nodes. Third, a sensor node can directly determine whether it can establish a direct key with another node, and if it can, which polynomial should be used. As a result, there is no communication overhead during polynomial share discovery.

Note that in the preliminary version of this paper [Liu and Ning 2003b], we studied a key predistribution technique named grid-based key predistribution. Hypercube-based key predistribution is a generalization of grid-based key predistribution. The grid-based key predistribution scheme is interesting due to its simplicity; however, we do not explicitly discuss it here. Please refer to Liu and Ning [2003b] for details.

5.1 The Hypercube-Based Scheme

Given a total of N sensor nodes in the network, the hypercube-based scheme constructs an n-dimensional hypercube with $m^{n-1}$ bivariate polynomials arranged for each dimension j, $\{f^j_{\langle i_1,\ldots,i_{n-1}\rangle}(x, y)\}_{0 \le i_1,\ldots,i_{n-1} < m}$, where $m = \lceil \sqrt[n]{N} \rceil$. Without loss of generality, assume i > j if we consider node IDs i and j as integer values. The following algorithm can be performed on either of them. (a) The source node maintains a set $L = \{d_1, \ldots, d_w\}$ of the dimensions in which the IDs of the two nodes differ. Thus, if $u_{d_1} = i_{d_1}$, we choose $d' = d_1$ and compute the next node v. It is easy to verify that v < i. If $u_{d_1} = j_{d_1}$ ($d_1$ has been chosen before), we have $u_{d_1} < i_{d_1}$. This implies that v < i for any d′ in L; thus, we can choose any value in L. As a result, u can always find the next node v, and the size of the set L is reduced by 1 each time step (b) is executed. Eventually, the above key path predetermination algorithm will output a sequence of nodes with node j as the last node. Moreover, the Hamming distance between u and v in the second step is exactly 1. This implies that every two adjacent nodes in P can establish a direct key. Thus, we can conclude that the above key path predetermination algorithm is guaranteed to compute a key path between any two sensor nodes.

5.2 Dynamic Key Path Discovery

Though the path discovery algorithm described above can predetermine a key path with a number of intermediate nodes, the intermediate nodes may have been compromised or be out of communication range in some situations. However, there are alternative key paths. In particular, we may reuse the predetermined paths at other nodes to find a secure key path. For example, in Figure 5(a), besides nodes $\langle i_1, j_2 \rangle$ and $\langle j_1, i_2 \rangle$, node $\langle i_1, m-2 \rangle$ has a predetermined path to node $\langle j_1, j_2 \rangle$ through node $\langle j_1, m-2 \rangle$; thus, it can help node i set up a common key with node j. Though it is possible to flood the network to find a key path, the resource constraints on sensor nodes make this method impractical.
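To make the coordinate view of the construction concrete, here is a small sketch of our own (hypothetical helper names, toy parameters): each node is identified by n coordinates in [0, m), it receives one polynomial share per dimension, and two nodes can establish a direct key exactly when their coordinates differ in a single dimension.

```python
from math import ceil

def coords(node_id: int, n: int, m: int):
    """Decode an integer node ID into n hypercube coordinates (base-m digits)."""
    c = []
    for _ in range(n):
        node_id, r = divmod(node_id, m)
        c.append(r)
    return tuple(c)

def poly_ids(c):
    """One polynomial per dimension j, indexed by the other n-1 coordinates."""
    return [(j, c[:j] + c[j + 1:]) for j in range(len(c))]

def direct_key_dimension(ci, cj):
    """Return the dimension usable for a direct key, or None (Hamming distance != 1)."""
    diff = [d for d in range(len(ci)) if ci[d] != cj[d]]
    return diff[0] if len(diff) == 1 else None

n, N = 3, 1000
m = ceil(N ** (1 / n))                            # m = ceil(n-th root of N)
ci, cj = coords(17, n, m), coords(17 + m, n, m)   # differ in exactly one digit
assert direct_key_dimension(ci, cj) == 1
```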
Instead, we propose the following algorithm to find a key path between nodes S and D dynamically. The basic idea is to have the source node and each intermediate node contact a noncompromised node that is "closer" to the destination node in terms of the Hamming distance between their IDs. Indeed, if there are no compromised nodes in the network, the above key path predetermination algorithm can always find a key path if any two nodes can communicate with each other. In practice, we may use dynamic path discovery instead, to achieve better performance when there are attacks or communication failures. To increase the chance of success, the following algorithm may be performed for multiple rounds. It is assumed that every message between two nodes in the algorithm is encrypted and authenticated with the direct key between them.

(1) In order to establish an indirect key with node D, node S generates a random number r and maintains a counter c with initial value 0. In each round, it increments c and computes $K_c = F(r, c)$, where F is a pseudo-random function [Goldreich et al. 1986]. Then, it constructs a message M = {S, D, $K_c$, c, flag} with flag = 1, and goes to the next step. (The flag in message M indicates whether the Hamming distance is reduced by forwarding M to the next intermediate node. Its purpose is to control the length of the path discovered by this algorithm and the number of messages.)

(2) Consider a sensor node u holding the message M = {S, D, $K_c$, c, flag}. Node u first tries to find a noncompromised node v that can establish a direct key with u using a noncompromised polynomial and has a smaller Hamming distance to D than u. If this succeeds, u sets flag in M to 1 and sends the modified message M to v. We can see that the Hamming distance between v and D is one smaller than that between u and D. If u cannot find such a node and flag in M is 0, the path discovery stops. Otherwise, u selects a noncompromised node v that can establish a direct key with u using a noncompromised polynomial and whose Hamming distance to D is the same as u's. If u finds such a node v, it sets flag in message M to 0 and sends the modified message M to v. If it cannot find such a node, the path discovery protocol stops at this node.

(3) When the destination node D receives the key establishment request, it knows that node S wants to set up a pairwise key with it. Node D then sets the pairwise key as $K_{S,D} = K_c$, and informs node S of the counter value c. As a result, both sensor nodes share the same pairwise key.

LEMMA 5.2. For any two nodes S and D, the above dynamic key path discovery algorithm is guaranteed to find a key path with $d_h - 1$ intermediate nodes if there are no compromised nodes and any two nodes can communicate with each other, where $d_h$ is the Hamming distance between S and D.

PROOF. Let $L = \{d_1, \ldots, d_w\}$ be the set of dimensions in which the IDs of the current node u and node D differ. If $u_{d_1} \ne D_{d_1}$, u can choose node $v = \langle u_1, \ldots, u_{d_1-1}, D_{d_1}, u_{d_1+1}, \ldots, u_n \rangle$. Since v < u, v must be the ID of an existing node. If u < D, we can choose any value in L other than $d_1$, since we always have v < D and v must be the ID of an existing node. Thus, if there are no compromised nodes and any two nodes can communicate with each other, any intermediate node will succeed in finding the next closer node v. Since each v has one more common subindex than the corresponding u, the Hamming distance between each v and D will be smaller than that between u and D by 1. Therefore, there will be $d_h - 1$ intermediate nodes between S and D.
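The next-hop rule in step (2) is just a Hamming-distance comparison over the coordinate vectors. A minimal illustration follows (hypothetical helper names; the compromise and polynomial checks are omitted for brevity):

```python
def hamming(a, b):
    """Number of coordinates in which two node IDs differ."""
    return sum(x != y for x, y in zip(a, b))

def next_hop(u, dst, neighbors):
    """Prefer a neighbor strictly closer to dst (flag = 1); otherwise allow one
    sideways step at equal distance (flag = 0), as in step (2) of the algorithm."""
    candidates = [v for v in neighbors if hamming(u, v) == 1]  # direct-key neighbors
    closer = [v for v in candidates if hamming(v, dst) < hamming(u, dst)]
    if closer:
        return closer[0], 1
    same = [v for v in candidates if hamming(v, dst) == hamming(u, dst)]
    return (same[0], 0) if same else (None, None)
```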
LEMMA 5.3. The number of intermediate nodes in the key path discovered by the above dynamic key path discovery algorithm never exceeds 2(d_h − 1).
PROOF. Consider the flag values in the sequence of unicast messages in the path discovery, flag_1, . . . , flag_i. First, this sequence contains at most d_h ones, since otherwise the request message would already have reached the destination node before the last message containing flag = 1. Second, there are no two consecutive zeros in this sequence, since the second step stops if it cannot find a next closer node and the flag in the current message is zero. Third, the last two values (flag_{i−1} and flag_i) in this sequence are always 1 for a successful path discovery. Consider the last three nodes in the key path, u, v, D, where D is the destination node. It is obvious that flag_i is always 1 for a successful discovery. If flag_{i−1} = 0, the Hamming distance between u and D is 1, and there is no intermediate node between u and D. Thus, we know that both flag values are 1. Hence, we can conclude that the maximum length of this sequence is d_h − 2 + d_h − 1 + 2 = 2d_h − 1, which indicates that the maximum number of intermediate nodes in the key path is 2(d_h − 1).
5.3 Performance
Each sensor node stores n polynomial shares, and each polynomial is shared by about m different nodes, where m = ⌈N^(1/n)⌉. Thus, each node can establish direct keys with n(m − 1) other nodes. This indicates that the probability of establishing a direct key between two sensor nodes can be estimated by

p = n(m − 1)/(m^n − 1) = n(⌈N^(1/n)⌉ − 1)/(N − 1).

Figure 6(a) shows that the probability of establishing a direct key between two nodes decreases as the number of dimensions n or the network size N grows. However, according to the path discovery algorithm, if there are no compromised nodes and any two nodes can communicate with each other, it is guaranteed that any two nodes can establish a pairwise key (directly or indirectly).
5.4 Overhead
Each node has to store n polynomial shares and the IDs of the revoked nodes with which it can establish direct keys. The former introduces n(t + 1) log q bits of storage overhead. For the latter, a node only needs to remember up to t compromised node IDs for each polynomial, since if t + 1 shares of one bivariate polynomial are compromised, this polynomial is already compromised and can be discarded.
Fig. 6. Performance of the hypercube-based scheme. (a) Probability of establishing direct keys. (b) Average key path length.
In addition, a sensor node i only needs to remember one subindex of each revoked ID, because the IDs of node i and the revoked node differ in only one subindex. Thus, keeping track of revoked node IDs requires at most n·t·l bits of storage, where l = ⌈log2 m⌉. In total, the storage overhead at a sensor node is at most n(t + 1) log q + n·t·l bits.
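To make this bound concrete, the following small C program (our own illustration; the parameter values n = 4, N = 20,000, t = 11, and a 64-bit q match the example used later in this section) evaluates it.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        int n = 4;                           /* number of dimensions (example)  */
        double N = 20000.0;                  /* network size (example)          */
        int t = 11;                          /* degree of the polynomials       */
        int log_q = 64;                      /* bits per polynomial coefficient */

        int m = (int)ceil(pow(N, 1.0 / n));  /* nodes per dimension: m = 12     */
        int l = (int)ceil(log2((double)m));  /* bits per subindex: l = 4        */

        int bits = n * (t + 1) * log_q       /* n polynomial shares             */
                 + n * t * l;                /* revoked-ID bookkeeping          */
        printf("m = %d, storage overhead <= %d bits (%.0f bytes)\n",
               m, bits, bits / 8.0);
        return 0;
    }

With these values the bound is 3248 bits, roughly the "50 keys" of storage assumed in the security analysis below.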
To establish a direct key, a sensor node only needs the ID of the other node, so there is no communication overhead. However, if two nodes cannot establish a direct key, they need to find a number of intermediate nodes to help them establish a temporary session key. When key path predetermination is used, discovering a key path does not introduce any communication overhead; when dynamic key path discovery is used, this process involves a number of unicast messages. From Lemma 5.3, we know that dynamic path discovery introduces at most 2d_h − 1 unicast messages if every unicast message is successfully delivered.
Now let us estimate the communication overhead during path key establishment, assuming the key path has already been discovered. In sensor networks, sending a unicast message between two arbitrary nodes may involve the overhead of establishing a route. However, finding a route in a sensor network depends on the routing protocol, which is also required by the other schemes to perform path discovery, and we cannot give a precise analysis of this overhead without fixing a routing protocol. Thus, for simplicity, we use the number of unicast messages to estimate the communication overhead involved in path key establishment. In fact, there are L + 1 unicast messages for path key establishment over a key path of length L if every unicast message is successfully delivered. If there are no compromised sensor nodes and any two nodes can communicate with each other, there exists at least one key path with d_h − 1 intermediate nodes, which is indeed one of the shortest key paths.
Consider two nodes u = <u1, . . . , un> and v = <v1, . . . , vn>. The probability of having u_e = v_e for any e ∈ {1, . . . , n} is 1/m, and the probability of having exactly i different subindexes is

P[i different subindexes] = (n!/(i!(n − i)!)) · (1 − 1/m)^i · (1/m)^(n−i).

Thus, the average key path length can be estimated by

L = Σ_{i=1}^{n} (i − 1) · P[i different subindexes].
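As an illustration (a sketch of ours, not part of the original implementation), the following C program computes this distribution and the resulting average key path length estimate for given n and N.

    #include <math.h>
    #include <stdio.h>

    /* Probability that two random node IDs differ in exactly i of n
       subindexes, where each subindex ranges over {0, ..., m-1}. */
    static double p_diff(int n, int i, double m) {
        double c = 1.0;
        for (int k = 0; k < i; k++)          /* binomial coefficient C(n, i) */
            c = c * (n - k) / (k + 1);
        return c * pow(1.0 - 1.0 / m, i) * pow(1.0 / m, n - i);
    }

    int main(void) {
        int n = 4;                           /* number of dimensions (example) */
        double N = 20000.0;                  /* network size (example)         */
        double m = ceil(pow(N, 1.0 / n));    /* m = ceil(N^(1/n))              */
        double L = 0.0;
        for (int i = 1; i <= n; i++)
            L += (i - 1) * p_diff(n, i, m);
        printf("m = %.0f, estimated average key path length L = %.3f\n", m, L);
        return 0;
    }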
Figure 6(b) shows the relationship between the average key path length and the number of dimensions for different network sizes. We can see that the average key path length (and thus the communication overhead) increases as the number of dimensions or the network size grows.
In terms of computational overhead, each unicast message requires one polynomial evaluation, one authentication, and one encryption at the source node, and one polynomial evaluation, one authentication, and one decryption at the destination node. Since L + 1 unicast messages are needed for a key path of length L, path key establishment involves a total of 2(L + 1) polynomial evaluations, 2(L + 1) authentications, L + 1 encryptions, and L + 1 decryptions if every message is successfully delivered.
5.5 Security Analysis
An adversary may launch two types of attacks against the hypercube-based scheme. First, the attacker may target the pairwise key between two particular sensor nodes, either trying to compromise the pairwise key or trying to prevent the two sensor nodes from establishing one. Second, the attacker may target the entire network, to lower the probability of establishing pairwise keys or to increase the cost of establishing them.
5.5.1 Attacks Against a Pair of Nodes. We focus on attacks against the communication between two particular sensor nodes u and v that compromise neither of them. If u and v can establish a direct key, the only way to compromise that key without compromising the related nodes is to compromise the shared polynomial between them, which requires the attacker to compromise at least t + 1 sensor nodes. If u and v have established an indirect key through path key establishment, the attacker may compromise one of the polynomials or one of the nodes involved, in which case the attacker can recover the key if it also captured the message used to deliver it. However, even if the attacker compromises the current pairwise key, the related sensor nodes may still re-establish another pairwise key over a different key path. To prevent u from establishing a pairwise key with v, the attacker has to compromise all n polynomial
shares on u (or v), so that node u (or v) is unable to use any polynomial to set up a pairwise key; otherwise, there may still exist noncompromised sensor nodes that can help them establish a new pairwise key. For each of these polynomial shares, the attacker has to compromise at least t + 1 nodes. This means that the attacker has to compromise at least n(t + 1) sensor nodes to prevent u and v from establishing another pairwise key.
5.5.2 Attacks Against the Network. Since the adversary also knows the distribution of polynomials over sensor nodes, it may systematically attack the network by compromising the polynomials in F one by one in order to compromise the entire network. Assume the attacker compromises b bivariate polynomials. There are up to bm sensor nodes with at least one compromised polynomial share. Among the remaining N − bm sensor nodes, none of the secure links is compromised, since the polynomials used to establish direct keys between them are not compromised. However, the indirect keys in the remaining part of the network could be affected, since the common polynomial between two intermediate nodes in a key path may be compromised. Nevertheless, even if an indirect key between two noncompromised nodes is compromised, there is still a high probability of re-establishing a new indirect key between them. Alternatively, the adversary may randomly compromise sensor nodes to attack the path discovery process and thereby make it more expensive to establish pairwise keys. In the following, we first investigate the probability of a direct key (secure link) being compromised, and then investigate the probability of any (direct or indirect) key being compromised under node compromises.
Assume a fraction pc of the sensor nodes in the network is compromised. Then, the probability that exactly i shares of a particular bivariate polynomial have been disclosed is

P[i compromised shares] = (m!/(i!(m − i)!)) · pc^i · (1 − pc)^(m−i),

where m = ⌈N^(1/n)⌉. Thus, the probability of a particular bivariate polynomial being compromised is

Pcd = 1 − Σ_{i=0}^{t} P[i compromised shares].

If m ≫ t + 1, this is equivalent to the probability of any link (direct key) between noncompromised nodes being compromised. For a small m, Pcd only represents the fraction of compromised bivariate polynomials. For example, when n = 4 and N = 20,000, we have m = 12 and t = 11. In this case, we do not use the fraction of compromised bivariate polynomials to estimate the fraction of compromised links between noncompromised nodes; instead, we note that the fraction of compromised links between noncompromised nodes in this situation is zero, which implies perfect security against node compromises. Figure 7(a) shows the relationship between the fraction of compromised links between noncompromised nodes and the fraction of compromised sensor nodes for different numbers of dimensions.1
1 We assume there are 20,000 sensor nodes in total in the following simulations. Thus, for a four-dimensional hypercube, we have m = ⌈20,000^(1/4)⌉ = 12. This means that the degree t of the bivariate polynomials need not be larger than 11, and a sensor node needs to store at most 4 × (11 + 1) = 48 coefficients. Therefore, we assume the storage constraint at sensor nodes is equivalent to storing 50 keys, instead of the 200 keys used in the analysis of the earlier schemes.
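Pcd is simply a binomial tail probability and is easy to evaluate. The following C fragment computes it; the parameters m = 12 and t = 11 are from the example above, while the compromised fraction pc = 0.1 is our own illustrative choice.

    #include <math.h>
    #include <stdio.h>

    /* P[i compromised shares] = C(m, i) * pc^i * (1 - pc)^(m - i) */
    static double p_shares(int m, int i, double pc) {
        double c = 1.0;
        for (int k = 0; k < i; k++)
            c = c * (m - k) / (k + 1);       /* binomial coefficient C(m, i) */
        return c * pow(pc, i) * pow(1.0 - pc, m - i);
    }

    int main(void) {
        int m = 12, t = 11;                  /* n = 4, N = 20,000 (see footnote 1) */
        double pc = 0.10;                    /* fraction of compromised nodes      */
        double tail = 0.0;
        for (int i = 0; i <= t; i++)
            tail += p_shares(m, i, pc);
        /* Here t = m - 1, so Pcd = pc^12: vanishingly small. */
        printf("Pcd = %.3e\n", 1.0 - tail);
        return 0;
    }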
Fig. 7. Security performance of the hypercube-based scheme. Assume each sensor has available storage equivalent to 50 keys, N = 20,000, m = ⌈N^(1/n)⌉, and t = ⌊50/n⌋ − 1. (a) Fraction of compromised links versus fraction of compromised nodes. (b) Fraction of compromised direct and indirect keys versus fraction of compromised nodes.
We can see that, given a fixed network size and storage overhead, the hypercube-based scheme with more dimensions has higher security performance.
Now let us compute the probability of any (direct or indirect) key between two noncompromised nodes being compromised. Suppose sensor nodes u and v have different subindexes in i dimensions. The key path discovered between them involves i − 1 intermediate nodes and i bivariate polynomials. If none of these i − 1 intermediate nodes and i bivariate polynomials is compromised, the pairwise key is still secure; otherwise, the key cannot be trusted. This means that the probability of this pairwise key being compromised can be estimated by

P[compromised | i different subindexes] = 1 − (1 − pc)^(i−1) × (1 − Pcd)^i.
Fig. 8. Maximum supported network size for different numbers of dimensions. Assume each sensor has available storage equivalent to 50 keys.
Thus, the probability of any (direct or indirect) key between two noncompromised nodes being compromised can be estimated by

Pc = Σ_{i=1}^{n} P[compromised | i different subindexes] × P[i different subindexes].
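Putting the pieces together, Pc can be evaluated numerically, for example as follows (again a sketch of ours; pc = 0.1 is an illustrative value).

    #include <math.h>
    #include <stdio.h>

    static double binom(int n, int i) {      /* binomial coefficient C(n, i) */
        double c = 1.0;
        for (int k = 0; k < i; k++) c = c * (n - k) / (k + 1);
        return c;
    }

    int main(void) {
        int n = 4, t = 11;                   /* example values (see footnote 1) */
        double N = 20000.0, pc = 0.10;       /* pc chosen for illustration      */
        double m = ceil(pow(N, 1.0 / n));

        double Pcd = 1.0;                    /* Pcd = 1 - sum_{i<=t} C(m,i) pc^i (1-pc)^(m-i) */
        for (int i = 0; i <= t; i++)
            Pcd -= binom((int)m, i) * pow(pc, i) * pow(1.0 - pc, m - i);

        double Pc = 0.0;
        for (int i = 1; i <= n; i++) {
            double pdiff = binom(n, i) * pow(1.0 - 1.0 / m, i) * pow(1.0 / m, n - i);
            double pcomp = 1.0 - pow(1.0 - pc, i - 1) * pow(1.0 - Pcd, i);
            Pc += pcomp * pdiff;             /* condition on Hamming distance i */
        }
        printf("Pc = %.6f\n", Pc);
        return 0;
    }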
Figure 7(b) shows the relationship between the probability Pc and the fraction of compromised nodes for different numbers of dimensions. We can still see the improvement in security when we have more dimensions, because the probability of a polynomial being compromised decreases quickly as the number of dimensions grows. We also note that when the fraction of compromised sensor nodes is less than a certain threshold, having more dimensions decreases the security of the scheme. The reason is that having more dimensions increases the average key path length, which in turn increases the probability that at least one intermediate node in the key path is compromised.
5.5.3 Maximum Supported Network Size. Let us consider the maximum supported network size when perfect security against node compromises is required. Figure 8 shows the maximum supported network size as a function of the number of dimensions, given a fixed memory constraint and a guarantee of perfect security against node compromises. We can see that the maximum supported network size increases dramatically as more dimensions are added, within the range shown in the figure. (Note that once the number of dimensions passes a certain threshold, the maximum supported network size will start to drop.) Indeed, when the number of dimensions is small, the hypercube-based scheme can support a larger network by adding more dimensions, without increasing the storage overhead or sacrificing security performance.
5.5.4 Probability of Re-Establishing a Pairwise Key. The following analysis estimates the probability of re-establishing an indirect key between two noncompromised nodes with the dynamic path discovery algorithm when none of the predetermined key paths can be used, due to compromised intermediate nodes or communication failures.
Let Ii denote the probability of establishing a pairwise key between two noncompromised nodes having different subindexes in i different dimensions
Fig. 9. Probability of re-establishing a pairwise key between noncompromised nodes versus the fraction of compromised nodes. Assume each sensor node has available storage equivalent to 50 keys, and N = 20,000.
(i.e., the Hamming distance between the two node IDs is d_h = i). For a particular node u, we refer to a noncompromised intermediate node as a closer node of u if it can establish a direct key with node u using a noncompromised polynomial and is closer to the destination node in terms of the Hamming distance between their IDs. According to the dynamic key path discovery algorithm, the pairwise key can be established if either of the following two cases holds. In the first case, the source node finds a closer node, and the selected closer node finds a key path to the destination node. This probability can be estimated by

P1 = {1 − [1 − (1 − Pcd)(1 − pc)]^i} · I_{i−1}.

In the second case, the source node cannot find any closer node, but it can establish a direct key, using a noncompromised polynomial, with a noncompromised node that is able to find a closer node that can in turn find a key path to the destination node. This probability can be estimated by
P2 = (1 − P1) · (1 − Pcd^i) · {1 − [1 − (1 − Pcd)(1 − pc)]^(i−1)} · I_{i−1}.
Overall, we have Ii = P1 + P2 for i > 1, and I1 = 1 − Pcd. Therefore, the probability of re-establishing an indirect key between two noncompromised nodes can be estimated by

Pre = Σ_{i=1}^{n} Ii × P[i different subindexes].
Figure 9 shows this probability for different fractions of compromised sensor nodes. It shows that even if a pairwise key is compromised, there is still a high probability of re-establishing a new key once the compromised nodes are detected. In addition, we note that the probability of re-establishing a key increases when we have more dimensions, because the probability of a polynomial being compromised decreases quickly as the number of dimensions grows.
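The recursion for Ii and the resulting Pre follow directly from the estimates above, for example as in this sketch of ours (same illustrative parameters as before):

    #include <math.h>
    #include <stdio.h>

    static double binom(int n, int i) {
        double c = 1.0;
        for (int k = 0; k < i; k++) c = c * (n - k) / (k + 1);
        return c;
    }

    int main(void) {
        int n = 4, t = 11;                   /* n <= 15 for the array below */
        double N = 20000.0, pc = 0.10, m = ceil(pow(N, 1.0 / n));

        double Pcd = 1.0;                    /* probability a polynomial is compromised */
        for (int i = 0; i <= t; i++)
            Pcd -= binom((int)m, i) * pow(pc, i) * pow(1.0 - pc, m - i);

        double I[16];                        /* I[i]: re-establishment probability at d_h = i */
        I[1] = 1.0 - Pcd;
        for (int i = 2; i <= n; i++) {
            double a  = 1.0 - pow(1.0 - (1.0 - Pcd) * (1.0 - pc), i);
            double P1 = a * I[i - 1];        /* a closer node is found */
            double b  = 1.0 - pow(1.0 - (1.0 - Pcd) * (1.0 - pc), i - 1);
            double P2 = (1.0 - P1) * (1.0 - pow(Pcd, i)) * b * I[i - 1];
            I[i] = P1 + P2;
        }

        double Pre = 0.0;
        for (int i = 1; i <= n; i++)
            Pre += I[i] * binom(n, i) * pow(1.0 - 1.0 / m, i) * pow(1.0 / m, n - i);
        printf("Pre = %.6f\n", Pre);
        return 0;
    }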
5.6 Comparison with Previous Schemes
This subsection compares the hypercube-based key predistribution scheme with the basic probabilistic scheme [Eschenauer and Gligor 2002], the q-composite scheme [Chan et al. 2003], the random pairwise keys scheme [Chan et al. 2003], and the random subset assignment scheme presented in Section 4. In the comparison, we use the hypercube-based scheme with n = 2 (i.e., the grid-based scheme) for simplicity.
The communication and computational overheads for direct key establishment in the grid-based scheme and the other schemes are summarized in Table I. We can see that the grid-based scheme is generally much more efficient than the basic probabilistic scheme [Eschenauer and Gligor 2002], the q-composite scheme [Chan et al. 2003], and the random subset assignment scheme in terms of communication and computational overhead. Compared with the random pairwise keys scheme [Chan et al. 2003], it involves only one more polynomial evaluation, which can be done very efficiently using the optimization technique in Section 6.
To compare the security of the different schemes, we assume the network size is N = 20,000 and each node can store up to 200 keys or polynomial coefficients. In the grid-based scheme, we then have m = 142 and p = 0.014. The four curves in the right part of Figures 10(a) and (b) show the fraction of compromised links and the fraction of compromised (direct or indirect) keys between noncompromised nodes as a function of the number of compromised sensor nodes, given p = 0.014. Similar to the comparison in Section 4, the random subset assignment scheme and the grid-based scheme perform much better when there is a small number of compromised nodes; in fact, these two schemes always perform better when the number of compromised nodes is less than 14,000. When there are more than 14,000 compromised nodes, none of the schemes can provide sufficient security, because of the large fraction of compromised links (over 60%) or the large fraction of compromised (direct or indirect) keys (over 90%).
Though p = 0.014 is acceptable for the grid-based scheme, for the basic probabilistic, q-composite, and random subset assignment schemes, p should be large enough to ensure that the whole network is fully connected. Assume p = 0.33; this requires about 42 neighbor nodes for each sensor node to make sure that a network of 20,000 nodes is connected with high probability. The three curves in the left part of Figures 10(a) and (b) show the fraction of compromised links and the fraction of compromised (direct or indirect) keys between noncompromised nodes as a function of the number of compromised sensors for these three schemes when p = 0.33. We can see that in these schemes a small number of compromised nodes reveals a large fraction of the secrets in the network, whereas the fractions of compromised links and compromised (direct or indirect) keys are much lower in the grid-based scheme for the same number of compromised nodes.
To compare with the random pairwise keys scheme [Chan et al. 2003], we set m = t + 1, so that the grid-based scheme provides the same degree of perfect security guarantee as the random pairwise keys scheme.
Fig. 10. Performance of the grid-based key predistribution scheme under attacks. Assume each sensor node has available storage equivalent to 200 keys. (a) Fraction of compromised links between noncompromised nodes versus number of compromised nodes. (b) Fraction of compromised (direct or indirect) keys between noncompromised nodes versus number of compromised nodes.
Assume the storage overhead on sensor nodes is 2(t + 1) = 2m. The grid-based scheme can then support a network with m^2 nodes, and the probability that two sensor nodes share a direct key is p = 2(m − 1)/(m^2 − 1) = 2/(m + 1). With the same number of sensor nodes and the same storage overhead, the random pairwise keys scheme [Chan et al. 2003] has p = 2m/m^2 = 2/m, which is approximately the same as in our scheme.
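These two probabilities are easy to check numerically; with m = 142, the grid-based value reproduces the p = 0.014 figure used earlier (a quick sketch of ours).

    #include <stdio.h>

    int main(void) {
        int m = 142;  /* grid-based scheme with 200-key storage, N ~ m^2 */

        /* Grid-based: each node shares a polynomial with 2(m - 1) of the
           m^2 - 1 other nodes, so p = 2(m - 1)/(m^2 - 1) = 2/(m + 1). */
        double p_grid = 2.0 / (m + 1);

        /* Random pairwise keys with the same storage 2m: p = 2m/m^2 = 2/m. */
        double p_rpk = 2.0 / m;

        printf("grid-based: p = %.4f, random pairwise: p = %.4f\n",
               p_grid, p_rpk);
        return 0;
    }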
In addition to the above comparisons, the grid-based scheme has some unique properties that the other schemes do not provide. First, when there are no compromised sensor nodes in the network, it is guaranteed that any pair of sensor nodes can establish a pairwise key, either directly without communication, or through the help of an intermediate node when the sensor nodes can communicate with each other. Besides the efficiency of determining the key path, the communication overhead is substantially lower than in the previous schemes, which require real-time path discovery even in normal situations. Second, even if there are compromised sensor nodes in the network, there is still a high probability that two noncompromised sensor nodes can re-establish a pairwise key; our earlier analysis indicates that it is very difficult for the adversary to prevent two noncompromised nodes from establishing a pairwise key. Finally, due to the orderly node assignment, this scheme allows optimized deployment of sensor nodes, so that the sensor nodes that can establish direct keys are close to each other, greatly decreasing the communication overhead of path key establishment.
6. IMPLEMENTATION AND EVALUATION
We have implemented the random subset assignment scheme and the grid-based scheme2 on MICA2 motes [Crossbow Technology Inc. 2004] running TinyOS [Hill et al. 2000], an operating system for networked sensor nodes. These implementations were written in nesC [Gay et al. 2003], a C-like programming language used to develop TinyOS and its applications. A critical component in our implementations is the algorithm for evaluating a t-degree polynomial, which is used to compute pairwise keys. In the following, we first present an optimization technique for polynomial evaluation on sensor nodes, and then report the evaluation of this optimization technique and our key predistribution schemes.
6.1 Optimization of Polynomial Evaluation on Sensor Nodes
Evaluating a t-degree polynomial is essential to the computation of a pairwise key in our schemes. This requires t modular multiplications and t modular additions in a finite field Fq, where q is a prime number large enough to accommodate a cryptographic key; thus q should be at least 64 bits long for typical cryptosystems such as RC5. However, the processors in sensor nodes usually have a much smaller word size. For example, ATmega128, which is used in many types of sensors, only supports 8-bit multiplications and has no division instruction. Thus, in order to use the basic scheme, sensor nodes have to implement large-integer operations.
Nevertheless, in our schemes, polynomials can be evaluated much more cheaply than in the general case. This is mainly due to the observation that the points at which the polynomials are evaluated are sensor IDs, and these IDs can be chosen from a different finite field Fq', where q' is a prime number that is larger than the maximum number of sensors but much smaller than a typical q. During the evaluation of a polynomial f(x) = a_t x^t + a_{t−1} x^{t−1} + · · · + a_0, since the variable x is the ID of a sensor, each modular multiplication is performed between an integer in Fq and an integer in Fq'. For example, to compute the product of two 64-bit integers on an 8-bit CPU, it takes 64 word multiplications with the standard large-integer multiplication algorithm, and 27 word multiplications with the Karatsuba-Ofman algorithm [Knuth 1997]. In contrast, it takes only 16 word multiplications with the standard algorithm to compute the product of a 64-bit integer and a 16-bit integer on the same platform. Similarly, reduction of the latter product (which is an 80-bit integer)
2 These implementations are included in our tiny key management (TinyKeyMan) package, which is available online at http://discovery.csc.ncsu.edu/software/TinyKeyMan.
modulo a 64-bit prime is also about 75% cheaper than reduction of the former product (which is a 128-bit integer). Considering the lack of a division instruction in typical sensor processors, we further propose to use q' of the form q' = 2^k + 1. Because of this special form, no division operation is needed to compute modular multiplications in Fq' [Stallings 1999]. Two natural choices of such prime numbers are 257 = 2^8 + 1 and 65,537 = 2^16 + 1. Using the random subset assignment scheme, they can accommodate up to 256 and 65,536 sensors, respectively; using the grid-based scheme, they can accommodate up to 256^2 = 65,536 and 65,536^2 = 4,294,967,296 sensors, respectively.
To take full advantage of the special form of q', we propose to adapt the basic polynomial-based key predistribution in Section 2 so that a large key is split into pieces and each piece is distributed to sensors with a polynomial over Fq'. The same technique can easily be applied to all polynomial pool-based schemes with slight modification.
Assume each cryptographic key is n bits long. The setup server divides the n-bit key into r pieces of l-bit segments, where l = ⌊log2 q'⌋ and r = ⌈n/l⌉. For simplicity, we assume n = l · r. The setup server randomly generates r t-degree bivariate polynomials {f_v(x, y)}_{v=1,...,r} over Fq' such that f_v(x, y) = f_v(y, x) for v = 1, . . . , r, and then gives the corresponding polynomial shares of these r polynomials to each sensor node; specifically, each sensor node i receives {f_v(i, x)}_{v=1,...,r}. As in the basic scheme, each of these r polynomials can be used to establish a common secret between a pair of sensors. The two sensors then choose the l least significant bits of each secret value as a key segment, and the final pairwise key is simply the concatenation of these r key segments.
It is easy to verify that this method requires the same number of word multiplications as the earlier one; however, because of the special form of q', no division operation is necessary in evaluating the polynomials. This can significantly reduce the computation on processors that lack a division instruction. The security of this scheme is guaranteed by Lemma 6.1.
LEMMA 6.1. In the adapted key predistribution scheme, the entropy of the key for a coalition of no more than t other sensor nodes is r · [log2 q' − (2 − 2^(l+1)/q')], where l = ⌊log2 q'⌋ and r = ⌈n/l⌉.
PROOF. Assume that nodes u and v need to establish a pairwise key. Consider a coalition of no more than t other sensor nodes that tries to determine this pairwise key. According to the security proof of the basic key predistribution scheme [Blundo et al. 1993], the entropy of the shared secret derived from any single polynomial is log2 q' for the coalition; that is, any value from the finite field Fq' is a possible value of each of {f_j(u, v)}_{j=1,...,r} for the coalition. Since each key segment consists of the last l = ⌊log2 q'⌋ bits of one of these values, the values from 0 to q' − 2^l − 1 have probability 2/q' of being chosen, while the values from q' − 2^l to 2^l − 1 have probability 1/q'. Denote all the information that the coalition knows as C. Thus, for the coalition, the entropy of each
Table II. Code Sizes for Our Optimized Polynomial Evaluation Schemes

Scheme           ROM (bytes)   RAM (bytes)
q' = 2^8 + 1     288           11
q' = 2^16 + 1    416           20
key segment K_j, j = 1, . . . , r, is

H(K_j | C) = Σ_{i=0}^{q'−2^l−1} (2/q') log2(q'/2) + Σ_{i=q'−2^l}^{2^l−1} (1/q') log2 q'
           = (2(q' − 2^l)/q') log2(q'/2) + ((2^(l+1) − q')/q') log2 q'
           = log2 q' − (2 − 2^(l+1)/q').
Because the r key segments are distributed individually and independently, the entropy of the pairwise key for the coalition is

H(K | C) = Σ_{j=1}^{r} H(K_j | C) = r · [log2 q' − (2 − 2^(l+1)/q')].

Consider a 64-bit key. If we choose q' = 2^16 + 1, the entropy of a pairwise key for a coalition of no more than t compromised sensor nodes is 4 × [log2(2^16 + 1) − (2 − 2^17/(2^16 + 1))] = 63.9997 bits. If we choose q' = 2^8 + 1, this entropy is 8 × [log2(2^8 + 1) − (2 − 2^9/(2^8 + 1))] = 63.983 bits. Thus, the adapted scheme still provides sufficient security despite the minor leakage of information.
6.2 Evaluation
We first evaluate the performance of our optimization technique for polynomial evaluation, which forms the basis of pairwise key computation in our implementation. We provide two options for this component: one with q' = 2^8 + 1 and the other with q' = 2^16 + 1. The typical length of a cryptographic key on resource-constrained sensor nodes is 64 bits. To compute a 64-bit pairwise key, a sensor node has to evaluate eight t-degree polynomials if q' = 2^8 + 1, and four t-degree polynomials if q' = 2^16 + 1. The code sizes of the implementations of these two options are shown in Table II; the bytes needed for the polynomial coefficients are not included, since they depend on the application. Obviously, these two implementations occupy only a small amount of memory on a sensor node.
The cost of our optimization technique in computing a 64-bit cryptographic key on a MICA2 mote [Crossbow Technology Inc. 2004] is shown in Figure 11, which also includes the cost of generating a 64-bit MAC (message authentication code) for a 64-bit message using RC5 [Rivest 1994] and SkipJack [NIST 1998] with a 64-bit key. These two symmetric cryptographic techniques are generally believed to be practical and efficient for sensor networks.
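To illustrate the division-free evaluation (our own sketch in portable C, not the actual nesC code from TinyKeyMan), note that 2^16 is congruent to -1 modulo 2^16 + 1, so a product can be reduced with a subtraction instead of a division:

    #include <stdint.h>

    #define QP 65537UL                       /* q' = 2^16 + 1 */

    /* Modular multiplication in F_{q'}: write p = hi * 2^16 + lo; since
       2^16 = -1 (mod q'), p mod q' = lo - hi (mod q'). No division needed. */
    static uint32_t mulmod_q(uint32_t a, uint32_t b) {
        uint64_t p = (uint64_t)a * b;        /* a, b < q' */
        uint32_t lo = (uint32_t)(p & 0xFFFF);
        uint32_t hi = (uint32_t)(p >> 16);   /* hi <= 2^16 < 2 * q' */
        uint32_t r = lo + (uint32_t)QP - hi;
        return (r >= QP) ? r - (uint32_t)QP : r;
    }

    /* Evaluate f(x) = a_t x^t + ... + a_0 over F_{q'} by Horner's rule;
       x is a 16-bit sensor ID. The l = 16 least significant bits of the
       result form one key segment; a 64-bit key concatenates four of them. */
    static uint16_t eval_segment(const uint32_t *coef, int t, uint16_t x) {
        uint32_t r = coef[t];
        for (int k = t - 1; k >= 0; k--) {
            r = mulmod_q(r, x);
            r += coef[k];                    /* coef[k] < q', so r < 2q' */
            if (r >= QP) r -= QP;
        }
        return (uint16_t)(r & 0xFFFF);
    }

On an 8-bit ATmega128 the same idea applies with the multiplication decomposed into word operations; the sketch above only conveys the arithmetic.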
Fig. 11. Comparison with RC5 and SkipJack.

Table III. Code Sizes for the Random Subset Assignment and Grid-Based Schemes

Scheme                      ROM (bytes)   RAM (bytes)
Random subset assignment    2824          106
Grid-based                  1160          67

The storage for the polynomial coefficients and the list of compromised nodes is not included in the code size.
The results show that computing a pairwise key in our schemes can be faster than generating a MAC using RC5 or SkipJack for a reasonable polynomial degree t; moreover, t need not be very large in practice because of the storage and security considerations discussed in the analyses of Sections 4 and 5. These results demonstrate the practicality and efficiency of our proposed schemes. According to the results in Figure 11, the 16-bit version is slightly slower than the 8-bit version; however, the 16-bit version can accommodate far more sensor nodes. Thus, we use the 16-bit option for both the random subset assignment scheme and the grid-based scheme. The code sizes for these two schemes are shown in Table III, which includes only the code loaded on sensor nodes, since the components for the setup server are not installed on sensor nodes; in fact, the setup server need not be a sensor node at all. We can see that the code size for the grid-based scheme is much smaller than that for the random subset assignment scheme, since the grid-based scheme can directly determine the shared direct key or the key path involved, while the random subset assignment scheme has to contact other nodes and maintain much more state. Together with the analysis in the previous sections and the evaluation of the computational and storage costs, we conclude that our schemes are practical and efficient for the current generation of sensor networks.
7. RELATED WORK
Our schemes are based on the polynomial-based key predistribution protocol of Blundo et al. [1993]. That protocol was intended to distribute group keys and is generally not feasible in sensor networks. Our
schemes use only the two-party case of this protocol; by enhancing the basic polynomial-based scheme with other techniques such as polynomial pools, our schemes achieve performance beyond that of the basic protocol.
Eschenauer and Gligor [2002] proposed a probabilistic key predistribution technique to bootstrap the initial trust between sensor nodes. The main idea is to have each sensor randomly pick a set of keys from a key pool before deployment; in order to establish a pairwise key, two sensor nodes then only need to identify the common keys that they share. Chan et al. [2003] further extended this idea and proposed the q-composite key predistribution scheme, which allows two sensor nodes to set up a pairwise key only when they share at least q common keys. Chan et al. also developed a random pairwise keys scheme to defeat node capture attacks. In our analysis in the earlier sections, we have demonstrated that our techniques have significant advantages over these schemes. Pietro et al. [2003] proposed a seed-based key deployment strategy to simplify the key discovery procedure, and a cooperative protocol to enhance its performance.
Du et al. [2003] independently discovered a technique equivalent to one of the schemes proposed in this paper (random subset assignment); it was published at the same time, in the same conference, as the preliminary version of this paper [Liu and Ning 2003b]. However, our paper proposes a more general framework that makes it possible to discover novel key predistribution schemes, and based on this framework we have developed two efficient instantiations. In addition, we provide implementations of the proposed schemes and report real experimental results, which demonstrate the practicality and efficiency of our techniques in real applications. The idea of using sensors' predeployment knowledge was also independently proposed in Du et al. [2004] and Liu and Ning [2003c] to improve the performance of pairwise key predistribution.
There are many other related works in sensor network security. Stajano and Anderson [1999] discussed bootstrapping trust between devices through location-limited channels such as physical contact. Carman et al. [2000] studied the performance of a number of key management approaches in sensor networks on different hardware platforms. Wong and Chan [2001] proposed to reduce the computational overhead of key exchange on low-power computing devices with the help of a more powerful server. Perrig et al. developed a security architecture for sensor networks that includes SNEP, a security primitive building block, and µTESLA [Perrig et al. 2001b], an adaptation of TESLA [Perrig et al. 2000, 2001a]. In our previous work, we proposed a multilevel key chain method for the initial commitment distribution in µTESLA [Liu and Ning 2003a]. Basagni et al. [2001] presented a key management scheme that secures communication by periodically updating the symmetric keys shared by all sensor nodes; however, this scheme assumes a tamper-resistant device to protect the keys, which is not always available in sensor networks. Zhu et al. [2003] proposed a protocol suite named LEAP (localized encryption and authentication protocol) to help establish individual keys between sensors and a base station, pairwise keys between sensors, cluster keys within a local area, and a group key shared by all nodes. Deng et al. [2003] developed a collection of mechanisms to improve the security of in-network processing. Zhu et al. [2004] proposed an interleaved
hop-by-hop authentication scheme to defeat malicious data injection in sensor networks. The problem of secure data aggregation in the presence of compromised nodes is studied in Hu and Evans [2003a] and Przydatek et al. [2003]. Our key predistribution schemes can be used to address the key management issues in these techniques. Wood and Stankovic [2002] identified a number of DoS attacks in sensor networks. Karlof and Wagner [2003] pointed out security goals for routing in sensor networks and analyzed the vulnerabilities of, as well as the countermeasures for, a number of existing routing protocols. Sastry et al. [2003] proposed a location verification technique based on the round-trip time (RTT) to detect false location claims. Hu and Evans [2003b] proposed using directional antennas to detect wormhole attacks in wireless ad hoc networks. Newsome et al. [2004] studied the Sybil attack in sensor networks, where a node illegitimately claims multiple identities, and developed techniques to defend against this attack. Our proposed techniques can be combined with these techniques to further enhance the security of sensor networks.
8. CONCLUSION AND FUTURE WORK
In this paper, we developed a general framework for polynomial pool-based pairwise key predistribution in sensor networks, building on the basic polynomial-based key predistribution of Blundo et al. [1993]. This framework allows the study of multiple instantiations of possible pairwise key establishment schemes. Based on this framework, we developed two specific key predistribution schemes: the random subset assignment scheme and the hypercube-based key predistribution scheme. Our analysis of these schemes indicates that both have significant advantages over the existing approaches. The implementation and experimental results also demonstrate their practicality and efficiency for the current generation of sensor networks.
Several research directions are worth investigating. First, we observe that sensor nodes have low mobility in many applications; thus, it may be desirable to develop location-sensitive key predistribution techniques to improve the probability that neighboring nodes share common keys and, at the same time, to reduce the threat of compromised nodes. Second, it is critical to detect and/or revoke compromised nodes from an operational sensor network. Third, the hypercube-based key predistribution scheme has some nice properties stemming from the artificial hypercube into which the sensor nodes are organized; we will study ways to map such structures to sensors' physical locations using reasonable deployment models.
ACKNOWLEDGMENTS
The authors would like to thank the anonymous reviewers for their valuable comments.
REFERENCES
BASAGNI, S., HERRIN, K., BRUSCHI, D., AND ROSTI, E. 2001. Secure pebblenets. In Proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing. 156–163.
BLUNDO, C., DE SANTIS, A., HERZBERG, A., KUTTEN, S., VACCARO, U., AND YUNG, M. 1993. Perfectly-secure key distribution for dynamic conferences. In Advances in Cryptology—CRYPTO '92. Lecture Notes in Computer Science, vol. 740. Springer-Verlag, Berlin, 471–486.
CARMAN, D., KRUUS, P., AND MATT, B. J. 2000. Constraints and Approaches for Distributed Sensor Network Security. Tech. rep., NAI Labs.
CHAN, H., PERRIG, A., AND SONG, D. 2003. Random key predistribution schemes for sensor networks. In IEEE Symposium on Research in Security and Privacy. 197–213.
CROSSBOW TECHNOLOGY INC. 2004. Wireless sensor networks. http://www.xbow.com/Products/Wireless Sensor Networks.htm. Accessed in February 2004.
DENG, J., HAN, R., AND MISHRA, S. 2003. Security support for in-network processing in wireless sensor networks. In 2003 ACM Workshop on Security in Ad Hoc and Sensor Networks (SASN '03).
DU, W., DENG, J., HAN, Y. S., CHEN, S., AND VARSHNEY, P. 2004. A key management scheme for wireless sensor networks using deployment knowledge. In Proceedings of IEEE INFOCOM '04.
DU, W., DENG, J., HAN, Y. S., AND VARSHNEY, P. 2003. A pairwise key pre-distribution scheme for wireless sensor networks. In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS '03). 42–51.
ESCHENAUER, L. AND GLIGOR, V. D. 2002. A key-management scheme for distributed sensor networks. In Proceedings of the 9th ACM Conference on Computer and Communications Security. 41–47.
GAY, D., LEVIS, P., VON BEHREN, R., WELSH, M., BREWER, E., AND CULLER, D. 2003. The nesC language: A holistic approach to networked embedded systems. In Proceedings of Programming Language Design and Implementation (PLDI 2003).
GOLDREICH, O., GOLDWASSER, S., AND MICALI, S. 1986. How to construct random functions. J. ACM 33, 4 (Oct.), 792–807.
HILL, J., SZEWCZYK, R., WOO, A., HOLLAR, S., CULLER, D., AND PISTER, K. S. J. 2000. System architecture directions for networked sensors. In Architectural Support for Programming Languages and Operating Systems. 93–104.
HU, L. AND EVANS, D. 2003a. Secure aggregation for wireless networks. In Workshop on Security and Assurance in Ad Hoc Networks.
HU, L. AND EVANS, D. 2003b. Using directional antennas to prevent wormhole attacks. In Proceedings of the 11th Network and Distributed System Security Symposium. 131–141.
KARLOF, C. AND WAGNER, D. 2003. Secure routing in wireless sensor networks: Attacks and countermeasures. In Proceedings of the 1st IEEE International Workshop on Sensor Network Protocols and Applications.
KNUTH, D. 1997. The Art of Computer Programming, 3rd ed. Vol. 2: Seminumerical Algorithms. Addison-Wesley, Reading, MA.
LIU, D. AND NING, P. 2003a. Efficient distribution of key chain commitments for broadcast authentication in distributed sensor networks. In Proceedings of the 10th Annual Network and Distributed System Security Symposium. 263–276.
LIU, D. AND NING, P. 2003b. Establishing pairwise keys in distributed sensor networks. In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS '03). 52–61.
LIU, D. AND NING, P. 2003c. Location-based pairwise key establishments for static sensor networks. In 2003 ACM Workshop on Security in Ad Hoc and Sensor Networks (SASN '03). 72–82.
NEWSOME, J., SHI, E., SONG, D., AND PERRIG, A. 2004. The Sybil attack in sensor networks: Analysis and defenses. In Proceedings of the IEEE International Conference on Information Processing in Sensor Networks (IPSN 2004).
NIST. 1998. SKIPJACK and KEA algorithm specifications.
http://csrc.nist.gov/encryption/skipjack/skipjack.pdf.
PERRIG, A., CANETTI, R., SONG, D., AND TYGAR, D. 2000. Efficient authentication and signing of multicast streams over lossy channels. In Proceedings of the 2000 IEEE Symposium on Security and Privacy.
PERRIG, A., CANETTI, R., SONG, D., AND TYGAR, D. 2001a. Efficient and secure source authentication for multicast. In Proceedings of the Network and Distributed System Security Symposium.
PERRIG, A., SZEWCZYK, R., WEN, V., CULLER, D., AND TYGAR, D. 2001b. SPINS: Security protocols for sensor networks. In Proceedings of the 7th Annual International Conference on Mobile Computing and Networks.
PIETRO, R. D., MANCINI, L. V., AND MEI, A. 2003. Random key assignment for secure wireless sensor networks. In 2003 ACM Workshop on Security in Ad Hoc and Sensor Networks (SASN '03).
PRZYDATEK, B., SONG, D., AND PERRIG, A. 2003. SIA: Secure information aggregation in sensor networks. In Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys '03).
RIVEST, R. 1994. The RC5 encryption algorithm. In Proceedings of the 1st International Workshop on Fast Software Encryption. Vol. 809. 86–96.
SASTRY, N., SHANKAR, U., AND WAGNER, D. 2003. Secure verification of location claims. In ACM Workshop on Wireless Security.
STAJANO, F. AND ANDERSON, R. 1999. The resurrecting duckling: Security issues for ad hoc networks. In Proceedings of the 7th International Workshop on Security Protocols. 172–194.
STALLINGS, W. 1999. Cryptography and Network Security: Principles and Practice, 2nd ed. Prentice-Hall, Englewood Cliffs, NJ.
WONG, D. AND CHAN, A. 2001. Efficient and mutually authenticated key exchange for low power computing devices. In Proceedings of ASIACRYPT 2001.
WOOD, A. D. AND STANKOVIC, J. A. 2002. Denial of service in sensor networks. IEEE Comput. 35, 10, 54–62.
ZHU, S., SETIA, S., AND JAJODIA, S. 2003. LEAP: Efficient security mechanisms for large-scale distributed sensor networks. In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS '03). 62–72.
ZHU, S., SETIA, S., JAJODIA, S., AND NING, P. 2004. An interleaved hop-by-hop authentication scheme for filtering false data in sensor networks. In Proceedings of the 2004 IEEE Symposium on Security and Privacy.
Received April 2004; revised September 2004; accepted September 2004
Incentive-Based Modeling and Inference of Attacker Intent, Objectives, and Strategies
PENG LIU and WANYU ZANG
Pennsylvania State University
and
MENG YU
Monmouth University
Although the ability to model and infer attacker intent, objectives, and strategies (AIOS) may dramatically advance the literature of risk assessment, harm prediction, and predictive or proactive cyber defense, existing AIOS inference techniques are ad hoc and system or application specific. In this paper, we present a general incentive-based method to model AIOS and a game-theoretic approach to inferring AIOS. On one hand, we found that the concept of incentives can unify a large variety of attacker intents, and the concept of utilities can integrate incentives and costs in such a way that attacker objectives can be practically modeled. On the other hand, we developed a game-theoretic AIOS formalization which can capture the inherent interdependency between AIOS and defender objectives and strategies in such a way that AIOS can be automatically inferred. Finally, we use a specific case study to show how attack strategies can be inferred in real-world attack-defense scenarios.
Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: Security and Protection
General Terms: Security, Theory
Additional Key Words and Phrases: Attacker intent and strategy modeling, attack strategy inference, game theory
1. INTRODUCTION
The ability to model and infer attacker intent, objectives, and strategies (AIOS) may dramatically advance the state of the art of computer security for several reasons. First, for many "very difficult to prevent" attacks such as DDoS, given the specification of a system protected by a set of specific security mechanisms,

This work was supported by DARPA and AFRL, AFMC, USAF, under award number F20602-02-1-0216, and by a Department of Energy Early Career PI Award.
Authors' addresses: P. Liu and W. Zang, School of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802; email: [email protected]; M. Yu, Department of Computer Science, Monmouth University, West Long Branch, NJ 07764.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 1515 Broadway, New York, NY 10036 USA, fax: +1 (212) 869-0481, or [email protected].
© 2005 ACM 1094-9224/05/0200-0078 $5.00
ACM Transactions on Information and System Security, Vol. 8, No. 1, February 2005, Pages 78–118.
this ability could tell us which kinds of strategies are more likely to be taken by the attacker than others, even before such an attack happens. Such AIOS inferences may lead to more precise risk assessment and harm prediction. Second, AIOS modeling and inference could be even more beneficial at run time. A big security challenge in countering a multiphase, well-planned, carefully hidden attack from either malicious insiders or outside attackers is how to make correct proactive (especially predictive) real-time defense decisions during an early stage of the attack, in such a way that much less harm is caused without consuming a lot of resources. Although many proactive defense techniques have been developed, such as sandboxing [Malkhi and Reiter 2000] and isolation [Liu et al. 2000], making the right proactive defense decisions in real time is very difficult, primarily because intrusion detection during the early stage of an attack can produce many false alarms, which could make these proactive defense actions very expensive in terms of both resources and denial of service. Although alert correlation techniques [Cuppens and Miege 2002; Ning et al. 2002] may reduce the number of false alarms by correlating a set of alerts into an attack scenario (i.e., the steps involved in an attack), and may even tell which kinds of attack actions may follow a given action [Debar and Wespi 2001], they are limited in supporting proactive intrusion response in two respects. (1) When many types of (subsequences of) legitimate actions may follow a given suspicious action, alert correlation can do nothing except wait until a more complete attack scenario emerges; however, intrusion response at that moment could be "too late." (2) When many types of attack actions may follow a given (preparation) action, alert correlation cannot tell which actions are more likely to be taken by the attacker next. As a result, since taking proactive defense actions for each of the possible attack actions can be too expensive, the response may have to wait until it is clear which attack actions will happen next, perhaps during a later stage of the attack; and late intrusion response usually means more harm. By contrast, with the ability to model and infer AIOS, given any suspicious action we can predict the harm that could be caused, and we can then make better and affordable proactive intrusion response decisions based on the corresponding risk, the corresponding cost (e.g., due to the possibility of false alarms), and the attack action inferences. Moreover, the intrusion response time is substantially shortened.
However, with their focus on attack characteristics [Landwehr et al. 1994] and attack effects [Browne et al. 2001; Zou et al. 2002], existing AIOS inference techniques are ad hoc and system or application specific [Gordon and Loeb 2001; Syverson 1997]. To systematically model and infer AIOS, we need to distinguish AIOS from both attack actions and attack effects. Since the same attack action can be issued by two attackers with very different intents and objectives, AIOS cannot be directly inferred from the characteristics of attacks.
Although the attacker achieves his or her intents and objectives through attacks and their effects, the mapping from attack actions and/or effects to attacker intents and/or objectives is usually not one-to-one but one-to-many; more interestingly, the (average) cardinality of this mapping can be much larger than that of the mapping from attacker intents and/or objectives to attack actions and/or
effects. The asymmetric nature of this mapping indicates that, in many cases, using AIOS models to predict attack actions can be more precise than making predictions from the set of actions already taken by the attacker, based either on their effects or on the causal relationships between them and other attack actions.1 As a result, although a variety of attack taxonomies and attribute databases have been developed, our ability to model and infer AIOS, to predict attacks, and to perform proactive intrusion response is still very limited. Nevertheless, a good understanding of attacks is the foundation of practical AIOS modeling and inference.
In this paper, we present a systematic incentive-based method to model AIOS and a game-theoretic approach to inferring AIOS. On one hand, we found that the concept of incentives can unify a large variety of attacker intents, and the concept of utilities can integrate incentives and costs in such a way that attacker objectives can be practically modeled. On the other hand, we developed a game-theoretic AIOS formalization which can capture the inherent interdependency between AIOS and defender objectives and strategies in such a way that AIOS can be automatically inferred. Finally, we use a specific case study to show how attack strategies can be inferred in real-world attack-defense scenarios. The proposed framework is, in some sense, an economics-based framework, since it is based on economic incentives, utilities, and payoffs.
The rest of the paper is organized as follows. In Section 2, we discuss related work. Section 3 presents a conceptual, incentive-based framework for AIOS modeling. In Section 4, we present a game-theoretic formalization of this framework. Section 5 addresses how to infer AIOS. In Section 6, we use a specific case study to show how attack strategies can be inferred in real-world attack-defense scenarios. In Section 7, we mention several future research issues.
2. RELATED WORK
The use of game theory in modeling attackers and defenders has been addressed in several other research efforts. In Syverson [1997], Syverson talks about "good" nodes fighting "evil" nodes in a network and suggests using stochastic games for reasoning and analysis. In Lye and Wing [2002], Lye and Wing precisely formalize this idea using a general-sum stochastic game model and give a detailed, concrete example in which the attacker attacks a simple enterprise network that provides some Internet services such as web and FTP; a set of specific states for this example are identified, state-transition probabilities are assumed, and the Nash equilibrium or best-response strategies for the players are computed. In Browne [2000], Browne describes how static games can be used to analyze attacks involving complicated and heterogeneous military networks. In his example, a defense team has to defend a network of three hosts against an attacking team's worms. The defense team can choose either to run a worm
1 To illustrate, consider a large space of strategies the attacker may take according to his or her intent and objectives, where each strategy is simply a sequence of actions. An attack action may belong to many strategies, and the consequences of the action could satisfy the preconditions of many other actions, but each strategy usually contains only a small number of actions.
detector or not. Depending on the combined attack and defense actions, each outcome has different costs. In Burke [1999], Burke studies the use of repeated games with incomplete information to model attackers and defenders in information warfare. In Hespanha and Bohacek [2001], Hespanha and Bohacek discuss zero-sum routing games in which an adversary (or attacker) tries to intercept data packets in a computer network; the designer of the network has to find routing policies that avoid links that are under the attacker's surveillance. In Xu and Lee [2003], Xu and Lee use a game-theoretic framework to analyze the performance of their proposed DDoS defense system and to guide its design and performance tuning.
Our work differs from the above game-theoretic attacker modeling works in several aspects. First, these works focus on specific attack-defense scenarios, while our work focuses on general AIOS modeling. Second, these works focus on specific types of game models, for example, static games, repeated games, or stochastic games, while our work focuses on the fundamental characteristics of AIOS, and game models are only one possible formalization of our AIOS framework. In addition, our AIOS framework shows the inherent relationship between AIOS and the different types of game models, and identifies the conditions under which a specific type of game model is feasible and desirable. Third, our work systematically identifies the properties of a good AIOS formalization. These properties not only can be used to evaluate the merits and limitations of game-theoretic AIOS models, but can also motivate new AIOS models that improve on the above game-theory models or even go beyond standard game-theoretic models.
In Gordon and Loeb [2001], information security is used as a response to game-theoretic competitor analysis systems (CAS) for the purpose of protecting a firm's valuable business data from its competitors. Although understanding and predicting the behavior of competitors are key aspects of competitor analysis, the behaviors CAS want to predict are not cyber attacks. Moreover, security is what our game-theoretic system wants to model, while security is used in Gordon and Loeb [2001] to protect a game-theoretic system.
The computational complexity of game-theoretic analysis has been investigated in several studies. For example, Conitzer and Sandholm [2002] show that both determining whether a pure-strategy Bayes-Nash equilibrium exists and determining whether a pure-strategy Nash equilibrium exists in a stochastic (Markov) game are NP-hard. Moreover, Koller and Milch [2001] show that some specific knowledge representations, in certain settings, can dramatically speed up equilibrium finding.
The marriage of economics and information security has attracted a lot of interest recently (much related work can be found at the economics and security resource page maintained by Ross Anderson at http://www.cl.cam.ac.uk/~rja14/econsec.html). However, this body of work focuses on the economics perspective of security (e.g., security markets, security insurance), while our approach is to apply economics concepts to model and infer AIOS.
In recent years, it has been found that economic mechanism design theory [Clarke 1971; Groves 1973; Vickrey 1961] can be very valuable in solving a variety of Internet computing problems such as routing, packet scheduling, and web
caching [Feigenbaum et al. 2002; Nisan and Ronen 2001; Wellman and Walsh 2001]. Although the AIOS are incentive based when market-based mechanisms are used to defend against attackers [Wang and Reiter 2003], which is consistent with our framework, market-based computing does not imply an in-depth AIOS model. Finally, it should be noticed that AIOS modeling and inference are very different from intrusion detection [Lunt 1993; McHugh 2001; Mukherjee et al. 1994]. Intrusion detection is based on the characteristics of attacks, while AIOS modeling is based on the characteristics of attackers. Intrusion detection focuses on the attacks that have already happened, while AIOS inference focuses on the attacks that may happen in the future.

3. AN INCENTIVE-BASED FRAMEWORK FOR AIOS MODELING

In this section, we present an incentive-based conceptual model of attacker intent, objectives, and strategies. Our model is quite abstract. To make our presentation more tangible, we will first present the following example, which will be used throughout the paper to illustrate our concepts.

Example 1. In recent years, Internet distributed denial-of-service (DDoS) attacks have increased in frequency, severity, and sophistication and have become a major security threat. When a DDoS attack is launched, a large number of hosts (called zombies) "controlled" by the attacker flood a high volume of packets toward the target (called the victim) to downgrade its service performance significantly or make it unable to deliver any service. In this example, we model the intent and objectives and infer the strategies of the attackers that enforce brute-force DDoS attacks. (Although some DDoS attacks with clear signatures, such as SYN flooding, can be effectively countered, most DDoS attacks without clear signatures, such as brute-force DDoS attacks, are very difficult to defend against, since it is not clear which packets are DDoS packets and which are not.) An example scenario is shown in Figure 1, where many zombies (i.e., a subset of the source hosts {S0, . . . , S64}) are flooding a couple of web sites (i.e., the victims) using normal HTTP requests. Here, R_x.y denotes a router; the bandwidth of each type of link is marked; and the web sites may stay on different subnets.

Fig. 1. Network topology.
Although our modeling and inference framework can handle almost every DDoS defense mechanism, to make this example more tangible we select pushback [Ioannidis and Bellovin 2002], a popular technique, as the security mechanism. Pushback uses aggregates, that is, collections of packets from one or more flows that have some properties in common, to identify and rate-limit the packets that are most likely to cause congestion or DoS. Pushback is a coordinated defense mechanism that typically involves multiple routers. To illustrate, consider Figure 1 again: when router R1.0 detects congestion caused by a set of aggregates, R1.0 will not only rate-limit these aggregates, but also request adjacent upstream routers (e.g., R2.1) to rate-limit the corresponding aggregates via pushback messages. The effectiveness of pushback can be largely captured by four bandwidth parameters associated with the incoming link to the victims (i.e., the link that connects R1.0 and R0.0): (a) B_N, the total bandwidth of this link; (b) B_ao, the (amount of) bandwidth occupied by the DoS packets; (c) B_lo, the bandwidth occupied by the legitimate packets; (d) B_lw, the bandwidth that the legitimate users would occupy if there were no attacks. For example, pushback is effective if, after being enforced, B_ao becomes smaller and B_lo becomes larger. We build our AIOS models on top of the relationships between the attacker and a computer system (i.e., the defender). In our model, the computer system can be any kind (e.g., a network system, a distributed system, a database system). We call it the system for short. For example, in Example 1 the system consists of every router on a path from a zombie to a victim. The attacker issues attacks to the system. Each attack is a sequence of attack actions associated with the system. For example, an action can be the sending of a message, the submission of a transaction, the execution of a piece of code, and so on. An attack will cause some effects on the system, that is, it transforms the system from one state to another. For example, in Example 1 the main attack effects are that many legitimate packets cannot reach the victims. Part of the system is a set of specific security mechanisms. A mechanism can be a piece of software or hardware (e.g., a firewall, an access controller, an IDS). A mechanism usually involves a sequence of defense actions associated with the system when it is activated. For example, in Example 1 a router sending out a pushback message is a defense action, and this action can trigger the receiving router(s) to take further defense actions. A security mechanism is activated when an event arrives that causes a set of specific conditions to be satisfied. Many of these conditions are associated with the effects of an attack action in reactive defense, or the prediction of an incoming attack action in proactive defense. For example, in Example 1 a packet arriving at a router is an event. When there is no congestion at the router, this event will not activate any security mechanism. However, when this event leads to "the detection of a congestion" (i.e., the condition), pushback will be activated. And it is clear that whether this condition can be satisfied depends upon the accumulated effects of the previous DoS packets arriving at the router. Finally, a defense posture of the system is defined by the set of security mechanisms and the ways they are activated.
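Returning to the four bandwidth parameters (a)–(d) above, the following minimal Python sketch (ours, not from the paper; the sample numbers are invented) checks whether a pushback deployment improved the bandwidth mix at the victims' incoming link:

```python
def pushback_effective(before, after):
    """Compare the victim link's bandwidth mix before and after pushback.

    `before` and `after` are dicts with keys:
      B_N  -- total bandwidth of the incoming link
      B_ao -- bandwidth occupied by DoS packets
      B_lo -- bandwidth occupied by legitimate packets
    Pushback is judged effective if it shrinks the DoS share and
    grows the legitimate share.
    """
    return after["B_ao"] < before["B_ao"] and after["B_lo"] > before["B_lo"]

# Invented sample values (Mbps) for a 1000-Mbps victim link.
before = {"B_N": 1000, "B_ao": 900, "B_lo": 50}
after = {"B_N": 1000, "B_ao": 400, "B_lo": 300}
print(pushback_effective(before, after))  # True
```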
For example, in Example 1, pushback may be configured
to stay at various defense postures based on such parameters as congestion thresholds and target drop rate, which we will explain shortly in Section 3.3. The attacker–system relation has several unique characteristics (or properties) that are important in illustrating the principles of our attack strategy inference framework. These properties are as follows.

— Intentional Attack Property. Attacks are typically not random. They are planned by the attacker based on some intent and objectives.
— Strategy-Interdependency Property. Whether an attack can succeed depends on how the system is protected. Whether a security mechanism is effective depends on how the system is attacked. In other words, the capacity of either an attack or a defense posture should be measured in a relative way. We will define the notion of strategy shortly, and we will use concrete attack and defense strategies derived from Example 1 to illustrate this property in Section 3.3.
— Uncertainty Property. The attacker usually has incomplete information or knowledge about the system, and vice versa. For example, in Example 1 the attacker usually has uncertainty about how pushback is configured when he or she enforces a DDoS attack.

3.1 Incentive-Based Attacker Intent Modeling

Different attackers usually have different intents even when they issue the same attack. For example, some attackers attack the system to show off their hacking capacity, some hackers attack the system to remind the administrator of a security flaw, cyber terrorists attack our cyberspace to create damage, and business competitors may attack each other's information systems to increase their market shares, just to name a few. It is clear that investigating the characteristics of each kind of intent involves a lot of effort and complexity, and such complexity actually prevents us from building a general, robust connection between attacker intents and attack actions. This connection is necessary for almost every kind of attacker behavior inference. We therefore focus on building general yet simple intent models. In particular, we believe that the concept of economic "incentives" can be used to model attacker intent in a general way. In our model, the attacker's intent is simply to maximize his or her incentives. In other words, the attacker is motivated by the possibility of gaining some incentives. Most, if not all, kinds of intents can be modeled as incentives, such as the amount of profit earned, the amount of terror caused, or the amount of satisfaction from a nice show-off. For example, in Example 1 the incentives for the attacker can be the amount of DoS suffered by the legitimate users. For another example, the incentives for an attacker that enforces a worm attack can be the amount of network resources consumed by the worm's scanning packets plus the amount of DoS caused on certain types of services. We may use economics theory to classify incentives into such categories as money, emotional reward, and fame. To infer attacker intents, we need to be able to compare one incentive with another. Incentives can be compared with each other either qualitatively or
quantitatively. Incentives can be quantified in several ways. For example, profits can be quantified in such monetary units as dollars. For another example, in Example 1 the attacker's incentives can be quantified by two metrics: (a) B_ao/B_N, which indicates the absolute impact of the DDoS attack; and (b) B_lo/B_lw, which indicates the relative availability impact of the attack. Accordingly, the attacker's intent is to maximize B_ao/B_N but minimize B_lo/B_lw. One critical issue in measuring and comparing incentives is that under different value systems, different comparison results may be obtained. For example, different types of people value such incentives as time, fame, and money differently. As a result, very misleading attacker strategy inferences could be produced if we used our own value system to evaluate the attacker's incentives. After an attack is enforced, the incentives (e.g., money, fame) earned by the attacker depend on the effects of the attack, which are typically captured by the degradation of a specific set of security measurements that the system cares about. Each such measurement is associated with a specific security metric. Some widely used categories of security metrics include, but are not limited to, confidentiality, integrity, availability (against denial-of-service), nonrepudiation, and authentication. For example, in Example 1 the major security metrics of the system are (a) B_lo, which indicates the absolute availability provided by the system; and (b) B_lo/B_lw, which indicates the relative availability provided by the system. In our model, we call the set of security metrics that a system wants to protect the metric vector of the system. (Note that different systems may have different metric vectors.) For example, the metric vector for the system in Example 1 can be simply defined as ⟨B_lo, B_lo/B_lw⟩. At time t, the measurements associated with the system's metric vector are called the security vector of the system at time t, denoted by V^s_t. As a result, assuming an attack starts at time t1 and ends at t2, the incentives earned by the attacker (via the attack) may be measured by degradation(V^s_t1, V^s_t2), which basically computes the distance between the two security vectors. For example, in Example 1, assume the security vector is V^s_t1 = ⟨1000 Mbps, 100%⟩ before the attack and V^s_t2 = ⟨50 Mbps, 5%⟩ after the attack; then degradation(V^s_t1, V^s_t2) = ⟨−950 Mbps, −95%⟩. The above discussion indicates the following property of AIOS inference:

— Attack Effect Property. Effects of attacks usually yield more insights about attacker intent and objectives than attack actions. For example, in Example 1, a DoS packet indicates almost nothing about the attacker's intent, which can only be seen after some DoS effects are caused.

3.2 Incentive-Based Attacker Objective Modeling

In the real world, many attackers face a set of constraints when issuing an attack; for example, an attacker may have limited resources, and a malicious insider may worry about the risk of being arrested and put into jail. However, our intent model assumes no constraints. To model attacker motivations in a more realistic way, we incorporate constraints in our attack objective model. In particular, we classify constraints into two categories: cost constraints and noncost constraints. (a) Cost constraints are constraints on things that the attacker can "buy" or "trade," such as hardware, software, Internet connection, and time. Such things
are typically used to measure the cost of an attack. In addition, risk is typically a cost constraint. (b) Noncost constraints are constraints on things that the attacker cannot buy, such as religion-based constraints and top-secret attacking tools that the attacker may never be able to "buy." The cost of an attack depends not only on the resources needed to enforce the attack, but also on the risk of the attacker being traced back, arrested, and punished. Based on the relationship between incentives and costs, we classify attackers into two categories: (a) rational attackers have concerns about the costs (and risks) associated with their attacks; that is, when the same incentive can be obtained by two attacks with different costs, rational attackers will pick the one with the lower cost. (b) Irrational attackers have no concerns about the costs associated with their attacks. They only want to maximize the incentives. Given a set of (cost) constraints, inferring the attack actions of an irrational attacker is not so difficult a task, since we need only to find out "what are the most rewarding attack actions in the eyes of the attacker without violating the constraints?" By contrast, we found that inferring the attack actions of a rational attacker is more challenging. In this paper, we will focus on how to model and infer the IOS of rational attackers. In our model, an attacker's objective is to maximize his or her utilities through an attack without violating the set of cost and noncost constraints associated with the attacker. The utilities earned by an attacker indicate a distance between the incentives earned by the attacker and the cost of the attack. The distance can be defined in several ways, for example, utilities = incentives − cost, or utilities = incentives/cost. Note that the cost of an attack can be measured by a set of cost values that captures both attacking resources and risk. To illustrate, let us revisit Example 1. The attacker's total incentives may be measured by α B_ao/B_N + (1 − α)(1 − B_lo/B_lw), where α determines how the attacker weighs the two aspects of the impact of the DDoS attack. The attack's costs in this example are not much, though the attacker needs a computer and Internet access to "prepare" the zombies and the needed controls. The cost will become larger when the risk of being traced back is included. Let us assume the cost is a constant number η. Then the attacker's utilities can be measured by α B_ao/B_N + (1 − α)(1 − B_lo/B_lw) − η, and the attacker's objective can be quantified as Max α B_ao/B_N + (1 − α)(1 − B_lo/B_lw).
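To make this objective function concrete, here is a small Python sketch (ours, not the authors'; the sample numbers are invented) computing a DDoS attacker's utility under the "utilities = incentives − cost" definition above:

```python
def attacker_incentives(B_ao, B_N, B_lo, B_lw, alpha=0.5):
    """Incentives = alpha * (absolute DDoS impact)
    + (1 - alpha) * (relative availability damage)."""
    return alpha * (B_ao / B_N) + (1 - alpha) * (1 - B_lo / B_lw)

def attacker_utility(B_ao, B_N, B_lo, B_lw, alpha=0.5, eta=0.05):
    """Utility = incentives - cost, with a constant attack cost eta."""
    return attacker_incentives(B_ao, B_N, B_lo, B_lw, alpha) - eta

# Invented example: DoS traffic fills 900 of 1000 Mbps; legitimate users
# get 50 Mbps where they would normally use 500 Mbps.
print(attacker_utility(B_ao=900, B_N=1000, B_lo=50, B_lw=500))  # 0.85
```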
3.3 Incentive-Based Attacker Strategy Modeling

Strategies are taken to achieve objectives. The strategy-interdependency property indicates that part of a good attacker strategy model should be the defense strategy model, because otherwise we would build our AIOS models on the assumption that the system never changes its defense posture, which is too restrictive. Note that whenever the system's defense posture is changed, the defense strategy is changed. In our model, attack strategies are defined based on the "battles" between the attacker and the system. Each attack triggers a battle, which usually involves multiple phases. (For example, many worm-based attacks involve such phases as reconnaissance, probe and attack, toehold, advancement, stealth (spreading quietly), and takeover.) In each phase, the attacker may take some attack actions and the system may take some defense actions (automatically). How such attack actions are taken in each phase of the battle defines the attacker's strategy for the battle. How such defense actions are taken defines the system's defense strategy.

Example 2. Mstream DDoS attacks, a common type of DDoS attack, have several phases, which are shown in Figure 2.

Fig. 2. Phases in Mstream DDoS attacks.

In phase 1, the attacker uses Sadmind Ping (attacking tools or commands) to find out the vulnerable Sadmind services. In phase 2, the attacker uses Sadmind Overflow to break into the hosts that run these vulnerable Sadmind services. In phase 3, the attacker uses rsh or rexecute to install the mstream daemon and master programs and the needed backdoors. In phase 4, the attacker uses Mstream DoS to send commands to activate the master programs (via the backdoors). In phase 5, the DDoS master programs communicate with the daemon or zombie programs to ask them to send out a large number of DoS packets. Let us assume that the attack scenario is shown in Figure 1 and that all of the 64 source hosts run vulnerable Sadmind services. Then two simple attack strategies can be specified as follows: (A1) Sadmind Ping and Sadmind Overflow the 64 hosts in phase 1 and phase 2, respectively; use rsh in phase 3 and install a master program on S1 but a daemon program on each of the 64 hosts; use Mstream DoS in phase 4; ask only 10 zombies to send CBR (constant bit rate) DoS traffic to a single web site (the victim) at the speed of 201.3 kbps per zombie. (A2) The same as A1 except for using rexecute in phase 3 and asking 30 zombies to send CBR DoS traffic to the victim at the speed of 67.1 kbps. Similarly, two simple defense strategies can be specified as follows: (D1) Take no defense action in the first four phases. Enforce pushback (shown in Example 1) in phase 5 and set the target drop rate for each router (i.e., the upper-bound drop rate of the router's output queue) to 7%, while keeping all the other configuration parameters at default values. (D2) The same as D1 except that the target drop rate is set to 3%. Note that an attack strategy is not simply a sequence of attack actions; it may also include such dynamic, strategic decision-making rules as "what action should be taken under what state or condition." For example, in Example 2 an
attack strategy may include the following rule: when rsh is disabled by the system, use rexecute. Hence, during two different battles with the system, the same attack strategy may result in two different sequences of attack actions. When a battle has multiple phases, we could have two possible types of attack or defense strategies: (1) static strategies take exactly the same set of actions in every phase; (2) dynamic strategies adjust actions when a new phase arrives, based on what has happened. For example, in Example 2 both A1 and A2 are static attack strategies. In our model, each defense posture defines a defense strategy, since it specifies how a set of security mechanisms behave in the face of an attack. Some security mechanisms are adaptive, but adaptations do not indicate a different defense strategy, because the adaptation rules are not changed. The way we define defense postures is general enough to support a variety of defense strategies. The definition allows us to (dynamically) add, activate, deactivate, or remove a security mechanism. It also allows us to reconfigure a security mechanism by "replacing" an old mechanism with the reconfigured mechanism. In our model, an attacker's strategy space includes every possible attack strategy of the attacker under the set of constraints associated with the attacker. To infer an attacker's strategy space, a good understanding of the system's vulnerabilities and the attack/threat taxonomy is necessary. Moreover, constraints and costs help infer the boundary of a strategy space, since they imply which kinds of attacks will not be enforced. Similarly, the system's strategy space is determined by the set of defense postures of the system. Due to the constraints associated with the system and the cost of security (security mechanisms not only consume resources but also can have a negative impact on the system's functionality and performance), the system's strategy space is usually bounded. A key issue in modeling attacker strategies is how to compare two attack strategies and tell which one is better (for the attacker). Based on the purpose of attack strategies, the answer depends on the degree to which the attacker's objectives can be achieved with a strategy. Based on the definition of attacker objectives, the answer then depends on determining which strategy can yield more utilities to the attacker. Based on the definition of utilities, if we assume that the costs of the two strategies are the same, the answer then depends on determining which strategy can yield more incentives to the attacker. Since attacker incentives are determined by degradation(V^s_t1, V^s_t2), the answer then depends on determining which strategy can cause more degradation to the system's security vector. However, the answer to this question is in general determined by the defense strategy that the system will take, since different battles may lead to different amounts of security degradation. To illustrate, let us revisit Example 2. Based on our simulations (presented in Section 6), we found that when D1 is taken by the system, A2 is better than A1 (i.e., it causes more security degradation); however, when D2 is taken by the system, A1 is better than A2. More interestingly, we found that when A1 is taken by the attacker, D2 is better than D1; however, when A2 is taken, D1 is better than D2.
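This interdependency can be made concrete as a tiny bimatrix game. The sketch below is ours; the payoff numbers are invented placeholders, and only their orderings mirror the simulation findings quoted above. Note that the game need not be zero-sum: the system's payoff also reflects the cost the defense imposes on legitimate traffic.

```python
# Invented placeholder payoffs; only the orderings mirror the findings:
# under D1, A2 beats A1; under D2, A1 beats A2; against A1, D2 is the
# better defense; against A2, D1 is.
attacker_payoff = {  # security degradation caused (higher = better for attacker)
    ("A1", "D1"): 0.5, ("A2", "D1"): 0.6,
    ("A1", "D2"): 0.7, ("A2", "D2"): 0.4,
}
system_payoff = {    # system utility (higher = better for defender)
    ("A1", "D1"): 0.3, ("A2", "D1"): 0.6,
    ("A1", "D2"): 0.5, ("A2", "D2"): 0.2,
}

def best_attack(defense):
    return max(("A1", "A2"), key=lambda a: attacker_payoff[(a, defense)])

def best_defense(attack):
    return max(("D1", "D2"), key=lambda d: system_payoff[(attack, d)])

print(best_attack("D1"), best_attack("D2"))    # A2 A1
print(best_defense("A1"), best_defense("A2"))  # D2 D1
```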
Fig. 3. A game-theoretic formalization.
The above discussion not only confirms the strategy-interdependency property, but also implies the following property of attack strategy inference:

— Dual Property. (a) Given two attack (defense) strategies, determining which one is the better attack (defense) strategy depends on the defense (attack) strategies the system (attacker) is going to take. (b) Each type of information useful for the attacker (system) in choosing a good attack (defense) strategy will also be useful for the system (attacker) in choosing a good defense (attack) strategy.

4. A GAME-THEORETIC FORMALIZATION

Our goal is to formalize the AIOS models developed in the previous section in such a way that good inferences of AIOS can be automatically computed. For this purpose, we first propose a game-theoretic AIOS formalization; then we show why it is a good formalization. Our game-theoretic AIOS formalization is shown in Figure 3, where

— Instead of neglecting the attacker and viewing attacks as part of the system's environment, we model the attacker as a "peer" of the system, namely the attacking system.
— The environment only includes the set of good accesses by a legitimate user.
— We further split the system into two parts: the service part includes all and only the components that provide computing services to users, and the protection part includes the set of security mechanisms. For example, in Example 1 the service part mainly includes the hardware and software components (within the routers) that route packets; the pushback components belong to the protection part.
— Instead of passively monitoring, detecting, and reacting to attacks, the relation between the system and the attacker is modeled as a game (or battle) across the time dimension in which the system may actively take defense actions.
— The game is a 6-tuple, consisting of:
— The two players, namely the (secure) system and the attacking system. Note that the "real" player for the system is the set of security mechanisms.
— The game type (e.g., a Bayesian game or a stochastic game) and the set of type-specific parameters of the game.
— The two strategy spaces of the two players, defined in the same way as in Section 3. The attacker's strategy space is denoted as S^a = {s^a_1, . . . , s^a_m}, where s^a_i is an attack strategy. The system's strategy space is denoted as S^d = {s^d_1, . . . , s^d_m}, where s^d_i is a defense strategy. Note that the constraints associated with the attacker and the cost of each attack imply the boundary of S^a. A more detailed formalization of attack strategies is described in Section 5.
— A set of game plays. A play is a function pl_i : S^a × S^d → O, where O is the set of outcomes, which indicate the effects of an attack. Each play involves one battle due to an attack. Each play may have several phases. We assume each player uses a game engine to determine which strategy should be taken in a specific play. For example, in Example 2 a game play between the DDoS attacker and the network may involve attack strategy A1 and defense strategy D2.
— The two utility (or payoff) functions, which calculate the utilities earned by the two players out of each play. The attacker's utility function is u^a : S^a × S^d → R, where R is the set of utility measurements. Given a play (s^a_i, s^d_i), the attack cost is an attribute of s^a_i, denoted as cost(s^a_i). The attacker's incentives are determined by degradation(V^s_t1, V^s_t2), where t1 is the time when the play starts, t2 is the time when the play ends, and security vector V^s_t2 is dependent on the outcome of the play, namely pl_i(s^a_i, s^d_i). And u^a(s^a_i, s^d_i) is a distance between cost(s^a_i) and the attacker's incentives. By contrast, the system's utility function is u^d : S^a × S^d → R. Given a play (s^a_i, s^d_i), the system's cost is cost(s^d_i). The system's incentives are determined by improvement(V^s_∅, V^s_{s^d_i}), where V^s_∅ is the security vector resulting after the attack when no security mechanisms are deployed, and V^s_{s^d_i} is the vector resulting after the attack when strategy s^d_i is taken. And u^d(s^a_i, s^d_i) is again a distance between the system's incentives and cost.
— A knowledge base maintained by each player. The attacker's (system's) knowledge base maintains the attacker's (system's) knowledge about the system's (attacker's) strategy space (including the system's (attacker's) costs and constraints), the system's (attacker's) value system, and the system's metric and security vectors. Note that the attacker's (system's) knowledge may not always be true; it in fact captures the attacker's (system's) beliefs.
— Note that for clarity, only the game-relevant components are shown in Figure 3. Note also that the game model can be extended to cover multiple attackers who are either cooperating with other attackers (i.e., cooperative) or not (i.e., noncooperative). This extension is out of the scope of this paper.

Discussion. We believe a game-theoretic formalization can be very valuable for AIOS modeling and inference because (1) such a formalization shifts the
focus of traditional AIOS modeling from attacks to attackers; (2) such a formalization captures every key property of the attacker–system relation, such as the Intentional Attack Property and the Strategy-Interdependency Property; (3) such a formalization captures every key element of our incentive-based AIOS modeling framework, such as incentives, utilities, costs, risks, constraints, strategies, security mechanisms, security metrics, defense postures, vulnerabilities, attacks, threats, knowledge, and uncertainty; (4) such a formalization can be used to infer AIOS. The rationale is that (a) noncooperative game theory is the primary tool for handling strategic interdependence [Mas-Colell et al. 1995], which is the fundamental property of the attacker–system relation; (b) game-theoretic models have been successfully used to predict rational behaviors in many applications such as auctions, and their rationality notion (that each player plays an expected-utility-maximizing best response to every other player) is consistent with the goals of many, if not most, attackers and systems; (c) Nash equilibria of attacker–system games can lead to good AIOS inferences, since Nash equilibria indicate the "best" rational behaviors of a player, and when the system always takes a Nash equilibrium defense strategy, only a Nash equilibrium attack strategy can maximize the attacker's utilities.

5. GAME-THEORETIC AIOS INFERENCE

As we mentioned in previous sections, the ability to infer attacker intent, objectives, and strategies (in information warfare) may dramatically advance the literature on risk assessment, harm prediction, and proactive cyber defense. In the previous section, we showed how to model AIOS via a game-theoretic formalization. In this section, we address how to exploit such formalizations to infer AIOS. In particular, we tackle two types of AIOS inference problems, which are illustrated below.

Type A—Infer Attack Strategies. Given a specific model of attacker intent and objectives, infer which attack strategies are more likely to be taken by the attacker. The previous presentation implies the following pipeline for inferring attack strategies:

(1) Make assumptions about the system and the (types of) attacks that concern the system. Note that practical attack strategy inferences may only be computable within some domain or scope (due to the complexity).
(2) Model the attacker intent, objectives, and strategies (conceptually). Specify the attacker's utility function and strategy space. Estimate the attacker's knowledge base.
(3) Specify the system's metric vector and security vector. Specify the system's utility function and strategy space. Build the system's knowledge base.
(4) Determine the game type of the game-theoretic attack strategy inference model that will be developed, then develop the model accordingly.
(5) Compute the set of Nash equilibrium strategies of the attack strategy inference game model developed in step 4. A key task is to handle the computational complexity. If the complexity is too high, we need to make (inference)
precision–performance trade-offs properly using some (semantics-based) approximate algorithms.
(6) Validate the inferences generated in step 5. The relevant tasks include, but are not limited to, accuracy analysis (i.e., how accurate the inferences are) and sensitivity analysis (i.e., how sensitive the inferences are to specific model parameters). The relevant validation techniques include, but are not limited to, (a) investigating the degree to which the inferences match real-world intrusions; (b) extracting a set of high-level properties or features from the set of inferences and asking security experts to evaluate whether the set of properties matches their experiences, beliefs, or intuitions.
(7) If the validation results are not satisfactory, go back to step 1 to rebuild or improve the inference model.

Type B—Infer Attacker Intent and Objectives. Based on the attack actions observed, infer the intent and objectives of the attacker in enforcing the corresponding attack. To a large degree, the pipeline for inferring attacker intent and objectives is the reverse of that for inferring attack strategies. In particular, the pipeline has two phases, the learning phase and the detection phase, which are as follows.

— In the learning phase, do the same thing in step 1 as in the previous pipeline. In step 2, identify and classify the possible models of attacker intent and objectives into a set of representative attacker intent and objectives models. Then model the attack strategies for each of the representative models. In steps 3–5, do the same thing as in the previous pipeline. As a result, a (separate) set of attack strategy inferences will be generated for each of the representative AIOS models built in step 2.
— In the detection phase, once an attack strategy is observed, match the observed attack strategy against the inferred attack strategies generated in the learning phase. Once an inferred attack strategy is matched, the corresponding attacker intent and objectives model(s) will be the inference(s) of the real attacker's intent and objectives. (Note that sometimes an observed attack strategy may "match" more than one attacker intent and objectives model.) Nevertheless, when none of the inferred attack strategies can be matched, go back to the learning phase and do more learning.

In summary, both type A and type B inference problems need a game-theoretic inference model, and we call such inference models AIOS inference models in general. As we will show shortly in Section 6, given a specific attack–defense scenario, once we have a good understanding of the attack, the defense, the attacker, and the system, most steps of the two pipelines are fairly easy to follow, but the step of determining the game type of the AIOS inference model is not trivial and requires substantial research. Therefore, before we show how an AIOS inference pipeline can be implemented in a real-world attack–defense scenario in Section 6, we first show how to choose the right game type for a real-world AIOS inference task.
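As a toy illustration of step 5 of the Type A pipeline, here is a brute-force pure-strategy Nash equilibrium finder for a two-player game in normal form (a sketch of ours, reusing the invented A1/A2 and D1/D2 payoffs from the earlier sketch; real AIOS games are far larger and often need mixed strategies and approximate algorithms, as noted above):

```python
def pure_nash_equilibria(S_a, S_d, u_a, u_d):
    """Enumerate strategy pairs (s_a, s_d) where neither player can gain
    by unilaterally deviating. u_a/u_d map (s_a, s_d) to a payoff."""
    equilibria = []
    for s_a in S_a:
        for s_d in S_d:
            best_a = all(u_a(s_a, s_d) >= u_a(x, s_d) for x in S_a)
            best_d = all(u_d(s_a, s_d) >= u_d(s_a, y) for y in S_d)
            if best_a and best_d:
                equilibria.append((s_a, s_d))
    return equilibria

# Invented 2x2 example payoffs.
u_a = lambda a, d: {("A1", "D1"): 0.5, ("A2", "D1"): 0.6,
                    ("A1", "D2"): 0.7, ("A2", "D2"): 0.4}[(a, d)]
u_d = lambda a, d: {("A1", "D1"): 0.3, ("A2", "D1"): 0.6,
                    ("A1", "D2"): 0.5, ("A2", "D2"): 0.2}[(a, d)]
print(pure_nash_equilibria(["A1", "A2"], ["D1", "D2"], u_a, u_d))
# [('A1', 'D2'), ('A2', 'D1')]  -- this toy game has two pure equilibria
```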
5.1 How to Choose the Right Game-Theoretic AIOS Model?

A good AIOS inference model must be built on top of the real characteristics of the attack–defense (A–D) scenario. Different A–D scenarios may require different inference models. Hence, to develop a taxonomy of game-theoretic AIOS inference models, we need a general, simple model to classify the characteristics of A–D scenarios. For this purpose, we start with two critical factors of the attacker–system relation, namely state and time. In our model, there are two categories of states:

— System State. At one point in time, the state of a system is determined by the state of each component of the system's service part. A component of the system's service part can be a piece of data or a piece of code. Note that sometimes a piece of code can be handled as a piece of data. For example, in Example 1 a system state captures such state information as the length of the packet queue(s) in each router. It should be noted that the system's state has nothing to do with the system's defense posture, which is determined by the configuration of each component of the system's protection part.
— Attack State. Attack states classify system states from the attack–defense perspective. Every attack action, if successfully taken, will have some effects on the system state. Such effects are usually determined by the characteristics of the attack. After a specific attack happens, the resulting effects are specified as an attack state. For example, all the possible states of a web server system after its Ftpd service is hacked can be denoted as the Ftpd hacked attack state. Hence each attack state usually covers a cluster of system states. It is clear that the attacker can know the current attack state by analyzing the defense system and his attack strategies even before the attack, but the defender (i.e., the system) usually cannot. The system uses an intrusion detector to learn the current attack state. Due to the latency of intrusion detection, the system may learn of an attack state only with some delay. Due to false alarms (i.e., false positives), the system may hold wrong beliefs about the current attack state.

The relation between states and time is simple. At one point in time, the system must be associated with a specific system state and attack state. Good accesses, attack actions, and defense actions can all change the system state; however, only attacks and defense operations can change attack states. Changes of both system states and attack states indicate changes of time. An interesting question here is: when should we terminate an attack state? One way to solve this problem is to give each attack a lifetime. When the lifetime of an attack is over, we make the corresponding attack state part of the history. The lifetime of an attack should involve some defense actions or operations, since when the life of the attack is over, the system should have already recovered from the attack in one of many possible ways, for example, by replacing the system with a new system, repairing the damaged part of the system, and so on.
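As a minimal illustration of this state model (our sketch; the state names and lifetime value are invented), an attack state can be represented as a cluster of system states with a lifetime after which it is retired to history:

```python
import time

class AttackState:
    """A cluster of system states attributed to one attack, with a lifetime."""
    def __init__(self, name, system_states, lifetime_s=3600):
        self.name = name
        self.system_states = set(system_states)
        self.created = time.time()
        self.lifetime_s = lifetime_s

    def covers(self, system_state):
        """An attack state usually covers a cluster of system states."""
        return system_state in self.system_states

    def expired(self, now=None):
        """Once the attack's lifetime is over (the system has recovered,
        e.g., by repair or replacement), the attack state becomes history."""
        now = time.time() if now is None else now
        return (now - self.created) > self.lifetime_s

ftpd_hacked = AttackState("Ftpd_hacked", {"SS3", "SS4"})
print(ftpd_hacked.covers("SS3"), ftpd_hacked.expired())  # True False
```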
Fig. 4. An example battle between the attacker and system.
We model the battles between the attacker and the system as follows:

Definition 1. (General model.) A battle between the attacker and the system is an interleaved sequence of system states and actions such that

— Each action belongs to one of three possible types: (a) an attack action that is part of an attack strategy; (b) an action or operation taken by a legitimate user, which indicates a good access; (c) a defense action that is part of a defense strategy. We denote an attack action as o^i_b, a good access action as o^i_g, and a defense action as o^i_d, where i is the action index; for example, o^i_b is the ith action taken by the attacker.
— There must be either one attack action or one good access action between two adjacent states. (Two system states SS_i and SS_j are adjacent if there does not exist a third state that stays between SS_i and SS_j in the interleaved sequence of system states and actions.) No more than one attack action can happen between two adjacent states. No more than one good access action can happen between two adjacent states either.
— There is exactly one defense action between two adjacent states. However, a defense action can be a null action, whereas neither an attack action nor a good access action can be a null action.

Example 3. Consider the battle shown in Figure 4. Good access action o^1_g transforms the system state from SS_1 to SS_2. Then attack action o^1_b transforms SS_2 to SS_3, which is part of an attack state. We assume there is some latency in detection, so no defense action is taken until alert 1 is raised in system state SS_4. Hence, the effects of attack action o^1_b are initially embodied in SS_3, and o^2_g, though a good action, may further spread the damage accidentally. When alert 1 is raised, since it may not be possible to confirm an intrusion and since the alert may be a false alarm, the system is unsure whether SS_4 belongs to an attack state, and the system needs to determine whether a proactive defense action should be taken or whether it should just wait until an intrusion is detected. (Note that false negatives may also affect the battle. In particular, the detection of an attack usually involves a sequence of alerts. When there are false negatives, some alerts will not be raised. As a result, the system can have more uncertainty about whether there is an intrusion, and the response can become even less proactive.) Suppose proactive defense is done here. After defense action o^1_d is taken, o^1_b's direct and indirect effects may be removed, at least to some extent. However, o^2_b, a new attack action, may be taken simultaneously with o^1_d, and a
Fig. 5. A battle between the attacker and system under instant accurate intrusion detection.
new attack state is reached. Finally, after the intrusion is detected in SS_6 and o^2_d is taken, the attack state may terminate.

Under some specific conditions, the above model can be further simplified. In particular, when every attack action can be detected instantly and accurately after it happens, the battles between the attacker and the system can be modeled as follows. (Note that since Definition 2 is mainly for theoretical analysis and is meant to show how simple a battle model can theoretically be, it is acceptable to make this not very practical assumption.) Here, we model an intrusion as a sequence of attack actions, namely I_j = {o^1_b, o^2_b, . . . , o^n_b}. Note that here, since we can distinguish bad actions from good ones, a set of system states can be clustered into a specific attack state, and good actions need not be explicitly modeled.

Definition 2. (Under instant accurate intrusion detection.) A battle between the attacker and the system is an interleaved sequence of attack states and actions such that

— Each action belongs to one of two possible types: (a) an attack action; or (b) a defense action.
— Between any two adjacent attack states, attack and/or defense actions can only be taken in three possible ways: (a) a single attack action is taken; (b) a single defense action is taken; or (c) a pair of attack and defense actions (o^i_b, o^i_d) is taken simultaneously by the attacker and the system, respectively. Two attack states AS_i and AS_j are adjacent if there does not exist a third attack state that stays between them in the interleaved sequence.
— The battle can involve several fights. A fight is composed of two adjacent attack states and a pair of attack and defense actions between them, denoted by (o^i_b, o^i_d). It is clear that (o^i_b, o^i_d) can transform the system from one attack state to another.

Example 4. Consider the battle shown in Figure 5, where we assume intrusion detection can be done instantly and accurately. Within the initial attack state AS_0, which consists of two system states (i.e., SS_1 and SS_2), there are no attack effects, and a good action such as o^1_g can transform one system state (e.g., SS_1) into another (e.g., SS_2). When attack action o^1_b is taken, the attack state transits to AS_1, where some attack effects are caused. Since AS_1 can be instantly detected, the time interval between o^1_b and defense action o^1_d can be very short. After o^1_d is taken, suppose 80% of the attack effects caused by o^1_b are
repaired (note that being able to accurately detect intrusions does not always mean being able to accurately identify all the attack effects and repair them); then a new attack state, that is, AS_2, is reached. Finally, note that after o^2_b is taken, within the new attack state AS_3, a system state transition is possible before o^2_d is taken, since it may take a while for the system to determine the proper defense action(s) to take.

When intrusions can be instantly detected with accuracy, it is clear that both the system and the attacker know the current attack state for sure. The system's utility earned after each battle, denoted by u^d(o^i_b, o^i_d), is computable if we know which good actions are involved in the battle, and so is u^a(o^i_b, o^i_d). Note that the system is clear about the set of good actions involved in each battle, but the attacker could have some uncertainty about the good actions. However, when intrusion detection has delay or when the detection is not 100% accurate, the simplified model cannot realistically model the battles between the attacker and the system, and the general model is the one we should use. Why? When the accuracy is low, even if alarms can be raised instantly, the simplified model still has too much uncertainty, which makes the inferences generated by the model difficult to validate. Because of the inaccuracy, the system is actually not sure about the current attack state, and taking a defense action as if the raised alarm were true is not only insecure but also very expensive. When the detection latency is long, by the time an attack action is detected, several attack states may already have been bypassed, and as a result, the system can only take a null defense action for every bypassed state. This indicates that the attacker can gain a lot of advantage if the simplified model is used to guide the defense. The above discussion shows that (a) if the game model is not properly chosen and followed, the system can lose a lot of security and assurance, and (b) the agility and accuracy of intrusion detection play a critical role in finding optimal AIOS game models. In addition, we found that the correlation among attack actions also plays a critical role in finding optimal AIOS game models. Based on these two factors, the taxonomy of AIOS models can follow the regions shown in Figure 6, and the taxonomy can be summarized as follows:

— In region 9, two types of dynamic games can be used together with primarily reactive defense, as illustrated below. In both scenarios, the attack is composed of a sequence of highly correlated attack actions that are complementary to each other, and each attack action can be detected with agility. It should be noticed that the goal of both the attacker and the system is to win the battle in the long run, after a sequence of fights is finished.
— If the attacker can clearly recognize each defense action and wants to see the effects of the current defense action against his latest attack action before choosing a new action, a dynamic observe-then-act game (with perfect information) can be used. In this game, the attacker and the system take actions in turn, and at each move the player with the move knows the full history of the game play thus far. The theory of backwards induction [Mesterton-Gibbons 1992] can be used to compute the optimal
Fig. 6. A taxonomy of game-theoretic AIOS models.
attack/defense strategies. The idea is that when an attack action o_b is followed by a defense action, the attacker will take the system's best response to o_b into account when he or she chooses the "best" o_b. Figure 5 shows an example play of this type of game. Finally, the defense should be primarily reactive, since each attack action can be detected with agility but it can be fairly difficult to predict the next action the attacker will take.
— If the attacker has substantial uncertainty in recognizing a defense action but is good at identifying an attack state, multistage dynamic games with simultaneous moves can be used. In this game, the first attack action and a null defense action are simultaneously taken in stage 1, the first defense action and the second attack action are simultaneously taken in stage 2, and so forth. In this scenario, observe-then-act games are not very suitable, because the attacker cannot identify the exact defense action, and thus waiting for the current defense action to end will not help the attacker much in choosing the next attack action. Moreover, in general the optimal attack/defense strategies of this game can be captured by subgame-perfect Nash equilibrium (see the Appendix). Finally, if the attack state transitions are probabilistic, stochastic games, a special type of multistage dynamic game, should be used. When intrusion detection is highly effective, stochastic games become feasible: not only can each attack state be accurately identified by the system with agility, which enables effective reactive defense, but the transition probabilities among attack states can also be estimated with good accuracy. When there is strong correlation among attack actions, stochastic game models are better than repeated game models, since they can model the correlation among attack actions, but repeated game models cannot.
— In region 1, Bayesian repeated games should be used together with proactive defense. When the intrusion detection effectiveness is poor, the system
can have substantial uncertainty about the current attack state, and such uncertainty usually makes stochastic game models infeasible, since the utility of stochastic game models depends on the assumption that each attack state can be instantly identified by each player with accuracy. In this case, Bayesian game models, which we will discuss shortly in Section 5.2, are a robust, realistic solution, since they require neither accurate detection nor instant detection. Finally, since the detection latency is not short, proactive defense can be more beneficial than reactive defense, and Bayesian repeated game theory may guide us to find the "optimal" proactive defense strategies.
— In region 7, Bayesian repeated (signaling) games can be used. First, repeated games can be used, since the degree of correlation among attack actions is low. As a result, we can assume that in each stage game both the attacker's and the system's action spaces are the same. Second, although the detection accuracy is very good, 100% accuracy is usually not achievable, so Bayesian games are still beneficial. Third, since intrusion detection is both accurate and agile, the system can gain much better knowledge about the attacker in this region than in region 1. And such knowledge should be fully exploited to do better defense than in region 1, where simple Bayesian repeated games are played. In particular, we believe that in each stage a signaling game can be played, where the system observes-then-acts and exploits its knowledge about attack actions to make its (type) belief about the observed action more precise. Fourth, in this region effective detection favors reactive defense mechanisms, and doing proactive defense may not be cost-effective, since substantial denial-of-service can be caused.
— In region 3, normal multistage dynamic games should be used, and subgame-perfect Nash equilibrium strategies should be taken by the players. Specifically, since the detection latency is long, reactive defense can be very limited. When defense actions are not taken until an intrusion is detected, the effects of the intrusion can seriously spread throughout the system (via both attack and good actions) during the detection latency. Hence, proactive defense can be more beneficial. To support proactive defense, a simple multistage dynamic game can be used, where each stage is associated with (a) a good or bad action, but not both; and (b) a defense action, which could be "null." Note that these two actions can be simultaneous, or the system can observe-then-act. Since the detection accuracy is poor, in each stage the system has uncertainty about the other action's type. Such uncertainty can be handled by Bayesian type beliefs and expected payoffs. And in many cases, such uncertainty can be substantially reduced by the alerts raised and the alert correlation results, especially when the detection accuracy is not so bad (e.g., in region 6). Compared with the combination of probabilistic "attack states" and stochastic game models, simple multistage dynamic games are easier, cheaper, have a smaller search space, are more accurate, and have no need to know all the attack states. Finally, note that Bayesian repeated games cannot be directly applied here, because the attack actions are highly correlated.
So in each stage, the action spaces of both the attacker and the system are typically different, and a different game is played.
— Finally, the "gray" areas (i.e., regions 2, 4, 5, 6, and 8) usually need a trade-off between the extreme cases discussed above when we build a good game-theoretic AIOS model for such a region. For example, a good AIOS game model for region 4 should be a trade-off between Bayesian repeated (signaling) games (which are used in region 7) and Bayesian repeated games (which are used in region 1). These trade-offs depend on many factors, such as the amount of uncertainty, accuracy, and sensitivity, as we will discuss shortly.
— Note that every type of AIOS inference game can support both pure strategies and mixed strategies.

5.2 Bayesian Game-Theoretic AIOS Models

In this section, we present a concrete Bayesian game-theoretic AIOS model, which can be used to handle regions 1 and 7. This model will be used shortly for the case study in Section 6. A Bayesian game-theoretic AIOS inference model is composed of two parts: a Bayesian game model that characterizes the attacker–system relation, and a set of AIOS inferences generated by the game model. In particular, the game model is a specific two-player finitely repeated Bayesian game between the system and a subject, where (a) there can be multiple types of subjects, and the type space is denoted as T_sub = {good, bad}. A subject's type is privately known by the subject. (b) A_sys is the action space of the system, and A_sub is the action space of the subject. One or more actions can build a strategy. (c) The game has a finite number of plays (or stages), and each play includes a pair of simultaneous actions (a_sys, a_sub). Each play will have an outcome denoted by o(a_sys, a_sub). (d) The system is uncertain about the type of the subject. This uncertainty is measured by the system's type belief, denoted as p^type_sys. For example, p^type_sys(bad), a probability, denotes the system's belief in the statement that the subject is an attacker. (e) For each outcome o, the system's utility function is u_sys(o) = p^type_sys(good) u^good_sys(o) + p^type_sys(bad) u^bad_sys(o). If the subject is a legitimate user, his or her utilities are determined by u_sub(o; good); otherwise, his or her utilities are determined by u_sub(o; bad). On the other hand, the set of AIOS inferences is determined by the Nash equilibria of the game model, based on the rationality notion of an expected-utility maximizer. (The Nash equilibrium theory can be found in the Appendix and in Mesterton-Gibbons [1992]. Note that mixed-strategy Nash equilibria exist for every Bayesian game, although sometimes no pure-strategy Nash equilibrium exists. Also, a game may have multiple Nash equilibria.) In particular, for each Nash equilibrium of the game, denoted as (a*_sys, a*_bad, a*_good), the game model will output a*_bad as the attack strategy inference (i.e., a*_bad indicates the kind of strategies that are more likely to be taken by the attacker); it will output u_sub(o; bad) (i.e., the utility function) and u_sub(a*_sys, a*_bad; bad) as the attacker intent and objectives inferences, where u_sub(a*_sys, a*_bad; bad) can be mapped to the amount of security vector degradation
caused by the attack. Moreover, as side benefits, a*_sys indicates a better defense posture, and u_sys(a*_sys, a*_bad) indicates the overall resilience of the system.

Discussion. Bayesian AIOS inference models are simple and robust, and they may work well even when very little information is available. For example, in region 1, although neither the intrusion detector nor the previous actions (of a subject) can provide hints, timely inferences could still be generated based on a probabilistic estimation of how intense the attacks are. Since a small number of disturbing attacks will not affect the estimated intensity degree much, Bayesian AIOS inference models are very robust to disturbing alerts.
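As a small numerical illustration of item (e) above (ours; the type belief and payoff values are invented), the system's expected utility for an outcome mixes the good-subject and bad-subject utilities by its type belief:

```python
def expected_system_utility(p_bad, u_good, u_bad):
    """u_sys(o) = p(good) * u_sys^good(o) + p(bad) * u_sys^bad(o)."""
    return (1 - p_bad) * u_good + p_bad * u_bad

# Invented payoffs for the outcome of rate-limiting an aggregate:
# it hurts a legitimate user (-0.4) but blunts an attacker (+0.9).
for p_bad in (0.1, 0.5, 0.9):
    u = expected_system_utility(p_bad, u_good=-0.4, u_bad=0.9)
    print(f"p(bad)={p_bad}: expected utility {u:+.2f}")
# With a weak belief that the subject is bad, rate limiting has negative
# expected utility; as p(bad) grows, it becomes worthwhile.
```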
6. CASE STUDY: INFERRING THE ATTACK STRATEGIES OF DDOS ATTACKERS

In this case study, we want to infer the strategies of the attackers that enforce brute-force DDoS attacks. Regarding the network topology, the attack model, the system model, and the defense mechanism, we make exactly the same assumptions as in Example 1. In particular, we assume pushback is deployed by the system. Based on the aggregates and the corresponding traffic volume, pushback classifies both the traffic and the users into three categories: good, poor, and bad. The bad traffic is sent by a bad user (attacker) and is responsible for the congestion. The poor and good traffic are legitimate traffic, sent by the poor and good users (both legitimate), respectively. However, the poor traffic has the same aggregate properties as the bad traffic, while the good traffic does not, though the good traffic may share some paths with the bad traffic. To illustrate, in Figure 1 we assume the attacker compromises S0 and sends "bad" packets to a victim denoted by d0. Simultaneously, S31 sends legitimate packets to d0. If router R1.0 uses the destination address to identify the congestion aggregate, the poor packets sent from S31 to d0 may be viewed as "bad" packets, since they have the same destination address as the bad traffic, and be dropped by the defense system. In summary, if the aggregates are destination-address based in Figure 1, then all packets sent to the same destination will belong to the same aggregate. Accordingly, when the attacker floods DoS packets toward a set of victims, all the legitimate packets sent to the victims are poor traffic and would be rate limited together with the bad traffic. Nevertheless, the legitimate packets sent to other hosts are good traffic, such as the traffic between hosts in {S0, . . . , S64}.

6.1 The Game-Theoretic AIOS Model

Now, we are ready to present the specific Bayesian game-theoretic AIOS model for DDoS attack/defense, which is specified as follows. Without loss of generality, we assume that in each DDoS attack there is one attacker and multiple legitimate users. (Nevertheless, it should be noticed that our AIOS model can be easily extended to handle collusive attackers.) For concision, we only mention the differences from the generic Bayesian game model proposed in Section 5.2.

DDoSGM = {A_att, A^1_leg, . . . , A^i_leg, A_sys, T_att, T^1_leg, . . . , T^i_leg, T_sys, p_att, p^1_leg, . . . , p^i_leg, p_sys, u_att, u^1_leg, . . . , u^i_leg, u_sys},
where:

(1) The players are the attacker, the system, and several legitimate users. Note that we cannot model this game as a two-player game and must extend the two-player Bayesian game model proposed in Section 5.2, since zombies and legitimate hosts send packets to the victim(s) simultaneously and neither side can "control" the actions taken by the other. Note also that our game model can be easily extended to model collusive DDoS attacks among multiple attackers.

(2) The attacker's action space is Aatt = {A1, . . . , Am}, where Ai is a DDoS attack launched by the attacker. No matter which kind of DDoS attack Ai is, there are typically some common properties among the attacking packets involved in Ai; for example, they may have the same destination address, or they may use the same protocol. In this model, we use {Sour, V, AttTraf, Config} to specify a DDoS attack. In particular, Sour is the set of zombies "selected" by this attack; V = {v1, . . . , vl} is the set of victims (note that the victims may stay on different subnets); AttTraf specifies the attacking traffic patterns, for example, the protocol and transmission-rate patterns between Sour and V; and Config specifies the adaptable or reconfigurable aspects of the zombies. For example, the zombies may adjust their sending rate or traffic patterns according to the response of the defense system. In this study, we assume Config = Null, that is, the zombies will not adjust their behaviors during an attack and all of them have the same behavior.

(3) The action space for legitimate user k is Aleg^k = {T1, . . . , Tm}, 1 ≤ k ≤ i, where Ti is a specific network application (or service). In the model, we use {Sour, Dest, Traffic, Config} to specify such an application. Each network application may involve multiple hosts that transmit packets simultaneously in a coordinated or interactive way to execute a business process. Within a network application, Sour is the set of source hosts (or initiators) involved; Dest = {d1, . . . , dk} is the set of destinations (or responders) involved; Traffic captures the traffic patterns between Sour and Dest; and Config specifies the adaptable or reconfigurable aspects of the application. In this study, we assume Config = Null.

(4) The system's action space Asys is determined by the pushback posture of each router in the system. The system is composed of every router that is part of the pushback defense, denoted {R1, . . . , Rn}. In particular, the pushback behavior of a router is determined by the following configurable parameters:

— Congestion checking time, denoted psys^1 (default value: 2 s). The router checks whether the network is congested every psys^1 seconds. When serious congestion is detected, the router will identify (and rate limit) the aggregate(s) responsible for the congestion and send out some pushback messages. Note that in this study the thresholds for "reporting"
serious congestion and for determining who should receive pushback messages are fixed. Note also that the way the rate limits (for each aggregate) are set up is fixed.

— Cycle time, denoted psys^2 (default value: 5 s), is the interval at which the router reviews the limits imposed on its aggregates and sends refresh messages to the adjacent upstream routers to update their rate limits. Note that how such rate limits are updated is fixed in this study.

— Target drop rate, denoted psys^3 (default value: 5%), determines the upper-bound drop rate of the router's output queue. To achieve this upper bound, the rate limiter should keep the bit rate toward the output queue below B/(1 − target drop rate), where B is the bandwidth of the output link.

— Free time, denoted psys^4 (default value: 20 s), is the earliest time at which a rate-limited aggregate can be released after it goes below the limit imposed on it.

— Rate-limit time, denoted psys^5 (default value: 30 s), determines how long a newly identified aggregate must be rate limited. After this period, the router may release the aggregate.

— Maximum number of sessions, denoted psys^6 (default value: 3), determines the maximum number of aggregates the rate limiter can control.

— Aggregate pattern, denoted psys^7 (default value: "destination address prefix"), determines which kinds of properties are used to identify aggregates.

(5) The attacker's type space is Tatt = {bad, good}. Legitimate user i's type space is also Tleg^i = {bad, good}. The system's type space is Tsys = {sys}.

(6) Regarding the system's type belief: since, when a packet arrives at a router, the router cannot tell whether the sender of the packet is a zombie or not, the system's belief (or uncertainty) about every other player's type is the same, that is, psys^good = θ and psys^bad = 1 − θ. In our simulation, for simplicity, we assume there are one attacker and one legitimate user; accordingly, θ = 0.5. In the real world, the value of θ can be estimated from statistics of the DDoS attacks that have already hit the system.

(7) Regarding the attacker's and legitimate users' type beliefs: since both the attacker and the legitimate users know the system's type, patt^type(sys) = pleg^type(sys) = 1. Since the attacker knows who are zombies and who are legitimate nodes, the attacker has no uncertainty about a legitimate user's type: pbad^type(good) = 1. However, a legitimate user typically has uncertainty about the type of a node that is not involved in his application, since he is not sure whether the node is a zombie or not. So a legitimate user's uncertainty about the attacker's type and about another legitimate user's type are the same, namely pleg^type(bad) = β and pleg^type(good) = 1 − β.

(8) For each outcome o of a game play, the attacker's utility is uatt(o) = α uatt^sys(o) + (1 − α) Σ(k=1..i) uatt^legk(o), where uatt^sys(o) measures the attack's impact on the network, while uatt^legk(o) measures the attack's impact on legitimate user k. In particular, uatt(o) = α Bao/BN + (1 − α)(1 − Blo/Blw), where Bao
is the bandwidth occupied by the attacker; BN is the bandwidth capacity; Blo is the bandwidth occupied by the legitimate user (recall that we assume there is only one legitimate user); and Blw is the bandwidth that the legitimate user wants to occupy. For simplicity, Bao, BN, Blo^k, and Blw^k are all measured on the incoming link to the edge router of the victim(s), as shown in Figure 1. Note that Bao/BN indicates the absolute impact of the attack on the (whole) network, while 1 − Blo^k/Blw^k indicates the relative availability impact of the attack on legitimate user k. α is the weight that balances these two aspects. Usually the attacker is mainly concerned with the attack's impact on legitimate users, so in this study we let α = 0.2.

(9) The legitimate user's utility is uleg(o) = uleg^sys(o) + pleg^type(bad) uleg^bad(o). Since the system controls both the legitimate and the bad traffic, and the attacker does not control the legitimate traffic directly, we simply let uleg^bad(o) = 0. Therefore, uleg(o) = Blo/Blw.

(10) The system's utility function, defined in the standard way, is usys(o) = wθ Blo/Blw + (1 − w)(1 − θ)(−Bao/BN). Here w is the weight that lets the system trade off throttling the attacker against providing more bandwidth to legitimate users; we set it to 0.8 in the simulations.

Although several specific parameter values are set in this case study, the above DDoS attack strategy inference model is a general model and can handle a variety of other DDoS attack scenarios. For example, it can handle the scenario where the zombies adjust their strategies (e.g., attacking rate, traffic pattern) according to the response of the defense system. Moreover, although the system's action space in our model is pushback specific, the model can be extended to support other DDoS defense mechanisms such as traceback.

6.2 Simulation

To obtain concrete attack strategy inferences about real-world DDoS attackers, we have done extensive simulations of the game plays specified above using ns-2 [NS2]. The network topology of our experiments, shown in Figure 1, is the same as the topology used in the pushback evaluation [Ioannidis and Bellovin 2002]. There are 64 source hosts and four levels of routers. Except for the routers at the lowest level, each router has a fan-in of 4. The link bandwidths are shown in the figure. Each router uses an ns-2 pushback module to enforce the pushback mechanism. Although there can be multiple victims staying on different subnets, we assume all the victims share the same incoming link, namely R1.0 − R0.0.

In our experiments, Asys is materialized as follows. Asys includes the 11 defense strategies shown in Table I. The default value combination of {psys^1, . . . , psys^7} is the 7th defense strategy, which is the default defense strategy. In each experiment we change only one parameter at a time and compare the results with those under the default strategy. The 1st strategy is the same as the 7th except that psys^1 = 4 s. The 2nd is the same as the 7th except that the cycle time is 10 s.
Table I. The Eleven System Strategies

System Strategy | psys^1 (s) | psys^2 (s) | psys^3 | psys^4 (s) | psys^5 (s) | psys^6 | psys^7
1st             | 4          | 5          | 0.05   | 20         | 30         | 3      | Destination address prefix
2nd             | 2          | 10         | 0.05   | 20         | 30         | 3      | Destination address prefix
3rd             | 2          | 5          | 0.03   | 20         | 30         | 3      | Destination address prefix
4th             | 2          | 5          | 0.07   | 20         | 30         | 3      | Destination address prefix
5th             | 2          | 5          | 0.05   | 10         | 30         | 3      | Destination address prefix
6th             | 2          | 5          | 0.05   | 30         | 30         | 3      | Destination address prefix
7th             | 2          | 5          | 0.05   | 20         | 30         | 3      | Destination address prefix
8th             | 2          | 5          | 0.05   | 20         | 15         | 3      | Destination address prefix
9th             | 2          | 5          | 0.05   | 20         | 50         | 3      | Destination address prefix
10th            | 2          | 5          | 0.05   | 20         | 30         | 5      | Destination address prefix
11th            | 2          | 5          | 0.05   | 20         | 30         | 3      | Destination address prefix plus traffic pattern
The 3rd differs in that the target drop rate is 0.03; the 4th in that the target drop rate is 0.07; the 5th in that the free time is 10 s; the 6th in that the free time is 30 s; the 8th in that the rate-limit time is 15 s; the 9th in that the rate-limit time is 50 s; the 10th in that the maximum number of sessions is 5; and the 11th in that the aggregate property is destination address prefix plus traffic pattern.

In our experiments, Aleg is materialized as follows (a sketch enumerating the resulting strategy spaces appears after this list):

— The poor traffic volume is determined from several real-world Internet traces posted at http://ita.ee.lbl.gov/html/traces.html. These traces show three typical volume patterns when there are no attacks: RATE1 = 67.1 kbps (the average rate to a web site during the rush hour); RATE2 = 290 kbps (the average rate from an intranet to the Internet); and RATE3 = 532 kbps (the average rate from an intranet to the Internet during the rush hour). Based on these statistics, we let the total poor traffic volume be 67.1 kbps, 290 kbps, or 532 kbps.

— The traffic pattern of the good and poor traffic is CBR (constant bit rate).

— There is only one legitimate user in the system. In each DDoS experiment, the legitimate user selects 2 (FEWPOOR) or 4 (MANYPOOR) hosts to send packets to the victim. When the poor traffic volume is 290 kbps and there are four poor hosts, each host sends 290/4 kbps of traffic to the victims. Since the traffic pattern for poor traffic is fixed, the poor traffic belongs to a single aggregate.

— Moreover, in each DDoS experiment, the legitimate user selects 5 or 10 hosts to send packets to other destinations. We assume the good traffic flows will not cause any congestion by themselves; hence, they will not be involved in any aggregate in our experiments and their influence can be neglected.

— The poor and good hosts are randomly chosen from the 64 hosts.

In summary, for each poor traffic volume there are four legitimate strategies, corresponding to the different numbers of poor and good hosts, so in total there are 12 legitimate strategies.
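To make the materialized strategy spaces concrete, the following sketch (our own hypothetical helper code; the paper prescribes no implementation) enumerates the 11 defense strategies of Table I and the 12 legitimate strategies just described.

```python
# Illustrative sketch: enumerating the materialized strategy spaces.
from itertools import product

# 11 defense strategies: each is one deviation from the default (Table I).
DEFAULT = {"check_time": 2, "cycle_time": 5, "drop_rate": 0.05,
           "free_time": 20, "limit_time": 30, "sessions": 3,
           "aggregate": "dest-prefix"}
DEVIATIONS = [{"check_time": 4}, {"cycle_time": 10}, {"drop_rate": 0.03},
              {"drop_rate": 0.07}, {"free_time": 10}, {"free_time": 30},
              {},  # 7th strategy: the default itself
              {"limit_time": 15}, {"limit_time": 50}, {"sessions": 5},
              {"aggregate": "dest-prefix+pattern"}]
A_sys = [{**DEFAULT, **d} for d in DEVIATIONS]

# 12 legitimate strategies: poor volume x #poor hosts x #good hosts.
A_leg = [{"poor_kbps": v, "poor_hosts": p, "good_hosts": g}
         for v, p, g in product([67.1, 290, 532], [2, 4], [5, 10])]
assert len(A_sys) == 11 and len(A_leg) == 12
```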
Aatt is materialized as follows:

— We set the number of zombies to 12 (FEWBAD) or 32 (MANYBAD). The zombies are randomly chosen from the 64 hosts.

— We determine the total attack traffic volume from a parameter called the bad-to-poor ratio. For example, when the ratio is 30 and the poor traffic volume is 290 kbps, the total attack traffic volume is 30 × 290 kbps. Moreover, if there are 32 zombies, each zombie sends 30 × 290/32 kbps of traffic to the victim(s).

— When the poor traffic volume is 67.1 kbps or 290 kbps, we let the bad-to-poor ratio be 30, 35, 40, 45, or 50. When the poor volume is 532 kbps, we let the ratio be 30, 35, or 40. In this way, we get 13 possible attack traffic volumes in total.

— The traces also show four traffic patterns: constant bit rate (CBR), exponential (EXP), ICMP, and mixed (i.e., half CBR and half ICMP). We let the attack traffic patterns be of these four types.

— Counting the value combinations of these attack strategy parameters, there are 40 possible strategies under RATE1 or RATE2, and 24 possible strategies under RATE3.

— We number the attack strategies as follows. In the first 20 (12) strategies of the 40 (24) strategies, the number of zombies is FEW; in the second 20 (12) strategies, the number of zombies is MANY. Within each 20 (12) strategy group, the first 5 (3) strategies use CBR traffic, the second 5 (3) use exponential traffic, the third 5 (3) use ICMP traffic, and the fourth 5 (3) use mixed traffic. Within each such 5 (3) strategy group, the strategies are ordered by the bad-to-poor ratio: 30, 35, 40, 45, 50 (30, 35, 40).

Finally, it should be noticed that when the system takes strategy 10, the attacker targets 4 victims in each of the 40 (24) strategies, whereas in every other case the attacker targets only one victim.

6.3 Payoffs and Their Attack Strategy Implications

Figure 7 shows the attacker's, legitimate user's, and defense system's payoffs under different network scenarios (i.e., poor traffic volumes), attacking strategies, and defense strategies when the aggregate property is destination address prefix. Figure 8 differs in that the aggregate property is destination address prefix plus traffic pattern; that is, two traffic flows sent to the same host or subnet may not always belong to the same aggregate, because their traffic patterns may differ. Note that, for clarity, we show the effects of the legitimate strategies in a special way. Since our results show that poor traffic volumes can have a significant impact on the players' payoffs while the numbers of poor or good hosts have almost no impact, we break the 12 legitimate strategies down into three groups based on the three poor traffic volumes. Within each group, for each pair of attack and defense strategies, the players' payoffs are first measured under the four legitimate strategies in that group, and then an average payoff is calculated for each player. Hence, each payoff shown in Figures 7 and 8 is an average payoff.
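Each payoff point in Figures 7 and 8 is derived from the utility functions of Section 6.1. A minimal sketch of that computation, using the bandwidth quantities measured at the victims' incoming link and the weights α = 0.2, w = 0.8, θ = 0.5 of this case study, is given below (the sample input values are hypothetical).

```python
# Sketch of the payoff computation of Section 6.1 (one legitimate user).
ALPHA, W, THETA = 0.2, 0.8, 0.5  # weights used in this case study

def payoffs(B_ao, B_lo, B_lw, B_N):
    """B_ao: bandwidth won by the attacker; B_lo: bandwidth obtained by
    the legitimate user; B_lw: bandwidth he wants; B_N: link capacity."""
    u_att = ALPHA * (B_ao / B_N) + (1 - ALPHA) * (1 - B_lo / B_lw)
    u_leg = B_lo / B_lw
    u_sys = W * THETA * (B_lo / B_lw) + (1 - W) * (1 - THETA) * (-B_ao / B_N)
    return u_leg, u_att, u_sys  # the (L, A, D) 3-tuple of Section 6.4

# Hypothetical measurement from one simulated game play (kbps):
print(payoffs(B_ao=4000, B_lo=200, B_lw=290, B_N=6000))
```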
Fig. 7. The attacker’s, legitimate user’s and defense system’s payoffs under different defense and attack strategies.
Based on the simulation results summarized in these two figures, where the X axis represents the 10 defense strategies and the Y axis represents the attacking strategies, we found the following. (1) The attacker's payoffs depend not only on the attack strategies, but also on the network scenarios and defense postures, which justifies the strategy interdependency property of our AIOS model. (2) Our experiments confirm many well-known observations about DDoS attack and defense; for example, the attacker prefers more zombies and the defense system prefers a lower drop rate. Our experiments also give further insight into DDoS attack and defense. For example, many people believe that the attacker's and defense system's payoffs are mainly determined by the attack and defense strategies, but Figure 9 shows that the ratio between the poor traffic volume and the total bandwidth is a very important factor that may greatly affect the attacker's and defense system's payoffs. (3) Our experiments also yield several surprising observations. (a) Many people may believe that the more packets the zombies send out to the victims,
Fig. 8. The attacker's, legitimate user's, and defense system's payoffs under different defense and attack strategies.
the more bandwidth and payoff the attacker should earn. (b) Many people may believe that using different traffic patterns should be more effective in attacking than a single traffic pattern. (c) Many people may believe that an exponential bit rate should be more effective in attacking than a constant bit rate. (d) Many people may believe that using UDP should be more effective in attacking than TCP or ICMP. However, our results show that neither the attacking rate nor the traffic pattern matters much: different bad-to-poor ratios (30, 35, or 50) and different traffic patterns (UDP or ICMP) give the attacker similar amounts of payoff. In fact, Figure 7 shows that among all the attack strategy parameters, only the number of zombies and the traffic aggregate properties can substantially affect the attacker's payoffs. (4) For the system to obtain higher resilience against DDoS attacks, it need only be concerned with three specific pushback parameters, namely the target drop rate (psys^3), the maximum number of sessions (psys^6), and the aggregate pattern (psys^7). The other parameters do not affect the results much.
Fig. 9. The attacker's and defense system's payoffs under different poor traffic volumes.
6.4 Nash Equilibrium Calculation

Figures 7 and 8 show the attacking capacity of the attacker, the survivability of the legitimate user, and the resilience of the defense system under different defense and attacking strategies. In the real world, the legitimate user, the attacker, and the defense system will each choose only optimal strategies from their action spaces, so as to maximize their payoffs. Hence, to know what the attacker, legitimate user,
and defense system will do when a DDoS attack really happens, we need to know the players' Nash equilibrium strategies. A Nash equilibrium specifies each player's expected-payoff-maximizing best response to the other players' strategies.

For each game play, which involves a legitimate strategy, an attacking strategy, and a defense strategy, we get three payoffs, one each for the legitimate user, the attacker, and the system. We denote the payoffs by a 3-tuple (L, A, D), where L is the legitimate user's payoff, A is the attacker's payoff, and D is the system's payoff. Each payoff 3-tuple is associated with a strategy 3-tuple that records the corresponding legitimate, attack, and defense strategies. From the experimental results of multiple game plays, we get a set of payoff 3-tuples, which we call the payoff list. Following the definition of Nash equilibria, we use the following steps to calculate them:

(1) In the payoff list, for each combination of a legitimate strategy and an attack strategy, we look for the defense strategies that give the highest payoff to the system. The resulting strategy 3-tuples form strategy sublist 1.

(2) In the payoff list, for each combination of a legitimate strategy and a defense strategy, we look for the attack strategies that give the highest payoff to the attacker. The resulting strategy 3-tuples form strategy sublist 2.

(3) In the payoff list, for each combination of a defense strategy and an attack strategy, we look for the legitimate strategies that give the highest payoff to the legitimate user. The resulting strategy 3-tuples form strategy sublist 3.

(4) Every strategy 3-tuple in the intersection of sublists 1, 2, and 3 is a Nash equilibrium.

It should be noticed that even in the same experimental environment, we may get different results in each experiment. Due to experimental error, when we repeat an experiment the payoffs of the three players may not be exactly the same as those produced by the original experiment. For example, in ns-2, when a host is set up to send packets at 10 kbps, we cannot guarantee that it will send exactly 10 kbps in every run; small errors usually exist, and the host may send 10.2 kbps or 9.8 kbps. Therefore, when we calculate the Nash equilibria, we set a relative measurement error: if the difference between two payoffs is less than this error, we view them as equivalent.
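A compact sketch of this four-step procedure, including the relative measurement-error test, might look as follows; the data structures are our own choice, as the paper does not prescribe an implementation.

```python
# Sketch of the Nash-equilibrium search over the empirical payoff list.
EPS = 0.005  # relative measurement error used in the experiments

def close(x, y):
    """Treat two payoffs as equal up to the relative measurement error."""
    return abs(x - y) <= EPS * max(abs(x), abs(y), 1e-12)

def best(payoff_list, fixed, idx):
    """Strategy 3-tuples whose idx-th payoff is (near-)maximal when the
    strategies at the 'fixed' positions are held constant."""
    result = set()
    for key in {tuple(k[i] for i in fixed) for k in payoff_list}:
        group = [k for k in payoff_list if tuple(k[i] for i in fixed) == key]
        top = max(payoff_list[k][idx] for k in group)
        result |= {k for k in group if close(payoff_list[k][idx], top)}
    return result

def nash_equilibria(payoff_list):
    """payoff_list: dict mapping (leg, att, sys) strategy 3-tuples to
    (L, A, D) payoff 3-tuples, as produced by the simulations."""
    sublist1 = best(payoff_list, fixed=(0, 1), idx=2)  # best defenses
    sublist2 = best(payoff_list, fixed=(0, 2), idx=1)  # best attacks
    sublist3 = best(payoff_list, fixed=(1, 2), idx=0)  # best legitimate
    return sublist1 & sublist2 & sublist3
```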
6.5 Nash Equilibria and Their Attack Strategy Implications

We get 42 Nash equilibria in the experiments when the relative measurement error is 0.005 (when we reduce the relative measurement error, we get fewer Nash equilibria, but their distributions are almost the same); some of them are shown in Table II. Several interesting and fresh attack strategy inferences can be obtained from the distributions of the set of equilibria. In particular:

(1) In terms of the traffic pattern, the distribution is shown in Table III, where "Dest" means that the aggregate property is destination address prefix and "DestPatt" means that it is destination address prefix plus traffic pattern.
Table II. Nash Equilibrium Strategies

System Strategy         | Legitimate Strategy     | Attacking Strategy
Target-drop-rate = 0.03 | RATE1, ManygoodFewpoor  | Few, 35, CBR, One aggregate
Target-drop-rate = 0.03 | RATE1, ManygoodManypoor | Many, 30, CBR, One aggregate
Target-drop-rate = 0.03 | RATE1, ManygoodFewpoor  | Many, 30, CBR, One aggregate
Maximum session = 5     | RATE2, ManygoodManypoor | Many, 35, CBR, Multiple aggregates
Maximum session = 5     | RATE2, ManygoodManypoor | Many, 30, EXP, Multiple aggregates
Maximum session = 5     | RATE2, ManygoodManypoor | Many, 35, EXP, Multiple aggregates
Maximum session = 5     | RATE2, ManygoodManypoor | Many, 45, EXP, Multiple aggregates
Table III. Nash Equilibrium Distribution under Different Attacking Patterns

Aggregate Property | CBR  | EXP  | ICMP | Mixed
DEST               | 0.09 | 0.50 | 0.27 | 0.14
DESTPATT           | 0.38 | 0.25 | 0    | 0.38
Table III shows that the attacker is more likely to use EXP traffic: in this way, he has more chances to stay at a Nash equilibrium, since 50% of the Nash equilibria occur when the traffic pattern is EXP. When the aggregate is DESTPATT, the attacker clearly prefers to use the same traffic patterns as those used by the poor and good users.

(2) The distribution under the bad-to-poor ratio is shown in Table IV. Surprisingly, the distribution shows that the attacker is most unlikely to use a high ratio. Some people may believe that a higher bad-to-poor ratio should make the attack more successful, since more packets will be flooded to the victim(s). However, our analysis of the Nash equilibria distributions shows that the attacker has better opportunities to converge to a Nash equilibrium strategy with a low bad-to-poor ratio. We believe an important reason for this phenomenon is that our DDoS game is not a zero-sum game.

(3) The distribution under different combinations of the numbers of zombies, poor hosts, and good hosts is shown in Table V. In the table, "F" means "Few" and "M" means "Many"; for example, "FMF" means "FewgoodManypoorFewbad". The table indicates that the attacker prefers to use as many zombies as possible, which is consistent with common sense about DDoS attacks.

(4) The distribution under different defense strategies indicates that most Nash equilibria occur when the target-drop-rate is 0.03 or when the max-number-of-sessions is 5: the probability that a Nash equilibrium occurs under target-drop-rate 0.03 is 0.45, and under max-number-of-sessions 5 it is 0.36. Hence, to be more resilient, the system can increase the number of sessions and decrease the target-drop-rate. Our analysis also shows that the impact of the other defense strategy parameters on this distribution is minimal.

(5) Based on the set of Nash equilibria calculated and on Figure 7, we can get the upper bounds of the attacking capacity of the attacker and of the assurance capacity of the defense system under different network scenarios. These upper bounds are shown in Table VI. In particular,
Table IV. Nash Equilibrium Distribution under Different Attacking Ratios

Aggregate Property | 30   | 35   | 40   | 45   | 50
DEST               | 0.23 | 0.32 | 0.14 | 0.09 | 0.23
DESTPATT           | 0.50 | 0    | 0.13 | 0.25 | 0.13
Table V. Nash Equilibrium Distribution under Different Numbers of Users

Aggregate Property | FFF  | FFM | FMF  | FMM  | MFF | MFM  | MMF  | MMM
DEST               | 0    | 0   | 0    | 0    | 0   | 0.09 | 0.55 | 0.36
DESTPATT           | 0.12 | 0   | 0.13 | 0.13 | 0   | 0    | 0.38 | 0.25
Table VI. Upper Bounds of the Assurance Capacity and Attacking Capacity

Network Scenario | Attacking Capacity | Assurance Capacity
RATE1            | 0.2076             | 0.3941
RATE2            | 0.2668             | 0.3820
RATE3            | 0.2862             | 0.3320
the upper bounds of the assurance capacity tell us how resilient the system (i.e., pushback) is to DDoS attacks, while the upper bounds of the attacking capacity tell us how serious the damage could be in the worst case. According to the definition of the payoff functions, the highest possible attacking capacity is 1 and the highest possible defense capacity is θ, which is 0.5 in this paper. We use different volumes of normal traffic to the victim to represent different network scenarios. When the traffic rate is very low, as with RATE1, no matter how hard the attacker tries, the highest attacking capacity he can reach is only 0.2076, and the highest assurance capacity of the defense system is 0.3941. When the traffic rate is high, as with RATE3, the highest attacking capacity is 0.2862.

6.6 Converging to Nash Equilibria

If the attacker, legitimate user, and defense system are rational, they should each take a Nash equilibrium (optimal) strategy. Even if they do not choose a Nash equilibrium strategy at the very beginning, incentives may automatically "guide" them to converge from their ad hoc strategies to a Nash equilibrium strategy at a later stage. In this section, we give a simple example, shown in Figure 10, of how a Nash equilibrium can be converged upon dynamically. We assume that the legitimate user, attacker, and system start from state 1. The strategies and payoffs are listed in the order legitimate user, attacker, defense system in box 1, where MF means MANYPOORFEWGOOD and F,CBR,35 means FEWBAD, CBR traffic, and ratio = 35. Moreover, we assume that each player may change his strategy, that strategy changes are not made simultaneously, and that the outcome of each strategy change can be observed by the other players before another strategy change is performed. A schematic rendering of this asynchronous revision process is sketched below.
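The sketch is our own illustrative rendering (the paper gives only the example of Figure 10); payoff and options are hypothetical lookup structures over the experimental results, and convergence is not guaranteed in general, though it occurs in the example.

```python
# Sketch of the asynchronous best-response process behind Figure 10.
def best_response(payoff, state, player, options):
    """The option maximizing this player's own payoff, others held fixed."""
    def value(choice):
        trial = list(state)
        trial[player] = choice
        return payoff[tuple(trial)][player]
    return max(options[player], key=value)

def converge(payoff, start, options, max_rounds=100):
    """Players revise strategies one at a time; a state no player wants
    to leave unilaterally is, by definition, a Nash equilibrium."""
    state = list(start)
    for _ in range(max_rounds):
        moved = False
        for player in (0, 1, 2):  # legitimate user, attacker, system
            reply = best_response(payoff, state, player, options)
            if reply != state[player]:
                state[player], moved = reply, True
        if not moved:
            return tuple(state)   # converged to a Nash equilibrium
    return tuple(state)           # may cycle; no guarantee in general
```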
Fig. 10. Converging to Nash equilibrium strategies.
In state 1, depending on who performs the next strategy change, the state may transit to different states. In particular: (a) If it is the attacker's turn, since he is not satisfied with his attack effects and finds that changing his strategy to M,EXP,50 will maximize them, he may take this strategy and move the game to state 2. (b) If it is the system's turn, since the system is not satisfied with its resilience either, and changing its strategy to DESTPATT with target-drop-rate 0.03 will maximize its resilience, the system may take this strategy and move the game from state 1 to state 3. (c) Finally, if it is the legitimate user's turn, he will probably stay in state 1 without changing his strategy, since his payoff would decrease if he changed his strategy unilaterally.

In state 2, the system finds it can maximize its resilience by changing its strategy to target-drop-rate 0.03, so it may take this strategy and move the game to state 4. Similarly, in states 3 and 4, the attacker wants to change his strategy to M,CBR,30 to maximize his attack effects, and the state finally transits to state 5. In state 5, every player finds that his payoff would decrease if he changed his strategy unilaterally; therefore no one wants to change his strategy. Not surprisingly, the strategy 3-tuple in state 5 is a Nash equilibrium, as shown in Table II. The example shows that no matter what the start state is, the three players can ultimately "agree" on a Nash equilibrium that maximizes their own incentives. Note that if there is more than one Nash equilibrium point, the convergence state depends on the starting state.

6.7 Using Attack Strategy Inferences to Improve the Network's Resilience

A side benefit of AIOS modeling and inference is that the distribution of Nash equilibria and the payoff results can be used to optimize the system's defense posture for more resilience. In particular, we found that, to better defend against DDoS attacks, the network should pay attention to the following issues in its pushback defense:
(a) Normal bandwidth usage planning. We measure the degree of normal bandwidth usage as the ratio between the bandwidth occupied by the legitimate traffic and the total network bandwidth. Our results show that the higher the usage degree, the lower the network's resilience: when the usage degree is high, more legitimate packets are treated as malicious. When the usage degree is 0.05, less than 5% of the legitimate packets are dropped no matter how the attacker changes his strategies; when the usage degree is 0.25, however, about 30% of the legitimate packets are dropped. Hence, the degree of normal bandwidth usage should be carefully planned for good resilience. In practice, based on the profile of the legitimate traffic and its availability requirements, the system should be able to work out a suitable degree of normal bandwidth usage.

(b) Target drop rate selection. The lower the target drop rate, the fewer packets are sent to the output queue while a router is doing pushback, and the more packets are dropped by the defense system. From Figure 7, we found that a lower drop rate is better than a higher one. But it is hard to say whether a lower drop rate is always better, since lower drop rates may cause more legitimate packets to be dropped by the system, especially when the legitimate traffic volume is high. To find the best drop rate, we report in Figure 11 the simulated assurance capacity of the system and the attacking capacity under the drop rates {0, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08}. We found that when the percentage of poor traffic in the whole bandwidth is low, such as 15%, a lower drop rate is always better for the system, even when the legitimate traffic rate is high. When the percentage of poor traffic goes up to 25%, the system gets the best payoffs at the highest target drop rate. Hence, the best target drop rate depends on the percentage of poor traffic: when the percentage of poor traffic is low, the system should use a low drop rate, and vice versa. In practice, to find a suitable target drop rate, the system needs to analyze the legitimate traffic when there are no attacks in order to build a profile of it (a toy selection rule reflecting this observation is sketched after this list).

(c) Configuration of the number of rate-limited sessions. When the number of rate-limited sessions is less than the number of attacking aggregates, some malicious traffic will not be rate limited and the system will be jeopardized by this attack traffic. Hence, we need to make the number of rate-limited sessions larger than the number of aggregates of the attacking traffic. Some people may believe that having too many rate-limited sessions is not good, since legitimate traffic may be treated as malicious and get rate limited. However, since the volume of malicious traffic is much larger than that of normal traffic, our experimental results show that a larger number of rate-limited sessions does not seriously affect the system's resilience. In the real world, it is usually hard to predict the number of attacking aggregates accurately; we therefore suggest that the system simply set a large number of rate-limited sessions for better resilience.
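A toy rendering of the drop-rate rule of item (b) follows; the 0.20 switch-over threshold is our own assumption interpolated from the two operating points reported (15% and 25% poor traffic), not a rule stated by pushback or by the paper.

```python
# Illustrative heuristic for target-drop-rate selection (our own
# reading of the simulation results; the 0.20 threshold is assumed).
def choose_target_drop_rate(poor_fraction, low_rate=0.005, high_rate=0.08):
    """poor_fraction: share of the total bandwidth used by poor traffic."""
    return low_rate if poor_fraction < 0.20 else high_rate

print(choose_target_drop_rate(0.15))  # low usage  -> low drop rate
print(choose_target_drop_rate(0.25))  # high usage -> high drop rate
```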
Fig. 11. The attacker’s and defense system’s payoffs under different drop rates.
Table VII. The Distribution of Nash Equilibria under Different Topologies

Topology  | cont4 | cyct10 | dr0.03 | dr0.07 | ft10 | ft30 | default | sess5 | rt15 | rt50
PUSHBACK  | 0     | 0      | 0.45   | 0      | 0    | 0.14 | 0       | 0.36  | 0    | 0.05
BRITETOPO | 0.02  | 0      | 0.38   | 0      | 0    | 0    | 0.08    | 0.39  | 0.06 | 0.06
6.8 Experiments with a Larger Network Topology

So far, our simulations have been based on the original pushback topology, composed of 64 hosts and 22 routers. To see whether the characteristics of the simulated DDoS attack/defense game (e.g., the characteristics of the payoffs and Nash equilibria) and the corresponding conclusions still hold in a large-scale DDoS attack/defense game, we have done some experiments with a larger network topology. In particular, we use Brite [Medina et al. 2001], a popular topology generator, to create a network with 101 routers and more than 1000 hosts, and we randomly select 200 hosts as zombies. To compare the results with those generated on the pushback topology, we let the attacking bit rate (of each zombie) and the legitimate bit rate be the same as before. We found that with the Brite topology the legitimate user's and system's payoffs are slightly smaller, and the attacker's payoffs slightly larger, than with the pushback topology. We believe the reason is mainly that the pushback mechanism works in slightly different ways under different topologies. Nevertheless, the absolute payoff values are not very important for this comparison; we are mainly concerned with the impact of the game parameters on the players' payoffs and with the distributions of the Nash equilibria. Through a comparison study, we found that, relative to the DDoS attack/defense game on the pushback topology, the impact of the game parameters on the players' payoffs has almost the same set of properties, and the distributions of the Nash equilibria, shown in Table VII, are very similar. For example, with the Brite topology, (a) the legitimate user always gets the highest payoffs when the target-drop-rate is 0.03 and the lowest payoffs when the target-drop-rate is 0.07; and (b) most Nash equilibria occur when the target-drop-rate is 0.03 or when the max-number-of-sessions is 5. These encouraging results, though still preliminary, indicate that the set of attack strategy characteristics (inferences) we have identified (computed) about DDoS attackers should hold in a large network and can be fairly consistent with the IOS of real-world DDoS attackers on the Internet.

7. CONCLUSION AND FUTURE WORK

In this paper, we presented a general incentive-based method to model AIOS and a game-theoretic approach to infer AIOS. On one hand, we found that the concept of incentives can unify a large variety of attacker intents, and that the concept of utilities can integrate incentives and costs in such a way that attacker objectives can be practically modeled. On the other hand, we developed a game-theoretic AIOS formalization that captures the inherent interdependency between
AIOS and defender objectives and strategies in such a way that AIOS can be automatically inferred. Moreover, we developed a taxonomy of game-theoretic AIOS inference models. Finally, we used a specific case study of DDoS attack and defense to show how attack strategies can be inferred in real-world attack–defense scenarios. Nevertheless, our work on inferring AIOS is still preliminary, and several important research issues need to be explored further to obtain better AIOS inferences. In particular, in future work (a) we will investigate model-level inference accuracy analysis and sensitivity analysis that can model and predict the influence of incomplete information, asymmetric information (between the attacker and the system), and uncertainty; (b) we will investigate approximate algorithms that can make optimal, quantitative trade-offs between inference precision and efficiency during Nash equilibria estimation; and (c) we will investigate AIOS inference models beyond Bayesian games, that is, the other types of AIOS inference models identified by our taxonomy.
APPENDIX: A SIMPLE REVIEW OF GAME THEORY

The normal-form representation of an n-player game specifies the players' strategy spaces S1, . . . , Sn and their payoff functions u1, . . . , un. We denote this game by G = {S1, . . . , Sn; u1, . . . , un}. In this game, the strategies (s1*, . . . , sn*) are a Nash equilibrium if, for each player i, si* is (at least tied for) player i's best response to the strategies specified for the n − 1 other players, (s1*, . . . , si−1*, si+1*, . . . , sn*); that is, si* solves max over si in Si of ui(s1*, . . . , si−1*, si, si+1*, . . . , sn*). A pure strategy for player i is an element of the set Si. Suppose Si = {si1, . . . , siK}; then a mixed strategy for player i is a probability distribution pi = (pi1, . . . , piK), where 0 ≤ pik ≤ 1 for k = 1, . . . , K and pi1 + · · · + piK = 1. Although a game does not always have a pure-strategy Nash equilibrium, Nash [1950] proved that every finite game has at least one mixed-strategy Nash equilibrium. Static Bayesian game theory is discussed in Section 5.2; note that a Bayesian Nash equilibrium can be defined in a way very similar to a normal Nash equilibrium.

Given a stage game G (e.g., a static Bayesian game), let G(T) denote the finitely repeated game in which G is played T times, with the outcomes of all preceding plays observed before the next play begins. The payoffs for G(T) are simply the sum of the payoffs from the T stage games. If the stage game G has a unique Nash equilibrium then, for any finite T, the repeated game G(T) has a unique subgame-perfect outcome: the Nash equilibrium of G is played in every stage. Moreover, in a finitely repeated game G(T), a player's multistage strategy specifies the action the player will take in each stage, for each possible history of play through the previous stage. In G(T), a subgame beginning at stage t + 1 is the repeated game in which G is played T − t times, denoted G(T − t). There are many subgames that begin at stage t + 1, one for each possible history of play through stage t. A Nash equilibrium is subgame-perfect if the players' strategies constitute a Nash equilibrium in every subgame.
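For a concrete reading of the Nash condition, the sketch below checks whether a profile is a pure-strategy Nash equilibrium of a finite normal-form game; it is a textbook exercise of ours, not code from the paper, and uses the prisoner's dilemma as an illustrative payoff matrix.

```python
# Checking the Nash condition: a profile s* is an equilibrium iff no
# player can gain by deviating unilaterally.
def is_nash(strategy_spaces, utilities, profile):
    """strategy_spaces: list of strategy lists S_1..S_n;
    utilities: list of functions u_i(profile) -> payoff;
    profile: candidate tuple (s_1*, ..., s_n*)."""
    for i, S_i in enumerate(strategy_spaces):
        for s in S_i:
            deviant = profile[:i] + (s,) + profile[i + 1:]
            if utilities[i](deviant) > utilities[i](profile):
                return False
    return True

# Prisoner's dilemma: (D, D) is the unique pure-strategy equilibrium.
payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
u = [lambda p, i=i: payoff[p][i] for i in range(2)]
print(is_nash([["C", "D"], ["C", "D"]], u, ("D", "D")))  # True
print(is_nash([["C", "D"], ["C", "D"]], u, ("C", "C")))  # False
```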
Besides repeated games, there are several types of more general dynamic games. For example, in multistage dynamic games, a different game can be played in each stage instead of playing the same game repeatedly. In dynamic observe-then-act games, players take actions in turn instead of acting simultaneously. Nevertheless, the definitions of strategies, subgames, and subgame-perfect Nash equilibria are very similar to those in repeated games.

Finally, a standard formal definition of stochastic games is as follows. An n-player stochastic game Γ is a tuple (S, A1, . . . , An, r1, . . . , rn, p), where S is the state space, Ai is the action space of player i for i = 1, . . . , n, ri : S × A1 × · · · × An → R is the payoff function for player i, and p : S × A1 × · · · × An → Δ is the transition probability map, where Δ is the set of probability distributions over the state space S [Thusijsman 1992]. In Γ, a strategy π = (π0, . . . , πt, . . .) is defined over the entire course of the game, where πt is called the decision rule at time t. A decision rule is a function πt : Ht → σ(Ai), where Ht is the space of possible histories at time t, with each ht in Ht of the form ht = (s0, a0^1, . . . , a0^n, . . . , st−1, at−1^1, . . . , at−1^n, st), and σ(Ai) is the space of probability distributions over agent i's actions. π is called a stationary strategy if πt = π for all t, that is, if the decision rule is independent of time; otherwise, π is called a behavior strategy. In Γ, a Nash equilibrium point is a tuple of n strategies (π*^1, . . . , π*^n) such that, for all s in S and i = 1, . . . , n, vi(s, π*^1, . . . , π*^n) ≥ vi(s, π*^1, . . . , π*^(i−1), π^i, π*^(i+1), . . . , π*^n) for all π^i in Π^i, where Π^i is the set of strategies available to agent i.

REFERENCES

BROWNE, H., ARBAUGH, W. A., MCHUGH, J., AND FITHEN, W. L. 2001. A trend analysis of exploitations. In Proceedings of the 2001 IEEE Symposium on Security and Privacy. 214–229.
BROWNE, R. 2000. C4I defensive infrastructure for survivability against multi-mode attacks. In Proceedings of 21st Century Military Communications—Architectures and Technologies for Information Superiority.
BURKE, D. 1999. Towards a Game Theory Model of Information Warfare. Master's thesis, Air Force Institute of Technology.
CLARKE, E. H. 1971. Multipart pricing of public goods. Public Choice 11, 17–33.
CONITZER, V. AND SANDHOLM, T. 2002. Complexity Results about Nash Equilibria. Tech. rep. CMU-CS-02-135, Carnegie Mellon University.
CUPPENS, F. AND MIEGE, A. 2002. Alert correlation in a cooperative intrusion detection framework. In Proceedings of the 2002 IEEE Symposium on Security and Privacy.
DEBAR, H. AND WESPI, A. 2001. Aggregation and correlation of intrusion detection alerts. In Proceedings of the 2001 International Symposium on Recent Advances in Intrusion Detection. 85–103.
FEIGENBAUM, J., PAPADIMITRIOU, C., SAMI, R., AND SHENKER, S. 2002. A BGP-based mechanism for lowest-cost routing. In Proceedings of the 2002 ACM Symposium on Principles of Distributed Computing.
GORDON, L. A. AND LOEB, M. P. 2001. Using information security as a response to competitor analysis systems. Commun. ACM 44, 9.
GROVES, T. 1973. Incentives in teams. Econometrica 41, 617–663.
HESPANHA, J. P. AND BOHACEK, S. 2001. Preliminary results in routing games. In Proceedings of the 2001 American Control Conference.
IOANNIDIS, J. AND BELLOVIN, S. M. 2002. Implementing pushback: Router-based defense against DDoS attacks. In Proceedings of the 2002 Annual Network and Distributed System Security Symposium.
KOLLER, D. AND MILCH, B. 2001. Multi-agent influence diagrams for representing and solving games. In Proceedings of the 2001 International Joint Conference on Artificial Intelligence.
LANDWEHR, C. E., BULL, A. R., MCDERMOTT, J. P., AND CHOI, W. S. 1994. A taxonomy of computer program security flaws. ACM Comput. Surv. 26, 3.
LIU, P., JAJODIA, S., AND MCCOLLUM, C. D. 2000. Intrusion confinement by isolation in information systems. J. Comput. Security 8, 4, 243–279.
LUNT, T. F. 1993. A survey of intrusion detection techniques. Computers & Security 12, 4 (June), 405–418.
LYE, K. AND WING, J. M. 2002. Game strategies in network security. In Proceedings of the 2002 IEEE Computer Security Foundations Workshop.
MALKHI, D. AND REITER, M. K. 2000. Secure execution of Java applets using a remote playground. IEEE Trans. Software Eng. 26, 12.
MAS-COLELL, A., WHINSTON, M. D., AND GREEN, J. R. 1995. Microeconomic Theory. Oxford University Press, Oxford, UK.
MCHUGH, J. 2001. Intrusion and intrusion detection. Int. J. Inf. Security 1, 14–35.
MEDINA, A., LAKHINA, A., MATTA, I., AND BYERS, J. 2001. An approach to universal topology generation. In Proceedings of the International Workshop on Modeling, Analysis and Simulation of Computer and Telecommunications Systems.
MESTERTON-GIBBONS, M. 1992. An Introduction to Game-Theoretic Modeling. Addison-Wesley, Reading, MA.
MUKHERJEE, B., HEBERLEIN, L. T., AND LEVITT, K. N. 1994. Network intrusion detection. IEEE Network, 26–41.
NASH, J. 1950. Equilibrium points in n-person games. In Proceedings of the National Academy of Sciences. 48–49.
NING, P., CUI, Y., AND REEVES, D. S. 2002. Constructing attack scenarios through correlation of intrusion alerts. In Proceedings of the 2002 ACM Conference on Computer and Communications Security.
NISAN, N. AND RONEN, A. 2001. Algorithmic mechanism design. Games and Economic Behavior 35.
NS2. The network simulator ns-2. http://www.isi.edu/nsnam/ns/.
SYVERSON, P. F. 1997. A different look at secure distributed computation. In Proceedings of the 1997 IEEE Computer Security Foundations Workshop.
THUSIJSMAN, F. 1992. Optimality and Equilibria in Stochastic Games. Centrum voor Wiskunde en Informatica, Amsterdam.
VICKREY, W. 1961. Counterspeculation, auctions, and competitive sealed tenders. J. Finance 16, 8–37.
WANG, X. AND REITER, M. 2003. Defending against denial-of-service attacks with puzzle auctions. In Proceedings of the 2003 IEEE Symposium on Security and Privacy.
WELLMAN, M. P. AND WALSH, W. E. 2001. Auction protocols for decentralized scheduling. Games and Economic Behavior 35.
XU, J. AND LEE, W. 2003. Sustaining availability of Web services under distributed denial of service attacks. IEEE Trans. Comput. 52, 4 (Feb.), 195–208.
ZOU, C., GONG, W., AND TOWSLEY, D. 2002. Code Red worm propagation modeling and analysis. In Proceedings of the 2002 ACM Conference on Computer and Communications Security.

Received May 2004; revised September 2004; accepted September 2004
Modeling and Assessing Inference Exposure in Encrypted Databases
ALBERTO CESELLI, ERNESTO DAMIANI, and SABRINA DE CAPITANI DI VIMERCATI
Università di Milano
SUSHIL JAJODIA
George Mason University
STEFANO PARABOSCHI
Università di Bergamo
and
PIERANGELA SAMARATI
Università di Milano
The scope and character of today’s computing environments are progressively shifting from traditional, one-on-one client-server interaction to the new cooperative paradigm. It then becomes of primary importance to provide means of protecting the secrecy of the information, while guaranteeing its availability to legitimate clients. Operating online querying services securely on open networks is very difficult; therefore many enterprises outsource their data center operations to external application service providers. A promising direction toward prevention of unauthorized access to outsourced data is represented by encryption. However, data encryption is often supported for the sole purpose of protecting the data in storage while allowing access to plaintext values by the server, which decrypts data for query execution. In this paper, we present a simple yet robust single-server solution for remote querying of encrypted databases on external servers. Our approach is based on the use of indexing information attached to the encrypted database, which can be used by the server to select the data to be
This paper extends the previous work by the authors that appeared under the title "Balancing Confidentiality and Efficiency in Untrusted Relational DBMSs" in Proc. of the 10th ACM Conference on Computer and Communications Security (CCS 2003), Oct. 2003, Washington, DC, USA. This work was supported in part by the European Union within the PRIME Project in the FP6/IST Programme under contract IST-2002-507591 and by the Italian MIUR within the KIWI and MAPS projects.
Authors' addresses: Alberto Ceselli, Ernesto Damiani, Sabrina De Capitani di Vimercati, and Pierangela Samarati, Dipartimento di Tecnologie dell'Informazione, Università di Milano, Via Bramante, 65, 26013 Crema, Italy; email: {ceselli,damiani,decapita,samarati}@dti.unimi.it; Sushil Jajodia, George Mason University, Fairfax, VA 22030-4444; email: [email protected]; Stefano Paraboschi, Dipartimento di Ingegneria Gestionale e dell'Informazione, Università di Bergamo, Viale Marconi, 5, 24044 Dalmine, Italy; email: [email protected].
returned in response to a query without the need of accessing the plaintext database content. Our indexes balance the trade-off between efficiency requirements in query execution and protection requirements due to possible inference attacks exploiting indexing information. We investigate quantitative measures to model inference exposure and provide some related experimental results. Categories and Subject Descriptors: H.2.4 [Database Management]: Systems—Relational databases; H.2.7 [Database Management]: Database Administration—Security, integrity, and protection; H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing—Indexing methods; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Query formulation General Terms: Security, Design Additional Key Words and Phrases: Cryptography, database service, indexing, inference
1. INTRODUCTION In most organizations, databases hold a critical concentration of sensitive information. Ensuring an adequate level of protection to databases’ content is therefore an essential part of any comprehensive security program. Database encryption [Davida et al. 1981] is a time-honored technique that introduces an additional layer to conventional network and application-level security solutions, preventing exposure of sensitive information even if the database server is compromised. Database encryption prevents unauthorized users, including intruders breaking into a network, from seeing the sensitive data in databases; similarly, it allows database administrators to perform their tasks without accessing sensitive information (e.g., sales or payroll figures) in plaintext. Furthermore, encryption protects data integrity, as possible data tampering can be recognized and data correctness restored (e.g., by means of backup copies). While much research has been done on the mutual influence of data and transmission security on organizations’ overall security strategy [Walton 2002], the influence of service outsourcing on data security has been less investigated. Conventional approaches to database encryption have the sole purpose of protecting the data in storage and assume trust in the server, which decrypts data for query execution. This assumption is less justified in the new cooperative paradigm, where multiple Web services cooperate exchanging information in order to offer a variety of applications. Effective cooperation between Web services and content providers often requires critical information to be made continuously available for online querying by other services or final users. To name but a few, telemedicine applications involve network transfers of medical data, location-based services require availability of users’ cartographic coordinates, while e-business decision support systems often need to access sensitive information such as credit ratings. Customers, partners, regulatory agencies, and even suppliers now routinely need access to information originally intended to be stored deep within companies’ information systems. Operating online querying services reliably on open networks is very difficult. For this reason, many enterprises prefer to outsource their data center operations to external application providers. Remote storage technologies (e.g., storage area networks [Ward et al. 2002]) are used to place sensitive and even critical company information at a provider’s site, on systems ACM Transactions on Information and System Security, Vol. 8, No. 1, February 2005.
Fig. 1. Overall scenario.
whose architecture is specifically designed for database publishing, and access is controlled by the provider itself. As a consequence of this trend toward data outsourcing, highly sensitive data are now stored on systems run in locations that are not under the data owner's control, such as leased space and untrusted partners' sites. Therefore, data confidentiality and even integrity can be put at risk by outsourcing data storage and management.

The requirement that the database content remain secret to the database server itself introduces several new interesting challenges. Conventional encrypted DBMSs assume trust in the DBMS, which can then decrypt data for query execution. In an outsourced environment, such an assumption is no longer applicable, as the party to which the service is outsourced cannot be granted full access to the plaintext data. Since confidentiality demands that data decryption be possible only at the client side, techniques are needed that enable servers to execute queries directly on encrypted data. A first proposal toward the solution of this problem was presented in Hacigümüş et al. [2002a], where the authors proposed storing, together with the encrypted database, additional indexing information. Such indexes can be used by the DBMS to select the data to return in response to a query. The basic idea is illustrated in Figure 1. Each plaintext query (1) is mapped onto a corresponding query (2) on the indexing information and executed in that form at the server side. The server returns the encrypted result (3), which is then decrypted at the trusted front end. If the mapping between the indexing information and the original plaintext database is not exact, an additional query (4) may need to be executed to eliminate spurious tuples that do not belong to the result set.

A major challenge in this scenario is how to compute and represent the indexing information. Two conflicting requirements challenge the solution of this problem: on one side, the indexing information should be related to the data well enough to provide an effective query execution mechanism; on the other side, the relationship between indexes and data should not open the door to inference and linking attacks that can compromise the protection granted by encryption [Denning 1982]. The indexing information provided in Hacigümüş et al. [2002a], based on using as indexes the names of sets containing value intervals, proves limited in this respect (see Section 2).

In this paper, we provide an approach to indexing encrypted data constructed with efficiency and confidentiality in mind, providing a balance between these
two requirements. A trade-off can be observed between the degree of protection that our family of techniques is able to offer and the corresponding decrease in efficiency that the use of these protection measures can produce. A general motivation of our investigation is thus an assessment of the degree of protection provided by different indexing techniques (each of which affects the efficiency of query execution in a different way). It turns out that the data protection (and its relation to efficiency) cannot be captured by simple mathematical formulae. Instead, the paper proposes a family of abstract models for solving the inference problem with algorithmic techniques. Our analysis supports a sequence of experiments showing the behavior of different indexing solutions.

The contributions of this paper can be summarized as follows. First, we propose an approach to indexing encrypted data based on direct encryption and hashing. Second, we define a suite of graph-theoretical models supporting quantitative evaluations of the inference exposure of the two approaches. Third, we present the results of a set of experiments that quantify the protection increase that hashing is able to provide.

2. RELATED WORK

Database encryption has long been proposed as a fundamental tool for providing strong security for "data at rest." Thanks to recent advances in processors' capabilities and to the development of fast encryption techniques, the notion of an encrypted database is nowadays well recognized, and several commercial products have reached the market. However, developing a sound security strategy including database encryption still involves many open issues.

Key management and security are of paramount importance in any encryption-based system and were therefore among the first issues to be investigated in the framework of database encryption [Davida et al. 1981; Hacigümüş and Mehrotra 2004]. Later, techniques have been developed for efficiently querying encrypted databases [Song et al. 2000], some of them related to parallel efforts by the text retrieval community [Klein et al. 1989] for executing hidden queries, that is, queries where only the ciphertext of the query arguments is made available to the DBMS. On the other hand, architectural research has investigated the optimal sharing of the encryption burden among secure storage, communication channels, and the application where the data originates [Jensen 2000], looking for a convenient trade-off between data security and application performance. Recently, much interest has been devoted to the secure handling of database encryption in distributed, Web-based execution scenarios, where data management is outsourced to external services [Bouganim and Pucheral 2002]. The main purpose of this line of research is to find techniques for delegating data storage and query execution to external servers while preserving efficiency.

The index of range technique proposed in Hacigümüş et al. [2002a] in the framework of a database-service-provider architecture relies on partitioning the domains of the attributes of client tables into sets of intervals. The value of each remote table attribute is stored as the index countersigning the interval to which the corresponding plain value belongs. Indexes may be ordered or not, and the intervals may be chosen so that they all have the same length, or
are associated with the same number of tuples. This representation supports efficient evaluation on the remote server of both equality and range predicates; however, it makes it awkward to manage the correspondence between intervals and the actual values present in the database. In Damiani et al. [2004], we illustrate an approach for obfuscating data that guarantees protection of the data while allowing the execution of both equality and range queries on the obfuscated data. Privacy homomorphism has also been proposed for allowing the execution of aggregation queries over encrypted data [Hacigümüş et al. 2004]. The proposed approach is based on the technique introduced by Rivest et al. [1978], according to which an encryption function E() is homomorphic if, given E(x) and E(y), one can obtain E(x θ y) for some operation θ without decrypting x and y. In this case, the server stores an encrypted table with an index for each aggregation attribute (i.e., an attribute on which an aggregate operator can be applied) obtained from the original attribute with privacy homomorphism. An operation on an aggregation attribute can then be evaluated by computing the aggregation at the server site and by decrypting the result at the client side. Other work on privacy homomorphism illustrates techniques for performing arithmetic operations (+, −, ×, ÷) on encrypted data and does not consider comparison operations [Boyens and Günter 2003; Domingo-Ferrer 1996; Domingo-Ferrer and Herrera-Joancomartí 1998]. In Agrawal et al. [2004], an order-preserving encryption schema (OPES) is presented to support equality and range queries as well as max, min, and count queries over encrypted data. The basic idea is that, given a target distribution, the plaintext values are transformed by using an order-preserving transformation in such a way that the transformed values follow the target distribution. While our technique is applicable to any kind of data and robust against different classes of attacks (i.e., known-plaintext attacks and ciphertext-only attacks), OPES is only applicable to numeric data and is secure only against ciphertext-only attacks. A distinct, though related, solution is proposed in Bouganim and Pucheral [2002], where smart cards are used for key management. On a different line of related work, we note that the protection/exposure given by hashing can resemble the generalization approach for microdata protection; correspondingly, inference attacks exploiting it can resemble the record linkage techniques examined in that context [Samarati 2001]. However, the two problems turn out to be quite different: while generalization replaces values in a given interval with a single identifier and preserves some information on plaintext values, hashing replaces uncorrelated values with a single bucket identifier. Also, it is important to note that the problem we consider differs from existing approaches protecting encrypted data, which investigated solutions for the private information retrieval problem (protecting the query search criteria, that is, the information the user is looking for) or for the problem of limiting the amount of data that users can acquire.

3. DATA ORGANIZATION

We consider a relational DBMS where data are organized in tables; the underlined attribute represents the key of the table (e.g., see table ACCOUNTS in
Fig. 2. A plaintext relation (a) and the corresponding encrypted relations with direct encryption (b) and hashing (c).
Figure 2). In principle, different granularity choices are possible for database encryption, such as encrypting at the level of whole tables, columns (i.e., attributes), rows (i.e., tuples), and cells (i.e., elements). Encrypting at the level of tables (resp. columns) implies that the whole table (resp. column) involved in a query should always be returned, thus providing no means for selecting the data of interest and leaving to the client the burden of executing queries on a possibly huge amount of data. On the other hand, supporting encryption at the finest granularity of single cells is also inapplicable, as it would severely affect performance: the client would be required to execute a potentially very large number of decrypt operations to interpret the results of queries [Hacigümüş et al. 2002b]. In the same line as Hacigümüş et al. [2002a], we assume encryption to be performed at the tuple level. To provide the server with the ability to select a set of tuples to be returned in response to a query, we associate with each encrypted tuple a number of indexing attributes. An index can be associated with each attribute in the original table on which conditions need to be evaluated in the execution of queries. Each plaintext table is represented as a table with an attribute for the encrypted tuple and as many attributes as indexes to be supported. More specifically, each plaintext tuple t(A_1, . . . , A_n) is mapped onto a tuple t′(T_k, I_1, . . . , I_m), where m ≤ n, t′[T_k] = E_k(t), with E_k() denoting an invertible encryption function over key k, and each I_i corresponds to the index over some A_j. Figure 2 illustrates an example of a plaintext table ACCOUNTS and the corresponding encrypted/indexed table ENC_ACCOUNTS1 (in the remainder of the paper, for the sake of simplicity, we shall designate this table format with the term encrypted), where Enc_tuple contains
the encrypted tuples, while I_A, I_C, and I_B are indexes over attributes Account, Customer, and Balance, respectively. For the sake of readability, we use easy-to-understand names for the attributes and table names in the encrypted schema, and Greek letters as index values. Of course, in a real example, attribute and table names would be obfuscated, and actual values for indexes would be the results of an invertible encryption function and would then look like the ones reported for the encrypted tuples in Figure 2. Let us now discuss how to represent indexing information. An approach providing the same fine-grained selection capability as using plaintext values is to use their corresponding encrypted values as indexes. Then, for each indexed cell, the outcome of an invertible encryption function over the cell value is used, that is, t[I_i] = E_k(t[A_i]). Query execution is simple: each plaintext query can be translated into a corresponding query on encrypted data by simply applying the encryption function to the values mentioned in the query. For instance, with reference to the tables in Figure 2, query "SELECT * FROM ACCOUNTS WHERE Customer = Alice" would be translated into "SELECT Enc_tuple FROM ENC_ACCOUNTS1 WHERE I_C = α." This solution has the advantage of preserving plaintext distinguishability, together with precision and efficiency in query execution, as all the tuples returned belong to the query set of the original query. In particular, the solution is convenient for queries involving equality constraints over the attributes. Also, since equality predicates are almost always used in the computation of joins, a join between two tables that use the same encryption function on the join attribute can be computed precisely. As a drawback, however, in this approach the encrypted values reproduce exactly the distribution of plaintext values with respect to cardinality (i.e., the number of distinct values of the attribute) and frequencies. This opens the door to frequency-based attacks (see next section). An alternative approach that counters these attacks is to use as index the result of a secure hash function over the attribute values rather than straightforwardly encrypting the attributes; this way, the distribution of attribute values can be flattened by the hash function. A flexible characteristic of a hash function is the cardinality of its codomain B, which allows us to adapt it to the granularity of the represented data. When B is small compared with the cardinality of the attribute, the hash function can be interpreted as a mechanism that distributes tuples in B buckets; a good hash function (and a secure hash has to be good) distributes the values uniformly across the buckets. For instance, the ACCOUNTS table in Figure 2 can be indexed by considering three buckets (α, β, δ) for I_C and three buckets (µ, κ, θ) for I_B. The encrypted relation ENC_ACCOUNTS2 in Figure 2 can then be obtained when Alice and Chris are both mapped onto α, Donna and Elvis are both mapped onto β, and Bob and Fred are both mapped onto δ. Also, 200 and 400 are both mapped onto κ, 100 is mapped onto µ, and 300 is mapped onto θ. With respect to direct encryption, hash-based indexing provides more protection, as different plaintext values are mapped onto the same index. Using attribute hashes in remote tables permits an efficient evaluation of equality predicates within the remote server.
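To make the two indexing options concrete, here is a minimal Python sketch; it is our illustration rather than the authors' implementation, and the function names, the fixed bucket count, and the use of a truncated SHA-256 digest as a stand-in for the invertible encryption function E_k() are all assumptions made for the example.

```python
import hashlib

def Ek(value, key):
    # Stand-in for the invertible encryption function E_k(); a real system
    # would use a proper cipher (e.g., AES), not a truncated hash digest.
    return hashlib.sha256((key + repr(value)).encode()).hexdigest()[:8]

def hash_index(value, buckets):
    # Secure-hash index: several distinct plaintext values collide on the
    # same bucket, flattening the value distribution.
    digest = hashlib.sha256(repr(value).encode()).digest()
    return "bucket_%d" % (digest[0] % buckets)

def encrypt_table(rows, indexed_attrs, key, hashed=False, buckets=3):
    """Map each plaintext tuple t(A1,...,An) onto t'(Tk, I1,...,Im)."""
    enc = []
    for t in rows:
        row = {"Enc_tuple": Ek(tuple(sorted(t.items())), key)}  # E_k(t)
        for a in indexed_attrs:
            row["I_" + a] = hash_index(t[a], buckets) if hashed else Ek(t[a], key)
        enc.append(row)
    return enc

accounts = [{"Customer": "Alice", "Balance": 100},
            {"Customer": "Alice", "Balance": 200},
            {"Customer": "Bob",   "Balance": 300}]
enc = encrypt_table(accounts, ["Customer", "Balance"], key="k")

# "SELECT * FROM ACCOUNTS WHERE Customer = Alice" becomes an equality
# test on the index attribute I_Customer:
alpha = Ek("Alice", "k")
hits = [r["Enc_tuple"] for r in enc if r["I_Customer"] == alpha]
```

With hashed=True the same lookup returns a whole bucket, and the trusted front end must discard the spurious tuples after decryption, as described in the following paragraphs.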
If the same hash function is used to compute the index values of two attributes of different tables on which an equality predicate must be evaluated in the context of a join query, the join query itself
can be efficiently computed at the remote server simply by combining all the pairs of tuples characterized by the same hash value. When direct encryption is used for indexing, the result returned by a query on the encrypted table is exactly the query set of the original query. The only task left for the front end is then decryption. By contrast, when hashing is used, the results will often include spurious tuples (all those belonging to the same bucket of the index) that will have to be removed by the front end receiving them. In this case, the additional burden on the front end consists in purging from the result returned by the remote server all the pairs of tuples that, once brought back to plaintext form, do not satisfy the equality predicate on the join attribute. Intuitively, every query Q of the front end corresponds to a query Q′ to be passed on to the server-side DBMS for execution over the encrypted database and a query Q′′ to be executed at the front end on the results of Q′. As an example, consider the encrypted table ENC_ACCOUNTS2 in Figure 2 and the user query Q = "SELECT Balance FROM ACCOUNTS WHERE Customer = Bob." The query is translated as Q′ = "SELECT Enc_tuple FROM ENC_ACCOUNTS2 WHERE I_C = δ" for execution by the server-side DBMS, which returns the third and seventh encrypted tuples. The trusted front end then decrypts the result, obtaining the third and seventh tuples of the original table ACCOUNTS, and re-executes on them the original query, eliminating the seventh one, whose presence was due to index collision (we refer the reader to Damiani et al. [2003] for an illustration of the query translation and query evaluation processes). Incidentally, we observe that the trusted front end is a relatively complex piece of software, as it has to integrate most of the functionalities of a relational engine.

4. INFERENCE EXPOSURE

Being closely related to plaintext data, indexing information could open the door to inferences that exploit data analysis techniques to reconstruct the database content and/or break the indexing code. It is important to be able to evaluate quantitatively the level of exposure associated with the publication of certain indexes and therefore to determine the proper balance between index efficiency and protection. There are different ways in which inference attacks could be modeled. In this paper, we consider two cases that differ in the assumption about the attacker's prior knowledge. The two attack models have in common the fact that the attacker has complete access to the encrypted relations. In the first case, which we call the Freq+DBk scenario, we assume the attacker is aware of the distribution of plaintext values (Freq) in the original database in addition to knowing the encrypted database (DBk). This knowledge can be exact (e.g., in a database storing accounting information, the account holder list can be fully known) or approximate (e.g., the ZIP codes of the geographical areas of the account holders can be estimated based on population data). For the sake of simplicity, in the following we will assume exact knowledge (which represents the worst-case scenario). In this scenario, there are two possible
Fig. 3. Synoptic guide to the exposure evaluation on the four attack scenarios.
inferences that the attacker can draw: (i) the plaintext content of the database, that is, determining the existence of a certain tuple (or association of values) in the original database, and/or (ii) the indexing function, that is, determining the correspondence between plaintext values and indexes. In the second case, which we call the DB+DBk scenario, we assume the attacker has both the encrypted (DBk) and the plaintext database (DB). In this case the attacker aims at breaking the indexing function, thus establishing the correlation between plaintext data and the corresponding index values. The hosting server will then be able to decode additional encrypted tuples that are inserted into the database. This attack may, for example, occur when the database owner switches from a remote plaintext database to the use of encryption. The combination of the two attack models and the two encryption solutions gives us the four different scenarios illustrated in Figure 3, which will be investigated in the subsequent sections, and for which we will propose abstract models and algorithmic solutions, using them in the experiments to evaluate exposure to inference.

5. FREQ+DBk WITH DIRECT ENCRYPTION

To illustrate this scenario, let us consider the example in Figure 2. The attacker knows the encrypted table ENC_ACCOUNTS1; also, she knows that all values for attribute Account have only one occurrence, and she knows the values (and their occurrences) appearing independently in attributes Customer and Balance, namely,

Customer = {Alice, Alice, Bob, Chris, Donna, Elvis, Fred}
Balance = {100, 200, 200, 200, 300, 300, 400}.

Although the attacker does not know which index corresponds to which plaintext attribute, she can determine the actual correspondence by comparing their occurrence profiles. In particular, she can determine that I_A, I_C, and I_B correspond to attributes Account, Customer, and Balance, respectively. The attacker can then infer that index κ represents value 200 and index α represents value Alice (indexing inference). She can also infer that the plaintext table contains a tuple associating values Alice and 200 (association inference). The other occurrence of the index value corresponding to Alice (i.e., α) is associated with a balance other than 200. Since there are only three other possible values, the probability of guessing it right is 1/3. In other terms, the probability of each association depends on the combination of occurrences of its values. Intuitively, the basic protection from inference in the encrypted table is that values with the same number of occurrences are indistinguishable to the attacker. For instance, all customers but Alice are indistinguishable from one another, as are all amounts but 200. By contrast, Alice and 200 stand out,
being, respectively, the only customer appearing twice and the only balance appearing three times. The exposure of an encrypted relation to indexing inference can then be thought of in terms of an equivalence relation where indexes (and plaintext values) with the same number of occurrences belong to the same equivalence class. For instance, denoting each equivalence class with a dot notation showing the attribute name and its number of occurrences (e.g., class A.1 contains all the values of attribute A that occur once), we obtain

A.1 = {π, ϖ, ξ, ϱ, ς, Γ, τ} = {Acc1, Acc2, Acc3, Acc4, Acc5, Acc6, Acc7}
C.1 = {β, γ, δ, ε, φ} = {Bob, Chris, Donna, Elvis, Fred}
C.2 = {α} = {Alice}
B.1 = {µ, θ} = {100, 400}
B.2 = {η} = {300}
B.3 = {κ} = {200}.

The quotient of the encrypted table with respect to the equivalence relation defined above, together with the corresponding table IC of inverse cardinalities, is the following:

QUOTIENT TABLE            IC TABLE
qt_A   qt_C   qt_B        ic_A   ic_C   ic_B
A.1    C.2    B.1         1/7    1      1/2
A.1    C.2    B.3         1/7    1      1
A.1    C.1    B.2         1/7    1/5    1
A.1    C.1    B.3         1/7    1/5    1
A.1    C.1    B.1         1/7    1/5    1/2
A.1    C.1    B.3         1/7    1/5    1
A.1    C.1    B.2         1/7    1/5    1
The exposure of the encrypted table to inference attacks can then be evaluated by looking at the distinguishable characteristics in the quotient table. In particular, the association ⟨Alice, 200⟩ (and its correspondence ⟨α, κ⟩) can be spotted with certainty, being composed of two singleton equivalence classes (C.2 and B.3). For the other values, probabilistic considerations can be made by looking at the IC table, that is, the table of the inverses of the cardinalities of the equivalence classes. In fact, the probability of disclosing a specific association is the product of the inverses of the cardinalities. The exposure of the whole relation (or of a projection of it) can then be estimated as the average exposure of each tuple in it. We chose the average because it is a simple operator that best represents the intuition on the degree of exposure that characterizes a database; an alternative would be the use of the maximum exposure of a database tuple as the overall exposure index, but this choice would behave poorly and would often produce high (= 1) exposure values, in most cases due to the presence, among a multitude of unrecognizable values, of a single value characterized by a unique cardinality. In this paper, we always use the average to compute the overall database exposure from the exposure of single values. Formally, we can write the exposure coefficient E associated with an encrypted table with inverse cardinality table IC as

$$E = \frac{1}{n} \sum_{i=1}^{n} \prod_{j=1}^{k} IC_{i,j}. \qquad (1)$$
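A direct transcription of Eq. (1) in Python may help fix the ideas; the data layout and function names below are our own assumptions, not part of the paper.

```python
from collections import Counter
from math import prod

def exposure_coefficient(table, attrs):
    """Eq. (1): average over the n tuples of the product, over the k
    indexed columns, of the inverse cardinalities IC[i][j]."""
    ic = {}
    for a in attrs:
        occurrences = Counter(row[a] for row in table)   # count(v) per value
        class_size = Counter(occurrences.values())       # values sharing a count
        # Values with the same number of occurrences are indistinguishable:
        # IC of a value is the inverse of its equivalence-class cardinality.
        ic[a] = {v: 1.0 / class_size[c] for v, c in occurrences.items()}
    n = len(table)
    return sum(prod(ic[a][row[a]] for a in attrs) for row in table) / n

rows = [{"Customer": c, "Balance": b} for c, b in
        [("Alice", 100), ("Alice", 200), ("Bob", 300), ("Chris", 200),
         ("Donna", 200), ("Elvis", 300), ("Fred", 400)]]
print(exposure_coefficient(rows, ["Customer", "Balance"]))   # 12/35
```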
Fig. 4. Problem 1 for attribute Customer and table ACCOUNTS (a): the correct solution (b), and an incorrect solution (c).
Here, i ranges over the tuples while j ranges over the columns. With reference to our example, we have a value of $E = \frac{1}{7} \cdot \frac{12}{35}$ for the protection of the whole relation, and a value of $\frac{1}{7} \cdot \frac{12}{5}$ for the pair ⟨Customer, Balance⟩. Note how a long-tailed distribution of values (i.e., a distribution composed of many values having few occurrences) can decrease the exposure to association attacks. This reflects the fact that while the attacker has information on many values, they all fall into the same equivalence class and are therefore indistinguishable from one another. Since each index value corresponds to a single plaintext one, the exposure computed above may be regarded as a lower bound on the vulnerability to association inference.

6. FREQ+DBk WITH HASHING

As we already noticed, collisions due to hashing increase protection from inference. For a simple query using a hash value for selection, the average increase in the result size is equal to cf (the collision factor), that is, the number of distinct attribute values that on average collide on the same hash value. If the query uses the index to evaluate a join between two tables, the increase in the query size would be $cf^2$. For the sake of simplicity, in the following we consider examples related to attributes taken individually, that is, we focus on breaking (or reducing the uncertainty on) the indexing function. Association inference can then be modeled as the combination of information obtained from individual attributes. Breaking the indexing function can be modeled as the problem of finding a mapping λ from the plaintext values A to the indexing values H that satisfies the constraints given by the attacker's prior knowledge, which is represented by the occurrences of each plaintext value and of each hashed one. In the following, given any value v in a set of possible values V (either plaintext values in the original table or hash values in the encrypted table), we use count(v) to denote the number of occurrences of v in the corresponding table. For instance, with reference to our example, count(Alice) = 2 and count(α) = 3. Function λ can be represented graphically by a matrix M (see Figure 4(a)), with plaintext values as rows and hash values as columns, where M[i, j] = 1 if λ(i) = j, and M[i, j] = 0 otherwise. For instance, the table in Figure 4(b) illustrates the mapping for attribute Customer in our example. The bold numbers
appearing at the end of each row i and column j represent the attacker's knowledge, that is, count(i) and count(j), respectively. Finding a mapping satisfying the cardinality constraints on the plaintext/hash values amounts to finding the different ways to fill this matrix subject to the constraints. For instance, Figures 4(b) and (c) illustrate two possible solutions for the table in Figure 4(a). Note that Figure 4(b) is the correct mapping, which is, however, indistinguishable, from the attacker's point of view, from the incorrect mapping in Figure 4(c). This case can be stated as finding solutions to the following problem.

Problem 1. Let A and H be the sets of plaintext values and hash values, respectively. Find all the solutions M such that
(1) $\forall j \in H: \sum_{i \in A} M[i,j] \cdot count(i) = count(j)$;
(2) $\forall i \in A: \sum_{j \in H} M[i,j] = 1$;
(3) $\forall i \in A, \forall j \in H: M[i,j] \in \{0, 1\}$.

The first constraint states that the number of plaintext values mapped to each hash value must coincide with the number of occurrences of that hash value. The second and third constraints trivially state that each plaintext value is mapped to exactly one hash value. Note that, since the attacker has to determine the mapping, a first evaluation of the strength of the encryption could be done by measuring how many solutions Problem 1 has. Intuitively, if the problem has exactly one solution, the encryption function is completely exposed to inference. Counting the number of solutions is, however, not sufficient, as different values can be exposed in different ways (e.g., if all solutions have the same mapping for a specific value v, that value is completely exposed). For this reason, it is important to explicitly enumerate the solutions, rather than simply counting them. Problem 1 is a well-known NP-hard problem addressed in the literature as the multiple subset sum problem [Caprara et al. 2000]. A few algorithms have been proposed, but only for the optimization version of the problem [Caprara et al. 2003]. Evaluating the inference exposure then requires enumerating all the possible solutions to Problem 1. Fortunately, we can reduce the number of solutions to be enumerated by exploiting indistinguishable characteristics of both plaintext and hash values.

6.1 Reducing the Problem by Exploiting Indistinguishability of Plaintext Values

As already noticed in the previous section, from the point of view of the attacker, plaintext values with the same cardinality are indistinguishable. Therefore, each instance of Problem 1 has several solutions that differ only in the order in which plaintext elements with the same number of occurrences are considered. If two solutions are in such a relationship, we say that they are symmetric. The number of symmetric solutions grows combinatorially with the number of plaintext values. For instance, suppose that there are n plaintext values v1, . . . , vn with the same number of occurrences in the original table and k possible hash
Fig. 5. Problem 2 for equivalence classes of attribute Customer in table ACCOUNTS (a): the correct solution (b), and an incorrect solution (c).
values h1, . . . , hk for them. If there exists a solution to Problem 1 mapping to each hi a number ni of values in v1, . . . , vn, then each partition of the set of the n plaintext values into k subsets of cardinality n1, . . . , nk also represents a solution to Problem 1. The number of these partitions can be expressed as a multinomial coefficient $\binom{n}{n_1,\ldots,n_k}$. For instance, in the correct mapping, the five values of equivalence class C.1 are partitioned into a subset of one value and two subsets of two values. Hence, the number of symmetric solutions is $\binom{5}{1,2,2} = \frac{5!}{1!\,2!\,2!} = 30$. Our algorithm exploits this symmetry of the solutions by enumerating only one solution in each such symmetry class. This corresponds to stating the problem with reference to equivalence classes of plaintext values (as in the previous section), grouping values with the same number of occurrences. In the following, we therefore consider a variation of the problem where matrix M has as rows the equivalence classes of the plaintext values. For instance, for attribute Customer of our example, which we abbreviate as C in the following, there are two equivalence classes:

C.1 = {Bob, Chris, Donna, Elvis, Fred};
C.2 = {Alice}

and the matrix expressing the constraints becomes as illustrated in Figure 5(a). Note that the number in bold at the end of each row i now corresponds to the number of elements in equivalence class i, which we denote by |i|. Also, we denote with count(i) the (equal) number of occurrences of the elements in i. For instance, for class C.1, |C.1| = 5 and count(C.1) = 1. The problem of finding a solution, with reference to equivalence classes, is stated as follows:

Problem 2. Let X and H be the set of equivalence classes of plaintext values and the set of hash values, respectively. Find all the solutions M such that
(1) $\forall j \in H: \sum_{i \in X} M[i,j] \cdot count(i) = count(j)$;
(2) $\forall i \in X: \sum_{j \in H} M[i,j] = |i|$;
(3) $\forall i \in X, \forall j \in H: M[i,j] \in \mathbb{Z}_{\geq 0}$.

The first constraint states that the sum of the occurrences associated with elements of the equivalence classes mapped to a hash value must coincide with the number of occurrences of that hash value. The second and third constraints state that all plaintext values in each equivalence class are mapped to some hash value.
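For intuition, the following Python sketch exhaustively enumerates the solutions of Problem 1 on a toy instance mirroring the running example. It is our own brute-force illustration, exponential in the number of plaintext values, and not the Pisinger-based enumeration used by the authors.

```python
from itertools import product

def enumerate_solutions(plain_counts, hash_counts):
    """Yield every mapping of each plaintext value to one hash value such
    that the occurrence counts add up (constraints (1)-(3) of Problem 1)."""
    values, hashes = list(plain_counts), list(hash_counts)
    for assignment in product(hashes, repeat=len(values)):
        load = dict.fromkeys(hashes, 0)
        for v, h in zip(values, assignment):
            load[h] += plain_counts[v]
        if load == hash_counts:
            yield dict(zip(values, assignment))

plain = {"Alice": 2, "Bob": 1, "Chris": 1, "Donna": 1, "Elvis": 1, "Fred": 1}
hashed = {"alpha": 3, "beta": 2, "delta": 2}
solutions = list(enumerate_solutions(plain, hashed))
print(len(solutions))   # many of these solutions differ only by symmetry
```

Collapsing the symmetric solutions, as Problems 2 and 3 do, is what makes the enumeration tractable in practice.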
Fig. 6. Problem 3 for equivalence classes of attribute Customer in table ACCOUNTS and equivalence classes of hash values (a): the correct solution (b), and an incorrect solution (c).
6.2 Reducing the Problem by Exploiting Indistinguishability of Hash Values

Following an approach similar to the one used for the indistinguishability of plaintext values, we can exploit symmetry also among hash values. In particular, we avoid computing all the different combinations that would map different plaintext values onto different hash values that are indistinguishable because they have the same number of occurrences (e.g., β and δ in our example). Therefore, instead of considering individual hash values, we collapse together, in a single equivalence class, hash values with the same number of occurrences. For our I_C attribute, we have

I_C.2 = {β, δ}
I_C.3 = {α}.

Intuitively, with reference to the matrix, we are now collapsing columns together. Note, however, a difference here: this time we cannot simply collapse multiple columns and consider a single value for them, as this would correspond to a single hash value with a number of occurrences equal to the sum of the occurrences of the collapsed columns. Instead, we need to keep track, within each single column, of the different hash values it combines. For our example, the matrix of Figure 5(a) would be transformed into the matrix of Figure 6(a). Note that each cell in the matrix is now a vector. For the sake of simplicity, with a notational abuse, given a vector j, we now use count(j) to denote the vector of the occurrences of the single elements of j. For instance, count(I_C.2) now represents the vector (2, 2). We then restate the problem as follows.

Problem 3. Let X and Y be the sets of equivalence classes of plaintext values and of hash values, respectively. Find all the solutions M such that
(1) $\forall j \in Y: \sum_{i \in X} M[i,j] \cdot count(i) = count(j)$;
(2) $\forall i \in X: \sum_{j \in Y} \sum_{k=1}^{|j|} M[i,j][k] = |i|$;
(3) $\forall i \in X, \forall j \in Y, \forall k = 1, \ldots, |j|: M[i,j][k] \in \mathbb{Z}_{\geq 0}$.

The constraints intuitively extend the constraints in Problem 2, taking into account equivalence classes of hash values. The exposure coefficient can now be computed by enumerating the different solutions to Problem 3. Figure 7 illustrates the recursive enumeration algorithm we used in our experiments. Given a statement of Problem 3, our algorithm computes the different solutions M1, . . . , Mn to the problem, starting by enumerating all the different columns that can appear in a solution by solving
Fig. 7. Algorithm for computing the different solutions M1, . . . , Mn of Problem 3.
the corresponding subset sum problem. This is done using an adaptation of Pisinger's algorithm for the subset sum problem [Pisinger 1995]. Pisinger's algorithm relies on dynamic programming: the original version runs in pseudopolynomial time (i.e., it runs in time polynomial in the dimension of the problem and the magnitudes of the data, rather than in the logarithm of their magnitudes [Garey and Johnson 1979]). However, a dominance criterion has to be removed to obtain all the feasible solutions. Each solution M_m then corresponds to a different combination of the columns found.
Once we have the different solutions, we can evaluate the exposure coefficient as illustrated in the following subsection.

6.3 Computing the Exposure Coefficient

While Problem 3 has a number of solutions, which we denote as M1, . . . , Mn, only one corresponds to the correct mapping used for indexing. In the following, we use M′ to denote the correct solution in {M1, . . . , Mn}. For each equivalence class i ∈ X and solution M, let M[i, ∗] be the part of the solution M describing the mapping of plaintext values in i (i.e., row i in matrix M). For instance, with reference to Figure 6(b), M[i, ∗] = [(1), (2, 2)]. To evaluate the exposure coefficient E(i), we first introduce the concept of the exposure coefficient for each solution M, which we denote as E(i, M[i, ∗]). We distinguish two separate cases depending on whether the solution is correct or incorrect with respect to i. Let us first consider the case where M is the (unique) correct mapping M′. Here, the uncertainty remaining for the attacker is only due to the cardinality of the equivalence classes for both the plaintext values and the hash values. To measure this uncertainty, we first define the set of potential equivalence classes Pot_eq(i) of hash values to which values of i could be mapped, that is, for which there is at least a nonzero value in a cell. Formally,

$$Pot\_eq(i) = \Big\{\, j \;\Big|\; \sum_k M[i,j][k] > 0 \,\Big\}.$$
In our example, Pot_eq(C.1) = {I_C.2, I_C.3}, while Pot_eq(C.2) = {I_C.3}. Then, for each individual value in i, the number of potential mappings Pot(i) can be computed by adding up the cardinalities of all the equivalence classes of hash values j in Pot_eq(i), that is, the ones to which it could be mapped. Formally,

$$Pot(i) = \sum_{j \in Pot\_eq(i)} |j|.$$
In this case, Pot(C.1) = |I_C.2| + |I_C.3| = 3 and Pot(C.2) = |I_C.3| = 1. We are now ready to give the formula for the exposure coefficient E(i, M[i, ∗]) with reference to the correct solution M = M′ as follows:

$$E(i, M[i,*]) = \frac{1}{|i|} \cdot \sum_{j \in Pot\_eq(i)} \frac{1}{Pot(i)} \cdot \Big( \sum_k M[i,j][k] \Big) \cdot \frac{count(i)}{count(j)},$$

where the first factor in the summand models the uncertainty on the assignment of each member of i to the hash values; the second factor is the number of members of equivalence class i assigned to hash values in equivalence class j; and the third expresses, for each of those elements, the probability of guessing it right. For instance, with respect to the correct matching, we have the following entries in the table illustrated above: M(C.1, I_C.3) = 1, M(C.1, I_C.2) = 4, M(C.2, I_C.3) = 1, M(C.2, I_C.2) = 0. A plaintext value in class C.1 can match any hash value. When it matches a hash value in class I_C.3, there is a probability 1/3 of correctly identifying the value of a tuple; when it matches
a hash value in class I_C.2, there is a probability 1/2 of guessing the right value. Therefore, the exposure coefficient for values in equivalence class C.1 is

$$E(C.1, M_1[C.1,*]) = \frac{1}{5} \cdot \Big( 1 \cdot \frac{1}{3} \cdot \frac{1}{3} + 4 \cdot \frac{1}{3} \cdot \frac{1}{2} \Big).$$

Let us now consider the case of a solution M that is not the correct one with respect to class i. Since M is not the correct solution, either too few or too many elements of i might have been assigned to some j. If there are too many, there is no way to map the exceeding ones. If there are too few, the missing ones correspond to exceeding values mapped to some other classes. In either case, we need to consider only the number of values that have been correctly mapped, which corresponds to the ones in the correct solution M′ in the first case, and to the ones in the current solution M in the second case. This gives us the formula

$$E(i, M[i,*]) = \frac{1}{|i|} \cdot \sum_{j \in Pot\_eq(i)} \frac{1}{Pot(i)} \cdot \frac{count(i)}{count(j)} \cdot \min\Big( \sum_k M'[i,j][k], \; \sum_k M[i,j][k] \Big).$$

As an example, consider the second matching illustrated above, where M(C.1, I_C.3) = 3, M(C.1, I_C.2) = 2, M(C.2, I_C.3) = 0, M(C.2, I_C.2) = 1. Two of the three plaintext values in class C.1 matched with the hash value in class I_C.3 cannot be assigned to the correct value, and the third plaintext value can be identified with probability 1/3. Moreover, the attacker has probability 1/2 of correctly identifying each plaintext value in C.1 assigned to a hash value in I_C.2. In this case, the exposure coefficient for values in equivalence class C.1 is

$$E(C.1, M_2[C.1,*]) = \frac{1}{5} \cdot \Big( 1 \cdot \frac{1}{3} \cdot \frac{1}{3} + 2 \cdot \frac{1}{3} \cdot \frac{1}{2} \Big).$$

Finally, we observe that the exposure coefficient E(i) of values in equivalence class i can be obtained by averaging all the E(i, M[i, ∗]) computed for distinct values of M[i, ∗]. Formally,

$$E(i) = \frac{1}{|V|} \cdot \sum_{M[i,*] \in V} E(i, M[i,*]),$$

where V = {M1[i, ∗], . . . , Mn[i, ∗]} is the set (i.e., with elimination of duplicates) of all possible solutions. For instance, the exposure coefficient for class C.1 is

$$E(C.1) = \frac{1}{2} \Big( E(C.1, M_1[C.1,*]) + E(C.1, M_2[C.1,*]) \Big) = \frac{1}{2} \cdot \frac{1}{5} \cdot \Big( \frac{7}{9} + \frac{4}{9} \Big).$$

7. DB+DBk WITH DIRECT ENCRYPTION

We now consider the situation where the attacker knows both the encrypted and the plaintext database. A typical scenario for this attack occurs when the content owner switches from no encryption to the use of encryption with indexing on the outsourced database. A malicious user with access to the database server may then be interested in reconstructing the correspondence between the plaintext and index values, in order to monitor the evolution of the database and
Fig. 8. Relation ENC_ACCOUNTS1 (a) and the corresponding RCV-graph (b).
keep access to most of its content, independently of the strength of the encryption function adopted. In this scenario, the attacker knows precisely the values' distribution and the relationships among them. Our model of the attack is based on the definition of RCV-graphs, which we introduce in the following.

7.1 The RCV-Graph

Given a table T with attributes A1, A2, . . . , An and tuples t1, t2, . . . , tm, we build a three-colored undirected graph G = (V, E), called the RCV-graph (i.e., the row–column–value graph), as follows. The set of vertices V contains one vertex for every attribute (all of color "column"), one vertex for every tuple (all of color "row"), and one vertex for every distinct value in each of the attributes (all of color "value"); if the same value appears in different attributes, a distinct vertex is introduced for every attribute in which the value appears. The set of edges E is built as follows. First, we add edges connecting each vertex representing a value with the vertex representing the column in which the value appears. Second, we add edges connecting each vertex representing a value with the vertices representing the tuples in which the value appears. To illustrate, consider table ENC_ACCOUNTS1 in Figure 2, which is repeated in Figure 8(a) for convenience, restricted to attributes Customer and Balance. There are two vertices labeled I_C and I_B for the attributes, seven vertices labeled t1 . . . t7 for the tuples, and ten vertices labeled α . . . θ for the distinct values appearing in the attributes. The addition of all the edges produces the RCV-graph depicted in Figure 8(b). An important property is that the RCV-graph built starting from the plaintext database is identical to the RCV-graph built starting from the encrypted database, since the cryptographic function only realizes a one-to-one mapping between plaintext and index values (in the relational model, the order of tuples and the order of attributes within a relation are irrelevant). The identification of the correspondence between plaintext and index values then requires establishing a correspondence between the vertex labels and the plaintext values, as discussed in the following section.
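The construction is straightforward to express with a graph library; the sketch below is an assumed re-creation using networkx (not the authors' tool), with the table given as a list of dictionaries.

```python
import networkx as nx

def rcv_graph(table, attrs):
    """Build the RCV-graph: 'column' vertices for attributes, 'row' vertices
    for tuples, 'value' vertices for distinct values of each attribute."""
    G = nx.Graph()
    for a in attrs:
        G.add_node(("col", a), color="column")
    for i, row in enumerate(table):
        G.add_node(("row", i), color="row")
        for a in attrs:
            v = ("val", a, row[a])  # same value in two columns -> two vertices
            if v not in G:
                G.add_node(v, color="value")
                G.add_edge(v, ("col", a))   # value-to-column edge
            G.add_edge(("row", i), v)       # row-to-value edge
    return G

enc_accounts1 = [{"I_C": "alpha", "I_B": "mu"},
                 {"I_C": "alpha", "I_B": "kappa"},
                 {"I_C": "beta",  "I_B": "eta"}]
G_I = rcv_graph(enc_accounts1, ["I_C", "I_B"])
```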
7.2 RCV-Graph Automorphism

The identification of the correspondence between the labels on the graph G = (V, E) and the plaintext values, when the plaintext database is known, can exploit information on the topological structure of the data, which permits a more precise reconstruction than the one possible when the only information available is the distribution of values in each attribute (as in Section 5). In the example, it is possible to correctly identify the correspondence between label I_C and attribute Customer, and between label I_B and attribute Balance, and it is also possible to correctly identify the correspondence among all values but β and φ (t3, t7) and γ and ε (t4, t6). For the remaining vertices, it is only possible to obtain a probabilistic estimate of the correspondence. While in the above example we found the correspondence simply by inspecting the graph, in the general case the complexity of this search is related to the number of automorphisms of the RCV-graph. An automorphism of a graph is an isomorphism of the graph with itself. Formally, an automorphism of a graph is a permutation Γ of the graph labels such that G(V, E) = G(V^Γ, E) (i.e., for every edge e(v_i, v_j) ∈ E, e(v_i^Γ, v_j^Γ) ∈ E). If the graph is colored (as in our case), nodes with different colors cannot be exchanged by the permutation. The identical permutation trivially satisfies the relationship; hence, at least one automorphism exists for any graph. When the RCV-graph presents only the trivial automorphism, the correspondence between the vertex labels and the plaintext values can be fully determined, and the knowledge of the plaintext database permits a full reconstruction of the correspondence between plaintext and index values. When there are several automorphisms of the RCV-graph, the identification of a vertex can be uncertain, as there are many alternative ways to reconstruct the correspondence between the vertices. The RCV-graph shown in Figure 8(b) presents four automorphisms, which we represent here by the permutations of labels that characterize them. Each permutation is represented by a different order of the symbols in the following sequences.

A1. {I_C, I_B, t1, t2, t3, t4, t5, t6, t7, α, β, γ, δ, ε, φ, µ, κ, η, θ}
A2. {I_C, I_B, t1, t2, t3, t6, t5, t4, t7, α, β, ε, δ, γ, φ, µ, κ, η, θ}
A3. {I_C, I_B, t1, t2, t7, t4, t5, t6, t3, α, φ, γ, δ, ε, β, µ, κ, η, θ}
A4. {I_C, I_B, t1, t2, t7, t6, t5, t4, t3, α, φ, ε, δ, γ, β, µ, κ, η, θ}

The four automorphisms derive from the choice in the order of the two vertex sets (t4, γ, κ)–(t6, ε, κ) and (t3, β, η)–(t7, φ, η). The number of automorphisms is not a good measure of the protection against inference attacks, since situations with evidently different protection may be characterized by the same number of automorphisms. For instance, starting from 1000 distinct values, we can consider two distinct situations: (1) 900 values are fixed and 100 can interchange (100! ≈ 9.3 × 10^157 automorphisms); (2) there are 500 pairs of values that can interchange ((2!)^500 ≈ 3.3 × 10^150 automorphisms). Case (1) has a greater number of automorphisms, but a higher exposure index (0.9 for case (1), 0.5 for case (2)). Also, the number of automorphisms increases exponentially with the size of the graph and may reach considerable (and uninformative) values even for
graphs of limited size. A more precise measure of protection, which considers the number of alternatives that are offered for the value of each label, is the following. For each value in a tuple in the database, there is a given probability of guessing it based on the knowledge of the plaintext database: if no RCV-graph automorphism permutes the corresponding vertex, there is a probability p = 1 of identifying its correct value. In general, if there are K automorphisms of the RCV-graph and in k of them the label assigned to vertex v_i is correct, there is a probability p_i = k/K of correctly identifying the vertex (row and column vertices are ignored). Since the identification of the correspondence is of interest only for the vertices representing attribute values, the computation of the exposure coefficient is limited to these nodes. The probability of guessing a generic value right can be estimated by computing the average over all the vertices of the probability p_i of guessing each of the m vertices v_i correctly, thus obtaining the attribute exposure coefficient $E = \sum_{i=1}^{m} p_i / m$, where m is the number of vertices. The automorphism problem has been extensively studied in the context of graph theory, and many results can be directly applied to our context. First, the set of automorphisms of a graph constitutes a group (called the automorphism group of the graph), which, for undirected graphs like ours, can be described by the coarsest equitable partition [McKay 1981] of the vertices, where each element of the partition (each subset appearing in the partition) contains vertices that can be substituted one for the other in an automorphism. The Nauty algorithm, which identifies the automorphism group of the graph [McKay 1981], starts from a partition of the vertices that can be immediately derived by grouping all the vertices with the same color and connected by the same number of edges. This partition is then iteratively refined, and a concise representation of all the automorphisms is produced. From the structure of the partition, it follows that all the vertices appearing in a generic partition element C_j are equivalently substitutable in all the automorphisms; from this observation, it follows that the probability p_i of a correct identification of a vertex v_i ∈ C_j is equal to the inverse of the cardinality of C_j, 1/|C_j|. Then, for the computation of E, it is sufficient to identify the number of elements in the equitable partition and the total number of attribute vertices (i.e., it is not necessary to keep track of the number of vertices in each partition element). In fact, with |C_j| vertices in the partition element C_j, n elements in the equitable partition, and a total number m of vertices, the exposure coefficient E(T) of the table is
$$E(T) = \sum_{i=1}^{m} \frac{p_i}{m} = \sum_{j=1}^{n} \sum_{v_i \in C_j} \frac{p_i}{m} = \sum_{j=1}^{n} \sum_{v_i \in C_j} \frac{1}{|C_j|\, m} = \sum_{j=1}^{n} \frac{1}{m} = \frac{n}{m}. \qquad (2)$$
In the example, the equitable partition for attribute vertices is {(α)(β, φ)(γ, ε)(δ)(µ)(η)(κ)(θ)}, which gives E = 8/10 = 4/5. As a check, the reader can verify on the RCV-graph that the vertices appearing in singleton elements are associated with p_i = 1 and those in the remaining elements are associated with p_i = 1/2. The average of p_i over all the vertices is therefore 4/5.
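Once the coarsest equitable partition is available (e.g., from a Nauty run), computing the exposure coefficient of Eq. (2) is immediate; a minimal sketch, assuming the partition of the value vertices is given as a list of cells:

```python
def exposure_from_partition(partition):
    """Eq. (2): E = n/m, with n cells in the equitable partition of the
    attribute-value vertices and m such vertices in total. Equivalently,
    the average of p_i = 1/|C_j| over all value vertices."""
    m = sum(len(cell) for cell in partition)
    return len(partition) / m

partition = [["alpha"], ["beta", "phi"], ["gamma", "epsilon"],
             ["delta"], ["mu"], ["eta"], ["kappa"], ["theta"]]
print(exposure_from_partition(partition))   # 8/10 = 0.8
```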
Fig. 9. An RCV representation of the plaintext table (a) and the corresponding encrypted table (b) illustrated in Figures 2(a) and 2(c), respectively.
When the structure of the database is completely obscured, as occurs when all the attribute values appear once in the database, E is minimal at 1/m. The contribution of the knowledge of the plaintext database increases when the structure of the RCV-graph derived from it imposes restrictions that limit the number of options for a vertex, increasing the exposure coefficient.

8. DB+DBk WITH HASHING

In this scenario, the attacker knows both the encrypted and the plaintext database. Our abstract models for computing the exposure coefficient extend the RCV-graph described in Section 7.

8.1 RCV Graphs and RCV Line Graphs

Given a table T with attributes A1, A2, . . . , An and tuples t1, t2, . . . , tm, we build a three-colored undirected graph, named the RCV-graph, as described in Section 7. As before, identifying the correct correspondence between plaintext and hash values requires finding a matching between each vertex of the plaintext RCV-graph (G_A) and a vertex of the corresponding encrypted RCV-graph (G_I). When collisions occur, the two graphs are not identical, as different vertices of G_A may collapse onto the same G_I vertex. For instance, Figure 9 illustrates G_A and G_I for the tables in Figures 2(a) and 2(c), respectively. We can observe that the number of edges connecting row vertices to value vertices in G_A and G_I is the same. Therefore, the problem can be viewed as finding a correct matching between the edges of G_A and the edges of G_I. Following this observation, we substitute both G_A and G_I, with the exclusion of the column vertices, with their line graphs. The line graph L(G) of a graph G is obtained by associating a vertex with each edge of the graph and connecting two vertices with an edge if and only if the corresponding edges of G meet at one or both endpoints [Whitney 1932]. Figure 10 illustrates the line graphs corresponding to G_A and G_I in Figure 9.
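networkx ships a line-graph constructor, so the transformation can be sketched as follows; this is our illustration, and it assumes the RCV-graph carries a "color" attribute on each node, as in the earlier sketch.

```python
import networkx as nx

def rcv_line_graph(G):
    """Line graph of an RCV-graph with column vertices excluded: one vertex
    per remaining edge, adjacency iff the edges share an endpoint."""
    H = G.copy()
    H.remove_nodes_from([v for v, data in G.nodes(data=True)
                         if data.get("color") == "column"])
    return nx.line_graph(H)

# L_GA = rcv_line_graph(G_A); L_GI = rcv_line_graph(G_I)
```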
Fig. 10. L(G_A) (a) and L(G_I) (b) of the graphs illustrated in Figure 9.
The problem of finding a mapping between each vertex of L(G_A) and each vertex of L(G_I) is thus a graph–subgraph monomorphism problem, that is, the problem of deciding whether L(G_A) is a (partial, or non-induced) subgraph of L(G_I). It is easy to verify that each graph monomorphism between these line graphs corresponds to a feasible mapping between the vertices of the RCV-graphs. Each value vertex of an RCV-graph with degree k corresponds to a clique of cardinality k in the corresponding line graph. Then we can assign a label i to each vertex in a clique corresponding to the plaintext value i. Each clique in L(G_A) has to be mapped onto a clique in L(G_I) of greater than or equal cardinality (i.e., vertices in L(G_A) labeled with the same plaintext value have to be matched with vertices in L(G_I) labeled with the same hash value). This corresponds to mapping a set of plaintext value vertices v1, . . . , vk onto a hash value vertex j whose degree is the sum of the degrees of v1, . . . , vk. The number of L(G_A) and L(G_I) vertices is the same, so each plaintext value has to be matched with a hash value in a feasible graph monomorphism. Furthermore, the order of the vertices in the graph monomorphism describes a unique order between the edges of the RCV-graphs. Hence, each graph monomorphism corresponds to exactly one matching between G_A and G_I. The converse is also true: each feasible matching between G_A and G_I has a corresponding graph monomorphism between L(G_A) and L(G_I) in which the cliques representing each plaintext value are mapped onto the clique representing the corresponding hash value. The presence of apparently identical tuples in the encrypted table (tuples with the same value for every indexing attribute), such as ⟨β, κ⟩, represents a symmetry condition. Also in this case, the number of symmetric solutions grows very quickly, making it hard for a search algorithm to enumerate all the feasible matchings. This problem can be handled in our line graph representation of the tables as follows. First, we can establish an order between the tuples in the plaintext table, leaving the uncertainty about the order of the tuples to the corresponding encrypted table. This can be done by representing the plaintext table with a directed graph: each edge in L(G_A) is replaced by an arc, directed toward vertices belonging to previous tuples. Then, we can establish an order between the identical tuples in L(G_I) as before, by substituting
Fig. 11. Directed L(G_A) (a) and L(G_I) (b) corresponding to the line graphs in Figure 10.
each edge connecting the values of these tuples in the same column with an arc, directed toward the previous tuples. For instance, with respect to Figure 10(b), the edges connecting vertices s3–δ and s7–δ and vertices s3–θ and s7–θ are replaced with two arcs. The remaining edges are replaced by two arcs, connecting the same pair of nodes, with opposite directions, except for the edges incident to column vertices, which can be replaced by an arc directed toward the column vertex. Both line graphs may be changed according to the above criterion, and the resulting directed graphs are reported in Figure 11. Our enumeration technique (Section 8.3) allows us to efficiently evaluate the exposure corresponding to specific solutions. Namely, whenever a monomorphism between the line graphs is found, a matching between each value v_i in the plaintext table and a value v_j in the encrypted table can be identified just by looking at the labels of the vertices. As before, let count(v_i) and count(v_j) be the number of occurrences of each plaintext value v_i and each hash value v_j, respectively. If the matching is correct, an attacker has probability count(v_i)/count(v_j) of identifying the plaintext value v_i of a tuple assigned the hash value v_j. Let k_i be the number of different hash values to which plaintext value i can be assigned in a matching: the probability of guessing the right plaintext–hash value correspondence is 1/k_i. The exposure of each plaintext value v_i is

$$E(v_i) = \frac{1}{k_i} \cdot \frac{count(v_i)}{count(v_j)}.$$

Then, we define the attribute exposure coefficient as the average of these values:

$$E = \frac{\sum_{i \in I} E(v_i)}{|I|},$$

where I is the set of the plaintext values. When there are no collisions, count(v_i)/count(v_j) = 1, and therefore E reduces to the expression reported in Section 7. At the opposite extreme, if we have m columns and n plaintext values in each column, and all the plaintext values of each column collide on the same hash value, each E(v_i) value is count(v_i)/(n · m).
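Given the set of feasible matchings produced by the monomorphism search, these formulas can be evaluated directly; the sketch below is a hypothetical helper of our own, with matchings represented as dictionaries from plaintext values to hash values.

```python
def attribute_exposure(matchings, correct, plain_counts, hash_counts):
    """E(v_i) = (1/k_i) * count(v_i)/count(v_j), averaged over the
    plaintext values; 'correct' is the matching actually used."""
    exposures = []
    for v in plain_counts:
        k_i = len({m[v] for m in matchings})   # candidate hash values for v
        v_j = correct[v]                       # hash value in the correct matching
        exposures.append((1.0 / k_i) * plain_counts[v] / hash_counts[v_j])
    return sum(exposures) / len(exposures)
```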
8.2 RCV Line Graphs Monomorphism

There is a theoretical difference between graph isomorphism and graph–subgraph isomorphism: no polynomial-time algorithm is known for the graph isomorphism problem, nor is the problem known to be NP-complete. On the contrary, graph–subgraph isomorphism was proved to be NP-complete for general graphs [Garey and Johnson 1979]. The graph–subgraph isomorphism problem is polynomial-time solvable for certain classes of graphs (e.g., planar graphs, trees, or bounded-degree graphs), but our L(G_A) and L(G_I) do not have such a nice structure. When only one column is involved in the matching, the problem reduces to a multiple subset sum problem, which is known to be NP-hard (see Section 6). Several enumeration (exponential-time) algorithms for the graph–subgraph isomorphism problem have been proposed: Ullman [1976] devised a widely known enumeration method, which requires O(N! N^3) time and O(N^3) space. More recently, Cordella et al. [1999, 2001] presented an algorithm, called VF2, that reduced the space complexity to O(N) and the time complexity to O(N! N). Their method can also manage directed graphs and attribute-relational graphs (ARGs), that is, graphs with semantic attributes associated with each vertex and edge. Matchings between vertices and edges with incompatible attributes are not considered during the enumeration process. Also, the notion of incompatibility can be customized via user-defined comparison functions. Hence, each vertex of our line graphs corresponding to an edge connecting a row vertex with a value vertex in the corresponding RCV-graphs can be labeled with color V (value), each column vertex can be labeled with color C (column), each arc connecting vertices in the same column can be labeled with color V, each arc connecting vertices in different columns can be labeled with color T (tuple), and each arc connecting V vertices with C vertices can be labeled with color C.

8.3 Efficiently Pruning the Search Tree Through Feasibility Conditions

Although our line graphs do not have special properties, the corresponding RCV-graphs have a particular structure. They have (2n + 1) layers: there is a layer for the vertices of color T; n layers for the vertices of color V, corresponding to values of the same column; and n layers made of each vertex labeled with color C. Furthermore, each V layer is only connected to the T layer and the corresponding C vertex. By removing the T layer, all the topological information about the structure of the database is lost. Therefore, the problem disaggregates into n^2 independent multiple subset sum problems (all the pairs of plaintext and encrypted columns have to be checked), which can be solved with the algorithm described in Figure 7. The idea is to match vertices of G_A with vertices of G_I just by looking at their degrees: the sum of the degrees of a set of vertices i1, i2, . . . , ik in G_A associated with the same vertex j in G_I has to be equal to the degree of j. The solutions of these multiple subset sum problems can be computed before starting the enumeration process for identifying graph monomorphisms. Let S_j be the set of plaintext values v1, v2, . . . , vk assigned to the hash value v_j in some solution of the multiple subset sum problems. An additional semantic
Fig. 12. Abstract models supporting computation of exposure in the four attack scenarios.
attribute i is added to each vertex in L(G_A), where i is the vertex label defined in Section 8.1. Analogously, an attribute S_j is added to each vertex in L(G_I) marked with label j. A vertex labeled i of L(G_A) is compatible with a vertex labeled S_j of L(G_I) only if i ∈ S_j. This feasibility test greatly reduces the computation time required to enumerate graph monomorphisms, supporting experimental quantification of exposure for specific solutions.

9. EXPERIMENTAL RESULTS

The abstract models presented above, and summarized in Figure 12, can be used to obtain an indication of the exposure that characterizes generic databases, depending on the information available to the attacker and on the use of hashing. The results of three families of experiments are presented, corresponding to the scenarios considered, respectively, in Section 6, Section 7, and Section 8. The scenario "Freq+DBk with direct encryption" presented in Section 5 has not been the subject of experiments, since it is not interesting from an algorithmic point of view (the behavior of an arbitrary database in this scenario can be obtained precisely by an analysis of the cardinalities of the attribute values), and its behavior is dominated by the results, presented in Section 9.1, of the more complex scenario of Section 6.

9.1 Freq+DBk with Hashing

We used a C implementation of the algorithm for the subset sum problem [Pisinger 1995] presented in Section 6. The other routines needed for these experiments were implemented in ANSI C. For the experimental analysis, we generated a series of random database instances considering two features of each database: the number N of distinct plaintext values and the collision factor cf, defined as the ratio N/M between the number N of plaintext values and the number M of hash values. The collision factor shows how many values, on average, collide on the same hash value, which represents the expected increase in the size of the encrypted results. In a first set of experiments, databases were randomly generated in which the number of occurrences of each plaintext value followed a Zipf distribution. Databases with 15, 20, 25, and 40 plaintext values were considered. The graph in Figure 13 represents E for these databases as the collision factor increases. At the right-hand border of the graph, signs indicate the value 1/N, that is, the optimal exposure of each database, occurring, for example, when all plaintext values collide on the same hash value. E decreases from a value near 0.4 when no collision occurs to a value near 0.1 for a collision factor near 3.0. A value near 0.1 is certainly adequate in this context, since the optimum is not very far.
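The experimental setup can be approximated as follows; this is an assumed re-creation of the database generator (the authors' harness was written in ANSI C and is not reproduced in the paper), so the exponent and rounding choices are ours.

```python
def zipf_counts(N, total_tuples, s=1.0):
    """Occurrence counts for N distinct plaintext values following a Zipf
    distribution with exponent s, scaled to roughly total_tuples tuples."""
    weights = [1.0 / rank ** s for rank in range(1, N + 1)]
    scale = total_tuples / sum(weights)
    return [max(1, round(w * scale)) for w in weights]

def num_hash_values(N, cf):
    """Number of hash values M for a target collision factor cf = N/M."""
    return max(1, round(N / cf))

counts = zipf_counts(N=25, total_tuples=1200)
M = num_hash_values(N=25, cf=2.0)
```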
Fig. 13. Freq+DBk scenario: E for databases whose number of occurrences of each value follows a Zipf distribution.
As a worst-case analysis, a second set of experiments was run, randomly generating databases with about 1200 tuples, where the number of occurrences f_i of each plaintext value was drawn as a random integer from a uniform distribution over R = [1, ..., r]. Databases with a high N/r ratio show very low exposure values even for collision factor 1.0 (i.e., no collisions), due to the contribution of large equivalence classes of plaintext values. To evaluate the protection from exposure in the most difficult setting, the experiments considered instances with a ratio N/r less than or equal to 1.0; such instances have, in general, singleton equivalence classes of plaintext values. Figure 14 presents the exposure E for these databases for increasing values of the collision factor. As before, the lowest exposure value 1/N appears at the right-hand border of the graph. The exposure of the database decreases as the ratio N/r increases (in fact, high N/r values correspond to a high number of plaintext values in the same equivalence class). Collision factors of about 1.6 and 2.2 are sufficient to come close to the lowest exposure coefficient for databases with 50 and 35 plaintext values, respectively, while a higher collision factor is required for the databases with 20 values, whose numbers of occurrences span a wider range.

The results support the claim that a relatively modest increase in the cost of query execution, due to the use of a hash index with a low collision factor, produces a significant benefit in terms of protection from inference attacks. The exposure is relatively high in these experiments when hashing is not used (cf = 1) mostly because the experiments used databases characterized by a distribution of attribute values that is difficult to protect (an observation confirmed by the low exposure that characterizes, in Figure 15, the databases where only two indexes are used).
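The role played by equivalence classes can be made concrete with a small sketch. The aggregate form below is an assumption, not the exact coefficient of Section 6: it charges each value in an indistinguishability class of size k a probability 1/k of being correctly matched, which reproduces the two boundary cases above (a single class containing all N values yields the optimum 1/N, while all-singleton classes yield full exposure 1).

    def exposure_from_classes(classes):
        # classes: partition of the plaintext values into sets whose members
        # cannot be told apart by the attacker. Each class of size k
        # contributes k * (1/k) = 1 to the sum over values, so the average
        # over all N values is (number of classes) / N.
        n = sum(len(c) for c in classes)
        return len(classes) / n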
Fig. 14. Freq+DBk scenario: E for databases whose number of occurrences of each value follows a uniform distribution.

A possible strategy for the Freq+DBk scenario, then, is first to evaluate the exposure coefficient of the specific database using the immediate formula in Section 6; if the result is above a chosen threshold, the database owner may switch to a hash function with a limited collision factor, which is always able to effectively increase the protection from inference attacks when the attacker knows the distribution of attribute values.

9.2 DB+DBk with Direct Encryption

In this section we present the results of the experiments for the scenario where the attacker knows both the encrypted and the plaintext database; in Section 9.3 we consider the same scenario with hashing at a variable collision factor. Since direct encryption corresponds to the use of hashing with a collision factor equal to 1, the results of this section could be considered redundant. The reason for carrying out a separate experiment on direct encryption is twofold. First, the greater efficiency allows us to analyze the behavior of a relatively complex database, illustrating how the exposure in this scenario varies with the database size and with the number of indexed attributes. Second, it allows us to show the application of a tool specific to this scenario, which is much more efficient than the tool used for the analysis in Section 9.3. To keep the exploration of large database sizes manageable, the experiments considered a single database instance following the Zipf distribution.

The tool we implemented for the experiments of this section takes as input a relational database and builds the RCV-graph that models it, following the construction presented in Section 7; a sketch of this construction is given below.
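A minimal sketch of that construction, assuming tuples are represented as dictionaries and using illustrative vertex encodings, could be:

    def build_rcv_graph(table, columns):
        # One vertex per tuple ('T'), per column ('C'), and per distinct value
        # of a column ('V'); each value vertex is linked to every tuple
        # containing it and to its column vertex, so that, as noted in
        # Section 8.3, each V layer touches only the T layer and its C vertex.
        edges = []
        for c in columns:
            for r, row in enumerate(table):
                edges.append((('T', r), ('V', c, row[c])))   # row -- value
            for val in {row[c] for row in table}:
                edges.append((('V', c, val), ('C', c)))      # value -- column
        return edges

For instance, build_rcv_graph([{'A': 1, 'B': 'x'}, {'A': 1, 'B': 'y'}], ['A', 'B']) links the shared value 1 of column A to both tuple vertices, which is exactly the kind of topological information the attacker exploits.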
The tool then invokes the Nauty algorithm [McKay 1981] on the RCV-graph, which can compute the automorphism group efficiently (around 15 min on a 700-MHz Pentium III PC running Linux for the largest RCV-graph, derived from a 2000-tuple table with four attributes containing 2262 distinct values). The output of the program is then analyzed to reconstruct the equitable partition that determines the attribute exposure coefficient of the table; a sketch of this post-processing step appears after Figure 15.

In the experiments we used a table with four attributes A, B, C, and D, and we applied the tool to progressively larger numbers of tuples, up to 2000. We considered all the combinations of attributes containing attribute A; this choice was due to the fact that the analysis is meaningful only if at least two attributes are present in the table (otherwise, no correlation among attributes can be observed), and it was useful to keep a common attribute across all the experiments. The results appear in tabular form in Figure 15(a) and graphically in Figure 15(b).

Fig. 15. DB+DBk with direct encryption scenario: Tabular representation of the experimental results (a) and their graphical representation (b) (curve labels refer to the initials of the attributes).
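A sketch of the post-processing step referenced above: assuming the automorphism group is available as a list of generator permutations (as Nauty can provide), vertices lying in the same orbit are mutually indistinguishable, and the orbits can be gathered with a simple union-find.

    def orbits(n_vertices, generators):
        # generators: permutations given as lists, g[v] = image of vertex v.
        # Vertices connected through some generator lie in the same orbit.
        parent = list(range(n_vertices))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for g in generators:
            for v in range(n_vertices):
                rv, rw = find(v), find(g[v])
                if rv != rw:
                    parent[rv] = rw
        groups = {}
        for v in range(n_vertices):
            groups.setdefault(find(v), []).append(v)
        return list(groups.values())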
The main result of these experiments is that the number of attributes used for the index has a great impact on the attribute exposure. With only two attributes, exposure coefficients tend to be quite low; when all four attributes are used as indexes, the exposure is considerable. Another question answered by the experiments is how the exposure evolves as the database size increases. What we observe is that the exposure decreases with the size of the database. The explanation is that, as the number of tuples increases, some values acquire a distinct profile and become identifiable, but at the same time the new tuples introduce new values that are infrequent and indistinguishable, and this second component prevails over the first. Moreover, the increase in the size of the database generally makes the task of the attacker computationally more expensive.

9.3 DB+DBk with Hashing

For this experimentation we used the VFLib2.0 C++ library [Foggia 2001], which implements VF2 and other algorithms for graph morphisms. This library is available on the Web, and we used it to develop a program that computes the exposure coefficient for the DB+DBk scenario with hashing. The experiments in Section 9.2 show that the exposure coefficient decreases as the number of tuples in the database increases; the following analysis of small databases can therefore be considered a worst-case scenario.

Two sets of tests were run. The first analyzed the exposure E for databases where the number of occurrences of each plaintext value follows the Zipf distribution. Figure 16 reports the exposure E for a database with 12 plaintext values (about 30 tuples) as an increasing number of columns is considered. The lower bound on the exposure, 1/(NK), where N is the number of distinct plaintext values per column and K is the number of columns, is indicated at the right-hand border of the graph. In all cases E halves for a collision factor of about 3.0. As discussed before, the exposure grows as the number of considered columns increases, due to the larger amount of topological information available to the attacker. E remains far from the lowest exposure value, but for small databases the correct matching between plaintext and encrypted indexes can always be identified, unless all the plaintext values collide on the same hash value.

The second set of tests analyzed databases where the number of occurrences of each plaintext value follows a uniform distribution (Figure 17). Two kinds of databases were considered: an instance with 8 plaintext values and a range [1..5] for the number of occurrences, and an instance with 12 plaintext values and a range [1..4]. For both databases, the cases in which two or three columns are available to the attacker were considered. As before, these instances represent highly exposed databases. Although the exposure is still high when four columns are considered, E is near the lowest possible value when only two columns are available to the attacker.
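Inside such a program, the pruning of Sections 8.2 and 8.3 takes the form of a vertex-compatibility predicate supplied to the matcher. The sketch below uses assumed record fields; VFLib's actual C++ interface differs.

    def compatible(v_plain, v_enc):
        # Colors V, C, T from Section 8.2 must agree; a value vertex labeled i
        # in L(G_A) may be matched to a vertex of L(G_I) carrying candidate
        # set S_j only if i belongs to S_j (feasibility test of Section 8.3).
        if v_plain['color'] != v_enc['color']:
            return False
        if v_plain['color'] == 'V':
            return v_plain['label'] in v_enc['S']
        return True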
Fig. 16. DB+DBk with hashing scenario: E for databases whose number of occurrences of each value follows a Zipf distribution.
Fig. 17. DB+DBk with hashing scenario: E for databases whose number of occurrences of each value follows a uniform distribution.
The experiments show that the use of several index columns continues to have a significant impact on the exposure index. The use of hashing reduces the exposure considerably, but relatively high values persist, even with collision factors that, in the Freq+DBk scenario, are able to protect the database from inference.

We conclude with an observation on the impact of database updates. The experiments considered a static database, and the results of the analysis make it possible to evaluate the collision factor that is adequate for a specific database. As the database evolves, the results of the analysis may have to be reconsidered. In this situation, it would be appropriate to use an approach similar to the one typically adopted for the physical design of databases: a representative database situation is used as the basis for the identification of the physical design, and a monitoring activity identifies when the changes to the database go beyond a given threshold, requiring a reconsideration of the physical design choices.

10. CONCLUSIONS

In this paper, we proposed a solution to the problem of secure database outsourcing on remote servers by providing a hash-based method for database encryption suitable for selection queries. We also gave a quantitative model for evaluating our method's vulnerability in different scenarios, showing that even straightforward direct encryption can provide an adequate level of protection against inference attacks, as long as a limited number of index attributes is used. To achieve a higher degree of protection against inference, it is convenient to use a hash function to encode index values. Indeed, our experiments show that even a hash function with a low collision factor, that is, with a limited impact on the size of the result set, can substantially decrease exposure to inference attacks as measured by a suitable exposure index.

Our quantitative approach paves the way to the definition of a query cost model capable of estimating the performance impact of the choice of attributes to be indexed and of the collision factors of their associated hash functions. Such a cost model should take as input a representative description of the set of queries that the server will have to manage, returning as output a quantitative evaluation of the decrease in performance introduced by the protection measures for that specific query set. The database designer will thus be able to weigh this performance degradation against the degree of protection required for the database. When physically designing databases, designers usually apply a cost model of the estimated workload iteratively, recalibrating physical design parameters until performance reaches a satisfactory level. Much in the same way, we expect that the database designer will use a query cost model to interact with the system and iteratively identify the configuration that best balances the implicit protection requirements and database performance. Ideally, the database designer could be assisted by a more complex program that integrates the performance and exposure quantitative models and uses them to identify a solution optimizing overall system behavior.
While such a system looks feasible in principle, we believe that in many environments characterized by complex security and performance requirements an interactive solution keeping the human designer “in the loop” would be preferable.

Another line of research stemming from this paper is the definition of the set of queries that can be performed efficiently in the outsourced-database setting. A well-known problem of hash indexes is that they do not efficiently support range queries, that is, queries that select tuples based on an interval of attribute values. Alternative indexing structures that are able to support interval queries have been proposed in Hacigümüş et al. [2002a] and Damiani et al. [2003]. The support for range queries presented in Hacigümüş et al. [2002a] unfortunately has a significant impact on inference exposure: since the same index is associated with all the attribute values belonging to an interval, the number of possible assignments of attribute values to indexes is greatly reduced, permitting an easier reconstruction of the correspondence between values. It would certainly be interesting to assess the concrete impact of these techniques on the exposure index, extending the approach used in this paper. The problem with the B+-tree approach presented in Damiani et al. [2003] is the performance deterioration it can introduce, due to the need to execute a series of queries to navigate the B+-tree in order to identify the tuples belonging to the interval. An alternative client-based solution was presented in Damiani et al. [2004].

A final consideration concerns the defense against inference attacks where the attacker controls the encrypted database and knows plaintext updates that are applied to it. For instance, this may happen when the attacker is able to add a specific record to the encrypted database (e.g., an attacker working at the organization where an outsourced database of bank accounts is stored could open a bank account, thus triggering a known update to the outsourced database) or when the attacker knows that the data owner will execute a specific update at a given moment. While in this paper we took only static analysis into account, it must be underlined that dynamic attacks could rely on additional information that could facilitate the reconstruction of the correspondence between encrypted and cleartext values. Strategies that can mitigate such attacks include inserting random delays between queries, evenly distributing the database load, and even introducing a number of fake queries to hide the “real” activities of the database. This line of research is another interesting extension of our work.

REFERENCES

AGRAWAL, R., KIERNAN, J., SRIKANT, R., AND XU, Y. 2004. Order-preserving encryption for numeric data. In Proceedings of the ACM SIGMOD 2004 Conference, Paris, France. ACM Press, New York.
BOUGANIM, L. AND PUCHERAL, P. 2002. Chip-secured data access: Confidential data on untrusted servers. In Proceedings of the 28th International Conference on Very Large Data Bases, Hong Kong, China. Morgan Kaufmann, San Mateo, CA, 131–142.
BOYENS, C. AND GÜNTHER, O. 2003. Using online services in untrusted environments: A privacy-preserving architecture. In Proceedings of the 11th European Conference on Information Systems (ECIS '03), Naples, Italy.
CAPRARA, A., KELLERER, H., AND PFERSCHY, U. 2000. A PTAS for the multiple subset sum problem with different knapsack capacities. Inf. Process. Lett. 73, 111–118.
CAPRARA, A., KELLERER, H., AND PFERSCHY, U. 2003. A 3/4-approximation algorithm for multiple subset sum. J. Heuristics 9, 99–111.
CORDELLA, L., FOGGIA, P., SANSONE, C., AND VENTO, M. 1999. Performance evaluation of the VF graph matching algorithm. In Proceedings of the 10th International Conference on Image Analysis and Processing, Venice, Italy. IEEE Computer Society Press, Los Alamitos, CA.
CORDELLA, L., FOGGIA, P., SANSONE, C., AND VENTO, M. 2001. An improved algorithm for matching large graphs. In Proceedings of the 3rd IAPR TC-15 Workshop on Graph-Based Representations in Pattern Recognition, Ischia, Italy, J. Jolion, W. Kropatsch, and M. Vento, Eds. 149–159.
DAMIANI, E., DE CAPITANI DI VIMERCATI, S., JAJODIA, S., PARABOSCHI, S., AND SAMARATI, P. 2003. Balancing confidentiality and efficiency in untrusted relational DBMSs. In Proceedings of the 10th ACM Conference on Computer and Communications Security, Washington, DC. ACM Press, New York.
DAMIANI, E., DE CAPITANI DI VIMERCATI, S., PARABOSCHI, S., AND SAMARATI, P. 2004. Computing range queries on obfuscated data. In Proceedings of Information Processing and Management of Uncertainty in Knowledge-Based Systems, Perugia, Italy.
DAVIDA, G., WELLS, D., AND KAM, J. 1981. A database encryption system with subkeys. ACM Trans. Database Syst. 6, 2 (June), 312–328.
DENNING, D. 1982. Cryptography and Data Security. Addison-Wesley, Reading, MA.
DOMINGO-FERRER, J. 1996. A new privacy homomorphism and applications. Inf. Process. Lett. 60, 5 (Dec.), 277–282.
DOMINGO-FERRER, J. AND HERRERA-JOANCOMARTÍ, J. 1998. A privacy homomorphism allowing field operations on encrypted data. Jornades de Matemàtica Discreta i Algorísmica.
FOGGIA, P. 2001. The VFLib graph matching library, version 2.0. http://amalfi.dis.unina.it/graph/db/vflib-2.0/doc/vflib.html.
GAREY, M. AND JOHNSON, D. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York.
HACIGÜMÜŞ, H., IYER, B., LI, C., AND MEHROTRA, S. 2002a. Executing SQL over encrypted data in the database-service-provider model. In Proceedings of ACM SIGMOD 2002, Madison, WI. ACM Press, New York.
HACIGÜMÜŞ, H., IYER, B., AND MEHROTRA, S. 2002b. Providing database as a service. In Proceedings of the 18th International Conference on Data Engineering, San Jose, CA. IEEE Computer Society Press, Los Alamitos, CA.
HACIGÜMÜŞ, H., IYER, B., AND MEHROTRA, S. 2004. Efficient execution of aggregation queries over encrypted relational databases. In Proceedings of the 9th International Conference on Database Systems for Advanced Applications, Jeju Island, Korea. Springer.
HACIGÜMÜŞ, H. AND MEHROTRA, S. 2004. Performance-conscious key management in encrypted databases. In Proceedings of the 18th Annual IFIP WG 11.3 Working Conference on Data and Applications Security, Sitges, Catalonia, Spain. Kluwer Academic Publishers, Dordrecht.
JENSEN, C. 2000. CryptoCache: A secure sharable file cache for roaming users. In Proceedings of the 9th ACM SIGOPS European Workshop: Beyond the PC, New Challenges for the Operating System, Kolding, Denmark. 73–78.
KLEIN, S., BOOKSTEIN, A., AND DEERWESTER, S. 1989. Storing text retrieval systems on CD-ROM: Compression and encryption considerations. ACM Trans. Inf. Syst. 7, 3 (July), 230–245.
MCKAY, B. 1981. Practical graph isomorphism. Congressus Numerantium 30, 45–87.
PISINGER, D. 1995. Algorithms for knapsack problems. Ph.D. thesis, Report 95/1, DIKU, University of Copenhagen, Copenhagen, Denmark.
RIVEST, R., ADLEMAN, L., AND DERTOUZOS, M. 1978. On data banks and privacy homomorphisms. In Foundations of Secure Computation. Academic Press, Orlando, FL, 169–179.
SAMARATI, P. 2001. Protecting respondents' privacy in microdata release. IEEE Trans. Knowledge Data Eng. 13, 6 (Nov./Dec.), 1010–1017.
SONG, D., WAGNER, D., AND PERRIG, A. 2000. Practical techniques for searches on encrypted data. In Proceedings of the 2000 IEEE Symposium on Security and Privacy, Oakland, CA. IEEE Computer Society Press, Los Alamitos, CA, 44–55.
ULLMAN, J. 1976. An algorithm for subgraph isomorphism. J. ACM 23, 31–42.
WALTON, J. 2002. Developing an enterprise information security policy. In Proceedings of the 30th Annual ACM SIGUCCS Conference on User Services, Providence, RI. ACM Press, New York.
WARD, J., O'SULLIVAN, M., SHAHOUMIAN, T., AND WILKES, J. 2002. Appia: Automatic storage area network fabric design. In Proceedings of the Conference on File and Storage Technologies (FAST 2002), Monterey, CA. The USENIX Association.
WHITNEY, H. 1932. Congruent graphs and the connectivity of graphs. Am. J. Math. 54, 150–168.

Received May 2004; revised September 2004; accepted September 2004