Security in Communication Networks: Third International Conference, SCN 2002, Amalfi, Italy, September 11-13, 2002, Revised Papers (Lecture Notes in Computer Science, 2576) 3540004203, 9783540004202


English Pages 272 [374] Year 2003


Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen

2576


Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo

Stelvio Cimato Clemente Galdi Giuseppe Persiano (Eds.)

Security in Communication Networks Third International Conference, SCN 2002 Amalfi, Italy, September 11-13, 2002 Revised Papers


Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors
Stelvio Cimato, Giuseppe Persiano
Università di Salerno, Dipartimento di Informatica ed Applicazioni
Via S. Allende, 84081 Baronissi (SA), Italy
E-mail: {cimato/giuper}@dia.unisa.it

Clemente Galdi
Computer Technology Institute and University of Patras
Dept. of Computer Engineering and Informatics, 26500 Rio, Greece
E-mail: [email protected]

Cataloging-in-Publication Data applied for. A catalog record for this book is available from the Library of Congress. Bibliographic information published by Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet.

CR Subject Classification (1998): E.3, C.2, D.4.6, K.4.1, K.4.4, K.6.5, F.2
ISSN 0302-9743
ISBN 3-540-00420-3 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springer.de

© Springer-Verlag Berlin Heidelberg 2003
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Olgun Computergrafik
Printed on acid-free paper   SPIN: 10872336   06/3142   5 4 3 2 1 0

Preface

The Third International Conference on Security in Communication Networks 2002 (SCN 2002) was held in the Salone Morelli of the Civic Museum of Amalfi, Italy, September 11–13, 2002. The conference takes place every three years (the previous ones were held in 1996 and 1999, also in Amalfi) and aims to bring together researchers in the field of security in communication networks to foster cooperation and the exchange of ideas. The main topics included all technical aspects of data security, including: anonymity implementation, authentication, key distribution, block ciphers, operating systems security, complexity-based cryptography, privacy, cryptanalysis, protocols, digital signatures, public key encryption, electronic money, public key infrastructure, hash functions, secret sharing, identification, surveys, and the state of the art.

The program committee received 90 submissions in electronic format from 5 continents, of which 24 were selected for presentation in 8 sessions. We had two invited talks: one by Eyal Kushilevitz from the Technion, Israel, on "Some Applications of Polynomials for the Design of Cryptographic Protocols," and the other by Ueli Maurer from ETH Zurich on "Secure Multi-Party Computation Made Simple." Due to the high number of submissions, the reviewing phase was a very challenging process, and many good submissions had to be rejected. We are very grateful to all the program committee members, assisted by their colleagues, who devoted much effort and valuable time to read and select the papers.

We want to thank the Municipality of Amalfi, which agreed to host the conference in one of the most beautiful halls in Amalfi. Finally, we would like to thank all the authors who submitted their papers, the Program Committee members, and all the conference participants.

September 2002

S. Cimato C. Galdi G. Persiano

Organization

SCN 2002 was organized with the financial support of the Dipartimento di Informatica ed Applicazioni "R.M. Capocelli" and the Facoltà di Scienze Matematiche, Fisiche e Naturali of the Università di Salerno, under the auspices of the Amalfi Municipality.

Program Chair
Giuseppe Persiano (Università di Salerno, Italy)

General Chair
Carlo Blundo (Università di Salerno, Italy)

Program Committee
Giuseppe Ateniese (Johns Hopkins University, USA)
Carlo Blundo (Università di Salerno, Italy)
Christian Cachin (IBM Research, Switzerland)
Giovanni Di Crescenzo (Telcordia Technologies, USA)
Alfredo De Santis (Università di Salerno, Italy)
Rafail Ostrovsky (Telcordia Technologies, USA)
Giuseppe Persiano (Università di Salerno, Italy)
Jacques Stern (École Normale Supérieure, France)
Doug Stinson (University of Waterloo, Canada)
Gene Tsudik (University of California at Irvine, USA)
Moti Yung (Columbia University, USA)

Organizing Committee
Stelvio Cimato (Università di Salerno, Italy)
Paolo D'Arco (Università di Salerno, Italy)
Clemente Galdi (Università di Salerno, Italy)
Barbara Masucci (Università di Salerno, Italy)

Publicity Chairs
Vincenzo Auletta (Università di Salerno, Italy)
Domenico Parente (Università di Salerno, Italy)

Table of Contents

Invited Talks

Some Applications of Polynomials for the Design of Cryptographic Protocols . . . . 1
Eyal Kushilevitz (Technion)

Secure Multi-party Computation Made Simple . . . . 14
Ueli Maurer (ETH)

Forward Security

Forward Secrecy in Password-Only Key Exchange Protocols . . . . 29
Jonathan Katz (University of Maryland), Rafail Ostrovsky (Telcordia Technologies, Inc.), and Moti Yung (Columbia University)

Weak Forward Security in Mediated RSA . . . . 45
Gene Tsudik (University of California, Irvine)

Foundations of Cryptography

On the Power of Claw-Free Permutations . . . . 55
Yevgeniy Dodis (New York University) and Leonid Reyzin (Boston University)

Equivocable and Extractable Commitment Schemes . . . . 74
Giovanni Di Crescenzo (Telcordia Technologies)

An Improved Pseudorandom Generator Based on Hardness of Factoring . . . . 88
Nenad Dedić, Leonid Reyzin (Boston University), and Salil Vadhan (Harvard University)

Intrusion-Resilient Signatures: Generic Constructions, or Defeating Strong Adversary with Minimal Assumptions . . . . 102
Gene Itkis (Boston University)

Key Management

Efficient Re-keying Protocols for Multicast Encryption . . . . 119
Giovanni Di Crescenzo (Telcordia Technologies) and Olga Kornievskaia (University of Michigan)

On a Class of Key Agreement Protocols Which Cannot Be Unconditionally Secure . . . . 133
Frank Niedermeyer and Werner Schindler (BSI)


A Group Key Distribution Scheme with Decentralised User Join . . . . . . . . . 146 Hartono Kurnio, Rei Safavi-Naini (University of Wollongong), and Huaxiong Wang (Macquarie University)

Cryptanalysis

On a Resynchronization Weakness in a Class of Combiners with Memory . . . . 164
Yuri Borissov (Bulgarian Academy of Sciences), Svetla Nikova, Bart Preneel, and Joos Vandewalle (Katholieke Universiteit Leuven)

On Probability of Success in Linear and Differential Cryptanalysis . . . . 174
Ali Aydın Selçuk (Purdue University) and Ali Bıçak (University of Maryland Baltimore County)

Differential Cryptanalysis of a Reduced-Round SEED . . . . 186
Hitoshi Yanami and Takeshi Shimoyama (Fujitsu Laboratories Ltd.)

System Security

Medical Information Privacy Assurance: Cryptographic and System Aspects . . . . 199
Giuseppe Ateniese, Reza Curtmola, Breno de Medeiros, and Darren Davis (The Johns Hopkins University)

A Format-Independent Architecture for Run-Time Integrity Checking of Executable Code . . . . 219
Luigi Catuogno and Ivan Visconti (Università di Salerno)

Signature Schemes

How to Repair ESIGN . . . . 234
Louis Granboulan (École Normale Supérieure)

Forward-Secure Signatures with Fast Key Update . . . . 241
Anton Kozlov and Leonid Reyzin (Boston University)

Constructing Elliptic Curves with Prescribed Embedding Degrees . . . . 257
Paulo S.L.M. Barreto (Universidade de São Paulo), Ben Lynn (Stanford University), and Michael Scott (Dublin City University)

A Signature Scheme with Efficient Protocols . . . . 268
Jan Camenisch (IBM Research) and Anna Lysyanskaya (Brown University)


Zero Knowledge

Efficient Zero-Knowledge Proofs for Some Practical Graph Problems . . . . 290
Yvo Desmedt (Florida State University and University of London) and Yongge Wang (University of North Carolina at Charlotte)

Reduction Zero-Knowledge . . . . 303
Xiaotie Deng, C.H. Lee (City University of Hong Kong), Yunlei Zhao (City University of Hong Kong and Fudan University), and Hong Zhu (Fudan University)

A New Notion of Soundness in Bare Public-Key Model . . . . 318
Shirley H.C. Cheung, Xiaotie Deng, C.H. Lee (City University of Hong Kong), and Yunlei Zhao (City University of Hong Kong and Fudan University)

Information Theory and Secret Sharing

Robust Information-Theoretic Private Information Retrieval . . . . 326
Amos Beimel and Yoav Stahl (Ben-Gurion University)

Trading Players for Efficiency in Unconditional Multiparty Computation . . . . 342
B. Prabhu, K. Srinathan, and C. Pandu Rangan (Indian Institute of Technology)

Secret Sharing Schemes on Access Structures with Intersection Number Equal to One . . . . 354
Jaume Martí-Farré and Carles Padró (Universitat Politècnica de Catalunya)

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365

Some Applications of Polynomials for the Design of Cryptographic Protocols

Eyal Kushilevitz
Computer Science Department, Technion, Israel
[email protected]
http://www.cs.technion.ac.il/~eyalk

Abstract. This paper surveys some recent work on applications of polynomials (over finite fields) to the design of various cryptographic protocols. It is based on a talk given at the 3rd Conference on Security in Communication Networks, 2002.

1 Introduction

Polynomials (over finite fields) serve as an important component in the design of various cryptographic protocols. Their usage is twofold: they are used (in more than one way) as representations of functions and other types of data, and their nice algebraic properties are used for further manipulating these representations. Informally speaking, the main properties of polynomials used by the cryptographic applications include their error-correction properties (e.g., the fact that the values of a polynomial at any t + 1 points define a unique degree-t polynomial), their "secrecy" properties (e.g., the fact that the values of a degree-(t+1) polynomial p at t points x_1, ..., x_t give absolutely no information on its value at another point x_{t+1}), and various algebraic properties of the representations in use, such as being homomorphic (i.e., if data a is represented using a polynomial p_a and data b is represented using a polynomial p_b, then the polynomial p_a + p_b is a possible representation of a + b). When speaking about polynomials, we consider both univariate polynomials (e.g., x^4 + 3x^2 + 7) and multivariate polynomials (e.g., 3xyz + x^3 y + 2z + 1). In both cases the "complexity" of polynomials is measured in terms of their degree (which is 4 in both examples) and, in the case of multivariate polynomials, also in terms of the number of variables. As mentioned, there are numerous applications of polynomials, starting from the (non-cryptographic) application of error-correcting codes (see, e.g., [33]) and continuing with applications such as Shamir's secret-sharing scheme [40], secure multiparty computations (e.g., [7,10]), key-distribution schemes (e.g., [8]), Rabin's information dispersal [38], and others. Below, in Section 2, we briefly go over a few of the above-mentioned applications. Then, we discuss some more recent work.
Specifically, in Section 3, we discuss the application of polynomials to the design of private information retrieval (PIR) protocols; this application exhibits some new ways of manipulating polynomial representations (this section is based on [5]). Then, in Section 4, we describe a new representation of functions, which we call randomizing polynomials, and mention its applications to the design of round-efficient secure multiparty computations (this section is based on [24,25]).

Disclaimers: (1) This note is not a full survey of the many applications of polynomials in cryptography; it covers only a small fraction of them, reflecting the interests of the author. (2) The goal of this note is to present some ideas; for full and precise technical details, the reader is referred to the original papers.

2 Some Classic Applications

2.1 Shamir's Secret Sharing

It is hard to tell what is the first application of polynomials in cryptographic protocols. Among the earlier applications of polynomials in cryptography in general is the work on shift registers (e.g., [20]), and for cryptographic protocols, the work of [37]. Among the early works, and certainly among the most influential ones, is Shamir's secret-sharing scheme [40]. The setting considered is as follows. There are k players, at most t of which are "bad". We work over some field GF(q) of sufficiently large size (i.e., q > k), and we have some fixed k distinct non-zero points ω_1, ..., ω_k ∈ GF(q). Finally, there is a dealer who has some secret s ∈ GF(q). The first ingredient of the scheme is a representation of the secret s using a polynomial f_s. This is done by letting the dealer choose random a_1, ..., a_t ∈ GF(q) and setting

    f_s(Z) = s + Σ_{i=1}^{t} a_i Z^i.

In the sharing phase, the dealer provides each player i with the value f_s(ω_i). In the reconstruction phase, any t + 1 players can interpolate the polynomial f_s (as they know the value of f_s at t + 1 distinct points) and can therefore compute s = f_s(0). The secrecy of this scheme states that any t players learn nothing about s (indeed, every possible value for s = f_s(0), together with the other t values, determines a unique polynomial f_s).
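The sharing and reconstruction phases above can be sketched in a few lines of Python. The prime q and the choice ω_i = i are illustrative; the text only requires q > k and distinct non-zero points.

```python
# Minimal sketch of Shamir's scheme over GF(q); q and the points omega_i = i
# are illustrative choices, not mandated by the scheme.
import random

q = 2**31 - 1  # a prime, so Z_q is a field

def share(s, t, k):
    """Split secret s among k players, tolerating any t colluders."""
    coeffs = [s] + [random.randrange(q) for _ in range(t)]  # f_s(Z) = s + sum a_i Z^i
    f = lambda z: sum(a * pow(z, i, q) for i, a in enumerate(coeffs)) % q
    return [(w, f(w)) for w in range(1, k + 1)]  # omega_i = i (distinct, non-zero)

def reconstruct(points):
    """Lagrange-interpolate f_s at 0 from any t+1 shares (omega_j, f_s(omega_j))."""
    s = 0
    for xj, yj in points:
        num, den = 1, 1
        for xm, _ in points:
            if xm != xj:
                num = num * (-xm) % q           # numerator of L_j(0)
                den = den * (xj - xm) % q       # denominator of L_j(0)
        s = (s + yj * num * pow(den, q - 2, q)) % q  # inverse via Fermat's little theorem
    return s

shares = share(1234, t=2, k=5)
assert reconstruct(shares[:3]) == 1234   # any t+1 = 3 shares suffice
assert reconstruct(shares[2:]) == 1234
```

Any subset of t shares, by contrast, is consistent with every possible secret, which is exactly the secrecy argument in the text.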

2.2 Secure Multiparty Computation (MPC)

Secure multiparty computation is a more involved task that uses secret sharing as a building block and extends the use of the algebraic properties of polynomials. Again, we have k players, at most t of which are "bad"; there are inputs x_1, ..., x_n distributed among the players, and there is a function f(x_1, ..., x_n) that the k players wish to compute. The goal is to construct a protocol to evaluate f which is (1) correct; (2) t-secure (which means that no set of t "bad" players, behaving "badly" in the sense that the protocol is supposed to tolerate, can prevent the "good" players from computing the correct output, nor can they learn any information on the inputs of "good" players that they are not supposed to know); and (3) efficient in terms of various complexity measures (such as computation, communication, and rounds).


Protocols achieving such properties were studied in various settings and under various assumptions. The main settings/assumptions are: (1) the computational setting (in which intractability assumptions are made); and (2) the information-theoretic setting (in which one assumes that secure, untappable channels connect each pair of players). General-purpose solutions for constructing secure multiparty protocols in these settings and others were found in [41,21,7,10,39] and elsewhere. A typical structure of such protocols (e.g., [7,10]) is described below. The protocol uses an arithmetic circuit representation for the function f to be computed (over the field GF(q)). It starts by secret sharing all inputs x_1, ..., x_n (each x_i is shared by the player to which it is known). The main idea is, in a bottom-up manner, to maintain a secret sharing of all the values of the wires of this circuit using degree-t polynomials. For this, the gates of the circuit need to be "simulated". Specifically, addition gates are easy to simulate based on the good properties of polynomials; this is done using the fact that if {f_s(ω_j)}_{j=1}^{k} are shares of s and {f_{s'}(ω_j)}_{j=1}^{k} are shares of s', then {f_s(ω_j) + f_{s'}(ω_j)}_{j=1}^{k} are shares of s + s' (i.e., these are all points on the "random" degree-t polynomial f_{s+s'}(x) = f_s(x) + f_{s'}(x), which satisfies f_{s+s'}(0) = s + s'). If the gate is a multiplication gate, then things are not as easy. Indeed, the points {f_s(ω_j) · f_{s'}(ω_j)}_{j=1}^{k} are points on a not-so-random degree-2t polynomial f_{s·s'} for s · s'. Hence, the main difficult ingredient in MPC protocols is completing the simulation of these gates in a way that transforms f_{s·s'} into a new polynomial which is of degree t, is "random", and is a representation of s · s' (i.e., f_{s·s'}(0) = s · s'). This itself can be implemented in various ways, all of them based on applying linear transformations and using the nice algebraic properties of polynomials (for details the reader is referred to, e.g., [7,17]).

It follows from the above description that if f is represented by some arithmetic circuit, then the protocol obtained from this circuit has time complexity and communication complexity proportional to the size of the circuit, and round complexity proportional to the multiplicative depth of this circuit (i.e., the maximal number of multiplication gates along a path in the "best" arithmetic circuit for f). In particular, every function f that can be represented by a low-degree polynomial (e.g., 4x_1x_2x_3 + x_2x_5 + 2x_1x_7x_9) is "easy" as far as round complexity is concerned. Later, in Section 4, we present an improved representation that (among other things) gives round-efficient protocols for a larger class of functions.
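The additive homomorphism that makes addition gates "free" can be checked concretely: adding two players' shares pointwise yields valid shares of the sum of the secrets. The field and parameters below are illustrative choices.

```python
# Sketch: pointwise-added Shamir shares are shares of the sum of the secrets,
# illustrating why addition gates need no interaction. Parameters are toy choices.
import random

q = 2**31 - 1  # prime field

def shares_of(s, t, k):
    """Shares f_s(1), ..., f_s(k) of secret s under a random degree-t polynomial."""
    coeffs = [s] + [random.randrange(q) for _ in range(t)]
    return [sum(a * pow(w, i, q) for i, a in enumerate(coeffs)) % q
            for w in range(1, k + 1)]

def interp_at_zero(pts):
    """Lagrange interpolation at 0 from points (omega_j, value)."""
    acc = 0
    for xj, yj in pts:
        l = 1
        for xm, _ in pts:
            if xm != xj:
                l = l * (-xm) % q * pow(xj - xm, q - 2, q) % q
        acc = (acc + yj * l) % q
    return acc

t, k = 2, 5
a, b = shares_of(111, t, k), shares_of(222, t, k)
# Each player j locally adds its two shares; t+1 of these determine s + s'.
summed = [(j + 1, (a[j] + b[j]) % q) for j in range(t + 1)]
assert interp_at_zero(summed) == 333
```

A pointwise *product* of shares would instead lie on a degree-2t polynomial whose coefficients are not uniformly distributed, which is exactly why multiplication gates need the extra degree-reduction step described above.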

3 Applications of Polynomials for Constructing Private Information Retrieval (PIR) Protocols

(The contents of this section is based on [5].) We will start by describing the private information retrieval (PIR) problem and then discuss the relevance of polynomials to the construction of PIR protocols.


3.1 The Private Information Retrieval (PIR) Problem

In the PIR problem we have some data (sometimes referred to as the database) modeled by an n-bit string a. Typically, we think of n as being "very large". There is a user who wishes to retrieve a data item, modeled by a bit a_i, and at the same time wants to keep i private (in the information-theoretic sense) from those holding the data a. A few natural "solutions" immediately come to mind. The first is for the user to ask for a copy of a. While this indeed solves the problem, it has the severe drawback that its communication complexity is n bits. A second possibility is for the user to ask for some additional random indices (and randomly permute them together with i). While this has better communication complexity (especially if the number of additional indices is "small"), it is not really a solution, as it does not provide full privacy; indeed, the server, upon seeing the set of indices, knows for sure that i is one of them, and that if a certain index is not in this set then it is not the index the user is after.
Some known results, which are of relevance to the present note, are the following known upper-bounds: for 2 servers there is O(n1/3 )-bit PIR protocol [12]; and for k servers (for any 1 fixed k) there is an O(n 2k−1 )-bit PIR protocol [1,27,23,4] (where in this sequence of work the asymptotic complexity remains the same but the dependency on k is significantly improved). In terms of lower bounds, the only non-trivial lowerbound known without putting restrictions on the model is roughly 4 log n for 2-server PIR [32]. The new result, from [5], described below, is an upper bound that for k-server 2 log log k (for any fixed k) yields an O(n k·log k )-bit PIR protocol. Moreover, this protocol improves the state-of-the-art already for small values of k (which are the more interesting case); for example, we get 3-server PIR protocol with communication complexity of O(n0.19... ) (vs. O(n0.2 ), which is the previously-known best bound). These results (and the techniques used) are also used to construct improved locally decodable codes (a notion defined in [28]). Below, we outline the main ideas behind the new protocols, with emphasis on the relations to polynomials.

¹ Later, in [11,31] and subsequent work, a computational model for PIR was also studied.

3.2 PIR and Polynomials

Our first step is to describe a mapping between strings and polynomials. This mapping will be used by the servers in order to transform the n-bit data string a into a corresponding polynomial P_a. We work over the field GF(2) and fix parameters d (the degree) and m (the number of variables in our polynomials) such that m ≈ n^{1/d}. We also fix distinct non-zero points ω^1, ..., ω^n ∈ {0,1}^m. Then, we map the n-bit string a to the following polynomial P_a over GF(2):

    P_a(Z_1, ..., Z_m) = Σ_{i=1}^{n} a_i ∏_{ℓ : ω^i_ℓ = 1} Z_ℓ ,

where, as usual, ω^i_ℓ denotes the ℓ-th bit of ω^i. The main properties of this polynomial are as follows; assume that ω^1, ..., ω^n are of weight d (by the choice of the parameters m and d, there are enough such points). Then: (1) P_a is of degree d; and (2) for all i ∈ [n], we have P_a(ω^i) = a_i.

With the above representation at hand, we view PIR as a problem on polynomials by the following series of steps. First, the servers S_1, ..., S_k represent a as P_a. (PIR in a sense reduces to the problem of evaluating P_a(ω^i), where the servers know P_a and the user knows ω^i but wants to keep it secret from the servers; the output of the evaluation should be known to the user only.) Next, the user shares ω^i among S_1, ..., S_k by picking random y_1, ..., y_k ∈ {0,1}^m such that Σ_{j=1}^{k} y_j = ω^i. It gives y_j to all servers except S_j (this is a so-called replication-based secret sharing [26]). (Now, PIR reduces to evaluating P_a(Σ_{j=1}^{k} y_j) while keeping the y_j's secret; this time each server S_r knows P_a and all the y_j's except y_r; the user knows all the y_j's.) Finally, define a new polynomial

    Q_a({Y_{j,h}}_{j∈[k], h∈[m]}) := P_a(Σ_{j=1}^{k} Y_{j,1}, ..., Σ_{j=1}^{k} Y_{j,m}).

This is a degree-d polynomial in the mk variables {Y_{j,h}}_{j∈[k], h∈[m]}. Let y_{j,h} be the bits of y_j; then Q_a({y_{j,h}}_{j∈[k], h∈[m]}) = P_a(Σ_{j=1}^{k} y_j) = a_i. (Similarly, PIR now reduces to the problem of evaluating Q_a({y_{j,h}}_{j∈[k], h∈[m]}) while keeping Σ_{j=1}^{k} y_j secret, and the input is distributed among the participants in a similar manner.)
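The encoding and the XOR-sharing step can be checked on toy parameters (the values of n, d, k below are my illustrative choices, not from the paper):

```python
# Sketch of the string-to-polynomial encoding over GF(2). Parameters are toy choices.
import itertools, random

n, d = 10, 2
# smallest m with at least n weight-d points in {0,1}^m (C(5,2) = 10 here)
m = next(mm for mm in range(1, 50)
         if len(list(itertools.combinations(range(mm), d))) >= n)
points = [frozenset(c) for c in itertools.combinations(range(m), d)][:n]  # omega^i

a = [random.randrange(2) for _ in range(n)]

def P_a(z):
    # P_a(Z_1..Z_m) = sum_i a_i * prod_{l : omega^i_l = 1} Z_l   (over GF(2))
    return sum(a[i] * all(z[l] for l in points[i]) for i in range(n)) % 2

# Property (2): P_a(omega^i) = a_i for every i
for i in range(n):
    omega = [1 if l in points[i] else 0 for l in range(m)]
    assert P_a(omega) == a[i]

# XOR-share omega^i among k = 3 servers: y_1 + y_2 + y_3 = omega^i over GF(2)
k, i = 3, 4
omega = [1 if l in points[i] else 0 for l in range(m)]
ys = [[random.randrange(2) for _ in range(m)] for _ in range(k - 1)]
ys.append([(omega[l] - sum(y[l] for y in ys)) % 2 for l in range(m)])
recomb = [sum(y[l] for y in ys) % 2 for l in range(m)]
assert P_a(recomb) == a[i]   # Q_a({y_{j,h}}) = P_a(sum_j y_j) = a_i
```

Each y_j alone is a uniformly random vector, which is why no single server learns anything about ω^i.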

3.3 Some PIR Protocols

Below we describe some specific PIR protocols, obtained by choosing specific values for the parameter d and by observing some properties of Q_a corresponding to these choices.

Evaluating Q_a – the case d = k − 1: Assume that Q_a is of degree d = k − 1. This means that each monomial M of Q_a has at most k − 1 variables, which in turn implies that for each such M there exists some server S_j that knows all the variables in M. We "assign" each M to the first such server. A PIR protocol with complexity O(n^{1/(k-1)}) bits now works as follows:


1. Each S_j constructs the polynomial Q_a (from a). The user, on input i, shares ω^i among the servers.
2. Each S_j evaluates all monomials assigned to it, sums up the results, and sends the sum (a single bit) to the user.
3. The user sums up the answers from all k servers to get Q_a({y_{j,h}}_{j∈[k], h∈[m]}) = a_i.

The communication complexity of the above protocol (for fixed k) is O(m) bits (note that this is the size of each share y_j), which in turn is O(n^{1/d}) = O(n^{1/(k-1)}). The secrecy is based on the fact that i (or, more specifically, ω^i) is distributed using a secret-sharing scheme. Finally, note that choosing d = k − 1 yields the largest possible value of d for the above analysis.

Evaluating Q_a – the case d = 2k − 1: We now assume that Q_a is of degree d = 2k − 1. This implies that each monomial M has at most 2k − 1 variables. Since each variable y_{j,h} is known to all servers except S_j, it follows that for each M there exists some S_j that knows all but at most one variable. Assign each M to the first such server. Then, if we look at the sum of monomials assigned to S_j, it can be expressed as a polynomial P_j(Y_{j,1}, ..., Y_{j,m}) of degree 1. A PIR protocol with complexity O(n^{1/(2k-1)}) bits now works as follows:

1. Each S_j constructs the polynomial Q_a (from a). The user, on input i, shares ω^i among the servers.
2. Each S_j computes the coefficients of the polynomial P_j (by evaluating all the monomials assigned to it) and sends these m + 1 coefficients (m + 1 bits) to the user.
3. The user evaluates each P_j(y_{j,1}, ..., y_{j,m}) and sums up the results to get Q_a({y_{j,h}}_{j∈[k], h∈[m]}) = a_i.

The communication complexity of this protocol is again O(m), which by the current choice of parameters satisfies O(m) = O(n^{1/d}) = O(n^{1/(2k-1)}). Again, note that choosing d = 2k − 1 yields the largest possible value of d for which the above analysis holds.

How can we do better? The first observation is that trying a similar approach, only with a larger choice of d, is useless.
Although for each monomial M there is still a server S_j that knows all but at most d/k variables, if we assign each M to such a server, then the number of coefficients of each P_j becomes too large to gain any advantage.

The above two protocols can be thought of as presenting methods for decomposing the polynomial P_a (or Q_a). Specifically, in the case d = k − 1 we decompose the polynomial as P_a(ω^i) = Q_a({y_{j,h}}) = Σ_{j=1}^{k} P_j, where each of the polynomials P_j in this decomposition is of degree 0 (in other words, it is just a single value). In the case d = 2k − 1 we can write P_a(ω^i) = Q_a({y_{j,h}}) = Σ_{j=1}^{k} P_j({y_{j,h}}_{h∈[m]}), where this time each P_j is of degree 1. The main technical part in the new results is the following new decomposition lemma: for some "larger" d (appropriately chosen) and parameters ℓ, λ, one can write

    P_a(ω^i) = Σ_{j=1}^{k} P_j({y_{j,h}}_{h∈[m]}) + Σ_{V⊆[k], |V|≥ℓ} P_V(ω^i),

where each P_j is of degree 1, and each P_V is known to all the servers in V and is of degree λ|V| (i.e., the parameter ℓ is used as a lower bound on the size of the sets V considered, and the parameter λ is used to get an upper bound on the degree of the corresponding polynomials P_V). We do not describe the actual protocol in this note. However, based on the above decomposition lemma (and using a structure similar to the previously described protocols), the idea is that one can use PIR recursively to obtain the value of P_V(ω^i). The point is that, on the one hand, the degree of these polynomials is bounded from above and, at the same time, the amount of replication that we have for the recursive calls (i.e., |V|) is bounded from below. To exemplify this, consider the following.

Example: k = 3. We start by choosing d = 7 (compared to d = 5, which would have been chosen had we used d = 2k − 1). We can assume, without loss of generality, that P_a(Z_1, ..., Z_m) = Z_1 Z_2 ··· Z_7 (because if P_a consists of more monomials then we can deal with each of them separately). In this case we have Z_i = Y_{1,i} + Y_{2,i} + Y_{3,i} and

    Q_a({Y_{j,h}}_{j∈[3], h∈[7]}) = Σ_{j_1,...,j_7 ∈ [3]} Y_{j_1,1} Y_{j_2,2} ··· Y_{j_7,7}.

Now, when considering each monomial M of Q_a, one can easily verify that the following is true: either (1) there is a server that misses at most one variable; or (2) there are two servers that miss at most 2 variables each (i.e., at most 4 altogether). The monomials of the first type are taken care of in the way presented above for the case d = 2k − 1 (by letting each server S_j construct a degree-1 polynomial and send its coefficients). The monomials of the second type are taken care of by defining, for each V = {a, b} (a ≠ b) with c ∉ V, a polynomial

    P_V(Z_1, ..., Z_7) = Σ_{A⊆[7], |A|=4} ( ∏_{j∈A} Z_j ) ( ∏_{ℓ∉A} Y_{c,ℓ} ).

It can be verified that each monomial M which is not of the first type appears in exactly one such polynomial. This implies that Q_a − (P_{{1,2}} + P_{{1,3}} + P_{{2,3}}) contains only monomials of the first type, which are the easier case. We therefore use recursion to retrieve the needed values from the polynomials P_{{1,2}}, P_{{1,3}}, and P_{{2,3}}. Each of them is a degree-4 polynomial in m = O(n^{1/7}) variables (i.e., it can be represented with O(n^{4/7}) bits). Moreover, the coefficients of each of these polynomials are known to a pair of servers, and so the recursion can use the best known 2-server PIR; this protocol has complexity O(|a|^{1/3}), and so altogether the complexity of our protocol for k = 3 is O(n^{1/7} + (n^{4/7})^{1/3}) = O(n^{4/21}).
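To make the monomial-assignment idea of Section 3.3 concrete, here is a toy end-to-end simulation of the simplest (d = k − 1) protocol for k = 3, d = 2. All parameters are my illustrative choices; each server answers with a single bit, and the user XORs the answers.

```python
# Toy simulation of the d = k-1 PIR protocol for k = 3, d = 2 over GF(2).
# Parameters and data are illustrative choices.
import itertools, random

k, d, n, m = 3, 2, 10, 5                 # C(5,2) = 10 >= n weight-2 points
points = list(itertools.combinations(range(m), d))[:n]   # omega^i as index pairs
a = [random.randrange(2) for _ in range(n)]              # the database

i = 7                                                    # index the user wants
omega = [1 if l in points[i] else 0 for l in range(m)]

# User: XOR-share omega among the servers; S_j receives every y_{j'} with j' != j.
ys = [[random.randrange(2) for _ in range(m)] for _ in range(k - 1)]
ys.append([(omega[l] + sum(y[l] for y in ys)) % 2 for l in range(m)])

# Each monomial Y_{j1,l1} Y_{j2,l2} of Q_a touches at most d = k-1 server indices,
# so some server knows both shares; assign it to the first such server.
answers = [0] * k
for idx, (l1, l2) in enumerate(points):
    for j1, j2 in itertools.product(range(k), repeat=2):
        srv = min(s for s in range(k) if s not in (j1, j2))
        answers[srv] ^= a[idx] & ys[j1][l1] & ys[j2][l2]  # one-bit running sum

# User: XOR the k one-bit answers to recover a_i.
assert answers[0] ^ answers[1] ^ answers[2] == a[i]
```

The communication is k bits of answers plus the O(m) = O(n^{1/2}) bits of shares, matching the O(n^{1/(k-1)}) bound for k = 3, d = 2 in this toy setting.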

4 Randomizing Polynomials – A New Representation of Functions with Applications to Secure MPC

(The contents of this section are based on [24,25].) The results of the previous section are, in a sense, methods that use algebraic properties of a certain polynomial representation in order to manipulate it. The results presented in the current section are different in that they actually give a new representation (of functions) using polynomials.

Consider first the standard polynomial representation of functions. That is, fix a field F (e.g., consider the field GF(2) and functions f : {0,1}^n → {0,1}). A polynomial p ∈ F[x_1, ..., x_n] represents the function f if for all x ∈ {0,1}^n we have p(x) = f(x). As mentioned in Section 2.2, if a function f can be represented by a low-degree polynomial, then this implies a round-efficient secure multiparty protocol for this function. Unfortunately, when trying to represent functions using this standard representation, it is easy to see that degree n is necessary for most functions f. Moreover, even simple functions (e.g., AND, OR) require degree-n polynomials under the standard representation. We therefore suggest a new representation of functions by what we call randomizing polynomials. Before discussing its definition, let us exemplify its usefulness by showing how it allows a low-degree representation for the function OR.

Example (The OR Function): The inputs to this function are x_1, ..., x_n; in addition, we consider random inputs r_1, ..., r_n. We then look at the polynomial

    p(x, r) = Σ_{j=1}^{n} x_j r_j.

(In other words, p is the inner product of x and r.) This polynomial has the following properties: (1) p is of degree 2. (2) If the input x is such that OR(x) = 0 then p(x, r) ≡ 0. (3) If the input x is such that OR(x) = 1 then p(x, r) ≡ random bit. The last two properties give some intuitive "privacy" and "correctness" properties: on one hand, the distribution induced by the polynomial p determines the value of OR(x) and, on the other hand, it does not give any other information about x. The only difficulty is that when seeing that p(x, r) = 0 we cannot tell for sure what OR(x) is, and if we choose, for example, to conclude that OR(x) = 0 then we will be wrong with probability up to 0.5 (if the field is GF(2)). This, however, can be solved naturally using the following amplification: we use s · n random inputs {r_{ij}}, i ∈ [s], j ∈ [n], and consider a vector of s degree-2 polynomials

p(x, r) = ( Σ_{j=1}^{n} x_j r_{1j}, ..., Σ_{j=1}^{n} x_j r_{sj} ).

This time, OR(x) = 0 implies that p(x, r) ≡ 0^s, while if OR(x) = 1 then p(x, r) ≡ random s-bit string (over the corresponding field F). Thus, we get "privacy" and "amplified correctness" (since if we decide that OR(x) = 0 whenever we see the output 0^s, we will be wrong with probability at most 1/|F|^s).
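To make the amplified construction concrete, here is a small simulation over GF(2) (the helper name is illustrative, not from the paper; over GF(2), the product x_j · r_j is just the logical AND):

```python
import random

def or_randomizing_poly(x, s):
    """Evaluate the s amplified degree-2 polynomials for OR over GF(2):
    the i-th output is sum_j x_j * r_ij (mod 2) for fresh random bits r_ij."""
    return tuple(
        sum(xj * random.randrange(2) for xj in x) % 2
        for _ in range(s)
    )

# OR(x) = 0: the output is always the all-zero string.
assert or_randomizing_poly([0, 0, 0, 0], 8) == (0,) * 8

# OR(x) = 1: each coordinate is a uniform random bit, so the all-zero
# string appears only with probability 2^(-s); over 100 trials with
# s = 8 a nonzero output is (overwhelmingly) certain to occur.
outputs = [or_randomizing_poly([1, 0, 1, 0], 8) for _ in range(100)]
assert any(any(bits) for bits in outputs)
```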

Some Applications of Polynomials for the Design of Cryptographic Protocols


The extension of the above example into a general definition is now quite natural. We fix some field F. We use a vector r of m random field elements and look at p(x, r), which is a vector of s degree-d polynomials. The vector p(x, r) can be viewed as a mapping of x to a probability distribution P(x) over F^s. We say that p(x, r) represents a boolean function f(x) if there exist probability distributions D_0, D_1 such that the following hold: (1) ε-correctness: D_0, D_1 are "distinguishable" with advantage ≥ 0.5 − ε. (2) privacy: for all x, if f(x) = 0 then P(x) ≡ D_0 and if f(x) = 1 then P(x) ≡ D_1. We say that p(x, r) is a perfect randomizing polynomial representation for f if it satisfies perfect correctness (i.e., ε = 0) and privacy. Note that the standard representation is a special case obtained by choosing s = 1 and m = 0 (in particular, since there is no randomness, clearly ε = 0). Also note that, as in the OR example, the correctness can be easily amplified. We also remark that while the randomizing polynomial representation resulting from the OR example above is not perfect, one can still obtain a perfect (degree-3) randomizing polynomial representation for the function OR via a more complicated construction (see [25]).

Next, we describe how to use randomizing polynomials for secure MPC. Suppose that f(x) is represented by some (low-degree) randomizing polynomial p(x, r). Intuitively, the idea is to securely sample from the probability distribution P(x) and decide, by the output, on the value f(x). The reason that this is not totally trivial has to do with the issue of which of the k players will choose the random input r; note that if a player knows r then the privacy may be lost. To overcome this, define p′(x, r_1, ..., r_k) = p(x, r_1 + ... + r_k) and note that deg(p′) = deg(p) (since the transformation that we use is linear) and that P′(x) ≡ P(x). A protocol for computing f(x) works as follows:

1. Each player j chooses the corresponding r_j.
2. The players use any secure protocol A for p′.
3. Each player computes f(x) from the output p′(x, r_1, ..., r_k).

We get ε-correctness for the above protocol by the distinguishability of D_0, D_1. Moreover, the number of rounds (by choosing an appropriate A and by the discussion in Section 2.2) can be made proportional to log(deg(p′)) = log(deg(p)). Finally, it is important to note that this is actually a reduction, in the sense that all we need to do now is to be able to transform functions f into randomizing polynomials p(x, r). Then, we can plug in any off-the-shelf MPC protocol A with the desired properties and get results as guaranteed by this A.

In light of the above, the main task now becomes constructing randomizing polynomials for a wide class of functions. In [24,25] two such constructions are presented:

1. A construction based on the garbled formula technique [41].² This construction transforms any size-S formula computing a function f into a degree-3

² This construction mimics that of [41], which is used to build secure protocols from circuits. Our construction is used to construct randomizing polynomials and not protocols; it applies, however, only to formulae (and not to general circuits).


randomizing polynomial representation of size O(S²) for the same function f. For certain f's, such as the OR function, the construction can be further optimized to give polynomials of size S · 2^{O(√(log S))}.

2. A construction based on mod-q counting branching programs (CBPs; see below). This construction transforms any CBP of size a computing a function f into a degree-3 randomizing polynomial representation of size O(a²) for the same function f.

In any case where the above constructions yield an efficient randomizing polynomial (i.e., whenever the function f has a "small" formula or a "small" CBP), we can get perfect t-secure protocols of each of the following types:

– a protocol immune to a passive adversary whenever t < k/3, using 2 rounds;
– a protocol immune to a passive adversary whenever t < k/2, using 3 rounds;
– a protocol immune to an active adversary whenever t ≤ Θ(k), using 3 rounds.

Note that the contribution here is in three directions: (a) improving the number of rounds to the optimal constant; (b) getting rid of the error, which is part of all known constant-round protocols and may have seemed inherent prior to our work; and (c) obtaining efficiency improvements (some of which are due to the above-mentioned optimization). (For previous work on constant-round secure MPC in various models see, e.g., [2,3,14,22,16].)

We end this section by giving a general idea of how the second construction works (as usual, for full details the reader is referred to the original papers [24,25]). We start by defining mod-q counting branching programs. These are labeled directed graphs BP = (G, s, t, edge-labeling), where each label is either some variable x_i, a negated variable x̄_i, or the constant 1. For any x, we look at the induced subgraph G_x of all edges whose label has the value 1 under the assignment x. The value f(x) computed by BP is defined as the number of s-t paths in G_x (mod q).
The size of BP is the number of vertices in G. (A special case of this definition is that of deterministic branching programs.) A main tool used in the construction is the following lemma.

Lemma: If f has a mod-q CBP of size a then there exists a linear L : F^n → F^{a×a} such that f(x) = det(L(x)) (where "linear" means that each entry of the matrix L(x) is a degree-1 polynomial in x_1, ..., x_n).

Given this lemma, the construction of the randomizing polynomial is of the form p(x, R_1, R_2) = R_1 L(x) R_2. Clearly the complexity of this polynomial is O(a²) and its degree is 3. Finally, we note that choosing random matrices R_1, R_2 (or, alternatively, choosing random non-singular R_1, R_2) is inherently non-perfect. However, the special form of L guaranteed by the above lemma allows us to choose non-singular R_1, R_2 perfectly from a certain subgroup.

5 Concluding Remarks

We argue that polynomials are an excellent tool both for representing data and for manipulating it (secretly). There are many applications of polynomials for constructing cryptographic protocols and this note only covers a small number of them. Finally, it is interesting to note that recently an application of polynomials of a very different nature was suggested in [29,30]. Specifically, they suggest intractability assumptions which are based on the hardness of reconstructing polynomials from highly-noisy data.

Acknowledgments. I wish to thank Amos Beimel and Moti Yung for providing me with some references, and Yuval Ishai for his comments on this paper. I also thank the organizers of SCN 2002 for inviting me to give a talk and to write this short note.

References

1. A. Ambainis. Upper bound on the communication complexity of private information retrieval. In 24th ICALP, LNCS 1256, pp. 401–407, 1997.
2. J. Bar-Ilan and D. Beaver. Non-cryptographic fault-tolerant computing in a constant number of rounds. In Proc. 8th ACM PODC, pp. 201–209, 1989.
3. D. Beaver, S. Micali, and P. Rogaway. The round complexity of secure protocols (extended abstract). In Proc. 22nd STOC, pp. 503–513, 1990.
4. A. Beimel and Y. Ishai. Information-theoretic private information retrieval: A unified construction. In 28th ICALP, LNCS 2076, pp. 912–926, 2001.
5. A. Beimel, Y. Ishai, E. Kushilevitz, and J. F. Raymond. Breaking the O(n^{1/(2k−1)}) barrier for information-theoretic private information retrieval. In Proc. FOCS, 2002.
6. A. Beimel, Y. Ishai, and T. Malkin. Reducing the servers' computation in private information retrieval: PIR with preprocessing. In CRYPTO 2000, LNCS 1880, pp. 56–74, 2000.
7. M. Ben-Or, S. Goldwasser, and A. Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computations. In Proc. 20th STOC, pp. 1–10, 1988.
8. C. Blundo, A. De Santis, A. Herzberg, S. Kutten, U. Vaccaro, and M. Yung. Perfectly-secure key distribution for dynamic conferences. In Proc. CRYPTO '92, pp. 471–486.
9. R. Canetti, Y. Ishai, R. Kumar, M. K. Reiter, R. Rubinfeld, and R. N. Wright. Selective private function evaluation with applications to private statistics. In 20th PODC, pp. 293–304, 2001.
10. D. Chaum, C. Crépeau, and I. Damgård. Multiparty unconditionally secure protocols. In Proc. 20th STOC, pp. 11–19, 1988.
11. B. Chor and N. Gilboa. Computationally private information retrieval. In 29th STOC, pp. 304–313, 1997.
12. B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan. Private information retrieval. J. of the ACM, 45:965–981, 1998.


13. G. Di-Crescenzo, Y. Ishai, and R. Ostrovsky. Universal service-providers for private information retrieval. J. of Cryptology, 14(1):37–74, 2001.
14. U. Feige, J. Kilian, and M. Naor. A minimal model for secure computation (extended abstract). In Proc. 26th STOC, pp. 554–563, 1994.
15. J. Feigenbaum, Y. Ishai, T. Malkin, K. Nissim, M. J. Strauss, and R. N. Wright. Secure multiparty computation of approximations. In 28th ICALP, LNCS 2076, pp. 927–938, 2001.
16. R. Gennaro, Y. Ishai, E. Kushilevitz, and T. Rabin. On 2-round secure multiparty computation. In Proc. CRYPTO, 2002.
17. R. Gennaro, M. O. Rabin, and T. Rabin. Fast-track multiparty computations with applications to threshold cryptography. In Proc. 17th PODC, pp. 101–111, 1998.
18. Y. Gertner, S. Goldwasser, and T. Malkin. A random server model for private information retrieval. In RANDOM '98, LNCS 1518, pp. 200–217, 1998.
19. Y. Gertner, Y. Ishai, E. Kushilevitz, and T. Malkin. Protecting data privacy in private information retrieval schemes. JCSS, 60(3):592–629, 2000.
20. S. W. Golomb. Shift Register Sequences. 1967.
21. O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proc. 19th STOC, pp. 218–229, 1987.
22. Y. Ishai and E. Kushilevitz. Private simultaneous messages protocols with applications. In ISTCS '97, pp. 174–184, 1997.
23. Y. Ishai and E. Kushilevitz. Improved upper bounds on information-theoretic private information retrieval. In 31st STOC, pp. 79–88, 1999.
24. Y. Ishai and E. Kushilevitz. Randomizing polynomials: A new representation with applications to round-efficient secure computation. In Proc. 41st FOCS, 2000.
25. Y. Ishai and E. Kushilevitz. Perfect constant-round secure computation via perfect randomizing polynomials. In Proc. ICALP '02, pp. 244–256.
26. M. Ito, A. Saito, and T. Nishizeki. Secret sharing schemes realizing general access structures. In Proc. IEEE Global Telecommunication Conf. (Globecom '87), pp. 99–102, 1987.
27. T. Itoh. Efficient private information retrieval. IEICE Trans. Fund. of Electronics, Commun. and Comp. Sci., E82-A(1):11–20, 1999.
28. J. Katz and L. Trevisan. On the efficiency of local decoding procedures for error-correcting codes. In 32nd STOC, pp. 80–86, 2000.
29. A. Kiayias and M. Yung. Secure games with polynomial expressions. In 28th ICALP, LNCS 2076, pp. 939–950, 2001.
30. A. Kiayias and M. Yung. Cryptographic hardness based on the decoding of Reed-Solomon codes. In 29th ICALP, pp. 232–243, 2002.
31. E. Kushilevitz and R. Ostrovsky. Replication is not needed: Single database, computationally-private information retrieval. In 38th FOCS, pp. 364–373, 1997.
32. E. Mann. Private access to distributed information. Master's thesis, Technion, Haifa, 1998.
33. F. J. MacWilliams and N. J. A. Sloane. The Theory of Error Correcting Codes. 1977.
34. M. Naor and K. Nissim. Communication preserving protocols for secure function evaluation. In 33rd STOC, 2001.
35. M. Naor and B. Pinkas. Oblivious transfer and polynomial evaluation. In 31st STOC, pp. 245–254, 1999.
36. R. Ostrovsky and V. Shoup. Private information storage. In 29th STOC, pp. 294–303, 1997.
37. G. B. Purdy. A high security log-in procedure. CACM, 17(8):442–445, 1974.


38. M. O. Rabin. Efficient dispersal of information for security, load balancing, and fault tolerance. J. ACM, 38:335–348, 1989.
39. T. Rabin and M. Ben-Or. Verifiable secret sharing and multiparty protocols with honest majority. In Proc. 21st STOC, pp. 73–85, 1989.
40. A. Shamir. How to share a secret. Commun. ACM, 22(6):612–613, June 1979.
41. A. C.-C. Yao. How to generate and exchange secrets. In Proc. 27th FOCS, pp. 162–167, 1986.

Secure Multi-party Computation Made Simple

Ueli Maurer

Department of Computer Science, ETH Zurich, CH-8092 Zurich, Switzerland
[email protected]

Abstract. A simple approach to secure multi-party computation is presented. Unlike previous approaches, it is based on essentially no mathematical structure (like bivariate polynomials) or sophisticated subprotocols (like zero-knowledge proofs). It naturally yields protocols secure for mixed (active and passive) corruption and general (as opposed to threshold) adversary structures, confirming the previous tight bounds in a simpler formulation and with simpler proofs. Due to their simplicity, the described protocols are well-suited for didactic purposes, which is a main goal of this paper.

Keywords: Secure multi-party computation, secret-sharing, verifiable secret-sharing, adversary structures.

1 Introduction

We propose a new, very simple approach to multi-party computation (MPC) secure against active cheating and, more generally, mixed corruption scenarios. This work is motivated by a protocol of Beaver and Wool [BW98] which achieves security only for a passive adversary setting, without the possibility to enhance it to active adversary settings. In this section we review the definition of secure MPC, discuss various models for specifying the adversary's capabilities, and review different types of security and communication models. A reader familiar with these topics can skip Section 1 and much of Section 4, where previous results are reviewed. After discussing some preliminaries in Section 2, our model and results are stated in Section 3. The main parts of the paper are Section 5, where the passively secure protocol and the underlying secret-sharing scheme are presented, and Section 6, which presents the protocol secure in the general corruption model.

1.1 Secure Multi-party Computation

Secure function evaluation, as introduced by Yao [Yao82], allows a set P = {p_1, ..., p_n} of n players to compute an arbitrary agreed function of their private inputs, even if an adversary may corrupt and control some of the players in

Supported in part by the Swiss National Science Foundation, grant no. 20-42105.94.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 14–28, 2003. © Springer-Verlag Berlin Heidelberg 2003


various ways, to be discussed below. More generally, secure MPC allows the players to perform an arbitrary on-going computation during which new inputs can be provided. This corresponds to the simulation of a trusted party [GMW87]. Security in MPC means that the players' inputs remain secret, except for what is revealed by the intended results of the computation, and that the results of the computation are guaranteed to be correct. More precisely, security is defined relative to an ideal-world specification involving a trusted party: anything the adversary can achieve in the real world, where the protocol is executed, he can also achieve in the ideal world [MR98,Can00,PSW00]. Many distributed cryptographic protocols can be seen as special cases of a secure MPC. For specific tasks like collective contract signing, on-line auctions, or voting, there exist very efficient protocols. Throughout this paper we consider general secure MPC protocols, where general means that any given specification involving a trusted party can be computed securely without the trusted party. General MPC protocols tend to be less efficient than special-purpose protocols, for two reasons: First, the function must be modeled by a generally quite large circuit over some finite field, and second, the multiplication sub-protocol is rather inefficient (but see [HM01] for an efficient general MPC protocol).

1.2 Specifying the Adversary's Capabilities

The potential misbehavior of some of the players is usually modeled by considering a central adversary with an overall cheating strategy who can corrupt some of the players. Two different notions of corruption, passive and active corruption, are usually considered. Passive corruption means that the adversary learns the entire internal information of the corrupted player, but the player continues to perform the protocol correctly. Such players are sometimes also called semi-honest. Active corruption means that the adversary can take full control of the corrupted player and can make him deviate arbitrarily from the protocol. If no active corruptions are considered, then the only security issue is the secrecy of the players' inputs. A non-adaptive or static adversary must decide before the execution of the protocol which players he corrupts, while an adaptive adversary can corrupt new players during the protocol, as long as the total set of corrupted players is still admissible. A mobile adversary can release some of the corrupted players, thereby regaining corruption power. We consider adaptive, but not mobile, adversaries.

In many papers, the adversary's corruption capability is specified by a threshold t, i.e., the adversary is assumed to be able to corrupt up to t but not more players. More generally, the adversary's corruption capability can be specified by a so-called adversary structure, i.e., a set of potentially corruptible subsets of players. Even more generally, the corruption capability can be specified by a set of corruption scenarios, one of which the adversary can choose (secretly). For instance, each scenario can specify a set of players that can be passively corrupted and a subset of them that can even be actively corrupted. In Section 4 we describe these models and the results known for them.


1.3 Types of Security and Communication Models

One distinguishes between two types of security. Information-theoretic security means that even an adversary with unrestricted computing power cannot cheat or violate secrecy, while cryptographic security relies on an assumed restriction on the adversary's computing power and on certain unproven assumptions about the hardness of some computational problem, like factoring large integers. The terms "perfect" and "unconditional" security are often used for information-theoretic security where zero and negligible error probability, respectively, is tolerated. In this paper we consider perfect information-theoretic security.

Several communication models are considered in the literature. In the standard synchronous model for information-theoretic security, any pair of players can communicate over a bilateral secure channel. Some papers [RB89,Bea91,CDD+99] assume the availability of a broadcast channel, which guarantees the consistency of the received values if a sender sends a value to several players, but in practice a broadcast channel must be simulated by a quite inefficient protocol among the players (e.g. [LSP82,BGP89,FM97]). In a threshold setting (information-theoretic), such a simulation is possible if and only if t < n/3, and for a general adversary structure the necessary and sufficient condition is that no three potentially actively corrupted sets cover the full player set [HM97,FM98]. In asynchronous communication models, no guarantees about the arrival times of sent messages are assumed. Here we do not consider asynchronous communication models, although our techniques may also be applied in that context.

2 Preliminaries

2.1 Structures

Definition 1. Consider a finite set P. We call a subset Σ of the power set 2^P of P a monotone structure for P if Σ is closed under taking subsets, i.e., if S ∈ Σ and S′ ⊆ S implies S′ ∈ Σ.¹ We define a commutative and associative operation on structures, denoted ⊔: Σ1 ⊔ Σ2 is the structure consisting of all unions of one element of Σ1 and one element of Σ2, i.e.,

Σ1 ⊔ Σ2 := {S1 ∪ S2 : S1 ∈ Σ1, S2 ∈ Σ2}.

Structures will be described by listing only the maximal sets, their subsets being understood as belonging to the structure. The size |Σ| of a structure Σ is the number of maximal elements.

Example 1. The most common example of a structure is the threshold structure Σ = {S : S ⊆ P, |S| ≤ t} for some t. Note that the description that lists the maximal sets has size n-choose-t, which is exponential in n if t is a fraction of n.

¹ Similarly, we call Γ a monotone anti-structure if it is closed under taking supersets, i.e., if the complement Γ^c := {S ∈ 2^P : S ∉ Γ} is a structure.
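The definitions above translate directly into a few lines of code. The sketch below (function names are mine; structures are represented explicitly as sets of frozensets) implements the subset-closure test and the ⊔ operation:

```python
from itertools import combinations

def is_structure(sigma):
    """A monotone structure is closed under taking subsets:
    every subset of every member must itself be a member."""
    return all(frozenset(t) in sigma
               for S in sigma
               for r in range(len(S) + 1)
               for t in combinations(S, r))

def join(sigma1, sigma2):
    """The operation Sigma1 ⊔ Sigma2: all unions of one element of each."""
    return {a | b for a in sigma1 for b in sigma2}

# Threshold structure for t = 1 over P = {1, 2, 3}.
P = frozenset({1, 2, 3})
sigma = {frozenset(), frozenset({1}), frozenset({2}), frozenset({3})}
assert is_structure(sigma)
assert P not in join(sigma, sigma)           # no two sets cover P
assert P in join(join(sigma, sigma), sigma)  # but three sets can
```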


2.2 Secret Sharing and Secrecy Structures

A secret-sharing scheme allows a dealer to share a secret among a set P = {p_1, ..., p_n} of players such that only certain qualified subsets of players can reconstruct the secret, i.e., are qualified, while certain other subsets of players obtain no information about the secret, i.e., are ignorant (a term not used in the previous literature). Natural secret-sharing schemes, and also those of this paper, have the property that every subset of P is either qualified or ignorant, and this is why ignorant sets are usually called non-qualified. However, for reasons explained below, we choose the new term. A secret-sharing scheme is usually specified by the so-called access structure² Γ, the collection of qualified player subsets. In our context, it is more natural to characterize a secret-sharing scheme by the secrecy structure Σ consisting of the collection of ignorant player subsets. As mentioned above, the secrecy structure is typically the complement of the access structure, i.e., Σ = 2^P \ Γ.

Why is it more natural to consider the secrecy structure instead of the access structure, and why is the term ignorant more natural than non-qualified? When the potential misbehavior of players is considered, one leaves the realm of classical secret-sharing. For instance, players could misbehave by not sending their share when supposed to, or by even sending a false share. In such a case, a qualified set can generally not reconstruct the secret, i.e., the notion of being qualified loses its normal meaning. In contrast, the notion of secrecy is not changed by misbehaving players. If a secret is shared according to a certain scheme, then the secrecy structure remains unchanged even if players misbehave (except, of course, for restricting the secrecy structure to sets containing the corrupted players).

2.3 Adversary Structures

As mentioned earlier, as a generalization of specifying the adversary's capabilities by a corruption type and a threshold t, one can describe it by a corruption type and an adversary structure Δ, meaning that the adversary can choose one of the sets in Δ and corrupt these players [HM97]. For passive corruption we can also call this structure the secrecy structure Σ rather than the adversary structure.

3 Model and Results of This Paper

We present a very simple approach to secure multi-party computation. Unlike previous approaches, it is based on essentially no mathematical structure (like bivariate polynomials) or sophisticated sub-protocols (like zero-knowledge proofs), and it naturally yields protocols secure against general mixed adversary structures.

The main focus of the paper is on the simplicity of the protocols, which makes them suitable for didactic purposes. However, it is quite possible that the protocol ideas have applications in other contexts and that for certain applications,

² But note that, according to our terminology, it is actually an anti-structure.


especially when involving only a small number of players, the protocols are the most efficient known.

The adversary is specified by a secrecy structure Σ and an adversary structure Δ, with the following meaning.

Definition 2. Consider a player set P and two structures Σ, Δ ⊆ 2^P. A (Σ,Δ)-adversary is an adversary who can adaptively corrupt some players passively and some players actively, as long as the set A of actively corrupted players and the set B of passively corrupted players satisfy both

A ∈ Δ  and  A ∪ B ∈ Σ.

In other words, a cheating player set A cannot violate the correctness, and all corrupted players together (the set A ∪ B) obtain no information not specified by the protocol. This model is the same as that of [FM02], where only verifiable secret-sharing is considered. The following theorems give increasingly strong conditions for broadcast, for verifiable secret-sharing, and for secure MPC to be possible. The efficiency of the protocols is polynomial in n, |Σ|, and |Δ|, but this fact is not stated explicitly.
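Since structures are described by their maximal sets, membership and the admissibility test of Definition 2 reduce to subset checks. A hedged sketch (names and the example thresholds are mine):

```python
from itertools import combinations

def in_structure(S, maximal_sets):
    """S belongs to a monotone structure iff it is contained
    in one of the structure's maximal sets."""
    return any(S <= M for M in maximal_sets)

def admissible(A, B, delta_max, sigma_max):
    """A (Sigma, Delta)-adversary may corrupt A actively and, additionally,
    B passively iff A is in Delta and A ∪ B is in Sigma (Definition 2)."""
    return in_structure(A, delta_max) and in_structure(A | B, sigma_max)

# Example: 4 players, Delta = sets of size <= 1, Sigma = sets of size <= 2.
players = range(1, 5)
delta_max = [frozenset({i}) for i in players]
sigma_max = [frozenset(c) for c in combinations(players, 2)]

assert admissible(frozenset({1}), frozenset({2}), delta_max, sigma_max)
assert not admissible(frozenset({1, 2}), frozenset(), delta_max, sigma_max)
assert not admissible(frozenset({1}), frozenset({2, 3}), delta_max, sigma_max)
```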

Theorem 1. The simulation of a broadcast channel secure against a (Σ,Δ)-adversary is possible if and only if P ∉ Δ ⊔ Δ ⊔ Δ.

Theorem 2. Perfect verifiable secret-sharing secure against a (Σ,Δ)-adversary is possible if and only if P ∉ Σ ⊔ Δ ⊔ Δ.

Theorem 3. General perfect information-theoretically secure MPC secure against a (Σ,Δ)-adversary is possible if and only if P ∉ Σ ⊔ Σ ⊔ Δ.

Theorem 1 follows from a more general result in [HM97] and the efficient broadcast protocol given in [FM98]. This theorem is used, but not considered further, in this paper. Theorem 3 is equivalent to Theorem 1 of [FHM99], as will be explained in Section 4.4.

4 Review of Results on General Secure Multi-party Computation

In this section we review the previous results on necessary and sufficient conditions for general secure MPC to be possible, for various models and degrees of generality.

4.1 Classical Threshold Results

In the original papers solving the general secure MPC problem, the adversary is specified by a single corruption type (active or passive) and a threshold t on the tolerated number of corrupted players. Goldreich, Micali, and Wigderson

setting                  adversary type   condition   reference
cryptographic            passive          t < n       [GMW87]
cryptographic            active           t < n/2     [GMW87]
information-theoretic    passive          t < n/2     [BGW88,CCD88]
information-theoretic    active           t < n/3     [BGW88,CCD88]
i.t., with broadcast     active           t < n/2     [RB89,Bea91]

Table 1. Necessary and sufficient threshold conditions for general secure MPC to be possible.

[GMW87] proved that, based on cryptographic intractability assumptions, general secure MPC is possible if and only if t < n/2 players are actively corrupted. The threshold for passive corruption is t < n. In the information-theoretic model, where bilateral secure channels between every pair of players are assumed, Ben-Or, Goldwasser, and Wigderson [BGW88] proved that perfect security is possible if and only if t < n/3 for active corruption, and if and only if t < n/2 for passive corruption.³ In a model with a physical broadcast channel, which helps only in case of active corruption, unconditional security is achievable if and only if t < n/2 [RB89,Bea91,CDD+99]. These classical results are summarized in Table 1.

4.2 Mixed Adversary Models

The exact threshold conditions for mixed models under which secure MPC is possible were proved in [FHM98], including fail-corruption as a third corruption type. Here we state the results without considering fail-corruption. Let ta and tp be the number of players that can be actively and passively corrupted, respectively. Perfect security is achievable if and only if 3ta + 2tp < n, whether or not a broadcast channel is available. This is a special case of Theorem 3.⁴ This strictly improves on non-mixed threshold results: In addition to tolerating ta < n/3 actively corrupted players, secrecy can be guaranteed against every minority, thus tolerating additional tp < n/6 passively corrupted players.
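The threshold condition can be stated as a one-line predicate; the sketch below (the function name is mine) also checks that the two classical non-mixed thresholds fall out as special cases:

```python
def perfect_mpc_possible(n, ta, tp):
    """Mixed-threshold bound from [FHM98]: perfect MPC is achievable
    iff 3*ta + 2*tp < n (ta active, tp additional passive players)."""
    return 3 * ta + 2 * tp < n

# Special cases recover the classical thresholds of Table 1:
assert perfect_mpc_possible(10, 3, 0)        # active only: t < n/3
assert not perfect_mpc_possible(9, 3, 0)
assert perfect_mpc_possible(9, 0, 4)         # passive only: t < n/2
assert not perfect_mpc_possible(8, 0, 4)
```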

4.3 General Adversary Structures

The threshold adversary models were extended to a non-threshold setting in [HM97] (see also [HM00]), for either passive or active, but not for mixed corruption. The adversary's capability is characterized by a structure, called secrecy structure Σ for passive corruption and adversary structure Δ for active

³ The same result was obtained independently by Chaum, Crépeau, and Damgård [CCD88], but with an exponentially small error probability.
⁴ Exponentially small error probability with a broadcast channel is achievable if and only if 2ta + 2tp < n. Without broadcast, the additional condition 3ta < n is necessary and sufficient.

corruption. Again, generalizing the model leads to strictly stronger results compared to those achievable in the threshold model. For instance, in the case of 6 players and active corruption, with P = {A, B, C, D, E, F}, one can obtain a protocol secure against the adversary structure Δ = {{A}, {B,D}, {B,E,F}, {C,E}, {C,F}, {D,E,F}}, whereas in the threshold model one can tolerate only a single active cheater, i.e., the adversary structure Δ = {{A}, {B}, {C}, {D}, {E}, {F}}.

Let Q²(Σ) be the condition on a structure Σ that no two sets in Σ cover the full player set P, i.e., Q²(Σ) ⟺ P ∉ Σ ⊔ Σ. Similarly, let Q³(Σ) be the condition that no three sets in Σ cover the full player set P, i.e., Q³(Σ) ⟺ P ∉ Σ ⊔ Σ ⊔ Σ. The main results of [HM97] state that for passive corruption, Q²(Σ) is the necessary and sufficient condition for general secure MPC to be possible. For active corruption, the condition is Q³(Δ), and if a broadcast channel is available, then the condition is Q²(Δ). The first two results are again special cases of Theorem 3. These results were achieved by a recursive player substitution technique, yielding quite complex protocols. The protocols of this paper are much simpler, more intuitive, and considerably more efficient.
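The Q² and Q³ conditions are easy to check mechanically when a structure is given by its maximal sets. The sketch below (helper name is mine) verifies that the 6-player example structure above indeed satisfies Q³, while a threshold-2 structure over 6 players does not:

```python
from itertools import combinations, combinations_with_replacement

def no_cover(maximal_sets, m, P):
    """P ∉ Σ ⊔ ... ⊔ Σ (m copies): no union of m (not necessarily
    distinct) sets of the structure covers the full player set P."""
    return all(frozenset().union(*combo) != P
               for combo in combinations_with_replacement(maximal_sets, m))

P = frozenset("ABCDEF")
delta = [frozenset(s) for s in ("A", "BD", "BEF", "CE", "CF", "DEF")]
assert no_cover(delta, 3, P)          # Q3 holds for the example structure

# A threshold-2 structure over 6 players violates Q3
# (e.g. {A,B} ∪ {C,D} ∪ {E,F} = P):
pairs = [frozenset(c) for c in combinations("ABCDEF", 2)]
assert not no_cover(pairs, 3, P)
```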

4.4 Mixed General Adversary Structures

Finally, general mixed adversary specifications were considered in [FHM99] and the exact conditions for general secure MPC to be possible were given for a general mixed passive/active model. For each admissible choice, the adversary can actively corrupt a subset D ⊆ P of the players and, additionally, can passively corrupt another subset E ⊆ P of the players. The adversary specification Ω is hence a set of pairs (D, E), i.e., Ω = {(D1, E1), ..., (Dk, Ek)} for some k, and the adversary may select one arbitrary pair (Di, Ei) from Ω and corrupt the players in Di actively and, additionally, corrupt the players in Ei passively. The adversary's choice is not known before, and typically also not after, the execution of the protocol. It was proved in [FHM99] that, with or without broadcast channels, perfect general MPC is achievable if and only if the adversary specification Ω satisfies the following condition Q^(3,2)(Ω):

k

Secure Multi-party Computation Made Simple


Now we can define the closure of Z as Z̄ := {(D, E) : D ∈ 𝒜(Z) ∧ D ∪ E ∈ Σ(Z)}. It is not difficult to show that Q^(3,2)(Z) ⟺ Q^(3,2)(Z̄). To see this, take any (Dᵢ, Eᵢ) and (Dⱼ, Eⱼ), add the new pair (Dᵢ, (Dⱼ ∪ Eⱼ) \ Dᵢ) to Z, and check that the condition Q^(3,2)(Z) is still satisfied. Therefore secure MPC is possible for a given adversary specification Z if and only if it is possible for Z̄. In other words, one can enlarge any specification Z to its closure Z̄ for free.⁵ This justifies the consideration of (Σ, 𝒜)-adversaries as discussed above.
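Both the condition Q^(3,2) and the closure construction are directly computable. The sketch below (illustrative names; specifications are given as lists of (D, E) pairs of sets) checks the equivalence on a small hypothetical example:

```python
from itertools import product

def q32(spec, players):
    """Q^(3,2): no (D1,E1), (D2,E2), (D3,E3) in Z with D1∪E1∪D2∪E2∪D3 = P."""
    full = set(players)
    return all(d1 | e1 | d2 | e2 | d3 != full
               for (d1, e1), (d2, e2), (d3, _e3) in product(spec, repeat=3))

def closure(spec):
    """Maximal pairs of the closure {(D, E) : D in A(Z) and D ∪ E in Sigma(Z)}."""
    sigma = [d | e for d, e in spec]  # associated secrecy structure
    adv = [d for d, _e in spec]       # associated adversary structure
    pairs = {(frozenset(d), frozenset(s - d)) for d in adv for s in sigma if d <= s}
    return [(set(d), set(e)) for d, e in pairs]

Z = [({"A"}, {"B"}), ({"B"}, {"C"})]  # players P = {A, B, C, D}
print(q32(Z, "ABCD"), q32(closure(Z), "ABCD"))  # True True: enlarging Z is free
```

Here the closure adds the pair ({B}, {A}), which was not in Z, without affecting feasibility.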

5 Secure MPC: The Passive Case

5.1 The Format of the Protocol

General secure MPC works as follows: The function (more generally, the specification) to be computed is specified by a circuit over some finite field, consisting of addition and multiplication gates.⁶ Each input value and each intermediate result is shared among the players, according to the secrecy structure, using a linear secret-sharing scheme. Due to the linearity, secure addition (and more generally, computing any linear function) of shared values is trivial: every player locally computes the linear function of his shares and keeps the result as a share of the new value. Secrecy is trivially guaranteed because this step involves no communication. Correctness is also trivially guaranteed because, due to the lack of communication, there is no chance for a corrupted player to cheat. Hence we only need to consider the secure multiplication of shared values.

5.2 The Secret-Sharing Scheme

As a building block, we need a k-out-of-k secret-sharing scheme, i.e., one for k players such that only the complete set of players, but no proper subset, can reconstruct the secret. Such a scheme (actually a linear one) for any k and any domain D of the secret s is obtained by splitting s into a random sum:⁷

k-out-of-k secret-sharing: Select k − 1 shares s₁, …, s_{k−1} at random from D and let sₖ := s − Σ_{i=1}^{k−1} sᵢ. The i-th share is sᵢ.
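This splitting is one line of arithmetic. A minimal sketch, assuming for concreteness that the domain D is the integers modulo a (hypothetically chosen) prime:

```python
import random

P = 2**31 - 1  # any additive group over D works; a prime modulus is just convenient

def share_sum(secret, k, rng=random):
    """k-out-of-k additive sharing: k-1 random shares, the last fixes the sum."""
    shares = [rng.randrange(P) for _ in range(k - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct_sum(shares):
    return sum(shares) % P

shares = share_sum(42, 5)
print(reconstruct_sum(shares))  # 42; any proper subset of shares is uniform noise
```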

⁵ However, there may exist protocols secure for Z but not for Z̄. But in such a case there would exist a different protocol secure for Z̄, with possibly much higher complexity.
⁶ More generally, the circuit could contain any gates for linear functions, plus nonlinear multiplication gates.
⁷ It is trivial to impose an addition operation on D which makes it into an additive group, for instance the group isomorphic to the cyclic group Z_{|D|}.


Ueli Maurer

The most natural approach to designing a secret-sharing scheme for a given access structure Γ (or the secrecy structure Σ = 2^P \ Γ) is due to Ito et al. [ISN87], who introduced general access structures in secret-sharing. In this scheme, the secret is shared, independently, to each minimal qualified player set S ∈ Γ, with an |S|-out-of-|S| secret-sharing scheme. This trivially guarantees that any qualified set can reconstruct the secret and that no non-qualified set gets any information about the secret. In this paper we use a different, in a sense dual, approach. Let k be the number of maximal sets in Σ, i.e., Σ = {T₁, …, Tₖ},⁸ and let T̄ᵢ := P \ Tᵢ be the complement of the set Tᵢ.

Secret-sharing for secrecy structure Σ = {T₁, …, Tₖ}:

1. Split the secret using the k-out-of-k secret-sharing scheme, resulting in shares s₁, …, sₖ.
2. Send sᵢ secretly to each player in T̄ᵢ.

The share of player pₘ is hence the set {sᵢ : m ∈ T̄ᵢ}.

The scheme is trivially Σ-private because for any set T ∈ Σ, at least one share (namely that given to the complement of a maximal set of Σ containing T) is missing, and hence the set T has no information about the secret. Reconstruction by any qualified set in Γ = 2^P \ Σ is also trivial: any set S ∈ Γ contains, for every maximal set Tᵢ ∈ Σ, a player not in Tᵢ. This player knows sᵢ, and hence the players in S know all the shares and are thus qualified.
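A runnable sketch of this dual scheme (illustrative names; the domain is again integers modulo a prime): each piece of a k-wise additive split is sent to all players outside the corresponding maximal set.

```python
import random

P = 2**31 - 1

def share_for_structure(secret, players, max_sets, rng=random):
    """Split the secret k-ways and give piece i to every player outside T_i."""
    k = len(max_sets)
    pieces = [rng.randrange(P) for _ in range(k - 1)]
    pieces.append((secret - sum(pieces)) % P)
    return {pm: {i: pieces[i] for i, T in enumerate(max_sets) if pm not in T}
            for pm in players}

def reconstruct(shares_of_set, k):
    """A set is qualified iff its players jointly hold all k pieces."""
    pieces = {}
    for held in shares_of_set:
        pieces.update(held)
    if len(pieces) < k:
        raise ValueError("set is not qualified")
    return sum(pieces.values()) % P

sigma = [{"A", "B"}, {"C"}, {"D"}]  # maximal non-qualified sets over P = {A,B,C,D}
shares = share_for_structure(7, "ABCD", sigma)
print(reconstruct([shares[p] for p in "ACD"], len(sigma)))  # 7: {A,C,D} is qualified
```

The non-qualified set {A, B} jointly holds only pieces 1 and 2 and therefore learns nothing.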

5.3 The Multiplication Protocol

As mentioned earlier, the condition Q^(2)(Σ), which is equivalent to

P ∉ Σ ⊔ Σ,    (1)

is necessary and sufficient for information-theoretically secure MPC for passive corruption. Condition (1) means that for any two maximal sets T₁, T₂ ∈ Σ we have T₁ ∪ T₂ ≠ P, which is equivalent to the condition that for any T₁, T₂ ∈ Σ, their complements intersect, i.e., (P \ T₁) ∩ (P \ T₂) ≠ {}. A set of sets, no two of which are disjoint, is also called a quorum system. Condition (1) is thus equivalent to the statement that the sets P \ Tᵢ for i = 1, …, k form a quorum system. The product of two shared values s and t is defined by

st = (Σ_{i=1}^{k} sᵢ) · (Σ_{j=1}^{k} tⱼ) = Σ_{i=1}^{k} Σ_{j=1}^{k} sᵢtⱼ,

⁸ Remember that a structure is specified by the maximal sets.


i.e., by a sum of k² share products. Therefore we can use the following observation by Beaver and Wool [BW98], used originally for a different secret-sharing scheme (see Section 5.4). For every term sᵢtⱼ in the above sum, there exists at least one player who knows both sᵢ and tⱼ. This player (or one of them) can compute the product sᵢtⱼ and share it among the players (using the basic secret-sharing scheme). Since st is a linear combination of these shared values sᵢtⱼ, the sharing of st can be computed non-interactively. An efficiency improvement is obtained if each player first adds all terms assigned to him and then shares the sum. Note that terms of the form sᵢtᵢ (i.e., i = j) can be assigned to any player knowing the i-th share. In summary, we have:

Multiplication protocol (passive):

Preparation (once and for all): Partition the set {(i, j) : 1 ≤ i, j ≤ k} into n sets U₁, …, Uₙ such that for all (i, j) ∈ Uₘ we have m ∈ T̄ᵢ ∩ T̄ⱼ.⁹
Precondition: Two values s = Σ_{i=1}^{k} sᵢ and t = Σ_{i=1}^{k} tᵢ are shared.
Postcondition: st is shared.

1. Each player pₘ (for 1 ≤ m ≤ n) computes vₘ := Σ_{(i,j)∈Uₘ} sᵢtⱼ and shares vₘ among all players.
2. Each player locally adds all n shares received in step 1.

It is obvious that the protocol works if P ∉ Σ ⊔ Σ and, as mentioned, this condition is also necessary.
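The whole passive multiplication step can be simulated in a few lines. In this sketch (illustrative names; re-sharing is modeled by accumulating fresh additive pieces rather than by explicit messages), each term sᵢtⱼ is assigned to the first player outside Tᵢ ∪ Tⱼ:

```python
import random

P = 2**31 - 1

def split(x, k, rng):
    s = [rng.randrange(P) for _ in range(k - 1)]
    s.append((x - sum(s)) % P)
    return s

def multiply_shared(s_pieces, t_pieces, max_sets, players, rng=random):
    """st = sum_{i,j} s_i t_j: assign terms, locally sum, re-share, and add."""
    k = len(max_sets)
    assignment = {}
    for i in range(k):
        for j in range(k):
            # Condition (1) guarantees some player knows both s_i and t_j.
            knower = next(p for p in players if p not in max_sets[i] | max_sets[j])
            assignment.setdefault(knower, []).append((i, j))
    new_pieces = [0] * k
    for pm, terms in assignment.items():
        v_m = sum(s_pieces[i] * t_pieces[j] for i, j in terms) % P
        for idx, piece in enumerate(split(v_m, k, rng)):  # p_m re-shares v_m
            new_pieces[idx] = (new_pieces[idx] + piece) % P
    return new_pieces  # a fresh sharing of st

rng = random.Random(0)
sigma = [{"A", "B"}, {"C"}, {"D"}]
s_pieces, t_pieces = split(6, 3, rng), split(7, 3, rng)
st_pieces = multiply_shared(s_pieces, t_pieces, sigma, "ABCD", rng)
print(sum(st_pieces) % P)  # 42
```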

5.4 Comparison with the Beaver-Wool Scheme

The secret-sharing scheme of [BW98] works as follows: The secret is split by a sum sharing into l shares, where l is the number of minimal qualified sets. Then each share is given to the players in one of the minimal qualified sets. While this sharing looks similar to ours (in our scheme we consider the maximal non-qualified sets rather than the minimal qualified sets), it differs in a crucial way: Condition (1) is required not only for the multiplication protocol, but even for the mere reconstruction of shared secrets. A qualified set can reconstruct the secret because it overlaps with any other minimal qualified set and hence knows all the shares; i.e., one must start with an access structure (and corresponding secrecy structure) which satisfies (1) in the first place. This is the reason why the Beaver-Wool scheme cannot be enhanced to tolerate active corruption.

6 Secure MPC for a General (Σ, 𝒜)-Adversary

The basic structure of the protocol is as described in Section 5.1. Two changes are required to also tolerate active corruption: the secret-sharing scheme and

⁹ Such a partition exists because of condition (1). Some of the Uₘ may be empty.


the multiplication protocol must be made robust against cheating by a set of players in 𝒜 (including possibly the dealer). These two protocols are described in the following two subsections.

6.1 Verifiable Secret Sharing

As mentioned above, a secret-sharing scheme is useless if not all players can be assumed to behave correctly. A first problem is that some players may contribute false shares during reconstruction. This can be solved by distributing the secret in a redundant manner, allowing for error correction. Since this requires the set of shares to satisfy a certain consistency condition, a second problem arises, namely that a cheating dealer can distribute inconsistent shares. Verifiable secret-sharing solves both these problems.

Definition 3. A verifiable secret-sharing (VSS) scheme for a set P of players with secrecy structure Σ and secure for adversary structure 𝒜 consists of two protocols, Share and Reconstruct, such that even if the adversary corrupts players according to 𝒜, the following conditions hold:

1. If Share terminates successfully, then the Reconstruct protocol yields the same fixed value for all possible adversary strategies, i.e., the dealer is committed to a single value.
2. If the dealer is honest during Share, then Reconstruct always yields his input value.
3. If the dealer is honest, then the adversary cannot obtain any information about the shared secret.

We show how the secret-sharing scheme of Section 5.2 for the secrecy structure Σ can be extended to a VSS scheme secure in the presence of a (Σ, 𝒜)-adversary, provided that Σ and 𝒜 satisfy the following condition:

P ∉ Σ ⊔ 𝒜 ⊔ 𝒜.    (2)

We first describe the VSS sharing protocol, which only depends on Σ but not on 𝒜, and then discuss condition (2) together with the reconstruction protocol. To assure that the dealer correctly shares a value, we only need to guarantee, independently for each of the k shares, that all honest players receiving this share obtain the same value.¹⁰ This is easily achieved as follows. For each share, say sᵢ, all players receiving that share (those in T̄ᵢ) check pairwise whether the value received from the dealer is the same.
If any inconsistency is detected, the players detecting it complain using broadcast, and the dealer must broadcast sᵢ to all the players. Secrecy cannot be violated because a complaint is sent only if either the dealer is corrupted or a corrupted player received sᵢ, hence the adversary knew sᵢ already. After these checks it is guaranteed that all honest players knowing sᵢ hold the same value for sᵢ.
¹⁰ Guaranteeing this condition independently for each of the k shares suffices because the k shares are completely general and need not satisfy a consistency condition like in schemes based on polynomials. Every set of k shares uniquely determines a secret.


VSS Share:

1. Share the secret using the scheme of Section 5.2.¹¹
2. For each share sᵢ: Each pair of players in P \ Tᵢ check over a secure channel whether their received values for sᵢ agree. If any inconsistency is detected, the players complain, using (possibly simulated) broadcast.
3. The dealer broadcasts all shares for which complaints were raised, and the players accept these shares. If the dealer refuses any of these broadcasts, the protocol is aborted.

Condition (2) implies that for a given collection of values received during reconstruction of a share, say sᵢ, there is only one consistent explanation for which is the correct value of sᵢ, namely that value v for which the set of differing values corresponds to a set in 𝒜. For a given list of partially false values for sᵢ, if there were two possible values v′ and v″ with corresponding sets A′ and A″ in 𝒜, then the set P \ (A′ ∪ A″) alone could not be qualified in the secret-sharing scheme, since otherwise the secret would be uniquely determined by those values, contradicting the assumption. But if P \ (A′ ∪ A″) is not qualified, it is in Σ, and this contradicts condition (2). Hence the following protocol works.

VSS Reconstruct:

1. All players send all their shares bilaterally to all other players.¹²
2. Each player reconstructs locally each of the k shares s₁, …, sₖ and adds them up to obtain the secret s = s₁ + ⋯ + sₖ.

Reconstruction of share sᵢ (same for each player): Let vⱼ (for j ∈ T̄ᵢ) be the value for sᵢ sent by player pⱼ. Take the unique value v such that there exists A ∈ 𝒜 with vⱼ = v for all j ∈ T̄ᵢ \ A.

Note that the reconstruction is performed independently for each share sᵢ. This fact is used in the robust multiplication protocol discussed below.
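The per-share reconstruction rule translates directly into code. In this sketch (illustrative names; the adversary structure is given by its maximal sets), the correct value is the unique candidate whose dissenters form a set in the structure:

```python
def reconstruct_share(reported, adv_structure):
    """Return the unique v whose set of dissenting players lies in the structure."""
    candidates = []
    for v in set(reported.values()):
        dissenters = {p for p, vj in reported.items() if vj != v}
        if any(dissenters <= bad for bad in adv_structure):
            candidates.append(v)
    if len(candidates) != 1:
        raise ValueError("no unique explanation: condition (2) must have failed")
    return candidates[0]

# B or C (but not both) may lie about the share value they forward:
adv = [{"B"}, {"C"}]
print(reconstruct_share({"A": 5, "B": 9, "C": 5, "D": 5}, adv))  # 5
```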

6.2 Robust Multiplication Protocol

The approach of Section 5.3 fails if active cheating occurs because a single false term sᵢtⱼ can change the result arbitrarily. Hence we need a method for guaranteeing that a player correctly computes and shares such a term. This is achieved by assigning each term sᵢtⱼ to all players knowing both sᵢ and tⱼ, and having

¹¹ If a player does not receive a share because the dealer is corrupted, then he can take a default share, say 0.
¹² No broadcast is required.


each of these players share the value by VSS. After each of these (say r) players has shared sᵢtⱼ, the players open the r − 1 differences of these values to verify that they are all equal to 0. This does not violate secrecy because if no cheater is involved, no information will be leaked. On the other hand, if at least one cheater is involved, secrecy need not be guaranteed since the adversary knew sᵢ and tⱼ beforehand. If any of the differences is not 0, then sᵢ and tⱼ are reconstructed and sᵢtⱼ is computed openly and shared with a default sharing. Correctness is guaranteed as long as one of the involved players is honest since successful cheating requires to pass the checking phase without any complaints. This is guaranteed if the condition

P ∉ Σ ⊔ Σ ⊔ 𝒜    (3)

is satisfied because each term sᵢtⱼ is known to the players in the complement of a set in Σ ⊔ Σ. This condition is also necessary. In summary, we have:

Multiplication protocol:

Precondition: Two values s = Σ_{i=1}^{k} sᵢ and t = Σ_{i=1}^{k} tᵢ are shared by VSS.
Postcondition: st is shared by VSS.

1. Each player pₘ computes all terms sᵢtⱼ he can (i.e., those for which m ∈ T̄ᵢ ∩ T̄ⱼ) and shares them using VSS.
2. For each (i, j), let p_{m₁}, …, p_{m_r} be the ordered list of the players who computed sᵢtⱼ in step 1 (where r depends on i and j). The players collectively compute¹³ and open the r − 1 differences of the value shared by p_{m₁} and the value shared by p_{m_i}, for i = 2, …, r.
3. If all these opened values are 0, then the sharing by p_{m₁} is used as the sharing of sᵢtⱼ. Otherwise, sᵢ and tⱼ are reconstructed and the sharing for the term sᵢtⱼ is obtained by p₁ taking share sᵢtⱼ and all other players taking share 0.
4. The players locally compute the sum of their shares of all terms sᵢtⱼ, resulting in a sharing of st.

7 Conclusions

Because of the simplicity of the presented protocols, it is easy to verify that their complexity is polynomial in n, |Σ|, and |𝒜|. Although for a threshold adversary the complexity is exponential in n, for a very small number of players the protocol is very efficient and can possibly lead to the preferred protocol from a practical viewpoint. One advantage of the described protocol is that it works over any field or ring, in particular also over the binary field GF(2). This is significant in view of

¹³ Computing the difference is achieved by each player computing the difference locally.


the fact that a digital circuit can easily, and without essential loss of efficiency, be transformed into a circuit using only XOR and AND gates, hence into an arithmetic circuit over GF(2). In contrast, other protocols require a field GF(q) of size q ≥ n, resulting possibly in a complexity overhead for translating the digital circuit into an arithmetic circuit over GF(q). A theme of general interest in secure MPC is to design protocols that are efficient in the size of the descriptions of the secrecy and the adversary structures (or, more generally, the adversary specification). Obviously, this task depends on which type of description one uses, i.e., on the adversary specification language. The specification language of this paper is the list of all maximal sets of a structure. Assuming Σ = 𝒜, a protocol that is efficient for a substantially more powerful specification language was given in [CDM00]: Σ can be described by any linear secret-sharing scheme with secrecy structure Σ. It is an open problem to find other specification languages for which efficient protocols exist.

Acknowledgments I would like to thank Matthias Fitzi and Martin Hirt for interesting discussions on secure multi-party computation which were the motivation for this work. Martin also gave much appreciated feedback on this paper.

References

[Bea91] D. Beaver. Secure multi-party protocols and zero-knowledge proof systems tolerating a faulty minority. Journal of Cryptology, vol. 4, no. 2, pp. 75–122, 1991.
[BW98] D. Beaver and A. Wool. Quorum-based multi-party computations. Advances in Cryptology — EUROCRYPT '98, Lecture Notes in Computer Science, vol. 1403, Springer-Verlag, pp. 25–35, 1998.
[BGW88] M. Ben-Or, S. Goldwasser, and A. Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In Proc. 20th ACM Symposium on the Theory of Computing (STOC), pp. 1–10, 1988.
[BGP89] P. Berman, J. A. Garay, and K. J. Perry. Towards optimal distributed consensus (extended abstract). In Proc. 21st ACM Symposium on the Theory of Computing (STOC), pp. 410–415, 1989.
[Can00] R. Canetti. Security and composition of multi-party cryptographic protocols. Journal of Cryptology, vol. 13, no. 1, pp. 143–202, 2000.
[CCD88] D. Chaum, C. Crépeau, and I. Damgård. Multi-party unconditionally secure protocols (extended abstract). In Proc. 20th ACM Symposium on the Theory of Computing (STOC), pp. 11–19, 1988.
[CDD+99] R. Cramer, I. Damgård, S. Dziembowski, M. Hirt, and T. Rabin. Efficient multi-party computations with dishonest minority. Advances in Cryptology — EUROCRYPT '99, Lecture Notes in Computer Science, vol. 1592, Springer-Verlag, pp. 311–326, 1999.
[CDM00] R. Cramer, I. Damgård, and U. Maurer. General secure multi-party computation from any linear secret sharing scheme. Advances in Cryptology — EUROCRYPT 2000, B. Preneel (Ed.), Lecture Notes in Computer Science, vol. 1807, Springer-Verlag, pp. 316–334, 2000.
[FM02] S. Fehr and U. Maurer. Linear VSS and distributed commitment schemes based on secret sharing and pairwise checks. Advances in Cryptology — CRYPTO '02, M. Yung (Ed.), Lecture Notes in Computer Science, vol. 2442, Springer-Verlag, pp. 565–580, 2002.
[FM97] P. Feldman and S. Micali. An optimal probabilistic protocol for synchronous Byzantine agreement. SIAM Journal on Computing, 26(4):873–933, Aug. 1997.
[FHM98] M. Fitzi, M. Hirt, and U. Maurer. Trading correctness for secrecy in unconditional multi-party computation. Advances in Cryptology — CRYPTO '98, Lecture Notes in Computer Science, vol. 1462, Springer-Verlag, pp. 121–136, 1998. See corrected version at http://www.crypto.ethz.ch/publications.
[FHM99] M. Fitzi, M. Hirt, and U. Maurer. General adversaries in unconditional multi-party computation. Advances in Cryptology — ASIACRYPT '99, K.-Y. Lam et al. (Eds.), Lecture Notes in Computer Science, vol. 1716, Springer-Verlag, pp. 232–246, 1999.
[FM98] M. Fitzi and U. Maurer. Efficient Byzantine agreement secure against general adversaries. Distributed Computing — DISC '98, Lecture Notes in Computer Science, vol. 1499, Springer-Verlag, pp. 134–148, 1998.
[GMW87] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game — a completeness theorem for protocols with honest majority. In Proc. 19th ACM Symposium on the Theory of Computing (STOC), pp. 218–229, 1987.
[HM97] M. Hirt and U. Maurer. Complete characterization of adversaries tolerable in secure multi-party computation. In Proc. 16th ACM Symposium on Principles of Distributed Computing (PODC), pp. 25–34, Aug. 1997.
[HM00] M. Hirt and U. Maurer. Player simulation and general adversary structures in perfect multi-party computation. Journal of Cryptology, vol. 13, no. 1, pp. 31–60, 2000.
[HM01] M. Hirt and U. Maurer. Robustness for free in unconditional multi-party computation. Advances in Cryptology — CRYPTO '01, J. Kilian (Ed.), Lecture Notes in Computer Science, vol. 2139, Springer-Verlag, pp. 101–118, 2001.
[ISN87] M. Ito, A. Saito, and T. Nishizeki. Secret-sharing scheme realizing general access structure. In Proc. IEEE Globecom '87, pp. 99–102, IEEE, 1987.
[LSP82] L. Lamport, R. Shostak, and M. Pease. The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3):382–401, July 1982.
[MR98] S. Micali and P. Rogaway. Secure computation: The information theoretic case. Manuscript, 1998. Former version: Secure computation. Advances in Cryptology — CRYPTO '91, Lecture Notes in Computer Science, vol. 576, Springer-Verlag, pp. 392–404, 1991.
[PSW00] B. Pfitzmann, M. Schunter, and M. Waidner. Secure reactive systems. IBM Research Report RZ 3206, Feb. 14, 2000.
[RB89] T. Rabin and M. Ben-Or. Verifiable secret-sharing and multiparty protocols with honest majority. In Proc. 21st ACM Symposium on the Theory of Computing (STOC), pp. 73–85, 1989.
[Yao82] A. C. Yao. Protocols for secure computations. In Proc. 23rd IEEE Symposium on the Foundations of Computer Science (FOCS), pp. 160–164, IEEE, 1982.

Forward Secrecy in Password-Only Key Exchange Protocols

Jonathan Katz¹,⁴, Rafail Ostrovsky², and Moti Yung³

¹ Department of Computer Science, University of Maryland (College Park), [email protected]
² Telcordia Technologies, Inc., [email protected]
³ Department of Computer Science, Columbia University, [email protected]
⁴ Work done while at Columbia University

Abstract. Password-only authenticated key exchange (PAKE) protocols are designed to be secure even when users choose short, easily-guessed passwords. Security requires, in particular, that the protocol cannot be broken by an off-line dictionary attack in which an adversary enumerates all possible passwords in an attempt to determine the correct one based on previously-viewed transcripts. Recently, provably-secure protocols for PAKE were given in the idealized random oracle/ideal cipher models [2,8,19] and in the standard model based on general assumptions [11] or the DDH assumption [14]. The latter protocol (the KOY protocol) is currently the only known practical solution based on standard assumptions. However, only a proof of basic security for this protocol has appeared. In the basic setting the adversary is assumed not to corrupt clients (thereby learning their passwords) or servers (thereby modifying the value of stored passwords). Simplifying and unifying previous work, we present a natural definition of security which incorporates the more challenging requirement of forward secrecy. We then demonstrate via an explicit attack that the KOY protocol as originally presented is not secure under this definition. This provides the first natural example showing that forward secrecy is a strictly stronger requirement for PAKE protocols. Finally, we present a slight modification of the KOY protocol which prevents the attack and — as the main technical contribution of this paper — rigorously prove that the modified protocol achieves forward secrecy.

1 Introduction

Protocols allowing mutual authentication of two parties and generation of a cryptographically-strong shared key (authenticated key exchange) underlie most secure interactions on the Internet. Indeed, it is near-impossible to achieve any level of security over an unauthenticated network without mutual authentication and key-exchange protocols. The former are necessary because one needs

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 29–44, 2003.
© Springer-Verlag Berlin Heidelberg 2003


to know “with whom one is communicating”, while the latter are required because cryptographic techniques (such as encryption, etc.) are useless without a shared key which must be periodically refreshed. Furthermore, high-level protocols are frequently developed and analyzed using the assumption of “authenticated channels”; this assumption cannot be realized without a secure mechanism for implementing such channels using previously-shared information. Client-server authentication requires some information to be shared between client and server or else there is nothing distinguishing the client from other parties in the network. The classical example of shared information is a high entropy, cryptographic key; this key can then be used, e.g., for message authentication or digital signatures. Indeed, the first systematic and formal treatments of authenticated key exchange [10,6,3,4,1,20] assumed that participants shared cryptographically-strong information: either a secret key [6,3,4,1,20] or public keys [10,1,20]. Under these strong setup assumptions, many protocols for the two-party case have been designed and proven secure [6,3,1,20].

If the client is ultimately a human user, however, it is unrealistic to expect the client to store (i.e., remember) a high entropy key. Instead, it is more reasonable to assume that the client stores only a low entropy password. Protocols which are secure when users share keys are often demonstrably insecure when users share passwords. For example, a challenge-response protocol in which the client sends r and the server replies with x = f_K(r) (where K is the shared key) is secure only when the entropy of K is sufficiently large. When K has low entropy, an eavesdropper who monitors a single conversation (r, x) can determine (with high probability) the value of K, off-line, by trying all possibilities until a value K′ is found such that f_{K′}(r) = x.
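The off-line attack on such a challenge-response exchange is a one-liner once the transcript (r, x) is known. A sketch, with HMAC-SHA256 as an arbitrary stand-in for f_K and a tiny hypothetical dictionary:

```python
import hashlib, hmac, os

def f(key: bytes, challenge: bytes) -> bytes:
    # Stand-in PRF for f_K; the concrete choice of HMAC-SHA256 is an assumption.
    return hmac.new(key, challenge, hashlib.sha256).digest()

dictionary = [b"123456", b"password", b"letmein", b"qwerty"]

# One passively eavesdropped run with a low-entropy shared key:
K = b"letmein"
r = os.urandom(16)
x = f(K, r)

# Off-line dictionary attack: replay every candidate key against the transcript.
recovered = next(K2 for K2 in dictionary if hmac.compare_digest(f(K2, r), x))
print(recovered)  # b'letmein'
```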
When a password is shared, we distinguish between the hybrid (i.e., password/public-key) model [15,12,7], in which the client and server share a password and the client additionally knows the public key of the server, and the password-only model [5,2,8] in which the client and server share only a password (although public information may be available to all parties in the network¹). In both cases, it is important to design protocols which are secure against off-line dictionary attacks in which an adversary enumerates all possible passwords, one-by-one, in an attempt to determine the correct password based on previously-recorded transcripts. Consideration of such attacks is crucial if security is to be guaranteed even when users of the protocol choose passwords poorly. Password-only protocols (even when public information is assumed) have many practical advantages over protocols designed for the hybrid model. The password-only model eliminates the need for a PKI, thereby avoiding issues of user registration, key management, and key revocation. Furthermore, eliminating a PKI means that an on-line, trusted certification authority (CA) is no longer needed; note that access to the CA is often a performance bottleneck as the number of users becomes large. In the password-only model, once public information

¹ The existence of such information is a typical (though not universal) assumption. For example, in Diffie-Hellman key exchange [9] the group G and a generator of G are often assumed to be public.


is established, new users may join at any time without informing a central authority of their presence. Finally, in the password-only model there is no global “secret key”; this eliminates the risk that compromise of a single participant will compromise the security of the entire system.

Motivation for Our Work. Formal definitions of security for password-only authenticated key exchange (PAKE) are non-trivial, and are the subject of ongoing research. At least three different frameworks for this setting have been proposed [8,2,11]. Further complicating the issue is that, in all three frameworks, two levels of security can be distinguished depending on whether forward secrecy is required. Making matters worse, the various definitions of forward secrecy are subtly different from one another and hence a multitude of security notions exists; as an example, in [2] four distinct notions of forward secrecy are mentioned! One of our goals is to simplify and unify previous definitional work. It is also important to design PAKE protocols which provably achieve forward secrecy. A number of PAKE protocols achieving basic security are known. The first such protocol was given by Bellovin and Merritt [5], although they give only heuristic arguments for its security. More recently, the first formal proofs of basic security for PAKE protocols have been given in the random oracle [8,19] or ideal cipher models [2]. Subsequently, PAKE protocols were designed and proven secure in the standard model: Goldreich and Lindell [11] construct a protocol based on general assumptions which does not require public parameters, and Katz, Ostrovsky, and Yung [14] give a protocol (the KOY protocol) based on the decisional Diffie-Hellman assumption. Subsequently, other protocols with provable security in the random oracle model have been proposed [16,17].
Only some of the above protocols are known to achieve forward secrecy.² Forward secrecy is claimed for the protocol of [2], although no proof is given. A full proof of forward secrecy for the protocol of [8] has subsequently appeared [18]. In the standard model, Goldreich and Lindell [11] prove forward secrecy of their protocol. We point out, however, that the definitions of forward secrecy considered in these previous works are weaker than the definition presented here (see discussion in Section 2.1).

Our Contributions. As mentioned above, a number of definitions have been proposed for forward secrecy of PAKE protocols. However, we feel that none of these definitions adequately capture all realistic attacks. Building on the framework of Bellare, et al. [2] and extending the definition of forward secrecy contained therein, we propose a new definition of forward secrecy in the weak corruption model. We believe our definition better captures the underlying issues than previous definitions; in fact, we show concrete examples of potentially damaging attacks which are not prevented under previous definitions of forward secrecy but which are handled by our definition. In Section 3.1, we demonstrate via an explicit attack that the KOY protocol as presented in [14] does not achieve forward secrecy with respect to our

² Here and throughout the rest of the paper, we consider only the weak corruption model [2]. Our terminology is explained in greater detail in Section 2.1.


definition. The attack represents a potentially serious threat to the protocol in practice. Of additional interest, our attack shows a natural separation between the notions of basic and forward secrecy in the PAKE setting. We suggest a modification of the KOY protocol which prevents this attack, and (as our main technical contribution) give a complete proof of forward secrecy for this modified protocol in Section 4.

2 Definitions

Due to space limitations, we assume the reader is familiar with the “oracle-based” model of Bellare, Pointcheval, and Rogaway [2] (building on [3,4]) as well as their definition of basic security. Our point of departure from their definition is with regard to forward secrecy, so we only summarize those aspects of their model necessary for understanding the present work. For further details, we refer the reader to [13].

Participants, Passwords, and Initialization. We assume for simplicity a fixed set of protocol participants (also called principals or users), each of which is either a client C ∈ Client or a server S ∈ Server, where Client and Server are disjoint. Each C ∈ Client has a password pw_C. Each S ∈ Server has a vector PW_S = ⟨pw_{S,C}⟩_{C∈Client} which contains the passwords of each of the clients. We assume that all clients share passwords with all servers. Before the protocol is run, an initialization phase occurs during which global, public parameters are established and passwords pw_C are chosen for each client. We assume that passwords for each client are chosen independently and uniformly³ at random from the set {1, . . . , N}, where N is a constant which is fixed independently of the security parameter. At the outset of the protocol, the correct passwords are stored at each server so that pw_{S,C} = pw_C for all C ∈ Client and S ∈ Server.

Execution of the Protocol. In the real world, a protocol determines how principals behave in response to signals from their environment. In the model, these signals are sent by the adversary. Each principal can execute the protocol multiple times with different partners; this is modeled by allowing each principal an unlimited number of instances with which to execute the protocol. We denote instance i of user U as Π_U^i. A given instance may be used only once. Each instance Π_U^i has associated with it the variables state_U^i, term_U^i, acc_U^i, used_U^i, sid_U^i, pid_U^i, and sk_U^i; the function of these variables is as in [2].
The adversary is assumed to control all communication in the network. The adversary’s interaction with the principals (more specifically, with the various instances) is modeled via access to oracles which are described in detail below. Local state (i.e., values of state, term, etc.) is maintained for each instance with which the adversary interacts; this state is not directly visible to the adversary. The state of an instance may be updated during an oracle call, and the oracle’s output will typically depend upon this state. The oracle types are: 3

Our analysis extends to handle arbitrary distributions, including users with interdependent passwords.

Forward Secrecy in Password-Only Key Exchange Protocols


– Send(U, i, M) — This sends message M to instance Π_U^i. The oracle outputs the reply generated by this instance.
– Execute(C, i, S, j) — This executes the protocol between instances Π_C^i and Π_S^j (where C ∈ Client and S ∈ Server) and outputs the transcript of this execution. This represents occasions when the adversary passively eavesdrops on a protocol execution.
– Reveal(U, i) — This outputs the current value of session key sk_U^i.
– Corrupt(· · ·) — We discuss this oracle in Section 2.1. This oracle is not present in the definition of basic security, and is used to define forward secrecy.
– Test(U, i) — This query is allowed only once, at any time during the adversary's execution. A random bit b is generated; if b = 1 the adversary is given sk_U^i, and if b = 0 the adversary is given a random session key.

Correctness. As in [2].

Partnering. We say that two instances Π_U^i and Π_{U'}^j are partnered if: (1) U ∈ Client and U' ∈ Server, or U ∈ Server and U' ∈ Client; (2) sid_U^i = sid_{U'}^j ≠ null; (3) pid_U^i = U' and pid_{U'}^j = U; and (4) sk_U^i = sk_{U'}^j. (This slightly clarifies the notion of partnering in [2].)

2.1 Forward Secrecy

As mentioned earlier, our main departure from [2] is in our definition of forward secrecy. To completely define this notion, two orthogonal components must be specified: the nature of the Corrupt oracle, and what it means for an adversary to succeed in breaking the protocol.

The Corruption Model. The Corrupt oracle models corruption of participants by the adversary. Since the adversary in our setting already has the ability to impersonate (i.e., send messages on behalf of) parties in the network, “corruption” in our setting involves learning or modifying secret information stored by a participant. Three specific possibilities are: (1) corruption of player U may allow the adversary to learn the internal state of all (active) instances of U; (2) corruption of client C may allow the adversary to learn pw_C; and (3) corruption of server S may allow the adversary to modify passwords stored on S, that is, to change password pw_{S,C} (for some C) to any desired value.

As defined in [2], the strong corruption model allows attacks (1)–(3), while the weak corruption model allows attacks (2) and (3) only. However, this terminology is not universally accepted. In particular, the weak corruption models of [8,11] allow only attack (2); i.e., they do not allow the adversary to install bogus passwords on servers. Here, we focus on the weak corruption model and allow the adversary both to learn passwords of clients and to change passwords stored on servers (in other words, we allow attacks (2) and (3)). However, our precise formalization of the Corrupt oracle differs from that of [2] in that we consider each such attack separately. Formally, oracle Corrupt1 returns pw_C when given C ∈ Client as input.


Jonathan Katz, Rafail Ostrovsky, and Moti Yung

Oracle call Corrupt2(S, C, pw) (for S ∈ Server and C ∈ Client) sets pw_{S,C} := pw. We emphasize that the adversary can install different passwords on different servers for the same client. In the definition of [2], an adversary who installs a password pw_{S,C} also learns the “actual” password pw_C; we make no such assumption here. Note that in the case of a poorly administered server, it may be easy to modify users' passwords without learning their “actual” passwords.

We choose to focus on the weak corruption model rather than the strong corruption model for two reasons. First, fully satisfactory definitions of security in the latter model have not yet appeared; in particular, we know of no protocols satisfying any reasonable definition of forward secrecy in this model. Part of the problem is that current definitions in the strong corruption model are overly restrictive; in fact, a generalization of the argument given in [2] shows that forward secrecy (under their definition) is impossible in the strong corruption case. We therefore leave this as a subject for future research. Second, weak corruption is more relevant in practice: it will often be easier to compromise a machine after completion of a protocol execution than to compromise a machine during protocol execution. Installing bogus passwords on servers is also a realistic threat if, e.g., a server's password file is encrypted but not otherwise protected from malicious tampering.

Advantage of the Adversary. Our most significant departure from previous work is with regard to the definition of the adversary's success. Previous definitions have a number of flaws which allow attacks on supposedly “secure” protocols; our definition aims at correcting these flaws. In any definition of forward secrecy, the adversary succeeds if it can guess the bit b used by the Test oracle when this oracle is queried on a “fresh” instance; the adversary's advantage Adv is simply twice its success probability minus one.
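The two corruption oracles just formalized can be sketched as follows (an illustrative sketch; the class and variable names are ours, not part of the formal model):

```python
class CorruptOracles:
    """Weak-corruption oracles (illustrative sketch).

    pws[C] holds the client password pw_C; pw_files[S][C] holds the server copy pw_{S,C}.
    """
    def __init__(self, pws, pw_files):
        self.pws, self.pw_files = pws, pw_files

    def corrupt1(self, client):
        # Attack (2): learn the client's actual password pw_C
        return self.pws[client]

    def corrupt2(self, server, client, pw):
        # Attack (3): install an arbitrary (possibly bogus) password pw_{S,C};
        # note this does NOT reveal the actual pw_C
        self.pw_files[server][client] = pw

pws = {"alice": 17}
pw_files = {"S1": {"alice": 17}, "S2": {"alice": 17}}
oracles = CorruptOracles(pws, pw_files)
oracles.corrupt2("S1", "alice", 99)   # different passwords may now be installed
oracles.corrupt2("S2", "alice", 42)   # on different servers for the same client
print(oracles.corrupt1("alice"), pw_files["S1"]["alice"], pw_files["S2"]["alice"])  # -> 17 99 42
```

The separation of the two calls mirrors the definition: learning pw_C and installing a bogus pw_{S,C} are independent capabilities.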
Differences among the definitions arise due to different definitions of “freshness”. Note that some definition of freshness is necessary for any reasonable definition of security; if no such notion were defined, the adversary could always succeed by, for example, submitting a Test query for an instance for which it had already submitted a Reveal query. In the definition of [2], an instance Π_U^i is fresh (in [2] this is called “fs-fresh”, but since we focus on forward secrecy we use the abbreviated name) unless one of the following is true: (1) at some point, the adversary queried Reveal(U, i); (2) at some point, the adversary queried Reveal(U', j) where Π_{U'}^j and Π_U^i are partnered; or (3) the adversary made a Corrupt query before the Test query and at some point queried Send(U, i, M) for some M. This definition has been adopted by all subsequent work of which we are aware.

This definition, however, considers “secure” a protocol in which revealing one client's password enables the adversary to impersonate a different client. This is so (under the above definition) because every instance with which the adversary interacts following a Corrupt query is unfresh, and no guarantees are given with regard to unfresh instances. As another example of a flaw in the definition, consider an adversary who installs password pw_{S,C} for client C at server S,


interacts with server S, and is then able to impersonate C to a different server S' ≠ S. Under the above definition, such a protocol could be considered secure! In fact, an attack of this sort represents a real threat: in Section 3.1, we demonstrate precisely this type of attack against the original KOY protocol.

We are more careful in our definition of freshness. Under our definition, an instance Π_U^i with pid_U^i = U' is fresh unless one of the following is true:

– The adversary queried Reveal(U, i), or Reveal(U', j) where Π_{U'}^j and Π_U^i are partnered.
– The adversary queried Corrupt1(U) or Corrupt1(U') before the Test query, and at some point the adversary queried Send(U, i, M) for some M.
– The adversary queried Corrupt2(U, U', pw) before the Test query, and at some point the adversary queried Send(U, i, M) for some M.

Note that, in contrast with previous definitions (as explained above), exposing the password of user U no longer automatically results in instances of users U' ≠ U being unfresh.

We have not yet defined what we mean by a secure protocol. Note that a ppt adversary can always succeed by trying all passwords one-by-one in an on-line impersonation attack. Informally, we say a protocol is secure if this is the best an adversary can do. Formally, an instance Π_U^i represents an on-line attack if both of the following are true: (1) at some point, the adversary queried Send(U, i, M); and (2) at some point, the adversary queried Reveal(U, i) or Test(U, i). In particular, instances with which the adversary interacts via Execute calls are not counted as on-line attacks. The number of on-line attacks bounds the number of passwords the adversary could have tested in an on-line fashion. This motivates the following definition:

Definition 1. Protocol P is a secure PAKE protocol achieving forward secrecy if, for all N and for all ppt adversaries A making at most Q(k) on-line attacks, there exists a negligible function ε(·) such that Adv_{A,P}(k) ≤ Q(k)/N + ε(k).
In particular, the adversary can (essentially) do no better than guess a single password during each on-line attempt. Calls to the Execute oracle, which are not counted in Q(k), are of no help to the adversary in breaking the security of the protocol; this means that passive eavesdropping and off-line dictionary attacks are of (essentially) no use.

Some previous definitions of security consider protocols secure as long as the adversary can do no better than guess a constant number of passwords in each on-line attempt. We believe the strengthening given by Definition 1 (in which the adversary can guess only a single password per on-line attempt) is an important one. The space of possible passwords is small, so any degradation in security should be avoided when possible. This is not to say that protocols which do not meet this definition of security should never be used; before using such a protocol, however, one should be aware of the constant implicit in its proof of security. An examination of the security proofs for some protocols [2,8,19] shows that these protocols achieve the stronger level of security given by Definition 1.

Fig. 1. The KOY protocol.

However, the security proofs for other protocols [11,17] are inconclusive, and leave open the possibility that more than one password can be guessed by the adversary per on-line attack. In at least one case [21], an explicit attack is known which allows an adversary to guess two passwords per on-line attack.

3 The KOY Protocol

Due to space limitations, we include only those details of the KOY protocol that are necessary for understanding our attack, our modification, and our proof of security; we refer the reader to [14,13] for further details. A high-level description of the protocol is given in Figure 1. During the initialization phase, public information is established as follows: for a given security parameter k, primes p, q are chosen such that |q| = k and p = 2q + 1; these values define a group G in which the decisional Diffie-Hellman (DDH) assumption is

Forward Secrecy in Password-Only Key Exchange Protocols

37

believed to hold. Values g1, g2, h, c, d are selected at random from G, and a function H : {0, 1}* → Z_q is chosen at random from a universal one-way hash family. The public information consists of (a description of) G, the values g1, g2, h, c, d, and the hash function H. As part of the initialization phase, passwords are chosen randomly for each client and stored at each server. We assume that all passwords lie in (or can be mapped in a one-to-one fashion to) G. As an example, if passwords lie in the range {1, . . . , N}, password pw can be mapped to g1^pw ∈ G; this is a one-to-one mapping for reasonable values of N.

When a client Client ∈ Client wants to connect to a server Server ∈ Server, the client begins by running a key-generation algorithm for a one-time signature scheme, giving VK and SK. The client chooses random r1 ∈ Z_q and computes A = g1^{r1}, B = g2^{r1}, and C = h^{r1} · pw_C. The client then computes α = H(Client|VK|A|B|C) and sets D = (c·d^α)^{r1}. These values are sent to the server as the first message of the protocol.

Upon receiving the first message Client|VK|A|B|C|D, the server first chooses random x2, y2, z2, w2 ∈ Z_q, computes α = H(Client|VK|A|B|C), and sets E = g1^{x2} g2^{y2} h^{z2} (c·d^α)^{w2}. Additionally, a random r2 ∈ Z_q is chosen and the server computes F = g1^{r2}, G = g2^{r2}, and I = h^{r2} · pw_{S,C} (where pw_{S,C} is the password stored at the server for the client named in the incoming message). The server then computes β = H(Server|E|F|G|I) and sets J = (c·d^β)^{r2}. These values are sent to the client as the second message of the protocol.

Upon receiving the second message Server|E|F|G|I|J, the client chooses random x1, y1, z1, w1 ∈ Z_q, computes β' = H(Server|E|F|G|I), and sets K = g1^{x1} g2^{y1} h^{z1} (c·d^{β'})^{w1}. The client then signs β'|K using SK. The value K and the resulting signature are sent as the final message of the protocol.
At this point, the client accepts and computes the session key by first computing I' = I/pw_C and then setting sk_C = E^{r1} · F^{x1} G^{y1} (I')^{z1} J^{w1}. Upon receiving the final message K|Sig, the server checks that Sig is a valid signature on β|K under VK. If so, the server accepts and computes the session key by first computing C' = C/pw_{S,C} and then setting sk_S = A^{x2} B^{y2} (C')^{z2} D^{w2} · K^{r2}. If the received signature is not valid, the server does not accept and the session key remains null.

Although omitted in the above description, we assume that users always check that incoming messages are well-formed; e.g., when the server receives the first message it verifies that Client ∈ Client and that A, B, C, D ∈ G. If an ill-formed message is received, the receiving party terminates without accepting and the session key remains null.
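To make the message flow concrete, the following sketch runs one honest execution over a toy group (parameters far too small for real security; all names are ours) and checks that both sides derive the same key. The one-time signature is omitted, since it does not enter the key computation:

```python
import hashlib
import secrets

# Toy parameters (far too small for real security): p = 2q + 1, with
# G the order-q subgroup of quadratic residues mod p
q, p = 1019, 2039
g1, g2 = 4, 9                       # squares mod p, hence generators of G

def H(*parts):                      # stand-in for the universal one-way hash, output in Z_q
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

inv = lambda x: pow(x, p - 2, p)    # inverse mod the prime p

# Public values h, c, d: random elements of G
h, c, d = (pow(g1, secrets.randbelow(q), p) for _ in range(3))
pw = pow(g1, 7, p)                  # password "7" mapped into G as g1^7

# --- Client -> Server: first message ---
VK = "VK"                           # one-time signature keys omitted
r1 = secrets.randbelow(q)
A, B, C = pow(g1, r1, p), pow(g2, r1, p), pow(h, r1, p) * pw % p
alpha = H("Client", VK, A, B, C)
D = pow(c * pow(d, alpha, p) % p, r1, p)

# --- Server -> Client: second message ---
x2, y2, z2, w2, r2 = (secrets.randbelow(q) for _ in range(5))
E = pow(g1, x2, p) * pow(g2, y2, p) * pow(h, z2, p) * pow(c * pow(d, alpha, p) % p, w2, p) % p
F, G, I = pow(g1, r2, p), pow(g2, r2, p), pow(h, r2, p) * pw % p
beta = H("Server", E, F, G, I)
J = pow(c * pow(d, beta, p) % p, r2, p)

# --- Client -> Server: third message; both sides derive their keys ---
x1, y1, z1, w1 = (secrets.randbelow(q) for _ in range(4))
K = pow(g1, x1, p) * pow(g2, y1, p) * pow(h, z1, p) * pow(c * pow(d, beta, p) % p, w1, p) % p
sk_C = pow(E, r1, p) * pow(F, x1, p) * pow(G, y1, p) * pow(I * inv(pw) % p, z1, p) * pow(J, w1, p) % p
sk_S = pow(A, x2, p) * pow(B, y2, p) * pow(C * inv(pw) % p, z2, p) * pow(D, w2, p) * pow(K, r2, p) % p

print(sk_C == sk_S)                 # -> True: honest execution yields matching keys
```

Correctness follows because E^{r1} = A^{x2} B^{y2} (C')^{z2} D^{w2} and K^{r2} = F^{x1} G^{y1} (I')^{z1} J^{w1}, so both keys equal E^{r1} · K^{r2}.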

3.1 An Attack on the KOY Protocol

Here, we demonstrate via an explicit attack that the KOY protocol does not achieve forward secrecy under our definition. Our results do not affect the basic security of the protocol.

Fix a client Client. The attack begins by having the adversary impersonate Client to a server S using an arbitrary password, say pw = g1. Thus, the adversary chooses r1 and VK, constructs a message Client|VK|A|B|C|D (using password g1), and sends this message to server S. The server responds with a message S|E|F|G|I|J (computed using password pw_{S,Client} = pw_Client). Next, the adversary changes the password stored at S for Client (i.e., pw_{S,Client}) to the value g1 using the oracle call Corrupt2(S, Client, g1). Finally, the adversary computes the final message of the protocol by choosing x1, y1, z1, w1 and sending K|Sig.

If the adversary later obtains sk_S (via a Reveal query), we claim the adversary can determine pw_Client and hence successfully impersonate Client to a different server S' ≠ S. To see this, note that when S computes the session key after receiving the final message of the protocol, it uses pw_{S,Client} = g1. Thus,

sk_S = A^{x2} B^{y2} (C/g1)^{z2} D^{w2} K^{r2} = g1^{r1·x2} g2^{r1·y2} h^{r1·z2} (c·d^α)^{r1·w2} K^{r2} = E^{r1} K^{r2}.

Since the adversary can compute E^{r1}, it can use sk_S to determine the value K^{r2}. Now, using exhaustive search through the password dictionary, the adversary can identify pw_Client as the unique value for which

F^{x1} G^{y1} (I/pw_Client)^{z1} J^{w1} = K^{r2}.
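The full attack can be exercised end-to-end in a toy group (an illustrative sketch with our own names; parameters are far too small for security). The adversary runs the protocol as Client with password g1, installs g1 at the server via Corrupt2 before the final message is processed, and then recovers the real password off-line from the revealed key:

```python
import hashlib
import secrets

# Toy parameters (far too small for real security): p = 2q + 1
q, p = 1019, 2039
g1, g2 = 4, 9

def H(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

inv = lambda x: pow(x, p - 2, p)

h, c, d = (pow(g1, secrets.randbelow(q), p) for _ in range(3))
N = 50
pw_secret = 1 + secrets.randbelow(N)          # the server's real password for Client

# Adversary impersonates Client using the arbitrary password g1
r1 = secrets.randbelow(q)
A, B, C = pow(g1, r1, p), pow(g2, r1, p), pow(h, r1, p) * g1 % p
alpha = H("Client", "VK", A, B, C)
D = pow(c * pow(d, alpha, p) % p, r1, p)

# Server responds honestly using pw_secret
x2, y2, z2, w2, r2 = (secrets.randbelow(q) for _ in range(5))
E = pow(g1, x2, p) * pow(g2, y2, p) * pow(h, z2, p) * pow(c * pow(d, alpha, p) % p, w2, p) % p
F, G = pow(g1, r2, p), pow(g2, r2, p)
I = pow(h, r2, p) * pow(g1, pw_secret, p) % p
beta = H("Server", E, F, G, I)
J = pow(c * pow(d, beta, p) % p, r2, p)

# Adversary installs pw_{S,Client} := g1 via Corrupt2, then sends its last message
# (the adversary picks z1 nonzero so the dictionary test below is discriminating)
x1, y1, w1 = (secrets.randbelow(q) for _ in range(3))
z1 = 1 + secrets.randbelow(q - 1)
K = pow(g1, x1, p) * pow(g2, y1, p) * pow(h, z1, p) * pow(c * pow(d, beta, p) % p, w1, p) % p

# The unmodified server now uses the installed password g1, so sk_S = E^r1 * K^r2
sk_S = pow(A, x2, p) * pow(B, y2, p) * pow(C * inv(g1) % p, z2, p) * pow(D, w2, p) * pow(K, r2, p) % p

# Given sk_S via Reveal, the adversary strips off E^r1 and searches the dictionary off-line
K_r2 = sk_S * inv(pow(E, r1, p)) % p
found = [pw for pw in range(1, N + 1)
         if pow(F, x1, p) * pow(G, y1, p) * pow(I * inv(pow(g1, pw, p)) % p, z1, p)
            * pow(J, w1, p) % p == K_r2]
print(found == [pw_secret])                   # -> True: the real password is recovered
```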

3.2 Modifying the Protocol to Achieve Forward Security

The attack described in the previous section succeeds precisely because the password used by the server changes in the middle of protocol execution. To ensure that this cannot happen, we modify the protocol so that the password pw_{S,Client} used to construct the server's response is saved as part of the state information for that instance. When the server later computes the session key (for a given instance), it uses the value of pw_{S,Client} stored as part of the state information for that particular instance, rather than the “actual” value of pw_{S,Client} stored in “long-term memory” (e.g., the password file). Note that if multiple server instances are active concurrently, different passwords may be in use for different instances.

Although this modification is a simple one, it is a crucial change that must be made in any implementation of the protocol. The main technical contribution of this paper is a detailed proof of forward secrecy for the modified protocol; this is the first such proof (without random oracles/ideal ciphers) of forward secrecy in the model of [2].
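In implementation terms, the fix amounts to a per-instance snapshot of the password (a schematic sketch; all names here are ours, not the paper's notation): the instance copies pw_{S,C} into its state when the server's response is constructed, and any later tampering with the password file no longer affects that instance's key computation:

```python
class ServerInstance:
    """One protocol instance at the server, with its own state (illustrative sketch)."""

    def __init__(self, pw_file, client):
        self.pw_file = pw_file      # shared, mutable "long-term memory" (password file)
        self.client = client
        self.state = {}

    def respond_to_first_message(self):
        # The modification: snapshot pw_{S,C} into this instance's state now,
        # at the moment the server's response is constructed
        self.state["pw"] = self.pw_file[self.client]

    def password_for_session_key(self):
        # Key computation must use the snapshot, never the current file contents
        return self.state["pw"]

pw_file = {"alice": "original-pw"}
inst = ServerInstance(pw_file, "alice")
inst.respond_to_first_message()
pw_file["alice"] = "bogus-pw"            # Corrupt2 tampers with the file mid-execution
print(inst.password_for_session_key())   # -> original-pw
```

With concurrent instances, each snapshot is independent, so different instances may legitimately hold different passwords for the same client.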

4 Proof of Forward Secrecy

Although the modification given in the previous section seems to thwart known attacks (in particular, the one given in Section 3.1), a rigorous proof of forward secrecy is challenging. We present such a proof here; note that this is the first such proof for any PAKE protocol under the (natural) definition of forward secrecy given above.

Theorem 1. Under the DDH assumption, the modified KOY protocol is a secure PAKE protocol achieving forward secrecy.


Proof. Due to space limitations, only a sketch of the proof is given here; a complete proof appears in the full version of this work [13]. As in [14], the adversary has access to oracles Send_i (0 ≤ i ≤ 3) representing the four different types of messages (including an “initiate” message) that can be sent. We imagine a simulator that runs the protocol for adversary A. When the adversary completes its execution and outputs b', the simulator can tell whether A succeeds by checking whether (1) a single Test query was made on some instance Π_U^i; (2) instance Π_U^i is fresh; and (3) b' = b. Success of the adversary is denoted by the event fsSucc. We refer to the real execution of the experiment as P0.

In experiment P0', the adversary does not succeed if any of the following occur:

1. A verification key VK used by the simulator in responding to a Send0 query is repeated.
2. The adversary forges a new, valid message/signature pair for a verification key used by the simulator in responding to a Send0 query.
3. A value β used by the simulator in responding to Send1 queries is repeated.
4. A value β used by the simulator in responding to a Send1 query (with msg-out = Server|E|F|G|I|J) is equal to a value β' used by the simulator in responding to a Send2 query (with msg-in = Server'|E'|F'|G'|I'|J'), and yet Server|E|F|G|I ≠ Server'|E'|F'|G'|I'.

Since the probability that any of these events occurs is negligible (assuming the security of the signature scheme and the universal one-way hash function), we have fsAdv_{A,P0}(k) ≤ fsAdv_{A,P0'}(k) + ε(k) for some negligible function ε(·).

In experiment P1, upon receiving oracle query Execute(Client, i, Server, j), the values C and I are chosen independently at random from G. The simulator then checks whether pw_Client = pw_{Server,Client}. If so, the session keys are computed as

sk_Client^i := sk_Server^j := A^{x2} B^{y2} (C/pw_{Server,Client})^{z2} D^{w2} · F^{x1} G^{y1} (I/pw_Client)^{z1} J^{w1}.
On the other hand, if pw_Client ≠ pw_{Server,Client} (i.e., as the result of a Corrupt oracle query), the session keys are computed as

sk_Client^i := A^{x2} B^{y2} (C/pw_Client)^{z2} D^{w2} · F^{x1} G^{y1} (I/pw_Client)^{z1} J^{w1}
sk_Server^j := A^{x2} B^{y2} (C/pw_{Server,Client})^{z2} D^{w2} · F^{x1} G^{y1} (I/pw_{Server,Client})^{z1} J^{w1}.

(Other aspects of the Execute oracle are handled as before.) Under the DDH assumption, one can show that |fsAdv_{A,P0'}(k) − fsAdv_{A,P1}(k)| ≤ ε(k) for some negligible function ε(·); details appear in the full version.

In experiment P2, upon receiving query Execute(Client, i, Server, j), the simulator checks whether pw_Client = pw_{Server,Client}. If so, sk_Client^i is chosen randomly from G and sk_Server^j is set equal to sk_Client^i. Otherwise, sk_Client^i and sk_Server^j are chosen independently at random from G.

Claim. |fsAdv_{A,P1}(k) − fsAdv_{A,P2}(k)| ≤ ε(k) for some negligible function ε(·).

40

Jonathan Katz, Rafail Ostrovsky, and Moti Yung Initialize(1k ) — g1 , g 2 ← G χ1 , χ2 , ξ1 , ξ2 , κ ← Zq h := g1κ ; c := g1χ1 g2χ2 ; d := g1ξ1 g2ξ2 H ← UOWH(1k ) return G, g1 , g2 , h, c, d, H Fig. 2. Modified initialization procedure.
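A sketch of this modified initialization in a toy group (illustrative; parameters far too small for security, and variable names are ours). The exponents kept by the simulator act as a trapdoor for the “newly valid” tests introduced below, which we verify here on an honestly generated message:

```python
import secrets

q, p = 1019, 2039                  # toy safe-prime parameters: p = 2q + 1
g1, g2 = 4, 9                      # squares mod p, generators of the order-q subgroup

def initialize():
    """Modified Initialize (Fig. 2): the raw exponents are kept for the simulator."""
    chi1, chi2, xi1, xi2, kappa = (secrets.randbelow(q) for _ in range(5))
    h = pow(g1, kappa, p)
    c = pow(g1, chi1, p) * pow(g2, chi2, p) % p
    d = pow(g1, xi1, p) * pow(g2, xi2, p) % p
    return (h, c, d), (chi1, chi2, xi1, xi2, kappa)

(h, c, d), (chi1, chi2, xi1, xi2, kappa) = initialize()

# An honestly generated server message for some beta and password pw ...
r, beta = secrets.randbelow(q), secrets.randbelow(q)
pw = pow(g1, 7, p)
F, G = pow(g1, r, p), pow(g2, r, p)
I = pow(h, r, p) * pw % p
J = pow(c * pow(d, beta, p) % p, r, p)

# ... satisfies both validity conditions, testable directly from the trapdoor:
# F^{chi1 + beta*xi1} * G^{chi2 + beta*xi2} == J  and  I / pw == F^kappa
check1 = pow(F, (chi1 + beta * xi1) % q, p) * pow(G, (chi2 + beta * xi2) % q, p) % p == J
check2 = I * pow(pw, p - 2, p) % p == pow(F, kappa, p)
print(check1 and check2)           # -> True
```

Exponents may be reduced mod q because every element involved lies in the order-q subgroup.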

The claim follows from the negligible statistical difference between the distributions of the adversary's view in the two experiments. The proof when pw_Client = pw_{Server,Client} exactly parallels the proof given in [14]. When pw_Client ≠ pw_{Server,Client}, the proof is more complicated, but the techniques used are exactly the same. Details appear in the full version.

We now introduce some notation. For a query Send1(Server, j, msg-in), we say msg-in is previously-used if it was ever previously output by a Send0 oracle. Similarly, for a query Send2(Client, i, msg-in), we say msg-in is previously-used if it was ever previously output by a Send1 oracle. A msg-in which is not previously-used is called new.

In experiment P3, the simulator runs the modified initialization procedure shown in Figure 2, where the values χ1, χ2, ξ1, ξ2, κ are stored for future use. A new msg-in for a query Send2(Client, i, Server|E|F|G|I|J) is said to be newly valid only if F^{χ1+βξ1} G^{χ2+βξ2} = J and I/pw_Client = F^κ. (Note that, with negligible probability, a newly valid message may not actually be a valid protocol message; this is fine as far as the proof of security is concerned.) Otherwise, we say that it is newly invalid. A new msg-in for a query Send1(Server, j, Client|VK|A|B|C|D) is newly valid only if A^{χ1+αξ1} B^{χ2+αξ2} = D and C/pw_{Server,Client} = A^κ, where the value of pw_{Server,Client} is that at the time of the Send1 query (note that pw_{Server,Client} may change as a result of Corrupt queries). Otherwise, we say it is newly invalid.

Upon receiving query Send2(Client, i, msg-in), the simulator checks whether there has previously been any query of the form Corrupt1(Client). If so, the Send2 query is answered as before. Otherwise, the simulator examines msg-in. If msg-in is newly invalid, the query is answered as in experiment P2. If msg-in is newly valid, the query is answered as in experiment P2, but the simulator stores the value (∇, sk_Client^i) as the “session key”. If msg-in is previously-used, the simulator checks for the unique Send1 query following which msg-in was output; say this query was Send1(Server, j, msg-in'). If msg-in' is newly invalid, the Send2 query is answered as in experiment P2. If msg-in' is newly valid, the query is answered as in experiment P2, but the simulator stores the value (∇, sk_Client^i) as the “session key”.

Upon receiving query Send3(Server, j, msg-in), let Client := pid_Server^j. The simulator first checks when the query Send1(Server, j, first-msg-in) was made. If this Send1 query was made after a query Corrupt1(Client) or a query of the


form Corrupt2(Server, Client, ∗), the behavior of the Send3 oracle is unchanged. Otherwise, the simulator responds as in experiment P2 unless first-msg-in is newly valid; in that case, the query is answered as in experiment P2, but the simulator stores the value (∇, sk_Server^j) as the “session key”.

If the adversary queries Test(U, i) or Reveal(U, i) before any query of the form Corrupt1(U) or Corrupt2(U, U', ∗) (where U' = pid_U^i), and the stored session key is of the form (∇, sk_U^i), the adversary is given ∇. If the Test or Reveal query is made after any such Corrupt query has been made, the adversary is given the value sk_U^i. Other Test or Reveal queries are answered as in experiment P2. Completing the description of experiment P3: if the adversary ever receives the value ∇ in response to a Reveal or Test query, the adversary succeeds. Otherwise, the adversary succeeds, as before, by correctly guessing the bit b. Since the adversary's view is unchanged unless it receives the value ∇, and the adversary succeeds when this is the case, we clearly have fsAdv_{A,P2}(k) ≤ fsAdv_{A,P3}(k).

For the remaining transformations, the simulator will modify its actions in response to a query Send2(U, i, ∗) only when this query is made before a query Corrupt1(U). Similarly, the simulator's response to a query Send3(U, i, ∗) (where pid_U^i = U') will be changed only if the query Send1(U, i, ∗) was made before any queries Corrupt2(U, U', ∗) or Corrupt1(U'). For brevity in what follows, however, we will simply say that the simulator modifies its behavior only for Send oracle queries made “before any Corrupt queries”.

In experiment P4, whenever the simulator responds to a Send2 query it stores the values (K, β, x, y, z, w), where K = g1^x g2^y h^z (c·d^β)^w. Upon receiving a query Send3(Server, j, K|Sig) (assuming the query Send1(Server, j, first-msg-in) was made before any Corrupt queries), the simulator checks the value of first-msg-in = Client|VK|A|B|C|D. If first-msg-in is new, the query is answered as in experiment P3. If first-msg-in is previously-used and Vrfy_VK(β|K, Sig) = 0, the query is answered as in experiment P3 (and the session key is not assigned a value). If first-msg-in is previously-used, Vrfy_VK(β|K, Sig) = 1, and the experiment is not aborted, the simulator first checks whether there exists a (unique) i such that sid_Client^i = sid_Server^j; if so, sk_Server^j is assigned the value sk_Client^i. Otherwise, let first-msg-out = Server|E|F|G|I|J. The simulator must have stored values x', y', z', w' such that K = g1^{x'} g2^{y'} h^{z'} (c·d^β)^{w'} (this is true since the experiment is aborted if Sig is a valid signature on β|K that was not output by the simulator following a Send2 query). The session key is then assigned the value

sk_Server^j := A^{x2} B^{y2} (C/pw)^{z2} D^{w2} · F^{x'} G^{y'} (I/pw)^{z'} J^{w'}.

Claim. fsAdv_{A,P4}(k) = fsAdv_{A,P3}(k).

The distribution of the adversary's view is identical in experiments P3 and P4. It is crucial here that the value pw = pw_{Server,Client} used when responding to a Send1 query is stored as part of the state, so that the same value is used subsequently when responding to a Send3 query. If this were not the case, the value of pw_{Server,Client} could change as the result of a Corrupt2 query sometime between the Send1 and Send3 queries.


In experiment P5, upon receiving a query Send3(Server, j, K|Sig) (again, assuming the query Send1(Server, j, first-msg-in) was made before any Corrupt queries), the simulator checks the value of first-msg-in. If first-msg-in is newly invalid and the session key is to be assigned a value, the session key is assigned a value chosen randomly from G. Otherwise, the query is answered as in experiment P4.

Claim. fsAdv_{A,P5}(k) = fsAdv_{A,P4}(k).

This step is similar to the corresponding step in [14], except that it is now crucial that the same value pw_{Server,Client} is used during both the Send1 and Send3 queries for a given instance.

In experiment P6, upon receiving query Send1(Server, j, msg-in) (and assuming this query is made before any Corrupt queries), if msg-in is newly valid the query is answered as before. Otherwise, the component I is computed as h^r · g1^{N+1}, where the dictionary of legal passwords is {1, . . . , N}.

Claim. Under the DDH assumption, |fsAdv_{A,P5}(k) − fsAdv_{A,P6}(k)| ≤ ε(k) for some negligible function ε(·).

The proof of this claim follows [14], and depends on the non-malleability of messages in the protocol; details appear in the full version. Note that when a query Send1(Server, j, first-msg-in) is made before any Corrupt queries and first-msg-in is not newly valid, the simulator will not require r in order to respond to a (subsequent) query Send3(Server, j, msg-in), in case this query is ever made. On the other hand, in contrast to the basic case in which no Corrupt queries are allowed, the simulator does require r in case first-msg-in is newly valid. The reason is the following: assume the adversary queries Send1(Server, j, Client|VK|A|B|C|D) (before any Corrupt queries have been made), where Client|VK|A|B|C|D is newly valid. Assume further that the adversary subsequently learns pw_Client from a Corrupt1 query. The adversary might then query Send3(Server, j, K|Sig) followed by Reveal(Server, j).
In this case, the simulator must give the adversary the correct session key sk_Server^j, which is impossible without knowledge of r.

In experiment P7, queries to the Send2 oracle are handled differently (when they are made before any Corrupt queries). Whenever msg-in is newly invalid, the session key is assigned a value chosen randomly from G. If msg-in is previously-used, the simulator checks for the Send1 query after which msg-in was output; say this query was Send1(Server, j, msg-in'). If msg-in' is newly valid, the Send2 query is answered as before. If msg-in' is newly invalid, the session key is assigned a value chosen randomly from G.

Queries Send3(Server, j, msg-in) are also handled differently (assuming the query Send1(Server, j, first-msg-in) was made before any Corrupt queries). If first-msg-in is previously-used, Vrfy_VK(β|K, Sig) = 1, and there does not exist an i such that sid_Client^i = sid_Server^j, then the session key is assigned a value chosen randomly from G.

Claim. fsAdv_{A,P7}(k) = fsAdv_{A,P6}(k).


The claim follows from the equivalence of the distributions of the adversary's view in the two experiments, as in [14]; details appear in the full version.

Let fsSucc1 denote the event that the adversary succeeds by receiving a value ∇ following a Test or Reveal query (this can occur only before any Corrupt queries have been made). We have:

Pr_{A,P7}[fsSucc] = Pr_{A,P7}[fsSucc1] + Pr_{A,P7}[fsSucc | ¬fsSucc1] · Pr_{A,P7}[¬fsSucc1].

The event fsSucc ∧ ¬fsSucc1 can occur in one of two ways (by the definition of freshness): either (1) the adversary queries Test(U, i) before any Corrupt queries and does not receive ∇ in return; or (2) the adversary queries Test(U, i) after a Corrupt query, acc_U^i = true, but the adversary has never queried Send_ℓ(U, i, M) for any ℓ, M. In either case, sk_U^i is randomly chosen from G independent of the remainder of the experiment; therefore, Pr_{A,P7}[fsSucc | ¬fsSucc1] = 1/2. Thus,

Pr_{A,P7}[fsSucc] = 1/2 + 1/2 · Pr_{A,P7}[fsSucc1].

We now upper-bound Pr_{A,P7}[fsSucc1]. We define an experiment P7' in which the adversary succeeds only if it receives a value ∇ in response to a Reveal or Test query. Clearly Pr_{A,P7'}[fsSucc] = Pr_{A,P7}[fsSucc1] by the definition of the event fsSucc1. Since the adversary cannot succeed in experiment P7' once a Corrupt query is made, the simulator aborts the experiment if this is ever the case. For this reason, whenever the simulator would previously have stored values (∇, sk) as a “session key” (with sk being returned only in response to a Test or Reveal query after a Corrupt query), the simulator now need store only ∇.

In experiment P8, queries to the Send1 oracle are handled differently: the simulator now computes I as h^r · g1^{N+1} even when msg-in is newly valid. Following a proof similar to those in [14], it follows under the DDH assumption that, for some negligible function ε(·), |fsAdv_{A,P8}(k) − fsAdv_{A,P7'}(k)| ≤ ε(k).
A key point used in the proof is that when msg-in is newly valid, the simulator no longer needs to worry about simulating the adversary's view following a Corrupt query.

In experiment P9, queries to the Send0 oracle are handled differently: the simulator now always computes C as h^r · g1^{N+1}. Following a proof similar to those in [14], it follows under the DDH assumption that, for some negligible function ε(·), |fsAdv_{A,P9}(k) − fsAdv_{A,P8}(k)| ≤ ε(k). Note that, because session keys computed during a Send2 query are always either ∇ or chosen randomly from G, the value r is not needed by the simulator, and hence we can use the simulator to break the Cramer-Shoup encryption scheme as in [14].

The adversary's view in experiment P9 is independent of the passwords chosen by the simulator until the adversary receives ∇ in response to a Reveal or Test query (in which case the adversary succeeds). Thus, the probability that the adversary receives ∇, which is exactly Pr_{A,P9}[fsSucc], is at most Q(k)/N. This completes the proof of the theorem.


Jonathan Katz, Rafail Ostrovsky, and Moti Yung

References

1. M. Bellare, R. Canetti, and H. Krawczyk. A Modular Approach to the Design and Analysis of Authentication and Key Exchange Protocols. STOC '98.
2. M. Bellare, D. Pointcheval, and P. Rogaway. Authenticated Key Exchange Secure Against Dictionary Attacks. Eurocrypt '00.
3. M. Bellare and P. Rogaway. Entity Authentication and Key Distribution. Crypto '93.
4. M. Bellare and P. Rogaway. Provably-Secure Session Key Distribution: the Three Party Case. STOC '95.
5. S.M. Bellovin and M. Merritt. Encrypted Key Exchange: Password-Based Protocols Secure Against Dictionary Attacks. IEEE Symposium on Research in Security and Privacy, IEEE, 1992, pp. 72–84.
6. R. Bird, I. Gopal, A. Herzberg, P. Janson, S. Kutten, R. Molva, and M. Yung. Systematic Design of Two-Party Authentication Protocols. Crypto '91.
7. M. Boyarsky. Public-Key Cryptography and Password Protocols: The Multi-User Case. ACM CCCS '99.
8. V. Boyko, P. MacKenzie, and S. Patel. Provably-Secure Password-Authenticated Key Exchange Using Diffie-Hellman. Eurocrypt '00.
9. W. Diffie and M. Hellman. New Directions in Cryptography. IEEE Transactions on Information Theory, 22(6): 644–654 (1976).
10. W. Diffie, P. van Oorschot, and M. Wiener. Authentication and Authenticated Key Exchanges. Designs, Codes, and Cryptography, 2(2): 107–125 (1992).
11. O. Goldreich and Y. Lindell. Session-Key Generation Using Human Passwords Only. Crypto '01.
12. S. Halevi and H. Krawczyk. Public-Key Cryptography and Password Protocols. ACM Transactions on Information and System Security, 2(3): 230–268 (1999).
13. J. Katz. Efficient Cryptographic Protocols Preventing "Man-in-the-Middle" Attacks. PhD thesis, Columbia University, 2002.
14. J. Katz, R. Ostrovsky, and M. Yung. Efficient Password-Authenticated Key Exchange Using Human-Memorable Passwords. Eurocrypt '01.
15. T.M.A. Lomas, L. Gong, J.H. Saltzer, and R.M. Needham. Reducing Risks from Poorly-Chosen Keys. ACM Operating Systems Review, 23(5): 14–18 (1989).
16. P. MacKenzie. More Efficient Password-Authenticated Key Exchange. RSA '01.
17. P. MacKenzie. On the Security of the SPEKE Password-Authenticated Key-Exchange Protocol. Manuscript, 2001.
18. P. MacKenzie. Personal communication, April 2002.
19. P. MacKenzie, S. Patel, and R. Swaminathan. Password-Authenticated Key Exchange Based on RSA. Asiacrypt '00.
20. V. Shoup. On Formal Models for Secure Key Exchange. Available at http://eprint.iacr.org/1999/012.
21. T. Wu. The Secure Remote Password Protocol. Proceedings of the Internet Society Symposium on Network and Distributed System Security, 1998, pp. 97–111.

Weak Forward Security in Mediated RSA

Gene Tsudik
Department of Information and Computer Science, University of California, Irvine
[email protected]

Abstract. Mediated RSA (mRSA) [1] is a simple and practical method of splitting RSA private keys between the user and the Security Mediator (SEM). Neither the user nor the SEM can cheat the other, since a signature or a decryption must involve both parties. mRSA allows fast and fine-grained control (revocation) of users' security privileges. Forward security is an important and desirable feature for signature schemes. Despite some notable recent results, no forward-secure RSA variant has been developed. In this paper (abstract), we show how weak forward security can be efficiently obtained with mediated RSA. We consider several methods, based on both multiplicative and additive mRSA, and discuss their respective merits.

1 Forward Security

Forward security is a timely and active research topic which has received some attention in the recent research literature. The purpose of forward security is to mitigate an important problem in ordinary public key signatures: the inability to preserve the validity of past signatures following a compromise of one's private key. In other words, if a forward-secure signature scheme is employed, an adversary who discovers the private key of a user is unable to forge the user's signatures from earlier times (pre-dating the compromise). The notion of forward security was introduced by Anderson [2]. Since then, a number of schemes have been proposed. Some are generic, i.e., applicable to any signature scheme [3], while others target (and modify) a particular signature scheme to achieve forward security [4,5].

In this paper we concentrate on weak forward security in a mediated signature setting. Informally, weak forward security means that the adversary is unable to forge past signatures if she compromises only one (of the two) share-holders of the private key. Specifically, we propose, discuss and analyze two simple schemes built on top of mediated RSA (mRSA), a 2-out-of-2 threshold RSA scheme.

The paper is organized as follows: the next section provides an overview of mRSA. Then, Section 3 describes the forward-secure additive mRSA and discusses its security and efficiency features. Section 4 presents another scheme based on multiplicative mRSA. This scheme is more flexible but slightly less efficient. The paper ends with a summary and some directions for future work.

This work was supported by DARPA contract F30602-99-1-0530.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 45–54, 2003. c Springer-Verlag Berlin Heidelberg 2003 

2 Mediated RSA

Mediated RSA (mRSA) involves a special entity, called a SEM (SEcurity Mediator), which is an on-line semi-trusted server. To sign or decrypt a message, Alice must first obtain a message-specific token from the SEM. Without this token Alice cannot use her private key. To revoke Alice's ability to sign or decrypt, the administrator instructs the SEM to stop issuing tokens for Alice's public key. At that instant, Alice's signature and/or decryption capabilities are revoked. For scalability reasons, a single SEM serves many users.

One of mRSA's advantages is its transparency: the SEM's presence is invisible to other users. In signature mode, mRSA yields standard RSA signatures, while in decryption mode, mRSA accepts plain RSA-encrypted messages. The main idea behind mRSA is the splitting of an RSA private key into two parts, as in threshold RSA [6]. One part is given to the user while the other is given to a SEM. If the user and the SEM cooperate, they employ their respective half-keys in a way that is functionally equivalent to (and indistinguishable from) standard RSA. The fact that the private key is not held in its entirety by any one party is transparent to the outside, i.e., to those who use the corresponding public key. Also, knowledge of a half-key cannot be used to derive the entire private key. Therefore, neither the user nor the SEM can decrypt or sign a message without mutual consent.

We now provide an overview of mRSA functions. The variant described below is the additive mRSA (+mRSA) as presented by Boneh, et al. in [1]. There is also a multiplicative mRSA variant – denoted *mRSA – where the private key is computed as the product of the two shares (see the Appendix for its description). Multiplicative mRSA was first introduced in the Yaksha system [7] and later discussed in [8].

Algorithm +mRSA.key (executed by CA)
Let k (even) be the security parameter
1. Generate random k/2-bit primes: p, q
2. n ← pq
3. e ←r Z*_{φ(n)}
4. d ← 1/e mod φ(n)
5. d_u ←r Z_n − {0}
6. d_sem ← (d − d_u) mod φ(n)
7. SK ← d
8. PK ← (n, e)

After computing the above values, the CA securely communicates d_sem to the SEM and d_u to the user. (A detailed description of this procedure can be found in [1].) The user's public key PK is released, as usual, in a public key certificate.


Protocol +mRSA.sign (executed by User and SEM)
1. USER: h ← H(m), where H() is a suitable hash function (e.g., SHA-based HMAC) and |H()| < k.
2. USER: send h to SEM.
3. In parallel:
   3.1 SEM:
       (a) If USER revoked return (ERROR)
       (b) PS_sem ← h^{d_sem} mod n
       (c) send PS_sem to USER
   3.2 USER:
       (a) PS_u ← h^{d_u} mod n
4. USER: h' ← (PS_sem · PS_u)^e mod n
5. USER: If h' ≠ h then return (ERROR)
6. S ← (PS_sem · PS_u) mod n
7. USER: return (h, S)

The signature verification (+mRSA.ver) algorithm is not provided as it is identical to that in plain RSA.
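The additive split can be exercised end-to-end with toy numbers. The sketch below (tiny, insecure primes and a fixed stand-in for H(m), all made up purely for illustration; in the real scheme the CA, user, and SEM are separate parties) checks that the two half-signatures combine into an ordinary RSA signature:

```python
# +mRSA with toy parameters: d is split as d = du + dsem (mod phi(n)),
# and the half-signatures h^du and h^dsem multiply into h^d mod n.

p, q = 11, 23                 # stand-ins for the k/2-bit primes
n = p * q                     # 253
phi = (p - 1) * (q - 1)       # 220
e = 3
d = pow(e, -1, phi)           # 147, since 3*147 = 441 = 2*220 + 1

du = 99                       # user's share (would be random in Z_n - {0})
dsem = (d - du) % phi         # SEM's share

h = 42                        # stand-in for H(m), with gcd(h, n) = 1

PSu = pow(h, du, n)           # user's half-signature, computed in parallel...
PSsem = pow(h, dsem, n)       # ...with the SEM's half-signature
S = (PSsem * PSu) % n

assert S == pow(h, d, n)      # identical to a plain RSA signature
assert pow(S, e, n) == h      # verifies under the ordinary public key (n, e)
```

As the asserts show, the combined value is indistinguishable from a standard RSA signature, which is exactly the transparency property described above.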

3 Forward Secure +mRSA

The main idea in forward-secure additive mRSA (FS+mRSA) is for both SEM and user to evolve their private key shares in parallel. The evolution is very simple: each party logically multiplies its share by e. We say "logically" since no actual multiplication is performed; instead, each party merely maintains a counter (i) which is the index of the current time period. As in all other forward-secure schemes, there is a maximum number T past which the shares are not evolved. At any given time, the current private key is:

d_i = d_0 · e^i, where d_0 = d · e^{−T}

The user's and SEM's respective key shares, at a given interval, are:

d_{i,u} = d_{0,u} · e^i, where d_{0,u} = d_u · e^{−T} mod φ(n)

and:

d_{i,sem} = d_{0,sem} · e^i, where d_{0,sem} = d_sem · e^{−T} mod φ(n)

The i-th private key evolution can thus be rewritten as:

d_i = d_{0,u} · e^i + d_{0,sem} · e^i = d_0 · e^i = d · e^{i−T}

In actuality, both user and SEM always maintain d_{0,u} and d_{0,sem}, which are their respective initial shares. However, when they apply their respective shares (to sign a message) they use the current evolution. The reason for not actually computing d_{i,u/sem} is that neither the SEM nor the user knows φ(n) and thus cannot compute values of the form:

(d_{0,u/sem} · e^i) mod φ(n)

Recall that p, q and, consequently, φ(n) are known only to the CA.


The flavor of forward security offered by our approach is weak. Here 'weak' means that, throughout the lifetime of the public key (T periods), the adversary is allowed to compromise only one of the parties' secrets, i.e., only d_{i,u} or d_{i,sem} but not both. Although the above may be viewed as a drawback, we claim that weak forward security is appropriate for the mRSA setting, since the security of mRSA is based on the non-compromise of both key shares. More specifically, the SEM is an entity more physically secure and more trusted than a regular user. Hence, it makes sense to consider what it takes for mRSA to be forward secure primarily with respect to the user's private key share.

3.1 FS+mRSA in Detail

Like most forward-secure signature methods, FS+mRSA is composed of the following four algorithms: FS+mRSA.key, FS+mRSA.sign, FS+mRSA.ver, and FS+mRSA.update. The purpose of the first three is obvious, while FS+mRSA.update is the secret key share update algorithm executed by both user and SEM. We do not specify FS+mRSA.update since it is trivial: as mentioned above, it does not actually evolve each party's key share; it merely increments the interval counter.

Algorithm FS+mRSA.key (executed by CA)
Let (t, T) be the length of the update interval and the max. number of update intervals, respectively.
1-7. Identical to +mRSA.key
8. PK ← (t, T, n, e)
9. d_{0,u} ← d_u · e^{−T} mod φ(n)
10. d_{0,sem} ← d_sem · e^{−T} mod φ(n)

Protocol FS+mRSA.sign (i, m)
i (0 ≤ i ≤ T) is the current interval index and m is the input message
1. USER: h ← H_i(m), where H_i() is a suitable hash function (e.g., SHA-based HMAC) indexed with the current interval (|H_i()| < k).
2. USER: send m to SEM.
3. In parallel:
   3.1 SEM:
       (a) If USER revoked return (ERROR)
       (b) h ← H_i(m)
       (c) PS_sem ← (h^{d_{0,sem}})^{e^i} mod n
       (d) send PS_sem to USER
   3.2 USER:
       (a) PS_u ← (h^{d_{0,u}})^{e^i} mod n
4. USER: h' ← (PS_sem · PS_u)^{e·e^{T−i}} mod n
5. USER: If h' ≠ h then return (ERROR)
6. S ← (PS_sem · PS_u) mod n
7. USER: return (h, S)

We note that, in steps 3.1(c), 3.2(a), and 4, two exponentiations are performed.


Algorithm FS+mRSA.ver (i, S, m, e, n)
i (0 ≤ i ≤ T) is the claimed interval index, S is the purported signature on a message m, and (e, n) is the public key of the signer
1. if (i < 0) or (i > T) return (ERROR)
2. h ← H_i(m)
3. h' ← S^{e·e^{T−i}} mod n
4. If h' ≠ h then return (0)
5. return (1)
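A toy end-to-end run of the FS+mRSA signing and verification steps can make the "rewound" shares concrete. All parameters below (the tiny primes, T, the user's share, and the stand-in for H_i(m)) are made up for illustration only:

```python
# FS+mRSA sketch: the CA rewinds both shares by e^{-T} mod phi(n);
# at interval i each party raises its half-signature to e^i, and the
# verifier applies one extra exponentiation with e^{T-i}.
p, q = 11, 23
n, phi = p * q, (p - 1) * (q - 1)
e = 3
d = pow(e, -1, phi)

T = 5                          # maximum number of update intervals
du = 99                        # user's additive share (arbitrary toy value)
dsem = (d - du) % phi          # SEM's additive share

inv_eT = pow(pow(e, -1, phi), T, phi)
d0u = (du * inv_eT) % phi      # d_{0,u} = du * e^{-T} mod phi(n)
d0sem = (dsem * inv_eT) % phi  # d_{0,sem} = dsem * e^{-T} mod phi(n)

h = 42                         # stand-in for H_i(m)
i = 2                          # current interval, 0 <= i <= T

# Each party applies its *logically evolved* share d_{0,x} * e^i:
PSu = pow(pow(h, d0u, n), e**i, n)
PSsem = pow(pow(h, d0sem, n), e**i, n)
S = (PSsem * PSu) % n

# FS+mRSA.ver, step 3: S^{e * e^{T-i}} mod n must recover h
assert pow(S, e * e**(T - i), n) == h
```

Note that neither party ever needed φ(n) to evolve its share; the evolution is just the outer exponentiation with e^i.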

From the descriptions of FS+mRSA.sign and FS+mRSA.ver it is clear that the present scheme is correct, i.e., signature verification succeeds iff a valid signature is provided:

S = h^{d_i} = h^{d_{i,u} + d_{i,sem}} = h^{d_u·e^{−T}·e^i + d_sem·e^{−T}·e^i} = h^{d·e^{i−T}}

and

S^{e·e^{T−i}} = (h^{d·e^{i−T}})^{e·e^{T−i}} = h^{e·d} = h

3.2 Efficiency

In [1], the efficiency of mRSA is shown to be roughly equivalent to unoptimized RSA, i.e., RSA without using the Chinese Remainder Theorem (CRT). The efficiency of FS+mRSA is only slightly lower than that of mRSA. The only difference in signing is the extra exponentiation with e^i performed by both the user and the SEM in parallel. In general, an additional exponentiation with e^i is also needed for verifying FS+mRSA signatures. However, we observe that, if the user's public exponent is small (e.g., 3), the current public key e^i = 3^{i+1} is likely to be smaller than the modulus n for many values of i. For example, if e = 3 and k = 1024, |e^i| < k and e^i < n for 0 ≤ i ≤ 592. In that case, e^i can be stored as a single k-bit number and only one exponentiation would be required to verify an FS+mRSA signature.

The extra storage due to forward security in FS+mRSA is negligible. Since key shares are only logically evolved, the only extra information maintained by all signers and SEMs (on top of what is already required by mRSA) is the index of the current time interval.

3.3 Security Considerations

In all security aspects (other than forward security) the proposed scheme is essentially equivalent to plain RSA as argued in [1]. Similarly, the forward security property of FS+mRSA is based on the difficulty of computing roots in a composite-order group which is also the foundation of the RSA cryptosystem. While this extended abstract does not contain a proof of this claim, we provide the following informal argument:


Assume that the adversary compromises the user at an interval j and, as a result, learns d_{0,u}¹. In order to violate forward security, it suffices for the adversary to generate a single new signature of the form (where i < j and h = H(m) for some new message m):

S = h^{d_i} = h^{d_0·e^i} = h^{d_{0,sem}·e^i + d_{0,u}·e^i} = h^{d_{0,sem}·e^i} · h^{d_{0,u}·e^i}

Computing h^{d_{0,u}·e^i} (mod n) is trivial. However, computing h^{d_{0,sem}·e^i} (mod n) seems to require taking e-th (cube) roots (mod n) since, in the current interval j (j > i), the SEM is using as its key share (d_{0,sem}·e^j) and only "produces" values of the form h^{d_{0,sem}·e^j} mod n.

All forward-secure signature schemes proposed thus far rely on the secure deletion of old secret keys. This is not always a realistic assumption, especially when secure (tamper-resistant) hardware is not readily available. In this aspect, FS+mRSA offers an advantage since the user's secret key share is not actually evolved and no deletion of prior secrets is assumed. While the compromise of the user's current key share yields all of the user's key shares for all prior intervals, no past-dated signatures can be produced since the SEM's key share evolves separately. We also note that this property is symmetric: if a SEM's key share is ever compromised, forward security is preserved as long as the user's share remains secure.

There are, however, two types of attacks unique to FS+mRSA. We refer to the first type as a future-dating attack. In it, an adversary obtains a valid signature (m, S) from the user under the current public key (e, n, i). He then takes advantage of the private key structure to construct a valid signature S' on the same message m dated in some future interval j (i < j ≤ T). This can be easily done by computing:

S' = S^{e^{j−i}} = (h^{(d_u + d_sem)·e^{i−T}})^{e^{j−i}} = h^{d·e^{j−T}}
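The future-dating computation is easy to check numerically with toy parameters (tiny primes and a fixed h, made up for illustration). Note that it uses only public values; the countermeasure of hashing the interval index into h works precisely because h itself would then differ across periods, whereas here h is a fixed stand-in:

```python
# Future-dating a toy FS+mRSA signature: from a valid period-i signature
# S = h^{d * e^{i-T}}, an adversary derives a valid period-j signature
# S' = S^{e^{j-i}} without knowing any secret key share.
p, q = 11, 23
n, phi = p * q, (p - 1) * (q - 1)
e, T = 3, 5
d = pow(e, -1, phi)

h = 42
i, j = 1, 4                                         # i < j <= T

d_i = d * pow(pow(e, -1, phi), T - i, phi) % phi    # d * e^{i-T} mod phi(n)
S = pow(h, d_i, n)                                  # valid signature, period i
assert pow(S, e * e**(T - i), n) == h               # passes period-i check

S_future = pow(S, e**(j - i), n)                    # the adversary's step
assert pow(S_future, e * e**(T - j), n) == h        # passes period-j check
```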

We note that this attack does not compromise the forward security property of the signature scheme. However, it does pose a problem for FS+mRSA. Fortunately, there is an easy fix. It involves hashing the index of the current time interval together with the message. This is already specified in the initial step of protocol FS+mRSA.sign. (In other words, instead of h = H(m) we can compute h = H(m, i), or, better yet, h = HMAC_i(m). This essentially rules out the future-dating attack.)

The second attack type is an oracle attack. In it, an adversary, masquerading as the user, sends signature requests to the SEM during the time interval i. This is easy to do since the communication channel between the user and the SEM is neither private nor authentic. The adversary collects a number of "half-signatures" of the form (m, PS_sem) where:

PS_{i,sem} = h^{d_sem·e^{i−T}}

¹ This is possible because d_{j,u} is never actually computed, but composed, when needed, as d_{0,u}·e^j.


Suppose that at a later interval j (i < j ≤ T), the adversary actually compromises the user's secret d_{j,u}. Although the adversary cannot compute "new" signatures from prior time intervals, he can use the previously acquired half-signatures to forge signatures from period i:

S' = PS_{i,sem} · h^{d_u·e^{i−T}} = h^{d_sem·e^{i−T}} · h^{d_u·e^{i−T}} = h^{d·e^{i−T}}

One simple way of coping with the oracle attack is to require the user-SEM communication channel to be authentic. This is not much of a burden in practice due to widely deployed and available tools such as IPSec [9] and SSL [10]. An alternative is to require mRSA-based authentication of the signature request messages flowing from the user to the SEM. This can be accomplished as follows. When sending a signature request to the SEM, the user computes, in addition:

h̄ = H̄(h), where H̄() is a suitable (cryptographically strong) hash function such that |H̄()| < k.

He then computes:

P̄S_u ← h̄^{d_{0,u}} mod n

The user sends P̄S_u along with h in the signature request to the SEM. The SEM, before computing its half-signature (PS_sem), verifies P̄S_u by computing h̄ = H̄(h) and comparing:

h̄ and (P̄S_u · h̄^{d_{0,sem}})^{e·e^T} mod n

Since these two values match only if the user originated the signature request, oracle attacks can be thereby avoided.
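One way such a consistency check can work out arithmetically: the SEM folds its own initial share into the user's proof and strips the e^{−T} factor by raising to e·e^T. The sketch below uses toy parameters and hypothetical values throughout (tiny primes, a fixed stand-in for H̄(h)):

```python
# Request authentication sketch for FS+mRSA: the user proves knowledge of
# d_{0,u} by sending hbar^{d_{0,u}} mod n; the SEM completes the exponent
# to d*e (== 1 mod phi(n)) using its own share d_{0,sem}.
p, q = 11, 23
n, phi = p * q, (p - 1) * (q - 1)
e, T = 3, 5
d = pow(e, -1, phi)
du = 99
dsem = (d - du) % phi
inv_eT = pow(pow(e, -1, phi), T, phi)
d0u, d0sem = (du * inv_eT) % phi, (dsem * inv_eT) % phi

hbar = 17                        # stand-in for Hbar(h), gcd(hbar, n) = 1
PSu_bar = pow(hbar, d0u, n)      # user's proof, sent along with h

# SEM side: (PSu_bar * hbar^{d_{0,sem}})^{e * e^T} should recover hbar
check = pow(PSu_bar * pow(hbar, d0sem, n), e * e**T, n)
assert check == hbar
```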

4 Forward Secure *mRSA

We now construct another forward-secure scheme based on the multiplicative mRSA variant. Only the key generation and signing algorithms are shown; the verification algorithm is identical to that in FS+mRSA.

Algorithm FS*mRSA.key
1-7. Identical to *mRSA.key (see Appendix)
8. PK ← (t, T, n, e)
9. d_{0,sem} ← d_sem · e^{−T} mod φ(n)

The main difference with respect to FS+mRSA is the unilateral update feature. In the present scheme, only the SEM's share is evolved, whereas the user's share remains the same throughout the lifetime of the key. This is a desirable feature since it saves the user one exponentiation over FS+mRSA. However, this does not significantly influence the overall performance since, unlike FS+mRSA, the two parties cannot compute their half-signatures in parallel.

Protocol FS*mRSA.sign
1. USER: h ← H_i(m)
2. USER: send m to SEM
3. SEM: If USER revoked return (ERROR)
4. SEM: h ← H_i(m)
5. SEM: PS_sem ← h^{d_{i,sem}} mod n
6. SEM: send PS_sem to USER
7. USER: h' ← (PS_sem^{d_u})^{e·e^{T−i}} mod n
8. USER: If h' ≠ h then return (ERROR)
9. USER: S ← (PS_sem)^{d_u} mod n
10. USER: return (h, S)

The correctness of FS*mRSA is evident from the verification procedure:

S^{e·e^{T−i}} = ((PS_sem)^{d_u})^{e·e^{T−i}} = ((h^{d_{i,sem}})^{d_u})^{e·e^{T−i}} = ((h^{d_sem·e^{i−T}})^{d_u})^{e·e^{T−i}} = (h^{d·e^{i−T}})^{e·e^{T−i}} = h
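The serial flow of FS*mRSA can be traced with toy numbers (tiny primes and a fixed h, made up for illustration; d_u must be chosen invertible mod φ(n) for the multiplicative split):

```python
# FS*mRSA sketch: d = du * dsem (mod phi(n)); only the SEM's share is
# rewound by e^{-T} and evolved by e^i; the user finishes serially.
p, q = 11, 23
n, phi = p * q, (p - 1) * (q - 1)
e, T = 3, 5
d = pow(e, -1, phi)

du = 7                                    # user's share, gcd(du, phi) = 1
dsem = d * pow(du, -1, phi) % phi         # multiplicative complement
d0sem = dsem * pow(pow(e, -1, phi), T, phi) % phi

h, i = 42, 2                              # stand-in for H_i(m); interval i
PSsem = pow(pow(h, d0sem, n), e**i, n)    # SEM signs with its evolved share
S = pow(PSsem, du, n)                     # user completes the signature

# Verification is identical to FS+mRSA.ver:
assert pow(S, e * e**(T - i), n) == h
```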

Just like FS+mRSA, this scheme is vulnerable to both future-dating and oracle attacks. Fortunately, the exact countermeasures described in Section 3.3 are equally applicable here. Another trivial variation of this scheme entails only the user (but not the SEM) evolving its key share. This is a less attractive option since it is much more likely that the user, rather than the SEM, succumbs to eventual compromise. Finally, it is also possible to have both parties evolving the key (just as in FS+mRSA). The main difference here would be that signature verification would require an extra exponentiation with e2i rather than ei .

5 A Final Observation

A crucial (but purposely ignored above) detail in the construction of FS+mRSA and FS*mRSA is the use of the current period index i in the hashing of the input message. The intended purpose of the index is as a hedge against possible attacks against the key evolution scheme. However, a closer look indicates that the inclusion of the index in the hash is in and of itself sufficient to provide the same weak forward security that we hope to attain with key evolution. In this case, key evolution as described above can be dropped completely to be replaced by the simple hashing of the period index. (This would also result in a much more efficient scheme.)

6 Summary

We described two methods of obtaining efficient (yet weak) forward security with mediated RSA. These methods work with both multiplicative and additive mRSA variants. The degree of forward security is weak since we assume that only the user or the SEM (but not both) is compromised by the adversary.


However, this assumption is in line with the mRSA notion of security which is based on the inability to compromise both parties. Since, aside from signatures, mRSA can be used for encryption, the natural issue to consider is whether FS+mRSA and FS*mRSA schemes are useful for forward-secure encryption.

Acknowledgements Many thanks to Giuseppe Ateniese for pointing out an attack on the previous version of FS+mRSA as well to anonymous referees for their comments.

References

1. D. Boneh, X. Ding, G. Tsudik, and B. Wong, "Instantaneous revocation of security capabilities," in Proceedings of the USENIX Security Symposium 2001, Aug. 2001.
2. R. Anderson, "Invited lecture at the ACM Conference on Computer and Communications Security (CCS '97)," 1997.
3. H. Krawczyk, "Simple forward-secure signatures from any signature scheme," in ACM Conference on Computer and Communications Security (CCS '00), 2000.
4. G. Itkis and L. Reyzin, "Forward-secure signatures with optimal signing and verifying," in CRYPTO '01, 2001.
5. M. Bellare and S. Miner, "A forward-secure digital signature scheme," in CRYPTO '99, 1999.
6. P. Gemmel, "An introduction to threshold cryptography," RSA CryptoBytes, vol. 2, no. 7, 1997.
7. R. Ganesan, "Augmenting Kerberos with public-key cryptography," in Symposium on Network and Distributed Systems Security (T. Mayfield, ed.), (San Diego, California), Internet Society, Feb. 1995.
8. P. MacKenzie and M. K. Reiter, "Networked cryptographic devices resilient to capture," in Proceedings of the 2001 IEEE Symposium on Security and Privacy, pp. 12–25, May 2001.
9. S. Kent and R. Atkinson, "RFC 2401: Security architecture for the Internet Protocol," Nov. 1998.
10. "The OpenSSL project web page," http://www.openssl.org.

A Multiplicative mRSA – *mRSA

The *mRSA variant is largely similar to its additive counterpart. The only substantive difference has to do with parallelizing private key operations by the user and the SEM. In +mRSA, both parties can perform exponentiations with their respective private key shares in parallel. In contrast, *mRSA prescribes serial operation.

Algorithm *mRSA.key (executed by CA)
Let k (even) be the security parameter
1. Generate random k/2-bit primes: p, q
2. n ← pq
3. e ←r Z*_{φ(n)}
4. d ← 1/e mod φ(n)
5. d_u ←r Z*_{φ(n)}
6. d_sem ← (d/d_u) mod φ(n)
7. SK ← (d, n)
8. PK ← (n, e)

As in additive mRSA, the CA securely communicates d_sem to the SEM and d_u to the user. The user's public key PK is released as part of a public key certificate.

Protocol *mRSA.sign (executed by User and SEM)
1. USER: h ← H(m)
2. USER: send h to SEM
3. SEM: If USER revoked return (ERROR)
4. SEM: PS_sem ← h^{d_sem} mod n
5. SEM: send PS_sem to USER
6. USER: h' ← (PS_sem^{d_u})^e mod n
7. USER: If h' ≠ h then return (ERROR)
8. USER: S ← (PS_sem)^{d_u} mod n
9. USER: return (h, S)
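The multiplicative split and its serial signing flow can be checked with toy numbers (tiny primes and a fixed stand-in for H(m), chosen only for illustration):

```python
# *mRSA sketch: d = du * dsem (mod phi(n)); the SEM exponentiates first,
# then the user -- serial, in contrast to the parallel +mRSA flow.
p, q = 11, 23
n, phi = p * q, (p - 1) * (q - 1)
e = 3
d = pow(e, -1, phi)
du = 7                                # user's share, invertible mod phi(n)
dsem = d * pow(du, -1, phi) % phi     # dsem = d / du mod phi(n)

h = 42                                # stand-in for H(m)
PSsem = pow(h, dsem, n)               # SEM acts first...
S = pow(PSsem, du, n)                 # ...then the user completes S

assert S == pow(h, d, n)              # again indistinguishable from RSA
assert pow(S, e, n) == h
```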

On the Power of Claw-Free Permutations

Yevgeniy Dodis¹ and Leonid Reyzin²

¹ New York University Computer Science, 251 Mercer Street, New York, NY 10012, USA
[email protected]
² Boston University Computer Science, 111 Cummington Street, Boston, MA 02215, USA
[email protected]

Abstract. The popular random-oracle-based signature schemes, such as the Probabilistic Signature Scheme (PSS) and Full Domain Hash (FDH), output a signature of the form ⟨f^{−1}(y), pub⟩, where y somehow depends on the message signed (and pub) and f is some public trapdoor permutation (typically RSA). Interestingly, all these signature schemes can be proven asymptotically secure for an arbitrary trapdoor permutation f, but their exact security seems to be significantly better for special trapdoor permutations like RSA. This leads to two natural questions: (1) can the asymptotic security analysis be improved with general trapdoor permutations?; and, if not, (2) what general cryptographic assumption on f — enjoyed by specific functions like RSA — is "responsible" for the improved security? We answer both these questions. First, we show that if f is a "black-box" trapdoor permutation, then the poor exact security is unavoidable. More specifically, the "security loss" for general trapdoor permutations is Ω(q_hash), where q_hash is the number of random oracle queries made by the adversary (which could be quite large). On the other hand, we show that all the security benefits of the RSA-based variants come into effect once f comes from a family of claw-free permutation pairs. Our results significantly narrow the current "gap" between general trapdoor permutations and RSA to the "gap" between trapdoor permutations and claw-free permutations. Additionally, they can be viewed as the first security/efficiency separation between these basic cryptographic primitives. In other words, while it was already believed that certain cryptographic objects can be built from claw-free permutations but not from general trapdoor permutations, we show that certain important schemes (like FDH and PSS) provably work with either, but enjoy a much better tradeoff between security and efficiency when deployed with claw-free permutations.

1 Introduction

FDH-like Signature Schemes. In 1993, Bellare and Rogaway [BR93] formalized the well-known "hash-and-sign" paradigm for digital signature schemes by using the random oracle model. Specifically, they showed that if f is a trapdoor permutation and RO is a random function from {0, 1}* to the domain of f, then signing a message m via f^{−1}(RO(m)) is secure. This signature scheme was subsequently called "Full-Domain-Hash" or FDH.

In 1996, Bellare and Rogaway [BR96] pointed out that no tight security reduction from breaking FDH to inverting f was known. The best known security reduction lost a factor of q_hash + q_sig in security (q_hash and q_sig represent the number of queries the forger makes to RO and to the signing oracle, respectively). This meant that the inverter could invert f with much lower probability than the probability of forgery. This in turn required one to make a stronger assumption on f, potentially increasing key size and losing efficiency. To overcome this problem in case the trapdoor permutation is RSA (or Rabin), [BR96] proposed to hash the message with a random seed and format the result in a particular way (to fit it into the domain) before putting it through the permutation. The resulting scheme was thus probabilistic even when RO was fixed; it was termed PSS for "Probabilistic Signature Scheme." The security reduction for PSS (analyzed with RSA) is essentially lossless.

A natural question thus arose: was PSS necessary? In other words, could it be that a lossless security reduction for RSA-FDH was simply overlooked? In 2000, Coron [Cor00] found a better reduction for RSA-FDH that lost a factor of q_sig (instead of q_hash + q_sig), suggesting that perhaps further improvements were possible. However, in 2002 Coron [Cor02] answered the question by showing that any black-box reduction for RSA-FDH had to lose a factor of at least q_sig, thus justifying the necessity of PSS. In the same paper, he also introduced a scheme called PFDH (for "Probabilistic Full-Domain Hash"), which signs m by computing ⟨RSA^{−1}(RO(m‖r)), r⟩ for a random r.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 55–73, 2003.
© Springer-Verlag Berlin Heidelberg 2003
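The basic FDH mechanism can be sketched in a few lines. This is a toy illustration only: the primes are tiny and made up, and SHA-256 (reduced mod n) merely stands in for the random oracle, which no concrete hash function actually is:

```python
# Minimal FDH sketch: sign m as f^{-1}(RO(m)), with RSA as the trapdoor
# permutation.  NOT secure: toy modulus, and hashlib is not a random
# oracle; the mod-n reduction is only an illustrative domain mapping.
import hashlib

p, q = 1009, 1013
n = p * q
phi = (p - 1) * (q - 1)
e = 5
d = pow(e, -1, phi)            # trapdoor

def RO(m: bytes) -> int:
    # "full domain" hash: map the message into Z_n
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % n

def sign(m: bytes) -> int:
    return pow(RO(m), d, n)    # f^{-1}(RO(m))

def verify(m: bytes, s: int) -> bool:
    return pow(s, e, n) == RO(m)

s = sign(b"hello")
assert verify(b"hello", s)
```

PFDH differs only in hashing m together with a fresh random r and outputting r alongside the signature.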
This scheme is essentially PSS without the complicated formatting (and hence with slightly longer outputs), but with the same tight security.

From Generic Assumption to RSA. While in 1993 [BR93] FDH was introduced to work with any trapdoor permutation, in 1996 [BR96] PSS, and in 2002 [Cor02] PFDH, were considered only for RSA. Moreover, in 2000 [Cor00] the improved security reduction for FDH was only shown with RSA as well. This shift from generic assumptions to specific ones, while motivated by practical applications of the constructions, obscured what it was exactly about RSA that made the PSS and PFDH reductions nearly tight, and accounted for the better security reduction for FDH. It is beneficial to consider which properties of RSA are crucial here, and which are merely incidental. Among other potential insights to be gained from such a consideration is the possibility of using FDH, PSS or PFDH with other permutations. To emphasize this point and to avoid further confusion, we will denote by FDH, PFDH and PSS the signature schemes above considered with a general trapdoor permutation f, and by RSA-FDH, RSA-PFDH, RSA-PSS the specific variants with f being RSA.

Our Contribution. We consider the question of identifying the general cryptographic assumptions that make the aforementioned efficient security reductions

On the Power of Claw-Free Permutations

57

possible. Because all these schemes can be easily proven asymptotically secure with any trapdoor permutation f, it is natural to consider whether trapdoorness of f is the only security assumption necessary for a tight security reduction. The answer is "no": we show that a tight security reduction is impossible for FDH, PFDH, PSS, and in fact any scheme that consists of applying f^{−1} to the result of applying the random oracle to the message (possibly formatted with some randomness), if the scheme is to be analyzed with a general "black-box" trapdoor permutation f. Moreover, any black-box security reduction for such schemes has to lose a factor of q_hash with a generic black-box trapdoor permutation f: thus, the current security analysis for generic FDH, PFDH and PSS cannot be improved.

We also show that the general cryptographic assumption that makes the security proof for the schemes PSS/PFDH tight, and the improved security proof for FDH [Cor00] work, is the assumption of claw-free permutations. In other words, while all three schemes are asymptotically secure with any trapdoor permutation f, the exact security is dramatically improved once f comes from a family of claw-free permutations (and the bounds are exactly the same as one gets with RSA)!

We remark that from a technical point of view, the proof of security with general claw-free permutations is almost completely identical to the corresponding proof with RSA. Indeed, we are not claiming that we found a new proof. Instead, our goal is to find an elegant general assumption on f so that essentially the same (in fact, conceptually simpler!) proof works¹.
For example, it is known how to construct collision-resistant hash functions (CRHF) [Dam87] and (non-interactive) trapdoor commitments [KR00] based on claw-free permutations, but no such constructions from trapdoor permutations seem likely. In fact, Simon [Sim98] showed a black-box separation between the existence of oneway permutations and the existence of CRHF’s, and his result seems to extend to trapdoor permutations. Coupled with the above-mentioned construction of CRHF from claw-free permutations, we get a plausible separation between the existence of claw-free permutations and trapdoor permutations. Our results provide another, quite different way to separate claw-free permutations from trapdoor ones. Namely, instead of showing that something constructible with claw-free permutations (e.g., CRHF) is not constructible with trapdoor ones, we show that claw-free permutations can be provably more efficient (or, equivalently, more secure) for some constructions (e.g., those of FDH1

¹ A good analogy is the famous Cauchy-Schwarz inequality in mathematics, stating that ‖a‖ · ‖b‖ ≥ ⟨a, b⟩. This inequality was originally proved for the case of real vectors under the Euclidean norm. However, once one generalizes this proof to arbitrary Hilbert spaces, the proof actually becomes more transparent, since one does not get distracted by the specifics of real numbers, concentrating only on the essential properties of inner product spaces.

58

Yevgeniy Dodis and Leonid Reyzin

like signature schemes) than trapdoor ones. In other words, a stronger assumption provides better exact security/efficiency than a weaker one, even though both of them work asymptotically. To the best of our knowledge, this is the first separation of the above form, where a stronger primitive is provably shown to improve the security of a scheme already working with a slightly weaker primitive.

A Word on Black-Box Lower Bounds. We briefly put our black-box separation in relation to existing black-box lower bounds. Originating with the work of Impagliazzo and Rudich [IR89], several works (e.g., [Sim98,GMR01]) showed the impossibility of constructing one primitive from another. In contrast, several other works (e.g., [KST99,GT00,GGK02]) showed the efficiency limitations of constructing one cryptographic primitive from another. On the other hand, the recent work of Coron [Cor02] showed that the existing security analysis of a certain useful scheme cannot be improved if the reduction accesses the adversary in a black-box way. In our work, we show that the existing security analysis cannot be improved when based on a general complexity assumption, even though it can be improved in (quite general, but still) special cases.

2 Definitions

We let PPT stand for probabilistic polynomial time, and negl(k) refer to some negligible function of the security parameter k. The random oracle model assumes the existence of a publicly accessible truly random function RO : {0, 1}∗ → {0, 1}. Using trivial encoding tricks, however, we can always assume that RO(·) returns as many truly random bits as we need in a given application.

2.1 Trapdoor and Claw-Free Permutations

Definition 1. A collection of permutations F = {fi : Di → Di | i ∈ I} over some index set I ⊆ {0, 1}∗ is said to be a family of trapdoor permutations if:

– There is an efficient sampling algorithm TD-Gen(1^k) which outputs a random index i ∈ {0, 1}^k ∩ I and a trapdoor information TK.
– There is an efficient sampling algorithm which, on input i, outputs a random x ∈ Di. We write x ← Di as a shorthand for running this algorithm.
– Each fi is efficiently computable given the index i and an input x ∈ Di.
– Each fi is efficiently invertible given the trapdoor information TK and an output y ∈ Di. Namely, using TK one can efficiently compute the (unique) x = fi−1(y).
– For any probabilistic algorithm A, define the advantage AdvF_A(k) of A as

Pr[x′ = x | (i, TK) ← TD-Gen(1^k), x ← Di, y = fi(x), x′ ← A(i, y)]

If A runs in time at most t(k) and AdvF_A(k) ≥ ε(k), then A is said to (t(k), ε(k))-break F. F is said to be (t(k), ε(k))-secure if no adversary A can (t(k), ε(k))-break it. In the asymptotic setting, we require that the advantage of any PPT A is negligible in k. Put differently, fi is hard to invert without the trapdoor TK.

On the Power of Claw-Free Permutations


A classical example of a trapdoor permutation family is RSA, where TD-Gen(1^k) picks two random k/4-bit primes p and q, sets n = pq and ϕ(n) = (p − 1)(q − 1), picks a random e ∈ Z∗_ϕ(n), sets d = e−1 mod ϕ(n), and outputs i = (n, e), TK = d. Here Di = Z∗_n, fi(x) = x^e mod n, and fi−1(y) = y^d mod n. The RSA assumption states that this F is indeed a trapdoor permutation family.

Remark 1. When things are clear from the context, we will abuse the notation (in order to "simplify" it) and write: f for fi, D or Df for Di, A(f, . . .) for A(i, . . .), f ← F for (i ← I ∩ {0, 1}^k; f = fi), and (f, f−1) ← TD-Gen(1^k) for (i, TK) ← TD-Gen(1^k). Also, we will sometimes say that f by itself is a "trapdoor permutation".

Definition 2. For an index set I ⊆ {0, 1}∗, a collection of pairs of functions C = {(fi : Di → Di, gi : Ei → Di) | i ∈ I} is a family of claw-free permutations if:

– There is an efficient sampling algorithm CF-Gen(1^k) which outputs a random index i ∈ {0, 1}^k ∩ I and a trapdoor information TK.
– There are efficient sampling algorithms which, on input i, output a random x ∈ Di and a random z ∈ Ei. We write x ← Di, z ← Ei as a shorthand.
– Each fi (resp. gi) is efficiently computable given the index i and an input x ∈ Di (resp. z ∈ Ei).
– Each fi is a permutation which is efficiently invertible given the trapdoor information TK and an output y ∈ Di. Namely, using TK one can efficiently compute the (unique) x = fi−1(y).
– Each gi induces a uniform distribution over Di when z ← Ei and y = gi(z) is computed.
– For any probabilistic algorithm B, define the advantage of B as

AdvC_B(k) = Pr[fi(x) = gi(z) | (i, TK) ← CF-Gen(1^k), (x, z) ← B(i)]

If B runs in time at most t(k) and AdvC_B(k) ≥ ε(k), then B is said to (t(k), ε(k))-break C. C is said to be (t(k), ε(k))-secure if no adversary B can (t(k), ε(k))-break it. In the asymptotic setting, we require that the advantage of any PPT B is negligible in k.
Put differently, it is hard to find a "claw" (x, z) (meaning fi(x) = gi(z)) without the trapdoor TK. We remark that the usual definition in fact assumes that Ei = Di, that each gi is also a permutation, and that the generation algorithm also outputs a trapdoor for gi. We do not need this extra functionality for our application, which is why we use our slightly more general definition.

We also remark that a claw-free permutation family C = {(fi, gi)} immediately implies that the corresponding family F = {fi} is a trapdoor permutation family. Indeed, an inverter A for F immediately implies a claw-finder B for C, who feeds A a random challenge y = gi(z), where z ← Ei: finding x = fi−1(y) then implies that fi(x) = gi(z). Moreover, the reduction is tight: AdvC_B = AdvF_A.
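The RSA trapdoor permutation family just described can be sketched in Python as follows. This is our own toy illustration of Definition 1, with tiny fixed primes where TD-Gen would sample random k/4-bit ones; it is not for real use.

```python
# Toy sketch (ours) of Definition 1 instantiated with RSA. The primes are
# tiny and fixed, whereas TD-Gen would sample random k/4-bit primes; this
# is for illustration only, never for real use.
from math import gcd

def td_gen():
    """Return (index i, trapdoor TK) for a toy RSA trapdoor permutation."""
    p, q = 1009, 1013                  # stand-ins for random k/4-bit primes
    n, phi = p * q, (p - 1) * (q - 1)
    e = 65537                          # must satisfy gcd(e, phi(n)) == 1
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)                # d = e^{-1} mod phi(n), the trapdoor
    return (n, e), d                   # i = (n, e), TK = d

def f(i, x):                           # f_i(x) = x^e mod n
    n, e = i
    return pow(x, e, n)

def f_inv(i, tk, y):                   # f_i^{-1}(y) = y^d mod n, needs TK
    n, _ = i
    return pow(y, tk, n)
```

For example, `i, tk = td_gen()` followed by `f_inv(i, tk, f(i, 12345))` recovers 12345, while inverting f(i, ·) without tk is assumed hard.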


We will say that the resulting trapdoor permutation family F = F(C) is induced by C. Several examples of claw-free permutations will be given in Section 5. We briefly mention just one example, based on RSA: CF-Gen(1^k) runs TD-Gen(1^k) to get (n, e, d), also picks a random y ∈ Z∗_n, sets i = (n, e, y), TK = d, and defines fi(x) = x^e mod n, gi(z) = y·z^e mod n. Finding a claw (x, z) implies y = (x/z)^e mod n, which implies inverting RSA on the random input y. Thus, this family is claw-free under the RSA assumption. Notice that the induced trapdoor permutation family is exactly the regular RSA family.

Remark 2. When things are clear from the context, we will abuse the notation (in order to "simplify" it) and write: f/g for fi/gi, D/E or Df/Eg for Di/Ei, A(f, g, . . .) for A(i, . . .), (f, g) ← C for (i ← I ∩ {0, 1}^k; f = fi; g = gi), and (f, f−1, g) ← CF-Gen(1^k) for (i, TK) ← CF-Gen(1^k). Also, we will sometimes say that f by itself is a "claw-free permutation", and that (f, g) is a "claw-free pair".
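The RSA-based claw-free pair just described can be sketched as follows. This is our own toy illustration with tiny fixed parameters (a real CF-Gen samples everything at random); it checks the algebra that any claw (x, z) yields an e-th root of the random challenge y.

```python
# Toy sketch (ours) of the RSA claw-free pair: f(x) = x^e mod n and
# g(z) = y * z^e mod n. A claw f(x) = g(z) gives (x/z)^e = y mod n,
# i.e. inverts RSA on the random challenge y. Tiny fixed parameters.
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                    # trapdoor for f, held by CF-Gen
y = 4242                               # stand-in for the random y in Z_n*

def f(x):
    return pow(x, e, n)

def g(z):
    return (y * pow(z, e, n)) % n

# Only the trapdoor holder can produce a claw; we do it here just to check
# the algebra: x = f^{-1}(g(z)) forms a claw with z, and x/z is an
# e-th root of y.
z = 777
x = pow(g(z), d, n)
assert f(x) == g(z)                               # (x, z) is a claw
assert pow(x * pow(z, -1, n), e, n) == y % n      # (x/z)^e = y mod n
```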

2.2 Full Domain Hash and Related Signature Schemes

We first define the notion of a signature scheme and its security, and then describe the specific schemes considered in this paper.

Syntax. A signature scheme consists of three efficient algorithms: S = (Sig-Gen, Sig, Ver). The algorithm Sig-Gen(1^k), where k is the security parameter, outputs a pair of keys (SK, VK). SK is the signing key, which is kept secret, and VK is the verification key, which is made public. The randomized signing algorithm Sig takes as input a key SK and a message m from the associated message space M, internally flips some coins and outputs a signature σ; we write σ ← Sig_SK(m). We will usually omit SK and write σ ← Sig(m). The deterministic verification algorithm Ver takes as input the message m, the signature σ and the public key VK, and outputs the answer a, which is either succeed (signature is valid) or fail (signature is invalid). We require that Ver(m, Sig(m)) = succeed, for any m ∈ M. In case a signature scheme (like most of the schemes considered in this paper) is built in the random oracle model, we allow both Sig and Ver to use the random oracle RO.

Security of Signatures. The security of signatures addresses two issues: what we want to achieve (the security goal) and what the capabilities of the adversary are (the attack model). In this paper we will talk about the most common security goal: existential unforgeability [GMR88], denoted by UF. This means that any PPT adversary C should have a negligible probability of generating a valid signature of a "new" message. To clarify the meaning of "new", we will consider the following two attack models. In the no message attack (NMA), C gets no help besides VK. In the chosen message attack (CMA), in addition to VK, the adversary C gets full access to the signing oracle Sig, i.e. C is allowed to query the signing oracle to obtain valid signatures σ1, . . . , σn of arbitrary messages m1, . . . , mn adaptively chosen by C (notice, NMA corresponds to n = 0). C is


considered successful only if it forges a valid signature σ of a message m not queried to the signing oracle: m ∉ {m1, . . . , mn}. We denote the resulting asymptotic security notions by UF-NMA and UF-CMA, respectively. Quantitatively, we define AdvS_C(k) as

Pr[Ver_VK(m, σ) = succeed | (SK, VK) ← Sig-Gen(1^k), (m, σ) ← C^{Sig_SK(·)}(VK)]

(where m should not be queried to the signing oracle Sig(·)). Of course, in the random oracle model the adversary is also given oracle access to RO.

Definition 3. The adversary C is said to (t(k), qhash(k), qsig(k), ε(k))-break S if C runs in time at most t(k), makes at most qhash(k) hash queries to RO and qsig(k) signing queries to Sig(·), and AdvS_C(k) ≥ ε(k). If no adversary C can (t(k), qhash(k), qsig(k), ε(k))-break S, then S is said to be (t(k), qhash(k), qsig(k), ε(k))-secure. In asymptotic terms, S is UF-CMA-secure if AdvS_C(k) = negl(k) for any PPT C, and UF-NMA-secure if AdvS_C(k) = negl(k) for any PPT C which does not make any signing queries.

FDH-like Schemes. The random oracle based signatures we will consider all have the following simple form. The verification key is the description of some trapdoor permutation f; the secret key is the corresponding inverse f−1. To sign a message m, the signer first transforms m into a pair (y, pub) ← T(m) (where pub could be empty). This transformation T only utilizes the random oracle (and possibly some fresh randomness), but nothing related to f or f−1. It also has the property that one can easily verify the validity of the triple (m, y, pub). Then the signer computes x = f−1(y) and returns σ = (x, pub). On the verifying side, one first computes y = f(x) and then verifies the validity of the triple (m, y, pub). Of course, for the resulting signature to be secure, the transformation T should satisfy some additional properties.
Intuitively, the value y should be random (since f is hard to invert on random inputs) for every m, and also "independent" for different m's (more or less, this way the inverse of y(m) should not give any information about the inverse of y(m′)). Transformations T satisfying the above informally stated properties do not seem to exist in the standard model, but are easy to come up with in the random oracle model. Below we describe three popular signature schemes of the above form: Full Domain Hash (FDH [BR93]), Probabilistic Full Domain Hash (PFDH [Cor02]), and the Probabilistic Signature Scheme (PSS [BR96]). We remark that another family of similar schemes is described in [MR02]. These signatures utilize the "swap" method, and are designed for the purpose of improving the exact security of several Fiat-Shamir based signature schemes [FS86,GQ88,OS90,Mic94]. However, one can observe that the resulting signature schemes can all be viewed as less efficient and more complicated variants of the PFDH scheme, so we do not describe them. In the following, f is a trapdoor permutation with domain D; the public key is f and the secret key is f−1.

FDH: Sig(m) returns σ = f−1(RO(m)), and Ver(m, σ) checks if f(σ) = RO(m) (we assume that RO returns a random element of D, by implicitly


running the corresponding sampling algorithm with the randomness returned by RO).

PFDH: This signature is parameterized by a length parameter k0. Sig(m) picks a random r ∈ {0, 1}^k0 and returns σ = ⟨f−1(RO(m‖r)), r⟩, and Ver(m, ⟨x, r⟩) checks if f(x) = RO(m‖r) (again, we assume that RO returns a random element of D). Notice that FDH is the special case with |r| = k0 = 0. In general, we will see that the length of the "salt" r plays a crucial role in the exact security of PFDH.

PSS: This signature is parameterized by two length parameters k0 and k1. For convenience, we will assume that it takes between n − 1 and n bits to encode an element of D, so that every (n − 1)-bit number is a valid element of D (this is not crucial, but makes the description simpler). We also syntactically split the random oracle RO into three independent random oracles H : {0, 1}∗ → {0, 1}^k1, G1 : {0, 1}^k1 → {0, 1}^k0 and G2 : {0, 1}^k1 → {0, 1}^(n−k0−k1−1). Then, Sig(m) picks a random salt r ∈ {0, 1}^k0, computes w = H(m‖r), r∗ = G1(w) ⊕ r, and returns σ = f−1(0‖w‖r∗‖G2(w)). The verification Ver(m, σ) computes y = f(σ) ∈ D, splits y = b‖w‖r∗‖γ, recovers r = G1(w) ⊕ r∗, and accepts if H(m‖r) = w, G2(w) = γ and b = 0.

We remark that all these schemes make sense for arbitrary trapdoor permutation families. To emphasize a specific family F, we sometimes write F-FDH, F-PFDH, F-PSS. In particular, when F = RSA, we get the 3 specific schemes RSA-FDH, RSA-PFDH, RSA-PSS.

Security of FDH-like Signatures. All the FDH-like signatures above can be shown asymptotically UF-CMA-secure for an arbitrary trapdoor permutation family F. Unfortunately, in terms of exact security, the situation is not entirely satisfactory. Specifically, if F is (t′, ε′)-secure, then one can show (t, qhash, qsig, ε)-security of any of the above signature schemes where roughly t ∼ t′ and ε ∼ ε′ · qhash. Put differently, in all three schemes above, the "generic" analysis loses a very large factor qhash in terms of exact security.
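The FDH and PFDH signing equations above can be sketched in Python as follows. This is our own toy illustration over the small RSA permutation from Section 2.1; the random oracle is modeled by SHA-256 reduced mod n, a crude stand-in for a true full-domain hash, not the paper's construction.

```python
# Minimal sketch (ours) of the FDH and PFDH signing equations over a toy
# RSA permutation. The random oracle is modeled by SHA-256 reduced mod n:
# a crude stand-in for a true full-domain hash, used only to illustrate
# sigma = f^{-1}(RO(m)) and sigma = <f^{-1}(RO(m||r)), r>.
import hashlib, os

p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def ro(data: bytes) -> int:            # "random oracle" into Z_n (toy)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def fdh_sign(m: bytes) -> int:
    return pow(ro(m), d, n)            # sigma = f^{-1}(RO(m))

def fdh_verify(m: bytes, sig: int) -> bool:
    return pow(sig, e, n) == ro(m)     # accept iff f(sigma) = RO(m)

def pfdh_sign(m: bytes, k0: int = 64):
    r = os.urandom(k0 // 8)            # random salt r in {0,1}^k0
    return pow(ro(m + r), d, n), r     # sigma = <f^{-1}(RO(m||r)), r>

def pfdh_verify(m: bytes, sig) -> bool:
    x, r = sig
    return pow(x, e, n) == ro(m + r)
```

Note how FDH is literally PFDH with an empty salt, matching the remark that FDH is the special case k0 = 0.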
Intuitively, the security reduction has to "guess" which of the qhash random oracle queries made by the hypothetical signature breaker is "relevant" for the final forgery, and respond to this query in a manner that will help invert the trapdoor permutation f on a challenge y. In case this guess is unsuccessful, the forgery of the breaker is useless. Unfortunately, we will show in Section 4 that this large security loss is inevitable when dealing with arbitrary trapdoor permutations. On the other hand, the situation is much better for the RSA-based examples of the above three schemes. Specifically, for RSA-FDH one can get t ∼ t′ and ε = O(ε′ · qsig) [Cor00] (and [Cor02] showed that this analysis cannot be improved for RSA). Namely, even though the reduction is still not tight (i.e., ε ≫ ε′), the security loss is only a factor qsig ≪ qhash. On the other hand, RSA-PFDH and RSA-PSS are tight, i.e. t ∼ t′ and ε ∼ ε′ (provided the salt length is somewhat larger than the very moderate quantity log qsig [Cor02]), which dramatically improves on the generic security loss of qhash.


To summarize, the generic bounds for FDH, PFDH and PSS are much worse than the specific bounds when F = RSA. One of the main objectives of this paper was to try to find a general reason for this gap: namely, to find a general condition on F so that all the benefits of RSA come into effect for any F satisfying this condition. As we show next in Section 3, the needed condition is that F is induced by some family C of claw-free permutations. Indeed, we saw in Section 2.1 that RSA can be viewed as being induced by a natural claw-free family, which explains its much tighter exact security.

3 Claw-Free Permutations Yield Improved Security

Why Claw-Free Permutations Seem Useful. The precise reason why claw-free permutations are useful for FDH-like schemes will be obvious from the proof we present later. Here, however, we give some preliminary observations on why claw-freeness seems to be relevant. First, assume we are not working in the random oracle model. The most basic signature scheme that comes to mind is σ = f−1(m), where f is a trapdoor permutation. Unfortunately, it is trivially forgeable, since every σ is a valid signature of m = f(σ). The next fix would be to utilize some function g and to output σ = f−1(g(m)). Notice that finding a forgery (m, σ) for this signature is equivalent to finding m and σ such that f(σ) = g(m), which exactly amounts to finding a claw (σ, m) for the function pair (f, g). Thus, the above signature scheme is UF-NMA-secure iff (f, g) comes from a pair of claw-free permutations! In fact, the first UF-CMA signature scheme [GMR88] was based on claw-free permutations (and a non-trivial extension of the simple observation above), before more general UF-CMA constructions were obtained [BM88,NY89,Rom90].

Alternatively, let us return to the random oracle model and consider the FDH scheme (in fact, the even more general PFDH scheme). The adversary C successfully forges a signature of some message if it can come up with σ and τ = m‖r such that f(σ) = RO(τ). In other words, C has to find a claw (σ, τ) for the function pair (f, RO)! Of course, the family {(f, RO)} is not a regular family of claw-free permutations, since the random oracle is not a regular function². In the security proof, however, we may (and will in a second) simulate the random oracle by picking a random z and setting RO(τ) = g(z). In this case, any forgery (σ, τ) by C will result in a claw (σ, z) for a "regular" claw-free pair (f, g).

Our Result.
We show that all the security (and efficiency) benefits of RSA-FDH, RSA-PFDH and RSA-PSS over using a general trapdoor permutation family F come into effect once F is induced by a family C of claw-free permutations. In

² We could extend the notion of claw-free permutations to the random oracle model, where we allow the function g (as well as the adversary) to depend on the random oracle. In this setting, for any trapdoor permutation f, the security of PFDH indeed implies that the pair (f, RO) results in a family of such "oracle claw-free permutations" (again, with a large "security loss" qhash). This is to be contrasted with the regular model, where the existence of trapdoor permutations is unlikely to imply the existence of claw-free permutations.


particular, the security loss for FDH becomes only O(qsig), while the reductions for PFDH and PSS are essentially tight (once the salt length is at least log qsig). For concreteness of the discussion, we concentrate on the representative case of PFDH. A similar discussion holds for FDH and PSS. The theorem below contrasts our proof with claw-free permutations with a much looser (yet inevitably so) proof with general trapdoor permutations.

Theorem 1 (Security of PFDH). (a) Assume C is a claw-free permutation family which is (t′, ε′)-secure, and let F = F(C) be the induced trapdoor permutation family. Then F-PFDH with salt length³ k0 ≥ log qsig is (t, qhash, qsig, ε)-secure, where⁴ t = t′ − (qhash + qsig + 1) · poly(k) and ε = ε′/(1 − qsig · 2^−k0), so that ε ∼ ε′ (up to a small constant factor) when k0 > log qsig. (b) In contrast, if F is a general (t′, ε′)-secure family of trapdoor permutations, then for any salt length k0, F-PFDH is only (t, qhash, qsig, ε)-secure, where t = t′ − (qhash + qsig + 1) · poly(k) and ε = ε′ · (qhash + 1).

Proof. We start with the more interesting part (a). Let C be the forger for F-PFDH which (t, qhash, qsig, ε)-breaks it. We construct a claw-finder B for C. B gets the function pair (f, g) as an input. It makes f the public key for PFDH and gives it to C, keeping g for itself. It also prepares qsig random elements r1, . . . , rqsig ∈ {0, 1}^k0; these will be the salts of the messages it will sign for C. We call this initial list L (this list will shrink as we move along). To respond to a hash query m′‖r′, we distinguish three cases. First, if the value is already defined, we return it. Else, if r′ ∈ L, B picks and remembers a random x′, and returns RO(m′‖r′) = f(x′) to C. Finally, if r′ ∉ L, B picks and remembers a random z′, and returns RO(m′‖r′) = g(z′) to C. If the forger makes a signature query mi, we pick the next element ri from the current list, and see if RO(mi‖ri) is defined.
If so, it is equal to f(xi) for some xi, so we return ⟨xi, ri⟩ as the signature of mi. Else, we pick a random xi, define RO(mi‖ri) = f(xi) and return ⟨xi, ri⟩ to C. In either case, we remove ri from the list L. Eventually (with probability ε), C will output a forgery ⟨x, r⟩ of some message m. Without loss of generality we assume that C asked the hash query m‖r before (if not, B can do it for C; this increases the number of hash queries by one). If the answer was g(z), we get that f(x) = g(z), so B outputs the claw (x, z). Otherwise (the answer was f(x)), we did not learn anything, so B fails. We see that the probability ε′ that B finds a claw is ε · Pr(E), where E is the event that the forgery corresponded to g(z) rather than to f(x), so that B does not fail. It remains to show that Pr(E) ≥ 1 − qsig · 2^−k0. We notice, however, that the only way that B will fail is if the value r was still in the list L at the time the hash query m‖r was asked. But at this point C has no information about at

³ As shown in [Cor02], the analysis can be extended even to k0 < log qsig, but the reduction stops being tight.
⁴ Here poly(k) is a fixed polynomial depending on the time it takes to evaluate f and g in C.


most qsig totally random elements in L (remember, an element is discarded from L after each signing query). So the probability that r ∈ L is at most qsig · 2^−k0, completing the proof.

Finally, we briefly sketch the proof of (b). This proof is essentially from [BR93], and is given mainly for the purposes of contrasting it with the proof of (a) above. From the signature forger C, we need to construct an inverter A for F. A picks a random index ℓ ∈ {1, . . . , qhash + 1}, hoping that the ℓ-th hash query will be on a new value (not defined in a previous hash query or signature query), and that C will forge a signature based on it (note that if C is successful, such an ℓ has to exist). For all hash queries j except for the ℓ-th one, A responds by picking a random xj and returning f(xj) (unless the value is already defined, in which case this value is returned). For the ℓ-th one, A responds with its challenge y (unless the hash value is already defined through a previous hash or signing query, in which case A fails). To answer a signing query mi, A picks a random ri and xi and defines RO(mi‖ri) = f(xi) (unless it was already defined, in which case it uses the corresponding answer). It then returns ⟨xi, ri⟩ as the signature of mi. Finally, C returns a forgery ⟨x, r⟩ for some message m with probability ε. If the ℓ-th hash query happens to be exactly m‖r, then x is the correct preimage of the challenge y. Otherwise, A fails. It is easy to see that A's simulation of the random oracle is perfect and reveals no information about ℓ. Thus, A correctly guesses ℓ with probability 1/(qhash + 1), obtaining a total probability of ε/(qhash + 1) of inverting y.

Notice that the proof of part (a) is indeed identical to the corresponding proof for RSA [Cor02], except we abstract away all the specifics of RSA. As before, the fact that k0 > log qsig ensures that we can tell apart the hash queries related to the qsig signing queries from those maybe related to the forgery.
But we see the crucial way the proof uses the claw-freeness of C: the “signing-related” queries get answered with f (x), while the “forging-related” queries get answered with g(z) (where x and z are random to ensure a random answer). In particular, there is no need to guess in advance a single “forging-related” query which actually happens to be the one we need: no matter which of these queries will result in the forgery x, one still gets a claw (x, z) such that f (x) = g(z). This should be contrasted with the standard proof of part (b), where we have to guess this query in advance in order to embed our challenge y in the answer. As we show in the next section, the security loss of qhash is optimal for general trapdoor permutations, so the above “guessing” argument cannot be improved.
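The two kinds of oracle answers used by the claw-finder B of part (a) can be sketched as follows. This is our own toy illustration of B's bookkeeping (not the paper's pseudocode), built on the small RSA claw-free pair from Section 2.1; the forger C itself is not simulated, we only exercise how B answers hash and signing queries.

```python
# Illustrative sketch (ours) of how the claw-finder B from part (a) answers
# random oracle and signing queries: salts reserved on the list L are
# answered with f(x'), all others with g(z'), so a forgery on a fresh salt
# would immediately yield a claw f(x) = g(z).
import os

p, q = 1009, 1013
n = p * q
e = 65537
y = 4242
f = lambda x: pow(x, e, n)
g = lambda z: (y * pow(z, e, n)) % n
rand_elt = lambda: int.from_bytes(os.urandom(4), "big") % n

L = [b"\x00", b"\x01", b"\x02"]           # q_sig = 3 reserved random salts
ro_table, x_log, z_log = {}, {}, {}

def hash_query(m: bytes, r: bytes) -> int:
    key = (m, r)
    if key not in ro_table:
        if r in L:                        # signing-related: embed f(x')
            x_log[key] = rand_elt()
            ro_table[key] = f(x_log[key])
        else:                             # forging-related: embed g(z')
            z_log[key] = rand_elt()
            ro_table[key] = g(z_log[key])
    return ro_table[key]

def sign_query(m: bytes):
    r = L[0]                              # next reserved salt (still in L)
    hash_query(m, r)                      # ensures RO(m||r) = f(x')
    L.pop(0)                              # discard the used salt
    return x_log[(m, r)], r               # signature <x', r>, no f^{-1} used

x, r = sign_query(b"m1")
assert f(x) == hash_query(b"m1", r)       # the produced signature verifies
h = hash_query(b"forge", b"\xff")         # fresh salt: answer embeds g(z')
assert h == g(z_log[(b"forge", b"\xff")]) # a forgery here would be a claw
```

Note that B signs without ever inverting f: it programs the oracle so that it already knows the preimage x′ of every signing-related hash value.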

4 Trapdoor Permutations Cannot Yield Better Security

In this section we explain our "black-box" model and the limitations of proving tighter security results for FDH-like schemes based on general trapdoor permutations. Specifically, our argument will show that a security loss of Ω(qhash) is inevitable for FDH, PFDH and PSS, showing the tightness of the kind of analysis we used in part (b) of Theorem 1. In order to unify our argument, we consider any signature scheme of the form ⟨f−1(y), pub⟩, where (y, pub) ← T(m) is obtained


from the message m using a constant (in our schemes, one or two) number of random oracle calls, possibly some additional randomness, but without utilizing f or f−1. The only thing we require from T in our proof is that for any distinct messages m1, . . . , mq, setting ⟨yj, pubj⟩ ← T(mj) will result in all distinct yj's with all but negligible probability (over the choices of the random oracle, for any q polynomial in the security parameter). In the following, we will call any signature scheme S (of the above form) utilizing such a "collision-resistant" mapping T legal. Certainly, FDH, PFDH and PSS are all legal.

Assume now that one claims to have proven a statement of the form: "if F is (t′, ε′)-secure, then some particular legal S is (t, qhash, qsig, ε)-secure, for any trapdoor permutation family F." We remark that our lower bound will apply even for proving the much weaker UF-NMA security (i.e., disallowing the adversary to make any signing queries), so we will assume throughout that qsig = 0 and denote q = qhash. A natural proof for such a statement will give a reduction R from any adversary C which (t, q, 0, ε)-breaks the unforgeability of S in the random oracle model, to an inverter A which (t′, ε′)-breaks the one-wayness of f in the standard model. Intuitively, this reduction R (which we sometimes identify with A, since the goal of R is to construct A) is black-box if it only utilizes the facts that: (1) f is a trapdoor permutation, but the details of f's implementation are unimportant; (2) C (t, q, 0, ε)-breaks S whenever it is given oracle access to a true random oracle (which has to be "simulated" by R), but the details of how C does it are unimportant. In other words, there are two objects that a "natural" reduction R (or inverter A) utilizes in a "black-box" manner: the trapdoor permutation f and the forger C. We explain our modeling of each separately.

4.1 Modeling Black-Box Trapdoor Permutation Family F

Following previous work [GT00,GGK02], we model black-box access to a trapdoor permutation family F by three oracles (G, F, F−1), available to all the participants of the system. For simplicity, we will assume that the domain of all our functions is D = {0, 1}^k, and that both the index i and the trapdoor TK also range over {0, 1}^k and are in one-to-one correspondence with each other. F is the forward evaluation oracle, which takes the index i of our trapdoor permutation fi and the input x, and simply returns the value of fi(x). G takes the trapdoor value TK and returns the index i of the function fi whose trapdoor is TK (informally, knowing the trapdoor one also knows the function, but not vice versa). Finally, F−1 takes the trapdoor TK and the value y and returns fi−1(y), where i = G(TK). Intuitively, any choice of the oracles G, F, F−1 will yield a trapdoor permutation as long as:

(a) G(·) is a permutation over {0, 1}^k.
(b) F(i, ·) is a permutation over {0, 1}^k for every index i.
(c) F−1(TK, ·) is a permutation over {0, 1}^k for every trapdoor TK.
(d) F−1(TK, F(G(TK), x)) = x, for any x, TK ∈ {0, 1}^k.
(e) For any A, let AdvF_A(k) be

Pr[x′ = x | x, TK ← {0, 1}^k, i = G(TK), y = F(i, x), x′ ← A^{G,F,F−1}(i, y)]
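A toy instantiation of these oracles can be sketched as follows. This is our own illustration over a small domain {0,1}^k with k = 8, with the oracles realized as seeded pseudorandom permutations so that property (d) holds by construction; a genuine black-box family would use truly random permutations over {0,1}^k.

```python
# Toy instantiation (ours) of the oracles (G, F, F^{-1}) as (seeded)
# random permutations over the small domain {0,1}^k with k = 8.
# Property (d), F^{-1}(TK, F(G(TK), x)) = x, holds by construction.
import random

K = 8
DOM = list(range(2 ** K))

_g = random.Random(0).sample(DOM, len(DOM))   # G: a random permutation

def G(tk: int) -> int:                        # trapdoor TK -> index i
    return _g[tk]

def _perm(i: int):                            # independent permutation per i
    return random.Random(1_000_000 + i).sample(DOM, len(DOM))

def F(i: int, x: int) -> int:                 # forward evaluation f_i(x)
    return _perm(i)[x]

def F_inv(tk: int, y: int) -> int:            # inversion, needs the trapdoor
    return _perm(G(tk)).index(y)              # f_i^{-1}(y) with i = G(TK)
```

In this sketch, inverting F(i, ·) without the trapdoor amounts to searching a random-looking permutation, which mirrors the query-counting argument below.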


Then we want to require that for any A making a polynomial in k number of queries to G, F, F−1, we have AdvF_A(k) = negl(k). For exact security, we say that F is (qG, qF, qF−1, ε)-secure if AdvF_A(k) ≤ ε for any A making at most qG queries to G, qF queries to F and qF−1 queries to F−1.⁵ Put differently, we simply rewrote Definition 1, except the efficient computation of fi and fi−1 (the latter with the trapdoor) is replaced by oracle access to the corresponding oracles, and the notion of "polynomial time" became "polynomial query complexity".⁶ In particular, any choice of (G, F, F−1) satisfying conditions (a)-(e) above forms a valid "black-box" trapdoor permutation family. And, therefore, our black-box reduction from the forger C to the inverter A for F should work with any such black-box trapdoor permutation. We choose a specific, very natural black-box trapdoor permutation family for this purpose: G is simply a random permutation, and so is F(i, ·) (for any i, and independently for different i's). Finally, F−1(TK, y) is defined in the obvious way so as to satisfy (d) (in particular, it is also a random permutation for any value of TK). With respect to this family, we can compute the expected advantage of any inverter A (taken over the random choice of G, F, F−1). For that, assume that A(i, y) makes qG queries to G, qF queries to F and qF−1 queries to F−1. In fact, for future use⁷ we will also allow A to make qF̃−1 queries to the new oracle F̃−1(i′, y′) := F−1(G−1(i′), y′), with the obvious restriction that this oracle cannot be called on input (i, y). Without loss of generality, let us assume that A never makes a call to F−1(TK′, ·) before first inquiring about G(TK′). In this case, since F−1(TK′, ·) is totally random for every TK′, there is no need for A to ever call the inverse oracle F−1(TK′, y′) more than once: unless G(TK′) = i, such a call is useless for inverting y, and if the equality holds, calling F−1(TK′, y) immediately inverts y.
So we may assume that qF−1 = 0, and that A wins if it ever makes a call G(·) with the "correct" trapdoor TK = G−1(i), in addition to when it correctly inverts y. Then, naturally extending the definition of AdvF_A to also account for querying F̃−1 on inputs different from (i, y), we show Lemma 1.

E[AdvF_A(k)] ≤ qG/2^k + qF/(2^k − qF̃−1).

Hence, with all but 2^−Ω(k) probability, AdvF_A(k) = 2^−Ω(k), for any PPQ A.

Proof. The first term qG/2^k is the probability that A calls G on TK = G−1(i). Assuming that this did not happen, the best strategy of A is to perform qF̃−1 arbitrary queries to F̃−1(i, ·) (with no second input equal to y). This will eliminate qF̃−1 values of x as possible preimages of y, so that there is no need to query F(i, x) for these values of x. Finally, the probability that qF queries to F will hit one of the (2^k − qF̃−1) equally likely possibilities for the needed F−1(TK, y) is at most the second term.

⁵ In the black-box model, time is less important, since the efficiency conditions for certain functionalities are replaced by having oracle access to these functionalities. So query complexity is a more natural complexity measure in this setting.
⁶ For this section, we let PPQ stand for "probabilistic polynomial query complexity".
⁷ Intuitively, this oracle will correspond to the forgeries returned to A by C.


Intuitively, the above result says that a truly random family of permutations forms a family of trapdoor permutations with very high probability, since there is no way to invert a random permutation other than by sheer luck.

4.2 Modeling Black-Box Forger C

Let us now turn our attention to the black-box way in which the reduction R can utilize some signature forger C when constructing the inverter A. (From now on, we identify R and A.) Recall that R is given an index i for F and a random element y to invert. It is natural to assume that it sets the same i as the public key for C, and then simply runs C(i) one or more times (possibly occasionally "rewinding" the state of C, but always leaving i as the public key for S). Of course, R has to simulate for C the q = q_hash random oracle queries, since there is no random oracle in the "world of A".[8] Finally, somewhere in this simulation, A will utilize the challenge y, so that the forgery returned by C will help A to invert y. We see that the only things about C that this kind of reduction is concerned with are the upper bound q on the number of hash queries made by C, and the forged signature returned by C. In other words, from R's perspective, there are q + 1 rounds of interaction between R and C: q responses to C's hash queries, followed by a forgery (m, σ, pub) returned by C. Hence, it is very natural to measure the complexity of the reduction (or the inverter A) as the number of rounds of interaction it has with C.[9] We will call this complexity q_R. On the other hand, R should succeed in inverting y with the claimed probability ε′ for any (q, ε)-valid forger C. This means that C is guaranteed to output a forgery for S with probability ε by making at most q hash queries to R, provided R answers them in a manner "indistinguishable" from a true random oracle. On the other hand, it is not important (and the success of R should not depend on it) how C managed to obtain the forgery, why it asks some particular hash query, how long it takes C to decide which next hash query to ask, etc.
Therefore, in particular, R should succeed even when we give C (as we will do in our proof below) oracle access to the F̃^{-1} oracle, which can invert any ỹ that C wishes (this allows C to trivially forge any signature it wants). Also, we will assume that C can choose randomness in a manner not controllable by R (w.l.o.g., C chooses all the randomness it needs at the very beginning of its run). This requires a bit of justification. Indeed, it seems natural to give our reduction the ability to run C with different random tapes. But consider a deterministic C with some particular fixed random tape. In this case, R should not be able to "know" this fixed tape given only oracle access to C, but should still work with this C. Now, taking an average over all such C's with a fixed tape, we effectively get that R should work with a forger who chooses its random tape at the beginning, without R "knowing" this choice.

[8] Recall that, to get a stronger result, we assume that C makes no signing queries.
[9] Of course, we should also remember the quantities q_G and q_F, having to do with the black-box access to a trapdoor permutation. We will return to those later.

On the Power of Claw-Free Permutations

69

Summary. We are almost done with our modeling, which we now summarize. C is allowed to: (1) choose some arbitrarily large random tape at the beginning (possibly of exponential size, since R should work even with a computationally unbounded C); (2) have oracle access to F̃^{-1}, allowing it to forge arbitrary signatures; (3) make at most q hash queries to R. On the other hand, provided R answers these queries in a manner indistinguishable from the random oracle, C has to output a forgery (m, x, pub) (i.e., (m, F(i, x), pub) should pass the verification test) with probability at least ε. Similarly, R can: (1) interact with C for at most q_R rounds; (2) call F at most q_F times; (3) call G at most q_G times; (4) rewind C to any prior point in C's run; (5) start a new copy of C, where C will choose fresh randomness. Finally, R has to: (1) "properly" answer the hash queries of C; (2) invert the challenge y with probability at least ε′. Our objective is to prove an upper bound on ε′ as a function of q_R, q_F, q_G, q, ε in the black-box model outlined above. We do this in the next section.

4.3 Our Bound

Theorem 2. For any legal signature scheme S analyzed in the black-box model with general trapdoor permutations, we have

    ε′ ≤ q_G/2^k + q_F/(2^k − (2 q_R/q)) + 4 · (q_R/q) · (ε/q)        (1)

In particular, for any PPQ reduction R running C at most a constant number of times (i.e., q_R = O(q)), we have ε′ = O(ε/q), so the reduction loses at least a factor of Ω(q_hash) in security.

Proof. We describe a particular (q, ε)-legal forger C for S. Recall that, to sign m, our signature scheme first gets (y, pub) ← T(m), and then returns (x, pub), where x = F̃^{-1}(i, y). We also assumed that the transformation T is such that: (1) it calls the random oracle at most a constant number of times (in the proof, we assume this number is 1; a larger number will only affect the constant 4 in Equation (1)); and (2) when invoked with distinct m_j's, with all but negligible probability all the y_j's are distinct. First, C chooses a truly random function H from (q/2)-tuples of elements of {0,1}^k to indices in {1, ..., q/2}. This function will control the index of the forgery that C will compute later. Next, C uses q hash query calls to obtain (y_j, pub_j) ← T(m_j) (if needed, using some additional randomness prepared separately from the function H) for q arbitrary, but distinct and fixed, messages m_1 ... m_q. If any of the answers y_j are the same, C rejects, since we assumed that with a true random oracle such a collision happens with at most negligible probability (i.e., that S is legal). Otherwise, if all of them are distinct, C computes ℓ = H(y_1 ... y_{q/2}), and then, with probability ε, outputs a forged signature (F̃^{-1}(i, y_ℓ), pub_ℓ) of m_ℓ. Clearly, this C is (q, ε)-legal, so our reduction R should be able to invert y with this C, with probability at least ε′. Before arguing that this ε′ must satisfy
Equation (1), we observe the following. The number of times R has to interact with C between any two successive distinct forgeries is at least q/2. Indeed, unless any of the values y_1 ... y_{q/2} change, C will return exactly the same forgery to R, since the function H is fixed by now. So R has to either start a new copy of C and wait for q steps, or rewind one of the current copies of C to at least some query j ≤ q/2, somehow change the value y_j by returning a different random oracle answer, and then run C for another q − j ≥ q/2 steps. In particular, R never sees more than N def= 2 q_R/q different forgeries (in fact, the actual number is roughly εN, but this is not important). Let us now estimate the success probability of R. Let E denote the event that C ever returns the inverse of y to R, i.e., that y_ℓ = y for one of the forgeries returned by C. Clearly, Pr(R succeeds) ≤ Pr(E) + Pr(R succeeds | ¬E) def= p_1 + p_2. We start by estimating p_2. Notice that, since E did not happen, C never called F̃^{-1}(i, y). Thus, combining R and C into one "super-inverter" A′, we get that A′ made q_G calls to G, q_F calls to F, and at most N calls to F̃^{-1} on inputs different from (i, y). By Lemma 1, we get that p_2 ≤ q_G/2^k + q_F/(2^k − N). As for p_1, recall that the number of different forgeries that C can return is at most the number of times it evaluates H on a different (q/2)-tuple of values, which in turn is at most N, as we just argued. Indeed, the input to H predetermines the forgery that C can return, so these inputs must be different for different forgeries, and it takes R at least q/2 steps to "change" the input to H. Take any one of these (at most) N times, with the input to H being y_1 ... y_{q/2}. Since C will never proceed with a forgery if at least two of the y_j's are the same, at most one of the first q/2 values of y_j can be equal to y. Since H is a truly random function, the probability that it returns ℓ such that y_ℓ = y is at most 2/q. Moreover, even if this event happened, C outputs the forgery only with probability ε, giving an overall probability of at most 2ε/q of inverting y each time C might output a new forgery. By the union bound, the overall probability of the event E is p_1 ≤ N · 2ε/q. Combining the bounds we got for p_1 and p_2, we get Equation (1). Intuitively, the argument above says that the reduction must guess the "relevant" hash query, where it can give an answer dependent on the challenge y. And it can do so for only one query, since the signature scheme S is legal and there is no "structure" to a random trapdoor permutation which would allow the use of random but "related" values of y for the other hash queries. So this guess is correct with probability only 1/q. Finally, we remark that the term q_R/q in Equation (1) can be roughly interpreted as the number of times our reduction ran C.
Not surprisingly, running C from scratch qR /q times will improve the chances of success by a factor proportional to qR /q, which is indeed the behavior we see in Equation (1). For qR /q = O(1), however, we see that we lose the factor Ω(q).
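For intuition about the size of the loss in Equation (1), one can plug in representative numbers (the parameter values below are our own illustrative choices, not taken from the paper):

```python
# Upper bound of Theorem 2, Equation (1):
#   eps' <= qG/2^k + qF/(2^k - 2*qR/q) + 4*(qR/q)*(eps/q)
def eps_prime_bound(k, qG, qF, qR, q, eps):
    return qG / 2**k + qF / (2**k - 2 * qR / q) + 4 * (qR / q) * (eps / q)

# Example: k = 80, a forger making q = 2^20 hash queries that succeeds with
# eps = 1/2, and a reduction running the forger once (qR ~ q) while making
# 2^30 oracle calls to G and F. The first two terms are ~2^-50 and the
# third term, 4*(eps/q) ~ 1.9e-6, dominates: the reduction's success
# probability is smaller than the forger's by a factor on the order of q.
b = eps_prime_bound(k=80, qG=2**30, qF=2**30, qR=2**20, q=2**20, eps=0.5)
assert b < 4 / 2**20
print(b)
```

Increasing q_R (running C many more times) grows the third term proportionally, matching the remark above that rerunning C from scratch buys success probability linearly.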

5 Some Constructions of Claw-Free Permutations

Since we know so few number-theoretic constructions of trapdoor permutations (essentially RSA, Rabin and Paillier [Pai99]), we know very few constructions
of claw-free permutations as well. Luckily, every trapdoor permutation we currently know in fact yields some natural claw-free permutation. On the other hand, we mentioned that this implication from trapdoor to claw-free permutations is very unlikely to hold in general. Therefore, in this section we give several general conditions on trapdoor permutations (enjoyed by the currently known trapdoor permutations) which suffice to imply the existence of claw-free permutations (in fact, via very efficient constructions). These conditions can be viewed as narrowing the gap between general claw-free permutations and the very specific ones based on trapdoor permutations like RSA. We also point out at the end of this section that not all known claw-free permutations actually follow the general constructions we present below.

Using Homomorphic Trapdoor Permutations. This is the most natural generalization of the RSA-based construction presented in Section 2.1. Assume we have a family F of trapdoor permutations with two group operations + and ∗ such that each f ∈ F is homomorphic with respect to these operations: f(a + b) = f(a) ∗ f(b). We can construct the following claw-free permutation family C out of F. CF-Gen(1^k) runs (f, f^{-1}) ← TC-Gen(1^k), also picks a random y ∈ D, sets g_y(b) = y ∗ f(b), and outputs (f, f^{-1}, g_y). Now, finding a claw (a, b) implies that f(a) = y ∗ f(b), which means that f(a − b) = y, which means that a − b = f^{-1}(y); so we manage to invert the trapdoor permutation f on a random point y.

Using Random-Self-Reducible Trapdoor Permutations. This is a further generalization of the previous construction. Assume there are efficient functions I and O which satisfy the following conditions. For any output value y ∈ D, picking a random b and applying O(y, b) results in a random point z ∈ D (notice that O(y, ·) does not have to be a permutation or to be invertible). Then, if one finds the value a = f^{-1}(z), applying I(y, a, b) results in finding the correct value x = f^{-1}(y). So one effectively reduces the worst-case task of inverting y to an average-case task of inverting a random z. We say that such an f is random-self-reducible (RSR) if O and I satisfying the above conditions exist. Notice that a homomorphic f is RSR via z def= O(y, b) = y ∗ f(b) (which is actually f(x + b)). Then a = f^{-1}(z) = x + b, so we can define I(y, a, b) def= a − b. We can construct the following claw-free permutation family C out of any RSR trapdoor permutation family F. CF-Gen(1^k) runs (f, f^{-1}) ← TC-Gen(1^k), also picks a random y ∈ D, sets g_y(b) = O(y, b), and outputs (f, f^{-1}, g_y). Now, finding a claw (a, b) implies that f(a) = O(y, b), which means that I(y, a, b) = f^{-1}(y) = x. Thus, we have inverted the trapdoor permutation f on a random point y.

An Ad Hoc Claw-Free Permutation. Any of the above constructions can be applied to trapdoor permutations like RSA, Rabin and Paillier. However, we know of some ad hoc constructions of claw-free permutations which do not follow the above methodology. One such example (based on factoring Blum-Williams integers) is the original claw-free permutation family of [GMR88]. Here n = pq,
where p ≡ 3 mod 8, q ≡ 7 mod 8, and QR(n) stands for the group of quadratic residues modulo n. Then we set our domain D = QR(n), and let f(a) = a^2 mod n and g(b) = 4b^2 mod n. If f(a) = g(b) for a, b ∈ QR(n), then (a − 2b) is divisible by p or q, but not by n, which allows one to factor n.
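As a sanity check of the last claim, here is a toy instantiation of the [GMR88] family with artificially small primes (the parameter choices are ours; real moduli are of cryptographic size, and finding a claw is then infeasible). A claw (a, b) with f(a) = g(b) factors n via gcd(a − 2b, n):

```python
from math import gcd

# Toy parameters: p = 3 mod 8 and q = 7 mod 8, so n is a Blum-Williams
# integer and 2 has Jacobi symbol -1 modulo n.
p, q = 19, 23
n = p * q

# QR(n): the quadratic residues modulo n.
QR = sorted({x * x % n for x in range(1, n) if gcd(x, n) == 1})

f = {a: a * a % n for a in QR}          # f(a) = a^2 mod n
g = {b: 4 * b * b % n for b in QR}      # g(b) = 4*b^2 mod n

# Both maps permute QR(n):
assert sorted(f.values()) == QR and sorted(g.values()) == QR

# Brute-force a claw (a, b); feasible only because n is tiny.
a = QR[0]
b = next(b for b in QR if g[b] == f[a])

# a^2 = (2b)^2 mod n, yet a != +-2b mod n (their Jacobi symbols differ),
# so gcd(a - 2b, n) is a nontrivial factor of n.
factor = gcd((a - 2 * b) % n, n)
assert factor in (p, q)
print("claw:", (a, b), "-> factor of n:", factor)
```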

References

ACM89. Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing, Seattle, Washington, 15–17 May 1989.
BM88. Mihir Bellare and Silvio Micali. How to sign given any trapdoor function. In Goldwasser [Gol88], pages 200–215.
BR93. Mihir Bellare and Phillip Rogaway. Random oracles are practical: A paradigm for designing efficient protocols. In Proceedings of the 1st ACM Conference on Computer and Communication Security, pages 62–73, November 1993. Revised version available from http://www.cs.ucsd.edu/˜mihir/.
BR96. Mihir Bellare and Phillip Rogaway. The exact security of digital signatures: How to sign with RSA and Rabin. In Ueli Maurer, editor, Advances in Cryptology—EUROCRYPT 96, volume 1070 of Lecture Notes in Computer Science, pages 399–416. Springer-Verlag, 12–16 May 1996. Revised version appears at http://www-cse.ucsd.edu/users/mihir/papers/crypto-papers.html.
Cor00. Jean-Sébastien Coron. On the exact security of full domain hash. In Mihir Bellare, editor, Advances in Cryptology—CRYPTO 2000, volume 1880 of Lecture Notes in Computer Science, pages 229–235. Springer-Verlag, 20–24 August 2000.
Cor02. Jean-Sébastien Coron. Optimal security proofs for PSS and other signature schemes. In Lars Knudsen, editor, Advances in Cryptology—EUROCRYPT 2002, volume 2332 of Lecture Notes in Computer Science, pages 272–287. Springer-Verlag, 28 April–2 May 2002.
Dam87. Ivan Damgård. Collision-free hash functions and public-key signature schemes. In David Chaum and Wyn L. Price, editors, Advances in Cryptology—EUROCRYPT 87, volume 304 of Lecture Notes in Computer Science. Springer-Verlag, 1988, 13–15 April 1987.
FS86. Amos Fiat and Adi Shamir. How to prove yourself: Practical solutions to identification and signature problems. In Andrew M. Odlyzko, editor, Advances in Cryptology—CRYPTO '86, volume 263 of Lecture Notes in Computer Science, pages 186–194. Springer-Verlag, 1987, 11–15 August 1986.
GGK02. Rosario Gennaro, Yael Gertner, and Jonathan Katz. Bounds on the efficiency of encryption and digital signatures. Technical Report 2002-22, DIMACS: Center for Discrete Mathematics and Theoretical Computer Science, 2002. Available from http://dimacs.rutgers.edu/TechnicalReports/2002.html.
GMR88. Shafi Goldwasser, Silvio Micali, and Ronald L. Rivest. A digital signature scheme secure against adaptive chosen-message attacks. SIAM Journal on Computing, 17(2):281–308, April 1988.
GMR01. Yael Gertner, Tal Malkin, and Omer Reingold. On the impossibility of basing trapdoor functions on trapdoor predicates. In 42nd Annual Symposium on Foundations of Computer Science, Las Vegas, Nevada, October 2001. IEEE.
Gol88. Shafi Goldwasser, editor. Advances in Cryptology—CRYPTO '88, volume 403 of Lecture Notes in Computer Science. Springer-Verlag, 1990, 21–25 August 1988.
GQ88. Louis Claude Guillou and Jean-Jacques Quisquater. A "paradoxical" identity-based signature scheme resulting from zero-knowledge. In Goldwasser [Gol88], pages 216–231.
GT00. Rosario Gennaro and Luca Trevisan. Lower bounds on the efficiency of generic cryptographic constructions. In 41st Annual Symposium on Foundations of Computer Science, Redondo Beach, California, November 2000. IEEE.
IR89. Russell Impagliazzo and Steven Rudich. Limits on the provable consequences of one-way permutations. In ACM [ACM89], pages 44–61.
KR00. Hugo Krawczyk and Tal Rabin. Chameleon signatures. In Network and Distributed System Security Symposium, pages 143–154. The Internet Society, 2000.
KST99. Jeong Han Kim, Daniel R. Simon, and Prasad Tetali. Limits on the efficiency of one-way permutation-based hash functions. In 40th Annual Symposium on Foundations of Computer Science, New York, October 1999. IEEE.
Mic94. Silvio Micali. A secure and efficient digital signature algorithm. Technical Report MIT/LCS/TM-501, Massachusetts Institute of Technology, Cambridge, MA, March 1994.
MR02. Silvio Micali and Leonid Reyzin. Improving the exact security of digital signature schemes. Journal of Cryptology, 15:1–18, 2002.
NY89. Moni Naor and Moti Yung. Universal one-way hash functions and their cryptographic applications. In ACM [ACM89], pages 33–43.
OS90. Heidroon Ong and Claus P. Schnorr. Fast signature generation with a Fiat-Shamir-like scheme. In I. B. Damgård, editor, Advances in Cryptology—EUROCRYPT 90, volume 473 of Lecture Notes in Computer Science, pages 432–440. Springer-Verlag, 1991, 21–24 May 1990.
Pai99. Pascal Paillier. Public-key cryptosystems based on composite degree residuosity classes. In Jacques Stern, editor, Advances in Cryptology—EUROCRYPT '99, volume 1592 of Lecture Notes in Computer Science. Springer-Verlag, 2–6 May 1999.
Rom90. John Rompel. One-way functions are necessary and sufficient for secure signatures. In Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, pages 387–394, Baltimore, Maryland, 14–16 May 1990.
Sim98. Daniel R. Simon. Finding collisions on a one-way street: Can secure hash functions be based on general assumptions? In Kaisa Nyberg, editor, Advances in Cryptology—EUROCRYPT 98, volume 1403 of Lecture Notes in Computer Science. Springer-Verlag, May 31–June 4 1998.

Equivocable and Extractable Commitment Schemes

Giovanni Di Crescenzo

Telcordia Technologies Inc., Morristown, NJ, 07960, USA
[email protected]

Copyright 2002. Telcordia Technologies, Inc. All Rights Reserved.

Abstract. We investigate commitment schemes with special security properties, such as equivocability and extractability, motivated by their applicability to highly secure commitment schemes, such as non-malleable or universally-composable commitment schemes. In the public random string model, we present constructions of noninteractive commitment schemes (namely, both the commitment phase and the decommitment phase consist of a single message sent from committer to receiver) that are both equivocable and extractable. One of our constructions uses necessary and sufficient assumptions (thus improving over previous constructions). We combine these constructions with the non-malleability construction paradigm of [8] and obtain, in the public random string model, a noninteractive commitment scheme that is non-malleable with respect to commitment. The assumptions used for this scheme are more general than those used in previous constructions.

1 Introduction

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 74–87, 2003.
© Springer-Verlag Berlin Heidelberg 2003

A fundamental cryptographic primitive is that of commitment schemes. A commitment scheme involves two probabilistic polynomial-time players: the committer and the receiver. Very informally, it consists of two stages, a commitment stage and a de-commitment stage. In the commitment stage, the committer, given a secret input x, engages in a protocol with the receiver. At the end of this stage, the receiver still does not have any information about the value of x (i.e., x is computationally hidden), and at the same time, the committer can subsequently (i.e., during the de-commitment stage) open only the value x. Several variants of commitment schemes have been considered in the literature, such as trapdoor commitment schemes [4], equivocable commitment schemes [1,8], extractable commitment schemes [6], non-malleable (with respect to commitment) commitment schemes [10], non-malleable (with respect to decommitment) commitment schemes [8], and universally-composable commitment schemes [5]. All these variants have also been crucial for the construction (and the improved efficiency) of a large variety of cryptographic primitives and protocols with strong security properties, such as zero-knowledge proofs, encryption
schemes, digital election schemes, auction protocols, electronic cash systems, as well as more elaborate variants of commitment schemes. Hence, constructions for multiple variants of this protocol are crucial for the implementation of a variety of cryptographic primitives. In this paper we focus on commitment schemes that are simultaneously equivocable and extractable, motivated by their applications to the construction of non-malleable and universally-composable commitment schemes. Informally, a commitment scheme is equivocable if a fake commitment can be simulated in a way that allows it to be opened as any value; a commitment scheme is extractable if a valid commitment implies knowledge of the committed value already at the commitment stage. We present two constructions of non-interactive commitment schemes in the public random string model that are both equivocable and extractable. One of our constructions is based on equivocable commitment schemes and non-interactive zero-knowledge proofs of knowledge for all polynomial-time relations. The other construction is based on the existence of equivocable commitment schemes and of extractable commitment schemes. The latter assumption is clearly necessary and sufficient. This improves over the only previous construction of a commitment scheme that is both equivocable and extractable, given in [5], which was based on the seemingly stronger assumption of the existence of collision-free hash functions and of chosen-ciphertext-secure public-key encryption schemes. We show an important application of our constructions: a non-interactive and non-malleable (with respect to commitment) commitment scheme based on any extractable commitment scheme. This improves over the only previous construction of a commitment scheme with this flavor of non-malleability [9], which was based on the seemingly stronger assumption of the existence of chosen-ciphertext-secure public-key encryption schemes. This result is obtained by combining our constructions with the construction paradigm given in [8]. We note that the scheme in [8], by itself, does not solve this problem, since their scheme is non-malleable with respect to decommitment, a probably weaker notion (although still useful enough for most applications). We believe our results have interesting applications to the construction of universally-composable commitments [5] as well. However, we believe that, before discussing those applications, more effort is necessary in order to present an appropriate definition of universally-composable commitments. All our results are in the public random string model.

Organization of the Paper. In Section 2 we review all cryptographic primitives considered in this paper. In Section 3 we present our constructions of equivocable and extractable commitment schemes. In Section 4 we apply our main construction to design a non-interactive commitment scheme that is non-malleable with respect to commitment.

2 Definitions

We review the definitions of cryptographic primitives of interest in this paper. First we consider non-interactive zero-knowledge (nizk) proofs, then (ordinary)
commitment schemes, and the properties of equivocability, extractability and non-malleability for them. All primitives are considered in the public random string model, which we now review as well.

The Public Random String Model. We will consider a distributed model known as the public-random-string model [3,2], introduced in order to construct non-interactive zero-knowledge proofs (i.e., zero-knowledge proofs which consist of a single message sent from a prover to a verifier). In this model, all parties share a public reference string which is assumed to be uniformly distributed. Furthermore, this model can be anonymous in a strong sense: parties do not need to have any knowledge of other parties' identities, or of the network topology.

Nizk Proofs of Membership. Nizk proofs of membership were first studied in [3,2]. A nizk proof system of membership for a certain language L is a pair of algorithms, a prover and a verifier, the latter running in polynomial time, such that the prover, on input a string x and a public random string, can compute a proof that convinces the verifier that the statement 'x ∈ L' is true, without revealing anything else. Such proof systems satisfy three requirements: completeness, soundness and zero-knowledge. Completeness states that if the input x is in the language L, then, with high probability, the string computed by the prover makes the verifier accept. Soundness states that the probability that any prover, given the reference string, can compute an x not in L and a string that makes the verifier accept is very small. Zero-knowledge is formalized by saying that there exists an efficient algorithm that generates a pair whose distribution is indistinguishable from the reference string and the proof in a real execution of the proof system (see [2] for a formal definition).

Nizk Proofs of Knowledge. Nizk proofs of knowledge were introduced in [7].
A nizk proof system of knowledge for a certain relation R is a pair of algorithms, a prover and a verifier, the latter running in polynomial time, such that the prover, on input a string x and a public random string, can compute a proof that convinces the verifier that 'he knows y such that R(x, y)'. It is defined as a nizk proof system of membership for the language dom R that satisfies the following additional requirement, called validity: there exists an efficient extractor that is able to prepare a reference string (together with some additional information) and extract a witness for the common input from any accepting proof given by the prover; moreover, this happens with essentially the same probability with which such a prover makes the verifier accept. (See [7] for a formal definition.)

Commitment Schemes. In the rest of the paper we will consider non-interactive commitment schemes in the public random string model. (In these schemes the committer and the receiver share a public random string, and both the commitment phase and the decommitment phase consist of a single message sent from committer to receiver.) A non-interactive bit commitment scheme (A,B) in the public random string model is a two-phase protocol between two probabilistic polynomial-time parties A and B, called the committer and the receiver, respectively, such that the following holds. In the first phase (the commitment phase), given the public random string σ, A commits to a bit b by computing a pair of keys (com, dec) and sending com (the commitment key) to B. In the second phase
(the decommitment phase) A reveals the bit b and the key dec (the decommitment key) to B. Now B checks whether the decommitment key is valid; if not, B outputs a special string ⊥, meaning that he rejects the decommitment from A; otherwise, B can efficiently compute the bit b revealed by A. (A,B) has to satisfy three requirements: correctness, hiding and binding. The correctness requirement states that the probability that algorithm B, on input σ, com, dec, can compute the committed bit b (after the reference string σ has been uniformly chosen, and after algorithm A, on input σ, b, has returned the pair (com, dec)) is at least 1 − ε(n), for some negligible function ε. The hiding requirement states that, given just σ and the commitment key com, no polynomial-time receiver B′ can distinguish with more than negligible probability whether the pair (σ, com) is a commitment to 0 or to 1. The binding requirement states that, after σ has been uniformly chosen, for any algorithm A′ returning three strings com′, dec0, dec1, the probability that dec_b is a valid decommitment of (σ, com′) as bit b for both b = 0 and b = 1 is negligible.

Equivocable Commitment Schemes. Equivocable bit-commitment schemes were first discussed in [1], and first constructed in [8]. Informally speaking, a non-interactive bit-commitment scheme is equivocable if it satisfies the following additional requirement. There exists an efficient simulator which outputs a transcript leading to a faked commitment such that: (a) the commitment can be decommitted both as 0 and as 1 (the equivocability property), and (b) the simulated transcript is indistinguishable from a real execution (the indistinguishability property). We now formally define the equivocability property for bit-commitment schemes in the public random string model.

Definition 1. (Non-interactive equivocable bit commitment) Let (A,B) be a non-interactive bit-commitment scheme in the public random string model.
We say that (A,B) is equivocable if there exists an efficient probabilistic algorithm M which, on input 1^n, outputs a 4-tuple (σ′, com′, dec′_0, dec′_1) satisfying the following:

1. For c = 0, 1, it holds that B(σ′, com′, dec′_c) = c.
2. Let a be the constant defining the length of the common random string; then, for b = 0, 1, the families of random variables

    A0 = { σ ← {0,1}^{n^a}; (com, dec) ← A(σ, b) : (σ, com, dec) },
    A1 = { (σ′, com′, dec′_0, dec′_1) ← M(1^n) : (σ′, com′, dec′_b) }

are computationally indistinguishable.

Extractable Commitment Schemes. Extractable commitment schemes were first introduced and constructed in [6]. Informally speaking, a commitment scheme is extractable if there exists an efficient extractor algorithm that is able to prepare a reference string (together with some additional information) and extract the value of the committed bit from any valid commitment key sent by the committer; here, by a valid commitment key we mean a commitment key for which there exists a decommitment key that would allow the receiver to obtain the committed bit. A formal definition follows.

Definition 2. (Non-interactive extractable string commitment) Let (A,B) be a commitment scheme. We say that (A,B) is extractable if there exists a pair E = (E0, E1) of probabilistic polynomial-time algorithms, called the extractor, such that for all algorithms A′, for all b ∈ {0,1}, for all constants c and all sufficiently large n, it holds that |p0 − p1| ≤ n^{−c}, where p0, p1 are, respectively,

    p0 = Prob[ (σ, aux) ← E0(1^n); (com, dec) ← A′(σ); b′ ← E1(σ, aux, com) : B(σ, com, dec) = b′ ];
    p1 = Prob[ σ ← {0,1}^*; (com, dec) ← A′(σ, b) : B(σ, com, dec) = b ].
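For intuition only (this is not one of the paper's constructions, which work in the public random string model): in idealized settings, extractability typically comes from the extractor's privileged position, e.g. its ability to program the setup or observe queries. A toy random-oracle sketch, where the extractor simply watches the committer's oracle queries (all function names and the commitment format are our own illustrative choices):

```python
import hashlib
import os

QUERY_LOG = []                         # the extractor watches oracle queries

def H(data: bytes) -> bytes:
    """Toy 'random oracle': SHA-256 with a query transcript."""
    QUERY_LOG.append(data)
    return hashlib.sha256(data).digest()

def commit(b: int):
    """Commit to bit b with fresh randomness r; com = H(b || r)."""
    r = os.urandom(16)
    return H(bytes([b]) + r), (b, r)    # (commitment key, decommitment key)

def verify(com, dec):
    """Receiver's check: recompute the hash and recover the bit."""
    b, r = dec
    return b if hashlib.sha256(bytes([b]) + r).digest() == com else None

def extract(com):
    """Extractor E1: any valid commitment must have been queried to H,
    so the committed bit can be read straight out of the query log."""
    for data in QUERY_LOG:
        if hashlib.sha256(data).digest() == com:
            return data[0]
    return None

com, dec = commit(1)
assert verify(com, dec) == 1 and extract(com) == 1
```

The sketch captures only the informal idea above (a valid commitment key already determines the committed bit for the extractor), not the formal Definition 2.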

Non-malleable Commitment Schemes.Non-malleable commitment schemes have been first studied by [10] in the interactive model and by [8] in the noninteractive model. Informally speaking, a non-malleable bit-commitment scheme is a bit-commitment scheme which is secure against the following type of adversary. Given a commitment com1 to a bit b, the adversary tries to compute a commitment com2 which, later, given the decommitment of com1 , can be decommitted as a bit d. The adversary succeeds if the distribution of bit d is significantly different according to the value of bit b. The commitment scheme is non-malleable if such adversary succeeds only with negligible probability. This flavor of non-malleability is also called non-malleability with respect to decommitment. A stronger type of non-malleability requires that the adversary succeeds only with negligible probability even in creating a commitment key to a related value, without necessarily caring whether the adversary is able to reveal such value or not. Such flavor of non-malleability is also called non-malleability with respect to commitment. We define (and we study in this paper) the latter flavor of non-malleable commitment schemes in the public random string model. By D we denote an efficiently sampleable distribution over {0, 1}∗ . By R we denote a relation approximator, that is, an efficient probabilistic algorithm that, given two strings, returns a binary output (intuitively, algorithm R is supposed to measure the correlation between the two input strings). By Adv we denote a non-uniform probabilistic polynomial time algorithm, called the adversary, and by Sim we denote a probabilistic polynomial time algorithm, called the adversary simulator. Now, consider two experiments: an a-posteriori experiment, and an a-priori one. 
In the a-posteriori experiment, given a commitment com1 to a string s, the adversary Adv tries to compute a commitment com2 ≠ com1 which can be opened only as a string t having some correlation with the string s. While doing this, Adv is allowed to use a history function h and additional history information h(s) on the string s. In the a-priori experiment, the adversary simulator Sim tries to compute a string t having some correlation with an unknown string s, given only some history about s and the distribution from which it is drawn. As with Adv, Sim is allowed to use the history function h and the additional history information h(s) on the string s. We consider a commitment scheme to be non-malleable if, for any distribution D, any relation approximator R, and any efficient adversary Adv, there exists an adversary simulator Sim which succeeds "almost as well" as Adv in returning strings which make R evaluate to 1.
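To make the gap between the two experiments concrete, here is a toy illustration (not from the paper, and with a deliberately broken "commitment"): a one-time-pad-style commitment is hiding before decommitment, yet an a-posteriori adversary can maul a commitment to s into a commitment to a related string t without ever seeing s, while an a-priori simulator that never sees com1 can only guess. All names below are ours.

```python
import os
import secrets

# Toy, trivially malleable "commitment": com = s XOR k, with the key k
# revealed at decommitment time. This is NOT a real commitment scheme; it
# only illustrates why the a-posteriori adversary can be powerful.

def commit(s):
    k = secrets.token_bytes(len(s))
    com = bytes(a ^ b for a, b in zip(s, k))
    return com, k                           # (commitment, decommitment key)

def open_commitment(com, k):
    return bytes(a ^ b for a, b in zip(com, k))

# Relation approximator R: R(s, t) = 1 iff t equals s with its first bit flipped.
def R(s, t):
    return int(t == bytes([s[0] ^ 0x80]) + s[1:])

# A-posteriori adversary: flip one bit of com1 without ever learning s,
# then reuse the honest decommitment key once it is published.
s = os.urandom(16)
com1, dec1 = commit(s)
com2 = bytes([com1[0] ^ 0x80]) + com1[1:]
t = open_commitment(com2, dec1)
assert com2 != com1 and R(s, t) == 1        # Adv succeeds with probability 1
```

An a-priori simulator, given only the (uniform) distribution of s, outputs a string making R evaluate to 1 with probability 2^{-128}; a non-malleable scheme must rule out exactly this kind of gap.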

Equivocable and Extractable Commitment Schemes

79

Definition 3. (Non-interactive non-malleable (with respect to commitment) string commitment) Let (A,B) be a non-interactive string commitment scheme in the public random string model. We say that (A,B) is a non-interactive non-malleable (with respect to commitment) string commitment scheme in the public random string model if there exists a constant a such that the following holds. For every efficient (non-uniform) algorithm Adv, there exists an efficient (non-uniform) adversary simulator Sim such that, for every efficiently samplable distribution D, every probabilistic polynomial-time predicate R and all positive integers k, it holds that if p(Adv, D, R) is noticeable then the quantity p(Adv, D, R) − p′(Sim, D, R) is negligible, where the probabilities p(Adv, D, R) and p′(Sim, D, R) are defined as

p(Adv, D, R) = Prob[ σ ← {0, 1}^{n^a}; s ← D_k; (com1, dec1) ← A(σ, s); (com2, aux) ← Adv(D, R, k, h, h(s), σ, com1) : ∃ t, dec2 s.t. B(σ, com2, dec2) = t ∧ com2 ≠ com1 ∧ R(s, t) = 1 ];

p′(Sim, D, R) = Prob[ σ ← {0, 1}^{n^a}; s ← D_k; t ← Sim(D, R, k, h, h(s), σ) : R(s, t) = 1 ].

3 Our Equivocable and Extractable Commitment Schemes

In this section we present our main constructions for non-interactive commitment schemes that are both equivocable and extractable. Formally, we obtain the following theorem.

Theorem 1. In the public random string model, if there exist a non-interactive equivocable commitment scheme and a non-interactive extractable commitment scheme, then there exists (constructively) a non-interactive scheme that is both equivocable and extractable.

We note that our assumption for the construction of a non-interactive commitment scheme that is both equivocable and extractable is sufficient (as we will show later) and clearly necessary. We also note that most likely this assumption cannot be simplified to, for example, assuming only the existence of an equivocable commitment scheme: in [8] the existence of an equivocable commitment scheme is shown to be equivalent to the existence of any one-way function, and evidence is given in [6] against the possibility of constructing extractable commitments from the existence of one-way functions only. In the rest of this section we prove Theorem 1. We present two different constructions of equivocable and extractable commitments; the first does not exactly prove the above theorem, but we believe that it might be of independent interest.

Giovanni Di Crescenzo

3.1 A First Construction

The first construction combines two tools: equivocable commitments, and non-interactive zero-knowledge proofs of knowledge for arbitrary polynomial-time relations. In this construction, the sender first commits to a bit by using the sender algorithm of the equivocable commitment scheme and then proves in non-interactive zero-knowledge that he knows either a decommitment key as 0, or a decommitment key as 1, or both decommitment keys (i.e., both the one as 0 and the one as 1). In order to decommit, the sender only needs to send the decommitment key computed according to the equivocable commitment scheme.

To convince ourselves that the binding property is satisfied, we note that in a real execution of the protocol the sender knows both decommitment keys only with exponentially small probability. To convince ourselves that the equivocability property is satisfied, we note that a simulator, given some private information about the random string, can generate a fake commitment key together with two valid decommitment keys using the simulator for the equivocable commitment scheme, and then compute a non-interactive zero-knowledge proof using the two decommitment keys as a witness. Note that, by the witness-indistinguishability of the non-interactive zero-knowledge proof, the simulated commitment has a distribution independent of the value that may be decommitted later (which may be 0 or 1 because of the properties of the equivocable commitment used). To convince ourselves that the extractability property is satisfied, we note that an extractor can be constructed by using the extractor for the non-interactive zero-knowledge proof of knowledge and then the algorithm of the receiver in the equivocable commitment scheme.

We now proceed with a more formal description of this scheme, which we call (Sen,Rec), and a proof of its properties. By (Alice,Bob) we denote an equivocable commitment scheme and by S its simulator.
Let R be the relation defined as {((σ, com); (dec, dec′)) | (Bob(σ, com, dec) = 0) ∨ (Bob(σ, com, dec) = 1) ∨ ((Bob(σ, com, dec) = 0) ∧ (Bob(σ, com, dec′) = 1))}. By (P,V) we denote a non-interactive zero-knowledge proof of knowledge for relation R and by E its extractor. (We note that it is enough for (P,V) to be witness-indistinguishable, rather than zero-knowledge.) We now formally describe algorithms Sen, Rec, the associated simulator Sim and the associated extractor Ext.

The Algorithm Sen.
Input to Sen: a security parameter 1^n, the public random string σ and bit b.
Instructions for Sen:
1. Write σ as (σ1, σ2);
2. set (com, dec) = Alice(σ1, b) and dec′ = ∅;
3. let π be the proof, computed using σ2 as a reference string and returned by P, of knowledge of (dec, dec′) such that ((σ1, com), (dec, dec′)) ∈ R;
4. return: comkey = (com, π) and deckey = dec.


The Algorithm Rec.
Input to Rec: the public random string σ and pair (comkey, deckey).
Instructions for Rec:
1. Write σ as (σ1, σ2), comkey as (com, π) and deckey as dec;
2. verify that π is accepted by V, using σ2 as a reference string, as a proof of knowledge of a witness that (σ1, com) is in dom R;
3. verify that Bob(σ1, com, dec) = b for some bit b;
4. if all the verifications are satisfied then return: b, else return: ⊥.

The Algorithm Sim.
Input to Sim: a security parameter 1^n.
Instructions for Sim:
1. Set (σ′, com′, dec0, dec1) = S(1^n);
2. set σ1 = σ′ and uniformly choose σ2;
3. compute π′, using algorithm P, σ2 as a reference string, and (dec0, dec1) as a witness, as a proof of knowledge of a witness that (σ1, com′) is in dom R;
4. let σ = (σ1, σ2) and comkey′ = (com′, π′);
5. return: (σ, comkey′, dec0, dec1).

The Algorithm Ext = (Ext0, Ext1).
Instructions for Ext0: On input a security parameter 1^n, let (σ2, aux) = E(1^n), uniformly choose σ1, let σ = (σ1, σ2) and return: (σ, aux).
Instructions for Ext1: On input (σ, aux) and a commitment key comkey, write comkey as (com, π) and σ as (σ1, σ2); let dec = E1(aux, σ2, π) and b = Bob(σ1, com, dec). If b ∈ {0, 1} (i.e., b ≠ ⊥), then return: b.

We now note that (Sen,Rec) satisfies the properties of correctness, hiding, binding, equivocability and extractability.

Correctness, Hiding, Binding and Extractability. These properties are easily verified to hold. In particular, the correctness and binding properties follow from the analogous properties of the equivocable commitment scheme used. For the hiding property, we first recall a result from [11] saying that any zero-knowledge proof is also witness-indistinguishable. Then this property follows from the analogous property of the equivocable commitment scheme used and from the witness-indistinguishability of the proof of knowledge used.
The extractability property directly follows from the analogous property of the proof of knowledge and the correctness property of the commitment scheme used.

Equivocability. In order to prove this property, we need to show two claims: an equivocability claim and an indistinguishability claim. For the equivocability claim, we note that because of the equivocability property of (Alice,Bob), simulator S can open its commitment key com both as 0 and as 1; therefore, simulator Sim can also open its commitment key comkey′ both as 0 and as 1. For the indistinguishability claim, we first consider the differences between a real execution of the scheme (Sen,Rec) and the output of the simulator Sim: (1) the commitment key is the output of algorithm Alice in the former case, and is the output of simulator S in the latter; (2) the non-interactive zero-knowledge proof of knowledge is computed using algorithm P in both cases, using as witness a valid decommitment key as some bit b in the former case, and using as witness two valid decommitment keys for the commitment key returned by S in the latter case. Note that, by a standard hybrid argument, an efficient distinguisher between a real execution of the scheme (Sen,Rec) and the output of the simulator Sim would either notice difference (1) or notice difference (2). Then we observe that an efficient distinguisher for difference (1) violates the indistinguishability property of the equivocable commitment scheme (Alice,Bob), and an efficient distinguisher for difference (2) violates the witness-indistinguishability property of the zero-knowledge proof used.

3.2 A Second Construction

The second construction uses the tools of equivocable commitments and extractable commitments in order to obtain a commitment scheme which is both equivocable and extractable. In this construction, the sender first commits to a bit by using the sender algorithm of the equivocable commitment scheme and then commits, using the extractable commitment scheme, to two strings in randomly permuted order: one being the decommitment key of the first commitment key, and the other a random string of the same length. To convince ourselves that the equivocability property is satisfied, we note that a simulator can run the simulator of the equivocable commitment scheme used and thus compute two valid decommitment keys. Then the simulator commits to these two decommitment keys using the extractable commitment scheme (rather than to one valid decommitment key and one random string). The indistinguishability of the simulated commitment from a real execution of the commitment then follows from the hiding property of the extractable commitment scheme. To convince ourselves that the extractability property is satisfied, note that an extractor can be constructed by using the extractor of the extractable commitment scheme used in order to obtain the two committed strings, and then verifying that exactly one of them is a valid decommitment key for the first commitment.

We now proceed with a more formal description of this scheme, which we call (Sen,Rec), and a proof of its properties. By (Aq,Bq) we denote an equivocable commitment scheme and by S its simulator; by (Ax,Bx) we denote an extractable commitment scheme and by E its extractor. We now formally describe algorithms Sen, Rec, the associated simulator Sim and the associated extractor Ext.

The Algorithm Sen.
Input to Sen: a security parameter 1^n, the public random string σ and bit b.
Instructions for Sen:
1. Write σ as (σ1, σ2, σ3);
2. set (eqcom, eqdec) = Aq(σ1, b);
3. uniformly choose r ∈ {0, 1}^{|eqdec|} and c ∈ {0, 1};
4. set (excom0, exdec0) = Ax(σ2, eqdec) and (excom1, exdec1) = Ax(σ3, r);
5. return: comkey = (eqcom, excom_c, excom_{1−c}) and deckey = (eqdec, exdec0).


The Algorithm Rec.
Input to Rec: the public random string σ and pair (comkey, deckey).
Instructions for Rec:
1. Write σ as (σ1, σ2, σ3), comkey as (eqcom, excom1, excom2) and deckey as (eqdec, exdec);
2. set s1 = Bx(σ2, excom1, exdec) and s2 = Bx(σ3, excom2, exdec);
3. verify that Bq(σ1, eqcom, s1) = b or Bq(σ1, eqcom, s2) = b for some bit b (i.e., b ≠ ⊥);
4. if this verification is satisfied then return: b, else return: ⊥.

The Algorithm Sim.
Input to Sim: a security parameter 1^n.
Instructions for Sim:
1. Set (σ′, eqcom′, eqdec0, eqdec1) = S(1^n);
2. set σ1 = σ′ and uniformly choose σ2, σ3;
3. set (excom0, exdec0) = Ax(σ2, eqdec0) and (excom1, exdec1) = Ax(σ3, eqdec1);
4. let comkey′ = (eqcom′, excom0, excom1);
5. let dec0 = (eqdec0, exdec0);
6. let dec1 = (eqdec1, exdec1);
7. return: (σ, comkey′, dec0, dec1), where σ = (σ1, σ2, σ3).

The Algorithm Ext = (Ext0, Ext1).
Instructions for Ext0: On input a security parameter 1^n, let (σ2, aux2) = E(1^n) and (σ3, aux3) = E(1^n), uniformly choose σ1, let σ = (σ1, σ2, σ3) and aux = (aux2, aux3), and return: (σ, aux).
Instructions for Ext1: On input (σ, aux) and a commitment key comkey, write comkey as (eqcom, excom1, excom2), σ as (σ1, σ2, σ3) and aux as (aux2, aux3); let s2 = E1(aux2, σ2, excom1) and s3 = E1(aux3, σ3, excom2); verify that Bq(σ1, eqcom, s2) = b or Bq(σ1, eqcom, s3) = b for some bit b. If there exists such a bit b (i.e., b ≠ ⊥), then return: b, else return: ⊥.

We now see that (Sen,Rec) satisfies the properties of correctness, hiding, binding, equivocability and extractability.

Correctness, Hiding, Binding and Extractability. These properties are easily verified to hold. In particular, they all follow directly from the analogous properties of the equivocable and extractable commitment schemes used.

Equivocability. In order to prove this property, we need to show two claims: an equivocability claim and an indistinguishability claim. For the equivocability claim, we note that because of the equivocability property of (Aq,Bq), simulator S can open its commitment key eqcom both as 0 and as 1; therefore, simulator Sim can compute both decommitment keys for eqcom, commit to both using the extractable commitment scheme, and open one of them to decommit as 0 and the other to decommit as 1. For the indistinguishability claim, we first consider the differences between a real execution of the scheme (Sen,Rec) and the output of the simulator Sim: (1) the first commitment key is the output of algorithm Aq in the former case, and is the output of simulator S in the latter; (2) the remaining two commitment keys are computed using algorithm Ax in both cases, where in the former case the committed values are a random string and a decommitment key for the first commitment key computed using algorithm Aq, and in the latter case the committed values are both valid decommitment keys for the commitment key returned by S. Note that, by a standard hybrid argument, an efficient distinguisher between a real execution of the scheme (Sen,Rec) and the output of the simulator Sim would either notice difference (1) or notice difference (2). Then we observe that an efficient distinguisher for difference (1) violates the indistinguishability property of the equivocable commitment scheme (Aq,Bq), and an efficient distinguisher for difference (2) violates the hiding property of the extractable commitment scheme used.
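The message flow of this second construction can be sketched as follows. This is a structural sketch only, with names of our choosing: a plain hash commitment stands in for both the equivocable scheme (Aq,Bq) and the extractable scheme (Ax,Bx), which it is not (it has neither trapdoor property), and the random-position bookkeeping is simplified relative to the paper's indexing. The point is how the equivocable decommitment key is hidden inside one of two extractable commitments and recovered at opening time.

```python
import hashlib
import secrets

# Stand-in commitment (assumed, not the paper's schemes): com = H(sigma, msg, r),
# decommitment key dec = msg || r. A real instantiation must replace this with
# an equivocable scheme for eqcom and an extractable scheme for the excoms.

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def commit(sigma, msg):
    r = secrets.token_bytes(32)
    return H(sigma, msg, r), msg + r              # (com, dec)

def open_c(sigma, com, dec):
    msg, r = dec[:-32], dec[-32:]
    return msg if H(sigma, msg, r) == com else None

def Sen(sigma1, sigma2, sigma3, b):
    eqcom, eqdec = commit(sigma1, bytes([b]))     # equivocable commit to bit b
    filler = secrets.token_bytes(len(eqdec))      # random string of |eqdec| bits
    c = secrets.randbelow(2)                      # random position for eqdec
    vals = [eqdec, filler] if c == 0 else [filler, eqdec]
    excom1, exdec1 = commit(sigma2, vals[0])
    excom2, exdec2 = commit(sigma3, vals[1])
    comkey = (eqcom, excom1, excom2)
    deckey = exdec1 if c == 0 else exdec2         # opens the slot holding eqdec
    return comkey, deckey

def Rec(sigma1, sigma2, sigma3, comkey, deckey):
    eqcom, excom1, excom2 = comkey
    for sigma, excom in ((sigma2, excom1), (sigma3, excom2)):
        eqdec = open_c(sigma, excom, deckey)      # try both positions
        if eqdec is not None:
            msg = open_c(sigma1, eqcom, eqdec)
            if msg in (b"\x00", b"\x01"):
                return msg[0]
    return None

sigma1, sigma2, sigma3 = (secrets.token_bytes(32) for _ in range(3))
for b in (0, 1):
    comkey, deckey = Sen(sigma1, sigma2, sigma3, b)
    assert Rec(sigma1, sigma2, sigma3, comkey, deckey) == b
```

With real primitives, the simulator replaces the filler with a second valid decommitment key, and the extractor recovers both committed strings and keeps the one that opens eqcom.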

4 A Non-interactive and Non-malleable Commitment Scheme

In this section we present a non-interactive commitment scheme that is non-malleable with respect to commitment. This scheme is obtained by combining our main construction of a commitment scheme that is both equivocable and extractable with the construction paradigm of [10]. Formally, we obtain the following theorem.

Theorem 2. In the public random string model, if there exists a non-interactive extractable commitment scheme then there exists a non-interactive commitment scheme that is non-malleable with respect to commitment.

In the rest of this section we prove Theorem 2. Our construction follows the basic paradigm from [8] for obtaining a non-interactive commitment scheme that is non-malleable (with respect to decommitment). Specifically, in [8], such a scheme was constructed by carefully combining an arbitrary construction of an equivocable commitment scheme with an unduplicatable authentication technique (previously, a different unduplicatable authentication technique had been used in [10] to obtain a non-malleable commitment in the interactive setting). Our technique consists of carefully combining any construction of a commitment scheme that is both equivocable and extractable with the unduplicatable authentication technique of [8]. The difference between our result and the result in [8] is in the flavor of non-malleability achieved. The scheme in [8] achieves non-malleability "with respect to decommitment" (a definition first used by [8]), while ours achieves non-malleability "with respect to commitment" (the definition used by [10]). This distinction in terminology was first proposed by [12]. We note that while non-malleability with respect to commitment is provably stronger than non-malleability with respect to decommitment, the latter seems enough for all practical applications. We now proceed with a more formal description of this scheme, which we call (Sen,Rec), and a proof of its properties.
By (A,B) we denote an ordinary non-interactive commitment scheme, by G we denote a pseudo-random generator and by MAC a private-key authentication scheme. By (Aqx,Bqx) we denote a non-interactive commitment scheme that is both equivocable and extractable,


by S its simulator and by E = (E0, E1) its extractor. We now formally describe algorithms Sen, Rec and the associated simulator Sim.

The Algorithm Sen.
Input to Sen: a security parameter 1^n, the public random string σ and bit b.
Instructions for Sen:
1. Write σ as (σ1, σ2);
2. uniformly choose seed ∈ {0, 1}^n and set (com, dec) = A(σ1, seed);
3. let m = |com| and write com as com1 ◦ · · · ◦ comm, for comi ∈ {0, 1};
4. write σ2 as ((σ2,1,0, σ2,1,1), . . . , (σ2,m,0, σ2,m,1));
5. for i = 1, . . . , m, set (qxcomi, qxdeci) = Aqx(σ2,i,comi, b);
6. set qxcom = (qxcom1, . . . , qxcomm), k = G(seed) and tag = MAC_k(qxcom);
7. set comkey = (com, qxcom, tag) and deckey = (seed, dec, qxdec1, . . . , qxdecm);
8. return: (comkey, deckey).

The Algorithm Rec.
Input to Rec: the public random string σ and pair (comkey, deckey).
Instructions for Rec:
1. Write σ as (σ1, σ2) and comkey as (com, qxcom, tag);
2. write deckey as (seed, dec, qxdec1, . . . , qxdecm);
3. verify that seed = B(σ1, com, dec);
4. verify that tag = MAC_k(qxcom), for k = G(seed);
5. verify that Bqx(σ2,i,comi, qxcomi, qxdeci) = b for i = 1, . . . , m and some b ∈ {0, 1};
6. if all verifications are satisfied then return: b, else return: ⊥.

The Algorithm Sim.
Input to Sim: a security parameter 1^n.
Instructions for Sim:
1. Uniformly choose σ1 and seed ∈ {0, 1}^n;
2. set (com, dec) = A(σ1, seed);
3. let m = |com| and write com as com1 ◦ · · · ◦ comm, for comi ∈ {0, 1};
4. for i = 1, . . . , m, set (σ′2,i,comi, qxcomi, qxdec_i^0, qxdec_i^1) = S(1^n) and (auxi, σ′2,i,1−comi) = E0(1^n);
5. set σ′2 = ((σ′2,1,0, σ′2,1,1), . . . , (σ′2,m,0, σ′2,m,1));
6. set qxcom = (qxcom1, . . . , qxcomm), k = G(seed) and tag = MAC_k(qxcom);
7. set σ′ = (σ1, σ′2) and comkey = (com, qxcom, tag);
8. send (σ′, comkey) to Adv;
9. get comkey′ from Adv;
10. write comkey′ as (com′, qxcom′, tag′);
11. let i be the first index such that com′i ≠ comi;
12. let d = E1(auxi, σ′2,i,1−comi, qxcom′i);
13. return: d.
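The unduplicatable-authentication flow of Sen and Rec can be sketched as follows, with stand-ins of our choosing: a plain hash commitment replaces both the ordinary scheme (A,B) and the equivocable-and-extractable scheme (Aqx,Bqx), SHA-256 replaces the pseudorandom generator G, and HMAC-SHA256 replaces MAC. A real instantiation needs the trapdoor properties this sketch lacks; what is shown is only the data flow, in which the bits of the commitment to seed select which reference string each bit-commitment to b uses, and the whole vector is authenticated under a key derived from seed.

```python
import hashlib
import hmac
import secrets

def H(*parts):
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def commit(sigma, msg):                            # stand-in commitment
    r = secrets.token_bytes(32)
    return H(sigma, msg, r), msg + r               # (com, dec)

def open_c(sigma, com, dec):
    msg, r = dec[:-32], dec[-32:]
    return msg if H(sigma, msg, r) == com else None

def bits(bs):                                      # bit expansion of a string
    return [(byte >> i) & 1 for byte in bs for i in range(8)]

def Sen(sigma1, sigma2_pairs, b):
    seed = secrets.token_bytes(16)
    com, dec = commit(sigma1, seed)                # commit to the seed
    qxcoms, qxdecs = [], []
    for i, ci in enumerate(bits(com)):             # bit ci picks the reference string
        qc, qd = commit(sigma2_pairs[i][ci], bytes([b]))
        qxcoms.append(qc)
        qxdecs.append(qd)
    k = hashlib.sha256(seed).digest()              # stand-in for k = G(seed)
    tag = hmac.new(k, b"".join(qxcoms), hashlib.sha256).digest()
    return (com, qxcoms, tag), (seed, dec, qxdecs)

def Rec(sigma1, sigma2_pairs, comkey, deckey):
    com, qxcoms, tag = comkey
    seed, dec, qxdecs = deckey
    if open_c(sigma1, com, dec) != seed:
        return None
    k = hashlib.sha256(seed).digest()
    good = hmac.new(k, b"".join(qxcoms), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        return None
    vals = set()
    for i, ci in enumerate(bits(com)):
        msg = open_c(sigma2_pairs[i][ci], qxcoms[i], qxdecs[i])
        if msg is None:
            return None
        vals.add(msg)
    return vals.pop()[0] if len(vals) == 1 else None

sigma1 = secrets.token_bytes(32)
sigma2_pairs = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
for b in (0, 1):
    comkey, deckey = Sen(sigma1, sigma2_pairs, b)
    assert Rec(sigma1, sigma2_pairs, comkey, deckey) == b
```

An adversary that does not copy com must differ from it in some bit i, and its i-th bit-commitment then falls under a reference string prepared by the extractor E0, which is what lets Sim extract the committed value in step 12.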


It is not hard to see that (Sen,Rec) satisfies the properties of correctness, hiding, binding, equivocability and extractability. We concentrate only on showing that (Sen,Rec) is also non-malleable with respect to commitment.

Non-malleability with Respect to Commitment. The proof of this property is obtained as a significantly simplified version of the proof that the commitment scheme in [8] is non-malleable with respect to decommitment. We need to show that (Sen,Rec) is non-malleable with respect to commitment. Formally, this means that for all constants c, all sufficiently large n, any efficiently samplable distribution D, any probabilistic polynomial-time predicate R and any efficient adversary Adv, if p(Adv, D, R) is noticeable then the proposed simulator Sim satisfies |p(Adv, D, R) − p′(Sim, D, R)| < n^{-c}. For the rest of the proof, we assume that the quantity p(Adv, D, R) is noticeable.

We start by considering the event in which the adversary Adv is successful in making R evaluate to 1 while creating a commitment com′ to its seed that has the same binary expansion as the commitment com by Sen to its seed. Exactly as in [8], we can show that the probability of this event is negligible, assuming that the atomic commitment scheme used satisfies the hiding property (informally speaking, this follows from the fact that if Adv copies the commitment key to the seed, then it is able to compute a valid authentication tag only with exponentially small probability). Therefore, from now on we can assume that com′ ≠ com. Under this assumption, we would like to show that the constructed adversary simulator Sim well approximates the probability of Adv making predicate R evaluate to 1. More formally, we define q(Adv, Sim, D, R) as the probability that adversary Adv returns a commitment to a value d such that R(b, d) = 1, while interacting with algorithm Sim rather than with algorithm Sen, and where b is chosen according to distribution D.
First, we observe that Adv is not able to efficiently distinguish the case in which it is interacting with a real committer Sen from the case in which it is interacting with the output of algorithm Sim (otherwise, it could be used to contradict either the indistinguishability of the equivocable commitment scheme used or the extractability of the extractable commitment scheme used). Formally, this implies that |p(Adv, D, R) − q(Adv, Sim, D, R)| < n^{-c}. Then we note that the probability that Adv makes the equality R(b, d) = 1 hold when interacting with Sen is the same (up to negligible additive factors) as the probability that Adv makes the equality R(b, d) = 1 hold when interacting with Sim, where b is independently chosen according to distribution D. Now, since (Sen,Rec) is extractable (here we use the fact that the atomic scheme used is not only equivocable, as in [8], but also extractable), Sim can extract some bit d from the commitment key returned by Adv and return it. Note that, assuming com′ ≠ com, by construction of Sim the bit d returned by algorithm Sim is exactly the bit committed to by Adv. Formally, this implies that |p′(Sim, D, R) − q(Adv, Sim, D, R)| < n^{-c}, from which the proof follows.


References

1. D. Beaver, Adaptive Zero-Knowledge and Computational Equivocation, in Proc. of FOCS 96.
2. M. Blum, A. De Santis, S. Micali, and G. Persiano, Non-Interactive Zero-Knowledge, SIAM Journal on Computing, vol. 20, no. 6, Dec. 1991, pp. 1084–1118.
3. M. Blum, P. Feldman, and S. Micali, Non-Interactive Zero-Knowledge and Applications, in Proc. of STOC 88.
4. G. Brassard, C. Crépeau, and D. Chaum, Minimum Disclosure Proofs of Knowledge, Journal of Computer and System Sciences, vol. 37, no. 2, pp. 156–189.
5. R. Canetti and R. Fischlin, Universally Composable Commitments, in Proc. of CRYPTO 2001.
6. A. De Santis, G. Di Crescenzo, and G. Persiano, Necessary and Sufficient Assumptions for Non-Interactive Zero-Knowledge Proofs of Knowledge for All NP Relations, in Proc. of ICALP 2000.
7. A. De Santis and G. Persiano, Zero-Knowledge Proofs of Knowledge without Interaction, in Proc. of FOCS 92.
8. G. Di Crescenzo, Y. Ishai, and R. Ostrovsky, Non-Interactive and Non-Malleable Commitment, in Proc. of STOC 98.
9. G. Di Crescenzo, J. Katz, R. Ostrovsky, and A. Smith, Efficient and Non-Interactive Non-Malleable Commitment, in Proc. of EUROCRYPT 2001.
10. D. Dolev, C. Dwork, and M. Naor, Non-Malleable Cryptography, SIAM Journal on Computing, 2000.
11. U. Feige and A. Shamir, Witness-Indistinguishable and Witness-Hiding Protocols, in Proc. of STOC 90.
12. M. Fischlin and R. Fischlin, Efficient Non-Malleable Commitment Schemes, in Proc. of CRYPTO 2000.
13. S. Goldwasser, S. Micali, and C. Rackoff, The Knowledge Complexity of Interactive Proof Systems, SIAM Journal on Computing, vol. 18, no. 1, 1989.
14. M. Naor, Bit Commitment Using Pseudorandomness, in Proc. of CRYPTO 91.

An Improved Pseudorandom Generator Based on Hardness of Factoring

Nenad Dedić¹, Leonid Reyzin¹, and Salil Vadhan²

¹ Boston University Computer Science, 111 Cummington St., Boston, MA 02215, USA
{nenad,reyzin}@cs.bu.edu
² Harvard University DEAS, 33 Oxford St., Cambridge, MA 02138, USA
[email protected]

Abstract. We present a simple-to-implement and efficient pseudorandom generator based on the factoring assumption. It outputs more than p·n/2 pseudorandom bits per p exponentiations, each with the same base and an exponent shorter than n/2 bits. Our generator is based on results by Håstad, Schrift and Shamir [HSS93], but unlike their generator and its improvement by Goldreich and Rosen [GR00], it does not use hashing or extractors, and is thus simpler and somewhat more efficient. In addition, we present a general technique that can be used to speed up pseudorandom generators based on iterating one-way permutations. We construct our generator by applying this technique to the results of [HSS93]. We also show how the generator given by Gennaro [Gen00] can be simply derived from the results of Patel and Sundaram [PS98] using our technique.

1 Introduction

1.1 Background

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 88–101, 2003.
© Springer-Verlag Berlin Heidelberg 2003

Blum and Micali [BM84] and Yao [Yao82] introduced the notion of a pseudorandom generator secure against all polynomial-time adversaries. Since then, multiple constructions have been proposed. We do not survey them all here; rather, we focus on the ones that are of particular relevance to our work. Almost all of the proposed constructions (starting with the discrete-logarithm-based one of [BM84]) consisted of repeatedly applying some one-way function and outputting its hardcore bits. The first generator based on the factoring assumption was proposed by Blum, Blum and Shub [BBS86]. It iterated modular squaring with an n-bit modulus, extracted one bit of output per iteration, and was originally proven secure based on the quadratic residuosity assumption. This was later improved by [ACGS88], who showed that only the factoring assumption is needed and that O(log n) bits can be extracted per iteration. Håstad, Schrift and Shamir [HSS93] demonstrated that the discrete logarithm modulo a product of two primes hides n/2 + O(log n) bits, based only on the factoring assumption. They suggested how to construct a pseudorandom generator based on this result by repeatedly exponentiating a fixed base g to an


n-bit exponent; however, their construction required the use of hash function families to obtain uniformly pseudorandom bits (because the result of a modular exponentiation could not be used in the next iteration of the generator, as it was not indistinguishable from a uniformly distributed n-bit string, nor from a uniformly distributed value in Z_N). This resulted in a loss of Θ(log² n) bits per iteration, thus giving a generator that outputs n/2 − Θ(log² n) pseudorandom bits per fixed-base modular exponentiation and one hashing. Note that having the same base for each modular exponentiation is important, because it allows for precomputation (as further described in Section 4). Goldreich and Rosen [GR00] further improved this generator by demonstrating, in particular, that one can use exponents of length n/2 instead of full-length exponents. However, families of hash functions (or, rather, extractors) were still necessary, resulting in a loss both in the efficiency of each iteration and in the number of bits obtained. Namely, their generator obtains n/2 − O(log² n) pseudorandom bits per fixed-base modular exponentiation (with a half-length exponent) and one application of an extractor.

1.2 Our Contributions

We improve the pseudorandom generators of [HSS93] and [GR00] by removing the need for hashing or extractors. Thus, our generator obtains n/2 + O(log n) bits of randomness per half of a fixed-base modular exponentiation (to be precise, our exponent is n/2 − O(log n) bits long). The resulting construction is thus simpler and faster than the ones of [HSS93] and [GR00]. Our generator is quite similar to the one of Gennaro [Gen00]: it also essentially repeatedly raises a fixed base to a short exponent, outputs some bits of the result, and uses the rest as the exponent for the next iteration. The main difference is that Gennaro's generator, while more efficient than ours, works modulo a prime and requires the nonstandard assumption that discrete logarithms with short exponents are hard. Our generator, on the other hand, requires only the assumption that factoring products of safe primes is hard. As explained later in the text, this is a trivial consequence of the standard factoring assumption if safe primes are frequent. (Gennaro's generator also requires the assumption that safe primes are frequent.) We present a more general technique, which abstracts and clarifies the analysis of our generator and those of [Gen00,GR00]. The technique is a modification of the Blum–Micali paradigm [BM84] for constructing pseudorandom generators by iterating a one-way permutation and outputting hardcore bits. We observe that, whenever the one-way permutation's hardcore bits are a substring of its input¹ (as is the case for most natural one-way permutations), then the pseudorandom generator construction can be modified so that a substring of the input to

¹ In fact, it is not required that the hardcore bits be precisely a substring; as long as the input can be fully decomposed into two independent parts, one of which is the hardcore bits, our approach can be applied. For example, if the hardcore bits are a projection of the input onto a subspace of Z_2^n, and the "non-hardcore" bits are a projection onto the orthogonal subspace, then we are fine.


the one-way permutation remains fixed in all iterations. This can potentially improve efficiency through precomputation. This general technique already yields most of the efficiency gains in our generator and those of [Gen00,GR00], and also applies to other pseudorandom generators (such as [BBS86]). In particular, it explains how Gennaro’s generator is obtained from Patel’s and Sundaram’s [PS98], providing a simpler proof than the one in [Gen00].

2 An Efficient Pseudorandom Generator

Let:
– |x| denote the length of a string x, and x_i denote the i-th bit of x;
– P, Q, of equal length, be safe primes (P = 2P′ + 1, Q = 2Q′ + 1, with P′ and Q′ prime)²;
– N = PQ, and g be a random element of the group QR_N of quadratic residues mod N; ⌊log₂ N⌋ = n;
– s, s′ ∉ QR_N be two quadratic non-residues, such that J_N(s) = −1 and J_N(s′) = 1;
– m = m(n) = ⌊n/2⌋ + O(log n); c = c(n) = n − m;
– p(·) be a polynomial.

Construction 1

INPUT y ∈ Z_N
FOR i = 0 TO p(n)
    OUTPUT y_m . . . y_1
    LET y ← g^{y_n...y_{m+3}} · s^{y_{m+2}} · s′^{y_{m+1}} mod N

On an input y ∈ ZN , this generator outputs m pseudorandom bits per one modular exponentiation with a c-bit exponent. Construction 1 can be optimized, because it uses the same fixed base g in every iteration. Thus, we can precompute some powers of g to speed up exponentiation. Contrary to popular belief, the “naive” precomputation strategy c−2 is not very good (it would require roughly of computing g, g 2 , g 4 , g 8 , . . . , g 2 (c − 2)/2 multiplications per iteration). It is better to use the technique of Lim and Lee [LL94]. As a simple example, consider precomputing just three values: (c−2)/2 , and their product. Then one can perform the square-and-multiply g, g 2 algorithm “in parallel,” looking simultaneously at two positions of the bit string that represents the exponent: i and i + (c − 2)/2. This would result in an algorithm that takes (c − 2)/2 squarings and fewer than (c − 2)/2 multiplications. a 2a 3a c−2−a Generalizing this idea, we precompute the values g, g 2 , g 2 , g 2 , . . . , g 2 for some a (e.g., a = (c − 2)/2 or a = (c − 2)/4), as well as the products of all 2

² We can relax the requirements on P and Q and ask only that (P − 1)/2 and (Q − 1)/2 be relatively prime; we would then have to appropriately modify our security assumption and to pick g specifically to be a generator of QR_N, because a random g may not be a generator.

An Improved Pseudorandom Generator Based on Hardness of Factoring


subsets of these values. Then we can perform the square-and-multiply algorithm “in parallel” by looking at (c − 2)/a positions in the exponent at once, thus obtaining an algorithm that takes (c − 2)/a squarings and multiplications. In particular, this algorithm would outperform the naive precomputation strategy when a = (c − 2)/4, while precomputing just 15 values instead of c − 2. In fact, if one has space to precompute c − 2 values, then this algorithm will require just (c − 2)/log(c − 2) multiplications and squarings. Finally, it should be noted that this technique can also be used to speed up the pseudorandom generators given in [GR00], [HSS93] and [Gen00].

A further small optimization is possible if we choose P to be 3 modulo 8 and Q to be 7 modulo 8 (N is then called a Williams integer). In that case we can fix s = −1 and s′ = 2, because −1, 2 ∉ QR_N, J_N(−1) = 1 and J_N(2) = −1. The last line of the generator then becomes:

LET y ← g^{y_n…y_{m+3}} · (−1)^{y_{m+2}} · 2^{y_{m+1}} mod N.

In Section 4, we prove that Construction 1 is indeed a pseudorandom generator, using the techniques from Section 3, under the following assumption, which states that products of safe primes are hard to factor:

Assumption 1. Let N_n denote the set of all n-bit products of equally sized safe primes: N_n = {PQ : |PQ| = n, |P| = |Q|, P and Q are safe primes}. Let A be a probabilistic polynomial-time algorithm. There is no constant c > 0 such that, for all sufficiently large n,

Pr[A(P · Q) = P] > 1/n^c,

where N = P · Q is chosen uniformly from N_n.

To clarify the relation between the previous assumption and the standard factoring assumption, consider:

Assumption 2. For sufficiently large k, a polynomial-in-k fraction of the integers between 2^k and 2^{k+1} are safe primes.

Clearly, Assumption 1 follows from the standard factoring assumption together with Assumption 2. However, in the rest of the paper, we will use Assumption 1 specifically, because it is possible that it holds independently of Assumption 2.
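Returning to the fixed-base precomputation described earlier in this section, the simple two-window case of the Lim–Lee technique can be sketched as follows (`fixed_base_pow` is a hypothetical name; the table holds the three precomputed values g, g^{2^a}, and their product):

```python
# Sketch of fixed-base exponentiation with one extra precomputed power:
# scan positions i and i + a of the exponent simultaneously, a = c // 2.
# Assumes c is even, so both halves of the exponent have exactly a bits.
def fixed_base_pow(g, e, mod, c):
    a = c // 2                            # split the c-bit exponent in half
    g_hi = pow(g, 1 << a, mod)            # precomputed: g^(2^a)
    table = {0: 1, 1: g % mod, 2: g_hi, 3: g % mod * g_hi % mod}
    lo, hi = e & ((1 << a) - 1), e >> a   # e = hi * 2^a + lo
    acc = 1
    for i in reversed(range(a)):
        acc = acc * acc % mod                           # one squaring
        idx = (((hi >> i) & 1) << 1) | ((lo >> i) & 1)  # two exponent bits
        acc = acc * table[idx] % mod                    # at most one multiply
    return acc
```

For a random c-bit exponent this costs c/2 squarings and fewer than c/2 multiplications, matching the count given in the text.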

3  General Construction and Proof Technique

In this section, we describe a general technique that we use to prove the pseudorandomness of the generator given in Construction 1. This general technique can also be used to speed up other pseudorandom generators, because it allows one


Nenad Dedi´c, Leonid Reyzin, and Salil Vadhan

to keep some bits of the seed the same in every iteration, thus permitting precomputation. In particular, we demonstrate that it is possible to keep the bits at the hardcore positions intact throughout the execution of the generator. This technique can be used to prove the pseudorandomness of Gennaro’s generator from [Gen00] (see Appendix A).

3.1  Background

We recall without proof two well-known facts (see, e.g., [Gol01]). The first says that a one-way permutation plus a hardcore function gives one a PRG with fixed expansion. The second says that from a PRG with fixed expansion one can obtain a PRG with arbitrary (polynomial) expansion by iterating: dropping some bits of the output and using the remaining ones as a seed for the next iteration. Let f : {0,1}^n → {0,1}^n be a one-way permutation and H : {0,1}^n → {0,1}^m be hardcore for f. Let ◦ denote concatenation of strings.

Lemma 1. G₁ : {0,1}^n → {0,1}^{n+m}, defined by G₁(x) = f(x) ◦ H(x), is a pseudorandom generator with an n-bit seed x.

Lemma 2. Let G₁ : {0,1}^n → {0,1}^{n+m} (for m polynomial in n). Let G : {0,1}^n → {0,1}^{m·p(n)}, where p is a polynomial, be the following algorithm with an n-bit input x₀:

FOR i = 0 TO p(n)
    LET x_{i+1} ◦ h_i ← G₁(x_i), where |x_{i+1}| = n and |h_i| = m
    OUTPUT h_i

If G₁ is a pseudorandom generator, then G is a pseudorandom generator.

Traditionally (e.g., [BM84,Yao82]), Lemmata 1 and 2 are combined in the following well-known construction.

Construction 2

FOR i = 0 TO p(n)
    LET x_{i+1} ← f(x_i)
    OUTPUT H(x_i)

Visually, G(x₀):

x₀ —f→ x₁ —f→ x₂ —f→ ···
output:  H(x₀)  H(x₁)  ···

The following summarizes this section:

Lemma 3. Let f : {0,1}^n → {0,1}^n be a one-way permutation and the function H : {0,1}^n → {0,1}^m be hardcore for f. Let p(·) be a polynomial. Then Construction 2 is a pseudorandom generator.
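Construction 2 is just the classic iterate-and-output loop; a generic sketch follows. The stand-in choices for f and H are toy placeholders (not one-way, not hardcore) — in a real instantiation, any one-way permutation and hardcore function would be plugged in:

```python
# Generic sketch of Construction 2: iterate a permutation f, outputting
# the hardcore bits H(x_i) of each intermediate seed.
def construction2(f, H, x0, rounds):
    x, out = x0, []
    for _ in range(rounds):
        out.append(H(x))   # release the hardcore bits of the current seed
        x = f(x)           # advance the seed
    return out

# Toy stand-ins, just to exercise the loop:
f = lambda x: (5 * x + 1) % 16    # an affine permutation of {0, ..., 15}
H = lambda x: x & 1               # pretend the low bit is hardcore
bits = construction2(f, H, x0=7, rounds=8)
```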


3.2  New General Constructions

There is no reason that the above two lemmata have to be combined specifically as in Construction 2. Below we propose an alternative way to combine them. Let:

– [·, ·] denote some bijective mapping, efficiently computable in both directions (direct and inverse), between pairs of strings h, r from {0,1}^m × {0,1}^{n−m} and strings x from {0,1}^n: [h, r] ↔ x (such a mapping can be as simple as concatenation, but there is no reason to restrict ourselves to it);
– U_S denote the uniform distribution on a set S;
– D₁ ∼ D₂ denote that D₁ is computationally indistinguishable from D₂;
– D₁ ≈ D₂ denote that D₁ is statistically close to D₂.

Suppose the n-bit value x represents the pair of values [h, r], where |h| = m and |r| = n − m. Suppose the hardcore function simply selects the first element of this pair: H(x) = h. Then f can be viewed as:

h_i r_i —f→ h_{i+1} r_{i+1}

and G₁ provided by Lemma 1 as G₁(x_i) = G₁([h_i, r_i]) = x_{i+1} ◦ h_i = [h_{i+1}, r_{i+1}] ◦ h_i (i.e., we simply split the (n + m)-bit output of G₁ into two strings, x_{i+1} and h_i, and further split x_{i+1} into two strings h_{i+1} and r_{i+1}):

h_i r_i —G₁→ h_{i+1} r_{i+1} h_i

Consider now a function G₁′: G₁′(x_i) = [h_i, r_{i+1}] ◦ h_{i+1}:

h_i r_i —G₁′→ h_i r_{i+1} h_{i+1}

Clearly, since G₁ is a PRG, so is G₁′ (because if the output of G₁′ could be distinguished from uniform, then so could that of G₁, by applying the appropriate bijections and swapping the strings h_i and h_{i+1}). Applying Lemma 2 to G₁′, we get the following PRG G′ with an n-bit seed x₀ = [h₀, r₀]:

FOR i = 0 TO p(n)
    LET [h_{i+1}, r_{i+1}] ← f([h₀, r_i])
    OUTPUT h_{i+1}

Construction 3. Visually, G′(x₀):

[h₀, r₀] —G₁′→ [h₀, r₁] —G₁′→ [h₀, r₂] —G₁′→ ···
output:        h₁            h₂            ···

The potential benefit of this construction is that part of the input to f (namely, h₀) remains the same in every iteration. For many f (such as the one in the next section) this results in increased efficiency, because some precomputation is possible.


Suppose further (in addition to everything assumed in Construction 3) that there exists an efficient algorithm R that, given f([h, r]) and h, computes f([0^m, r]) (for any h, r), and such that R(U_n × U_m) = U_n. Let G₁″(r) = f([0^m, r]). Then G₁″ is a PRG (because otherwise G₁([h, r]) = f([h, r]) ◦ h could be distinguished from random by computing R(G₁([h, r])) = G₁″(r)). Represent the n-bit output of G₁″(r_i) as [h_{i+1}, r_{i+1}]. Then we can apply Lemma 2 to G₁″ to get the following PRG G″ with an (n − m)-bit seed r₀:

FOR i = 0 TO p(n)
    LET [h_{i+1}, r_{i+1}] ← f([0^m, r_i])
    OUTPUT h_{i+1}

Construction 4. Visually, G″(r₀):

r₀ —G₁″→ r₁ —G₁″→ r₂ —G₁″→ ···
output:   h₁       h₂       ···

This construction has the further benefit that the fixed part of the input to f is always 0^m (rather than a random value h₀). Thus, even more precomputation may be possible (as shown in the next section). Moreover, the seed for this construction is only n − m, rather than n, bits long. The summary of this section is given in the following:

Lemma 4. Let f : {0,1}^n → {0,1}^n be a one-way permutation and the function H : {0,1}^n → {0,1}^m be hardcore for f. Let p(·) be a polynomial. Let [·, ·] denote some bijective mapping, efficiently computable in both directions (direct and inverse), between pairs of strings h, r from {0,1}^m × {0,1}^{n−m} and strings x from {0,1}^n, [h, r] ↔ x, such that H([h, r]) = h. Then Construction 3 is a pseudorandom generator.

Lemma 5. Let (in addition to the conditions stated in Lemma 4) there exist an efficient algorithm R that, given f([h, r]) and h, computes f([0^m, r]) for any h, r. Let R(U_n × U_m) = U_n. Then Construction 4 is a pseudorandom generator.
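In code, Construction 4 differs from Construction 2 only in what is carried between iterations: the hardcore slot of the input is pinned to 0^m, so only the (n − m)-bit part r_i survives from round to round. A generic sketch, with a toy pairing [h, r] given by concatenation (all names and the stand-in permutation f are illustrative, not one-way):

```python
# Generic sketch of Construction 4 on n = M + C bit strings.
M, C = 4, 4                               # toy sizes: m hardcore bits, c = n - m

def pair(h, r):                           # toy pairing [h, r]: concatenation
    return (r << M) | h

def unpair(x):                            # inverse mapping: x -> (h, r)
    return x & ((1 << M) - 1), x >> M

def f(x):                                 # toy stand-in (NOT one-way) permutation
    return (77 * x + 13) % (1 << (M + C))

def construction4(r0, rounds):
    r, out = r0, []
    for _ in range(rounds):
        h_next, r = unpair(f(pair(0, r)))   # input is always [0^m, r_i]
        out.append(h_next)                  # output the fresh hardcore part
    return out

outputs = construction4(r0=9, rounds=6)
```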

3.3  Generalizing to Other Distributions

It is not necessary to restrict Lemmata 1 and 2 only to binary strings. Indeed, we can slightly generalize the notion of a pseudorandom generator to allow domains and ranges other than just {0,1}^k.

Definition 1. Let D and R be two efficiently samplable sets of binary strings. A deterministic polynomial-time algorithm A is called a (D → R)-pseudorandom generator if A(U_D) is indistinguishable from U_R.

Given this definition, we can reformulate Lemmata 1 and 2 as follows. Let D, R ⊂ {0,1}^n be efficiently samplable and let f : D → R be a bijection. Let H : D → {0,1}^m be hardcore for f.

Theorem 1. G₁(x) = f(x) ◦ H(x) is a (D → R × {0,1}^m)-pseudorandom generator.


Suppose further that f is a permutation of D.

Theorem 2. If G₁ is a (D → D × {0,1}^m)-pseudorandom generator, then a (D → {0,1}^{m·p(n)})-pseudorandom generator can be obtained by iterating G₁ as in Lemma 2 (but with x_i ∈ D).

Construction 3 can also be formulated in terms of more general sets. Namely, suppose that the uniform distribution on D can be decomposed into a direct product of two independent parts: uniform on the hardcore bits, and the appropriate distribution on the rest. Then we can swap the hardcore bits as in Construction 3, because both h_i and h_{i+1} are uniform and independent of r_{i+1} anyway. Of course, we do not need true uniformity and independence, but only enough to fool a polynomial-time adversary. More precisely, let D ⊂ {0,1}^n be an efficiently samplable domain; let G₁ be a (D → D × {0,1}^m)-pseudorandom generator that maps [h, r] ∈ D to [h′, r′] ◦ h. Let V be the projection of D onto its non-hardcore bits: for r ∈ {0,1}^{n−m}, Pr[V = r] = Pr_{x∈D}[(∃h) x = [h, r]].

Lemma 6. If the uniform distribution U_D is indistinguishable from [U_{{0,1}^m}, V], then G₁ is a ([U_{{0,1}^m}, V] → [U_{{0,1}^m}, V] × {0,1}^m)-pseudorandom generator, and can be iterated as in Construction 3 to obtain a (D → {0,1}^{m·p(n)})-pseudorandom generator.

Lemmata 4 and 5 (and consequently Constructions 3 and 4) can be reformulated for more general sets D, R, just like Construction 3.

4  Proof of Correctness of Our Generator

We use the ideas described in Section 3 to prove that the algorithm given in Construction 1 is a (Z_N → {0,1}^{m·p(n)})-pseudorandom generator, under Assumption 1. To be precise, Section 3 was written in terms of one-way functions, while Construction 1 first of all selects a one-way function from a family by picking N and g. Of course, the generic constructions of Section 3 still apply (with appropriate technical modifications), because Lemmata 1 and 2 still hold (see [Gol01]).

4.1  Establishing a Single Iteration

We start by recalling Theorem 5 of [HSS93]. Let N be a random n-bit Blum integer, and let g be a random quadratic residue in Z*_N.

Theorem 3 ([HSS93]). If Assumption 1 holds, then h : {0, …, ord_N(g) − 1} → {0,1}^m, h(x) = x_m … x_1, is hardcore for f : {0, …, ord_N(g) − 1} → QR_N, where f(x) = g^x mod N.³

³ The usual definitions of one-way function and hardcore function require that their domain be efficiently samplable. However, that is not the case with the domain of f and h, since ord_N(g) = φ(N)/4, which may not be revealed. This will not be a problem, because {0, …, ⌊N/4⌋} is efficiently samplable, and the uniform distribution on that domain is statistically close to the uniform distribution on {0, …, ord_N(g) − 1}.


Using Theorem 1, a direct consequence of the previous theorem is:

Corollary 1. If Assumption 1 holds, then the distributions (g^x mod N, x_m…x_1) and (g^x mod N, r), for uniformly randomly chosen x < ord_N(g) and r ∈ {0,1}^m, are indistinguishable.

Note that this does not automatically give a pseudorandom generator suitable for iteration, because it is not of the form (D → D × {0,1}^m). In [HSS93], universal hash functions are used to achieve this. However, by slightly modifying the function f(x) = g^x, it is possible to obtain a simple pseudorandom generator that permits iteration. The intuitive idea is to try to make f a bijection of the entire Z_N. To do so, we first make sure that g generates the entire QR_N by picking N as a product of two safe primes: then a random g ∈ QR_N is overwhelmingly likely to generate QR_N.⁴ We then observe that QR_N has 4 cosets in Z*_N: s · QR_N, s′ · QR_N, s · s′ · QR_N and QR_N itself. So we can add another two bits to the domain of f, and use these bits to pick which coset we map to, thereby covering the whole Z*_N. These modifications do not reduce the number of hardcore bits from Theorem 3. The range of f is thus Z*_N, which is very close to Z_N. The domain is also very close to Z_N: it is {0, 1, …, φ(N)/4 − 1} × {0,1} × {0,1}, which is essentially the same as Z*_N. Thus, even though our modified f is not a permutation of Z_N, it is almost one.

Theorem 4. Let h′ : {0, 1, …, φ(N)/4 − 1} × {0,1} × {0,1} → {0,1}^m be defined as h′(x, b₁, b₂) = x_m … x_1. Let f′ : {0, 1, …, φ(N)/4 − 1} × {0,1} × {0,1} → Z*_N be as follows:

f′(x, b₁, b₂) = g^x · s^{b₁} · s′^{b₂}

(x ∈ {0, 1, …, φ(N)/4 − 1}, b₁, b₂ ∈ {0,1}; all arithmetic is in Z*_N). If Assumption 1 holds, then F′ = (f′, h′) is a ({0, 1, …, φ(N)/4 − 1} × {0,1} × {0,1} → Z*_N × {0,1}^m)-pseudorandom generator.

Proof. The proof is by a simple reduction to Corollary 1. The details are straightforward and are given in Appendix B.
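The coset structure underlying f′ can be checked numerically. The sketch below uses a hypothetical toy modulus N = 11 · 23 (a Williams integer, so s = −1 and s′ = 2 qualify as the two non-residues) and verifies that the four sets s^{b₁} · s′^{b₂} · QR_N partition Z*_N:

```python
# Numeric sanity check: Z_N^* is partitioned into QR_N and its three
# cosets s*QR_N, s'*QR_N, s*s'*QR_N (toy N, insecure sizes).
from math import gcd

P, Q = 11, 23
N = P * Q
units = [x for x in range(1, N) if gcd(x, N) == 1]   # Z_N^*
qr = {x * x % N for x in units}                      # the subgroup QR_N
s, sp = N - 1, 2                                     # s = -1, s' = 2

cosets = [{pow(s, b1, N) * pow(sp, b2, N) * q % N for q in qr}
          for b1 in (0, 1) for b2 in (0, 1)]
```

Since each element of Z*_N lies in exactly one coset, the two extra input bits (b₁, b₂) let f′ cover all of Z*_N.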

⁴ For N = PQ with safe P, Q, the subgroup QR_N of quadratic residues in Z*_N is of order φ(N)/4 = P′Q′ and is cyclic. QR_N is cyclic because QR_N ≅ QR_P × QR_Q, and QR_P and QR_Q are cyclic of distinct prime orders P′ and Q′, respectively. Hence, the order of almost any element of QR_N is P′Q′, and therefore almost any element of QR_N is a generator (unless it is 1 modulo P or Q). Furthermore, such N are clearly Blum integers and Corollary 1 holds for them if Assumption 2 holds. Finally, as noted in Section 2, we can relax the requirement on P and Q and ask only that gcd((P − 1)/2, (Q − 1)/2) = 1; then QR_N is still cyclic by the same argument and N is still a Blum integer; however, a random g is not necessarily a generator, and we will simply have to choose g specifically to be a generator.


Now some care is needed to modify the generator F′ so that it may be iterated, since its domain and range are not as required by the constructions of Section 3. What makes iteration possible is that there are efficiently computable “almost bijections” between {0, 1, …, φ(N)/4 − 1} × {0,1} × {0,1} and Z_N, as well as between Z*_N and Z_N. Hence, we can use Z_N as the domain and as the range of f, and that should not make a significant difference. The following is merely a formalization of this statement:

Corollary 2. Let F : Z_N → Z_N × {0,1}^m be as follows:

F(x) = (g^{x_n…x_{m+3} x_m…x_1} · s^{x_{m+2}} · s′^{x_{m+1}}, x_m … x_1)

(all arithmetic is in Z*_N). If Assumption 1 holds, then F is a (Z_N → Z_N × {0,1}^m)-pseudorandom generator.

Proof. Let F′(x, b₁, b₂) be as in Theorem 4. Let D = {0, …, φ(N)/4 − 1} and D′ = {0, …, N_n…N_{m+3}1…1} (with m trailing ones). Also, let N′ = N_n…N_{m+3}11 1…1 (with m trailing ones). We will prove the following:

F(U_{Z_N}) ≈ F(U_{Z_{N′}}) = F′(U_{D′} × U_{0,1} × U_{0,1}) ≈ F′(U_D × U_{0,1} × U_{0,1}) ∼ U_{Z*_N} × {0,1}^m ≈ U_{Z_N} × {0,1}^m.

– F(U_{Z_N}) ≈ F(U_{Z_{N′}}) because U_{Z_N} ≈ U_{Z_{N′}};
– F(U_{Z_{N′}}) = F′(U_{D′} × U_{0,1} × U_{0,1}), because F′(x, b₁, b₂) = F(x_{n−2}…x_{m+1} b₁ b₂ x_m…x_1);
– F′(U_{D′} × U_{0,1} × U_{0,1}) ≈ F′(U_D × U_{0,1} × U_{0,1}) because U_{D′} ≈ U_D (since |D′| = N/4 + O(√N));
– F′(U_D × U_{0,1} × U_{0,1}) ∼ U_{Z*_N} × {0,1}^m by Theorem 4;
– U_{Z*_N} × {0,1}^m ≈ U_{Z_N} × {0,1}^m because U_{Z*_N} ≈ U_{Z_N}.

4.2  Iterating

By Theorem 2, F can be iterated to yield polynomial expansion: in each iteration, we output h(x) and use g^{x_n…x_{m+3} x_m…x_1} · s^{x_{m+2}} · s′^{x_{m+1}} as the next input to F. Better yet, we show that Construction 4 can be used to give a more efficient generator with polynomial expansion. There are two conditions that need to be satisfied for that: there has to be a pairing [·, ·] with the properties described in Lemma 6, and F has to exhibit the self-reducibility property described in Construction 4. In the following simple lemma, we define a pairing [·, ·] such that [h, r] ↦ h is hardcore for F, and demonstrate that the condition stated in Lemma 6 holds for [·, ·].


Lemma 7. Let U_m denote the uniform distribution on {0,1}^m. Let V be the projection of U_{Z_N} onto the non-hardcore bits. Let [·, ·] : {0,1}^m × {0,1}^c ↔ {0,1}^n be such that [h, r] = r_c … r_1 h_m … h_1. Then [U_m, V] ∼ U_{Z_N}.

Proof. We prove that [U_m, V] ≈ U_{Z_N} via a straightforward statistical distance argument. See Appendix B.

Next, we show that F is self-reducible in the sense of Construction 4. To do that, we only need to provide the reduction algorithm R. We define it simply as R(y, h) = y · g^{−h}. Clearly, R is efficiently computable, R(U_{Z_N} × U_m) = U_{Z_N}, and a simple calculation verifies that R(F″([h, r]), h) = F″([0^m, r]), where F″ : Z_N → Z_N, F″(x) = g^{x_n…x_{m+3} x_m…x_1} · s^{x_{m+2}} · s′^{x_{m+1}} (F″ is the one-way function that we use to obtain F). Based on this observation about self-reducibility and Lemma 7, we can apply Construction 4 to F, thus obtaining the following pseudorandom generator:

Construction 5

INPUT y ∈ Z_N
FOR i = 0 TO p(n)
    LET e ← y_n … y_{m+3} 0…0 (m zeros)
    LET y ← g^e · s^{y_{m+2}} · s′^{y_{m+1}}
    OUTPUT y_m … y_1

The output distribution of this generator is the same as that of Construction 1. That is because, in Construction 5, g^e = (g^{2^m})^{y_n…y_{m+3}}, but g and g^{2^m} are distributed identically because g ↦ g^{2^m} is a permutation over QR_N; so instead of g^e, we can write g^{y_n…y_{m+3}}.
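A self-contained toy sketch of Construction 5 follows (the primes, generator, split m, and seed are hypothetical and far too small for security; N = 11 · 23 is a Williams integer, so s = −1 and s′ = 2 qualify). Since the exponent e always ends in m zeros, each iteration is an exponentiation to the fixed base g^{2^m}, which can be precomputed once:

```python
# Toy sketch of Construction 5 (insecure toy sizes).
P, Q = 11, 23
N = P * Q                      # 253, a Williams integer: s = -1, s' = 2
n = N.bit_length()             # 8
m = 4                          # toy split
g = 3 * 3 % N                  # a quadratic residue mod N
g2m = pow(g, 1 << m, N)        # precomputed fixed base g^(2^m)

y = 201                        # seed in Z_N
stream = []
for _ in range(5):
    hi = y >> (m + 2)                    # bits y_n ... y_{m+3}
    b1 = (y >> (m + 1)) & 1              # bit y_{m+2}, selects s = -1
    b2 = (y >> m) & 1                    # bit y_{m+1}, selects s' = 2
    y = pow(g2m, hi, N) * (-1) ** b1 * 2 ** b2 % N   # g^e with e = hi * 2^m
    stream.append(y & ((1 << m) - 1))    # output y_m ... y_1 of the new y
```

Because the base g2m is fixed across iterations, the Lim–Lee precomputation of Section 2 applies directly to it.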

Acknowledgment

We are grateful to Igor Shparlinski for suggesting a relaxation of the constraints on the modulus N.

References

ACGS88. W. Alexi, B. Chor, O. Goldreich, and C. Schnorr. RSA and Rabin functions: Certain parts are as hard as the whole. SIAM Journal on Computing, 17(2):194–209, April 1988.
BBS86. L. Blum, M. Blum, and M. Shub. A simple unpredictable pseudo-random number generator. SIAM Journal on Computing, 15(2):364–383, May 1986.
BM84. M. Blum and S. Micali. How to generate cryptographically strong sequences of pseudo-random bits. SIAM Journal on Computing, 13(4):850–863, November 1984.
Gen00. Rosario Gennaro. An improved pseudo-random generator based on discrete log. In Mihir Bellare, editor, Advances in Cryptology—CRYPTO 2000, volume 1880 of Lecture Notes in Computer Science, pages 469–481. Springer-Verlag, 20–24 August 2000.
Gol01. Oded Goldreich. Foundations of Cryptography: Basic Tools. Cambridge University Press, 2001.
GR00. Oded Goldreich and Vered Rosen. On the security of modular exponentiation with application to the construction of pseudorandom generators. Technical Report 2000/064, Cryptology e-print archive, http://eprint.iacr.org, 2000. Prior version appears in [Ros01].
HSS93. J. Håstad, A. W. Schrift, and A. Shamir. The discrete logarithm modulo a composite hides O(n) bits. Journal of Computer and System Sciences, 47:376–404, 1993.
LL94. Chae Hoon Lim and Pil Joong Lee. More flexible exponentiation with precomputation. In Yvo G. Desmedt, editor, Advances in Cryptology—CRYPTO ’94, volume 839 of Lecture Notes in Computer Science, pages 95–107. Springer-Verlag, 21–25 August 1994.
PS98. S. Patel and G. Sundaram. An efficient discrete log pseudo random generator. In Hugo Krawczyk, editor, Advances in Cryptology—CRYPTO ’98, volume 1462 of Lecture Notes in Computer Science, pages 304–317. Springer-Verlag, 23–27 August 1998.
Ros01. Vered Rosen. On the security of modular exponentiation with application to the construction of pseudorandom generators. Technical Report TR01-007, ECCC (Electronic Colloquium on Computational Complexity, http://www.eccc.uni-trier.de/eccc), 2001.
Yao82. A. C. Yao. Theory and application of trapdoor functions. In 23rd Annual Symposium on Foundations of Computer Science, pages 80–91, Chicago, Illinois, 3–5 November 1982. IEEE.

A  A Simple Proof of Gennaro’s Generator

The technique of Section 3 can also be used to simplify the proof of pseudorandomness of the generator given in [Gen00]. Let p be an n-bit safe prime, g a generator of Z*_p, and c = ω(log n).

INPUT y ∈ Z_p
LET γ ← g^{2^{n−c+1}}
FOR i = 0 TO p(n)
    LET x ← y
    LET y ← γ^{x_n…x_{n−c+2}} · g^{x_1}   (= g^{x_n…x_{n−c+2} 0…0 x_1}, with n − c zeros)
    OUTPUT y_{n−c+1} … y_2

Informally, this generator is based on the assumption that, for n, p and g as above, exponentiation x ↦ g^x mod p is one-way even if we restrict its domain only to short ω(log n)-bit exponents.
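A runnable toy sketch of the loop above (the prime, the generator search, the choice of c, and the seed are hypothetical and far too small for security; γ = g^{2^{n−c+1}} is precomputed once, matching the parenthetical form of the exponent):

```python
# Toy sketch of the Appendix A generator (insecure toy sizes).
p = 167                        # safe prime: 167 = 2*83 + 1
n = p.bit_length()             # 8
c = 3                          # stands in for c = omega(log n)
# pick a generator of Z_p^*: its order must be 166, i.e. not divide 2 or 83
g = next(a for a in range(2, p) if pow(a, 2, p) != 1 and pow(a, 83, p) != 1)
gamma = pow(g, 1 << (n - c + 1), p)   # precomputed gamma = g^(2^(n-c+1))

y = 101
out = []
for _ in range(5):
    x = y
    hi = x >> (n - c + 1)                          # bits x_n ... x_{n-c+2}
    y = pow(gamma, hi, p) * pow(g, x & 1, p) % p   # = g^(x_n...x_{n-c+2}0...0x_1)
    out.append((y >> 1) & ((1 << (n - c)) - 1))    # bits y_{n-c+1} ... y_2
```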


In [PS98], it is shown that if the above assumption (also known as Discrete Logarithm with Short Exponent, or DLSE) holds, then full-length-exponent exponentiation in Z*_p has n − ω(log n) hardcore bits. More precisely:

Theorem 5. Let p be an n-bit safe prime and let g be a generator of Z*_p. Define f : Z_{p−1} → Z*_p as f(x) = g^x mod p. Fix some c = ω(log n). If DLSE holds, then H(x) = x_{n−c} … x_2 is hardcore for f.

The proof of this theorem can be found in [PS98]. The proof of pseudorandomness of the above generator is similar to the proof in Section 4. First, similarly to Corollary 2, it can be shown that extending both the domain and the range of f to Z_p does not hurt pseudorandomness. More precisely, for f′ : Z_p → Z_p such that f′(x) = g^x mod p, the map x ↦ (f′(x), H(x)) is a (Z_p → Z_p × {0,1}^m)-pseudorandom generator⁵ (m = n − c − 1). By applying Construction 4 to f′, we obtain exactly the generator given above. So we only need to demonstrate that the two conditions for Construction 4 are fulfilled. Let us first define the pairing described in Lemma 6. Let [·, ·] : {0,1}^m × {0,1}^{n−m} ↔ {0,1}^n be such that [h, r] = r_{n−m} … r_2 h_m … h_1 r_1. Obviously [h, r] ↦ h is hardcore for f (it is the same as H). Let V be the projection of U_{Z_p} onto the bits that are not hardcore for f (positions 1, m + 2, …, n). Then an argument similar to the one in Lemma 7 shows that [U_{{0,1}^m}, V] ∼ U_{Z_p}. We still have to give the reduction algorithm R required for Construction 4. But R(y, h) is simply y · g^{−2h}. It is easy to check that R(f′([h, r]), h) = f′([0^m, r]) and that R(U_{Z_p} × U_m) = U_{Z_p}.

B  Proofs

Proof of Theorem 4. Consider the function f : {0, 1, …, φ(N)/4 − 1} → Z*_N, f(x) = g^x. If Assumption 1 holds, then by Corollary 1, x ↦ (g^x, x_m…x_1) is a ({0, 1, …, φ(N)/4 − 1} → QR_N × {0,1}^m)-pseudorandom generator. This is because ord_N(g) = φ(N)/4. Suppose that a PPT A distinguishes, with non-negligible probability, the distribution F′(U_{{0,1,…,φ(N)/4−1}} × U_{0,1} × U_{0,1}) from U_{Z*_N} × {0,1}^m. Then we can build a PPT B that distinguishes, with non-negligible probability, the distribution (g^x, x_m…x_1) from (g^x, r) (r ∈_R {0,1}^m, x ∈_R {0, …, φ(N)/4 − 1}): on input (y, h) ∈ Z*_N × {0,1}^m, B generates random b₁ ∈_R {0,1} and b₂ ∈_R {0,1} and outputs A(y · s^{b₁} · s′^{b₂}, h). Clearly, if (y, h) is distributed according to U_{QR_N} × U_{{0,1}^m}, then (y · s^{b₁} · s′^{b₂}, h) is distributed according to U_{Z*_N} × U_{{0,1}^m}. Similarly, if (y, h) is distributed according to (g^x, x_m…x_1) (x a random variable on {0, …, φ(N)/4 − 1}), then (y · s^{b₁} · s′^{b₂}, h) is distributed according to F′(U_{{0,1,…,φ(N)/4−1}} × U_{0,1} × U_{0,1}). ⊓⊔

⁵ More precisely, the output of this generator is indistinguishable from H(U_{Z*_p}). But it is not hard to see that H(U_{Z*_p}) ≈ U_{{0,1}^m}.

Proof of Lemma 7. Let M = N_n … N_{m+1}. Obviously, Pr_{r∈V}[r = s] = 2^m/N for s < M with |s| = c. Let M′ = N_n … N_{m+1} 0…0 (M followed by m zeros). Then Pr_{r∈V}[r = M] = (N − M′)/N. So Pr_{h∈U_m, r∈V}[[h, r] = j] = 1/N for j < M′, and Pr_{h∈U_m, r∈V}[[h, r] = j] = (N − M′)/(N · 2^m) for j ≥ M′. Finally, the statistical distance between U_{Z_N} and [U_m, V] is

Σ_{j=0}^{M′+2^m−1} |Pr_{x∈U_{Z_N}}[x = j] − Pr_{h∈U_m, r∈V}[[h, r] = j]|
  = Σ_{j=M′}^{M′+2^m−1} |Pr_{x∈U_{Z_N}}[x = j] − Pr_{h∈U_m, r∈V}[[h, r] = j]|
  ≤ Σ_{j=M′}^{M′+2^m−1} Pr_{x∈U_{Z_N}}[x = j] + Σ_{j=M′}^{M′+2^m−1} Pr_{h∈U_m, r∈V}[[h, r] = j]
  ≤ 2^m/N + 2^m · (N − M′)/(N · 2^m)
  = (2^m + N − M′)/N = negl(n).   ⊓⊔

Intrusion-Resilient Signatures: Generic Constructions, or Defeating Strong Adversary with Minimal Assumptions

Gene Itkis
Boston University Computer Science Dept.
111 Cummington St.
Boston, MA 02215, USA
[email protected]

Abstract. Signer-Base Intrusion-Resilient (SiBIR) signature schemes were defined in [IR02]. In this model, as in the case of forward security, time is divided into predefined time periods (e.g., days); each signature includes the number of the time period in which it was generated; while the public key remains the same, the secret keys evolve with time. In addition, in the SiBIR model, the user has two modules, a signer and a home base: the former generates all signatures on its own, and the latter is needed only to help update the signer’s key from one time period to the next. The main strength of the intrusion-resilient schemes is that they remain secure even after arbitrarily many compromises of both modules, as long as the compromises are not simultaneous. Moreover, even if the intruder does compromise both modules simultaneously, she will still be unable to generate any signatures for the previous time periods (i.e., forward security is guaranteed even in the case of simultaneous exposures). This paper provides the first generic implementation, called gSiBIR, of the intrusion-resilient signature schemes: it can be based on any ordinary signature scheme used as a black box. gSiBIR is also the first SiBIR scheme secure against a fully-adaptive adversary, and it does not require a random oracle. Our construction does require one-way (and cryptographic hash) functions. Another contribution of this paper is a new mechanism extending tree-based constructions such as gSiBIR and that of [BM99] to avoid the limit on the total number of periods (required by [IR02] and many forward-secure ones). This mechanism is based on explicit use of prefixless (or self-delimiting) encodings. Applied to the generic forward-secure signature constructions of [BM99,MMM02], it extends the first and yields modest but noticeable improvements to the second. With this mechanism, gSiBIR becomes the first generic intrusion-resilient signature scheme with no limit on the number of periods.

1  Introduction

Key exposures appear to be inevitable. Limiting their negative impacts is extremely important. A number of approaches for dealing with this problem have

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 102–118, 2003.
© Springer-Verlag Berlin Heidelberg 2003

Generic Intrusion-Resilient Signatures


been explored; some of their benefits have recently been combined in the following new approach.

Intrusion-Resilience. The Signer-Base Intrusion-Resilient (SiBIR) security model, introduced in [IR02], combines some benefits of the previously proposed key-insulated [DKXY02], forward-secure [And97,BM99], and proactive [OY91,HJJ+97] security models. A detailed discussion of the relationships among all these models is included in [IR02], and we therefore omit it here. Intuitively, the SiBIR model divides time into time periods, and augments each signature with a “time-stamp” (the number of the time period during which the signature was generated). Verification is essentially similar to that of ordinary signature schemes, except that it includes the time period as input — if the time period is changed, the signature becomes invalid. These extensions to ordinary signatures are the same as for the forward-secure [And97,BM99] and key-insulated [DKXY02] models. In addition, the ordinary signer is replaced in the SiBIR model — similarly to the key-insulated model of [DKXY02] — with two modules: a signer and a home base (thus the name of the model: Signer-Base Intrusion-Resilient, or SiBIR for short). The signer, using its secret key, can generate signatures, but only for the current time period (unlike a forward-secure signer, which can also generate signatures for all the future periods). At the end of the current period, the signer must update its secret key for the next period — this requires an update message from the base. Thus, intrusion-resilient schemes, like the key-insulated ones [DKXY02], preserve the security of both past and future time periods when the signer is compromised. Moreover — unlike in the key-insulated model — for intrusion-resilient schemes this security is preserved even if both signer and base are compromised, as long as the compromises are not simultaneous.

In the case of a simultaneous compromise, the security of past (but not future) periods is preserved — this is the best that can be achieved. To achieve such security, the SiBIR model, much like proactive schemes [OY91,HJJ+97], provides a refresh mechanism: the base changes its key and sends a refresh message to the signer, which uses it to refresh its key as well. This can be done at arbitrary times and as many times as desired (unlike update, which is done only at the pre-arranged times, and typically has a limit on the total number of updates); refresh is transparent to the verifier (in contrast, update changes the time period number and is therefore visible to the verifier). Refresh prevents the attacker from learning anything by breaking into the base. Specifically, learning the base secrets at arbitrary times by itself does not help the attacker in any way. Only if the attacker compromises both signer and base simultaneously does she learn all the system’s secrets and can therefore generate — just as the system itself — valid signatures for all the future periods. But if the attacker misses at least one refresh message between the compromising of the base and signer secrets, then compromising the base does not add anything useful to her knowledge. Thus, the SiBIR model minimizes the impact of key compromises, which explains the Intrusion-Resilient part of its name.


Previous Scheme. The only currently known SiBIR signature scheme, called SiBIR1 [IR02], builds on the techniques from [IR01]. SiBIR1 is closely based on a specific ordinary signature scheme of Guillou–Quisquater [GQ88], and as such requires a random oracle and specific number-theoretic complexity assumptions (including a version of the strong RSA assumption). Though no attacks are known, SiBIR1 is proven in [IR02] to be secure only against partially restricted adversaries (partially adaptive or partially synchronous)¹. SiBIR1 signing and verifying costs are exactly the same as in [IR01], which are essentially the same as for the underlying ordinary Guillou–Quisquater signatures: two exponentiations with short (typically 128–160 bit) exponents. SiBIR1 requires O(T) modular exponentiations per update and a single modular multiplication per refresh, both for the signer and for the base. The signer and the base each store only two secret values (similar to GQ secret keys).

New Results. We present a new SiBIR signature construction. This scheme is generic: it requires only a one-way function and an ordinary signature scheme — both can be given as black boxes². In particular, it does not require a random oracle. Due to the generic nature of our new scheme, we call it gSiBIR. We also prove gSiBIR to be secure against a fully-adaptive adversary. Finally, while the previous SiBIR scheme requires the total number of periods T to be pre-set at the key-generation stage, our scheme can easily be adjusted to support an unbounded number of periods. Our construction achieving this feature is of independent interest. It draws on the tree-based forward-secure constructions of [BM99,MMM02], and our technique of removing the bound on T can be applied back to [MMM02,BM99] to yield modest improvements³. In fact, our mechanism can be presented as a generalization of these previous methods: casting them explicitly as using self-delimiting encodings⁴.
Once the use of the self-delimiting encodings is explicit, it is easy to find more efficient encodings.

2 Intrusion-Resilient Security Model

The definitions in this section are from [IR02] and are included here for self-containment.

1 A modification of SiBIR1, provably secure against the strongest (fully adaptive) adversary, has been obtained by the author and will be published separately.
2 We also use a cryptographic hash function and a one-time signature scheme: both can be efficiently constructed from the given one-way function, e.g., using [NY89] for the hash function and [Lam79,Mer89] for the one-time signatures.
3 The previous method of removing the bound did not really remove it completely. Instead, it allowed the bound to be so large that reaching it would conflict with the security parameters of the scheme. In contrast, our method makes the number of periods unbounded by anything, including the security parameters. As a result, in our method it is in principle possible to change the security parameters "on the fly".
4 Self-delimiting encodings are an important and well-known concept, playing an important role, in particular, in Kolmogorov complexity [Lev73,Lev74,LV93], and in some scheduling issues, e.g., in [IL89].

Generic Intrusion-Resilient Signatures

2.1 Functional Definition

We first define the functionality of the system components; the security definition is given in the subsequent subsection.

The system's secret keys may be modified in two different ways, called update and refresh. Updates change the secrets from one time period to the next (e.g. from one day to the next), changing also the period number in the signatures. In contrast, refreshes affect only the internal secrets and messages of the system, and are transparent to the verifier. Thus, we use the notation SK t.r for a secret key SK, where t is the time period (the number of times the key has been updated) and r is the "refresh number" (the number of times the key has been refreshed since the last update). We say t.r = t′.r′ when t = t′ and r = r′. Similarly, we say t.r < t′.r′ when either t < t′, or t = t′ and r < r′.

We follow the convention of [BM99], requiring a key update immediately after key generation, to obtain the keys for t = 1 (this is done merely for notational convenience, in order to make the number of time periods T equal to the number of updates, and need not affect the efficiency of an actual implementation). Similarly, we require a key refresh immediately after key update, to obtain the keys for r = 1 (this is also done for convenience, and need not affect the efficiency of an actual implementation; in particular, the update and refresh information sent by the base to the signer can be combined into a single message).

Definition 1. A signer-base key-evolving signature scheme is a septuple of probabilistic polynomial-time algorithms (Gen, Sign, Ver; US, UB; RB, RS)5:

1. Gen, the key generation algorithm.
   In: security parameter(s) (in unary), the total number T of time periods.
   Out: initial signer key SKS 0.0, initial home base key SKB 0.0, and the public key PK.
2. Sign, the signing algorithm.
   In: current signer key SKS t.r, message m.
   Out: signature (t, sig) on m for time period t.
3. Ver, the verifying algorithm.
   In: message m, signature (t, sig), and public key PK.
   Out: "valid" or "invalid" (as usual, signatures generated by Sign must verify as "valid").
4. UB, the base key update algorithm.
   In: current base key SKB t.r.
   Out: new base key SKB (t+1).0 and the key update message SKU t.
5. US, the signer update algorithm.
   In: current signer secret key SKS t.r and the key update message SKU t.
   Out: new signer secret key SKS (t+1).0.

5 Intuitively (and quite roughly), the first three correspond to ordinary signatures; the first four correspond to forward-secure ones; the first five (with some restrictions) correspond to key-insulated ones; and all seven are needed to provide the full power of intrusion-resilient signatures.


6. RB, the base key refresh algorithm.
   In: current base key SKB t.r.
   Out: new base key SKB t.(r+1) and the corresponding key refresh message SKR t.r.
7. RS, the signer refresh algorithm.
   In: current signer key SKS t.r and the key refresh message SKR t.r.
   Out: new signer key SKS t.(r+1) (corresponding to the base key SKB t.(r+1)).

2.2 Security Definition

In order to formalize security, we need a notation for the number of refreshes in each period. Let RN(t) denote the number of times the keys are refreshed in period t: i.e., there will be RN(t)+1 instances of signer and base keys. Recall that each update is immediately followed by a refresh; thus, keys with refresh index 0 are never actually used. RN is used only for notational convenience: it need not be known; security is defined below in terms of RN (among other parameters).

Now, consider all the keys generated during the entire run of the signature scheme. They can be generated by the following "thought experiment" (we do not need to actually run it; it is used just for the definitions).

Experiment Generate-Keys(k, T, RN)
  t ← 0; r ← 0
  (SKS t.r, SKB t.r, PK) ← Gen(1^k, T)
  for t = 1 to T
    (SKB t.0, SKU t−1) ← UB(SKB (t−1).r)
    SKS t.0 ← US(SKS (t−1).r, SKU t−1)
    for r = 1 to RN(t)
      (SKB t.r, SKR t.(r−1)) ← RB(SKB t.(r−1))
      SKS t.r ← RS(SKS t.(r−1), SKR t.(r−1))

Let SKS*, SKB*, SKU* and SKR* be the sets consisting of, respectively, the signer and base keys and the update and refresh messages generated during the above experiment. We want these sets to contain all the secrets that can be directly stolen (as opposed to computed) by the adversary. Thus, we omit from these sets the keys SKS t.0, SKB t.0 for 0 ≤ t ≤ T, SKU 0 and SKR 1.0, which are never actually stored or sent (because key generation is immediately followed by update, and each update is immediately followed by refresh). Note that SKR t.0 for t > 1 is used (it is sent together with SKU t−1 to the signer), and thus is included in SKR*.

To define security, let F, the adversary (or "forger"), be a probabilistic polynomial-time oracle Turing machine with the following oracles:

– Osig, the signing oracle (constructed using SKS*), which on input (m, t, r) (1 ≤ t ≤ T, 1 ≤ r ≤ RN(t)) outputs Sign(SKS t.r, m)
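As an illustration of the t.r indexing produced by this experiment, the seven algorithms can be replaced by toy stand-ins that merely record indices. The tuple representation below is ours and purely hypothetical; it is not part of the scheme, only a trace of which key instances arise:

```python
# Toy stand-ins (illustrative only): "keys" are tuples recording the t.r index.
def Gen(k, T):
    return ("SKS", 0, 0), ("SKB", 0, 0), "PK"

def UB(skb):                       # base update: period t+1, refresh index reset to 0
    return ("SKB", skb[1] + 1, 0), ("SKU", skb[1])

def US(sks, sku):                  # signer update, driven by the SKU message
    return ("SKS", sks[1] + 1, 0)

def RB(skb):                       # base refresh: same period, refresh index +1
    return ("SKB", skb[1], skb[2] + 1), ("SKR", skb[1], skb[2])

def RS(sks, skr):                  # signer refresh, driven by the SKR message
    return ("SKS", sks[1], sks[2] + 1)

def generate_keys(k, T, RN):
    """Trace the Generate-Keys experiment; collect every (SKS t.r, SKB t.r) pair
    that is actually stored (refresh index >= 1)."""
    keys = []
    sks, skb, pk = Gen(k, T)
    for t in range(1, T + 1):
        skb, sku = UB(skb)         # update...
        sks = US(sks, sku)
        for r in range(1, RN(t) + 1):
            skb, skr = RB(skb)     # ...immediately followed by refresh(es)
            sks = RS(sks, skr)
            keys.append((sks, skb))
    return keys
```

Note that, as in the text, keys with refresh index 0 never appear in the collected set: each update is immediately followed by a refresh.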


– Osec, the key exposure oracle (based on the sets SKS*, SKB*, SKU* and SKR*), which
  1. on input ("s", t.r) for 1 ≤ t ≤ T, 1 ≤ r ≤ RN(t) outputs SKS t.r;
  2. on input ("b", t.r) for 1 ≤ t ≤ T, 1 ≤ r ≤ RN(t) outputs SKB t.r;
  3. on input ("u", t) for 1 ≤ t ≤ T − 1 outputs SKU t and SKR (t+1).0; and
  4. on input ("r", t.r) for 1 ≤ t ≤ T, 1 ≤ r < RN(t) outputs SKR t.r.

Queries to the Osec oracle correspond to intrusions, resulting in the corresponding secret exposures. Exposure of an update key SKU t−1 automatically exposes the subsequent refresh key SKR t.0, because they are sent together in one message. The adversary's queries to Osig and Osec must have the t.r values within the appropriate bounds. It may be reasonable to require the adversary to "respect erasures" and prohibit a value that should have been erased from being queried (or used for signature computation). However, this restriction is optional, and in fact we do not require it here. Note that respecting erasures is local to the signer and base, and is different from requiring any kind of global synchronization.

For any set of valid key exposure queries Q, time period t ≥ 1 and refresh number r, 1 ≤ r ≤ RN(t), we say that key SKS t.r is Q-exposed:
  [directly] if ("s", t.r) ∈ Q; or
  [via refresh] if r > 1, ("r", t.(r−1)) ∈ Q, and SKS t.(r−1) is Q-exposed; or
  [via update] if r = 1, ("u", t−1) ∈ Q, and SKS (t−1).RN(t−1) is Q-exposed.

Replacing SKS with SKB throughout the above definition yields the definition of base key exposure (or, more precisely, of SKB t.r being Q-exposed). Both definitions are recursive, with direct exposure as the base case. Clearly, exposure of a signer key SKS t.r for the given t and any r enables the adversary to generate legal signatures for this period t.
Similarly, simultaneous exposure of both base and signer keys (SKB t.r, SKS t.r, for some t, r) allows the adversary to run the algorithms of Definition 1 to generate valid signatures for any messages for all later periods t′ ≥ t. Thus, we say that the scheme is (t, Q)-compromised if either
– SKS t.r is Q-exposed for some r, 1 ≤ r ≤ RN(t); or
– SKS t′.r′ and SKB t′.r′ are both Q-exposed for some t′.r′ with t′ < t.

In other words, a particular time period has been rendered insecure if either the signer was broken into during that time period, or, during a previous time period, the signer and the base were compromised without a refresh in between. Note that update and refresh messages by themselves do not help the adversary in our model: they only help when combined, in unbroken chains, with signer or base keys6. If the scheme is (j, Q)-compromised, then clearly the adversary, possessing the secrets returned by Osec in response to the queries in Q, can generate signatures for the period j.

6 Interestingly, and perhaps counter-intuitively, a secret that has not been exposed might still be deduced by the adversary. For example, it may be possible in some implementations to compute SKB t.r from SKB t.(r+1) and SKR t.r (similarly for update and for the signer). The only case we cannot allow, in order to stay consistent with our definition, is that of the adversary computing SKS t.r without its being Q-exposed. So, it may be possible to compute SKS t.r from SKS t.(r+1) and SKR t.r, but not from SKS (t+1).0 and SKU t (even if r = RN(t)).

The following experiment captures the adversary's functionality. Intuitively, the adversary succeeds if she generates a valid signature without "cheating": not obtaining this signature from Osig, asking only legal queries (e.g. no out-of-bounds queries), and not compromising the scheme for the given time period. We call this adversary (fully) adaptive because she is allowed to decide which keys and signatures to query based on the previous answers she receives.

Experiment Run-Adaptive-Adversary(F, k, T, RN)
  Generate-Keys(k, T, RN)
  (m, j, sig) ← F^{Osig,Osec}(1^k, T, PK, RN)
  Let Q be the set of key exposure queries F made to Osec
  if Ver(m, j, sig) = "invalid"
     or (m, j) was queried by F to Osig
     or there was an illegal query
     or the scheme is (j, Q)-compromised
  then return 0
  else return 1

We now define security for intrusion-resilient signature schemes.

Definition 2. Let SiBIR[k, T, RN] be a signer-base key-evolving scheme with security parameter k, number of time periods T, and table RN of T refresh numbers. For an adversary F, define the adversary success function as

  Succ_SiBIR(F, SiBIR[k, T, RN]) =def Pr[Run-Adaptive-Adversary(F, k, T, RN) = 1]

Let the insecurity InSec^{IR-fa}(SiBIR[k, T, RN], τ, q_sig) be the maximum of Succ_SiBIR(F, SiBIR[k, T, RN]) over all fully adaptive adversaries F that run in time at most τ and ask at most q_sig signature queries. Finally, SiBIR[k, T, RN] is (τ, ε, q_sig)-intrusion-resilient if InSec^{IR-fa}(SiBIR[k, T, RN], τ, q_sig) < ε.

Although the notion of (j, Q)-compromise depends only on the set Q, it is important how Q is generated by the adversary. Allowing the adversary to decide her queries based on previous answers gives her potentially more power. Our scheme is the first SiBIR scheme shown to be secure against such a fully adaptive adversary.

3 Intrusion-Resilient Scheme: Generic Construction

Our scheme, gSiBIR, builds on the generic forward-secure signature techniques of [BM99,MMM02]. In fact, it can be viewed as an intrusion-resilient implementation of their tree-based forward-secure constructions (T-schemes, in the terminology of [MMM02]). However, since the intrusion-resilient model is more complex and imposes stronger security requirements, such an implementation is not trivial.


At an intuitive level, all these tree-based schemes (including ours) construct a certificate hierarchy, where the root is included in the public key of the main scheme, and the leaves are the public keys of the underlying ordinary signature scheme, used for the individual time periods. As in all certificate hierarchies, authentication must include all the certificates in the certification path7. Such a certificate hierarchy is (re-)constructed as time evolves. The challenge for intrusion-resilience is that the information enabling further (re-)construction of the hierarchy must be stored and used in such a way that the adversary cannot subvert it in order to extend the hierarchy beyond the periods of her control, or to learn any secret keys for any additional time periods (even if she fully controlled both modules, just not at the same times).

Our scheme uses any ordinary signature scheme, denoted Σ. An instance of Σ is denoted S; the corresponding public and secret keys are S.PK, S.SK, respectively. Each time period t uses a separate instance S_t of Σ. Σ signatures are used only in the leaves, to sign actual messages. For certification, gSiBIR uses specific one-time signature mechanisms8. We adjust these mechanisms to address the above-mentioned challenge: to prevent the adversary from extending the hierarchy beyond the compromised periods. In the simplest case, we base our certification mechanism on Diffie-Lamport one-time signatures [Lam79,Mer89] (which require only a one-way function, given as a black box). For each internal node of the certification hierarchy, the secret keys corresponding to this node are used to sign only the (hashed) public keys of its children nodes. Denote an instance of the certification mechanism as C, and the corresponding public and private keys as C.PK, C.SK, respectively.

3.1 Bounded T Schemes

First, we consider the case when the total number of periods T is pre-set a priori (in Section 3.2 we generalize it to an unbounded number of periods). Wlog, let T be an exact power of 2.

Trees, Labels, Keys and Signatures

Trees. Consider a complete balanced binary tree with T leaves. Let each leaf correspond to a time period; the periods t are arranged in increasing order from left to right, from 0 to T − 1.

7 As observed by [Kra00] for forward-secure signatures, this certification can be optimized using the standard Merkle tree construction [Mer89]. Unfortunately, this optimization appears hard to realize in the intrusion-resilient model.
8 It was observed in [Mer87] for standard schemes, and in [AR00] for forward-secure schemes that use similar certificates (such as [BM99,MMM02]), that each certifying public key is used only for authenticating a single message (the corresponding certificate of the children's public keys), and thus one-time signatures are sufficient for this purpose. In our scheme, however, the specific one-time signature (or similar) mechanisms are deeply integrated with the rest of the scheme to assure intrusion resilience, and not just forward security.


Labels. Label each node with a string representing its path from the root (denoting a left edge with 0 and a right edge with 1). Then all nodes in the subtree of some node v contain v as a prefix (and vice versa: all nodes with prefix v are in v's subtree). The root is labeled with the empty string λ, and each leaf has all lg T bits in its label t, padded with leading 0s if needed.

Right Path. For a node v, define the right path of v to be the set ρ(v) of the right children of the nodes on the path from v to the root. All nodes on the right path of v can be obtained by replacing one of the 0s in v's label with 1 and truncating the label to the right of it. So there are ≤ lg T nodes in any right path (equality is achieved only for the leftmost node 00...0). For example, a node labeled 0101010 has the following nodes as its right path: {1, 011, 01011, 0101011}. Define the right path successor ρ+(v) to be the longest element of ρ(v). In the above example ρ+(0101010) = 0101011; as another example, ρ+(1010111) = 1011.

Keys. As in the forward-secure schemes of [BM99,MMM02], associated with each period t is an instance S_t of the underlying ordinary signature scheme Σ. The keys S_t.PK, S_t.SK for period t are generated during the update preceding the period t, and destroyed during the following update. All signatures during the period t are signed using S_t. In addition, with each node v we associate an instance C_v of a one-time signature scheme similar to [Lam79,Mer89] (we will specify the details later, but for now the differences with [Lam79,Mer89] can be ignored). C_λ.PK is the root public key gSiBIR.PK.

Generating and Sharing. All the values discussed below are generated and stored at the base and the signer. Here we introduce notation for the origin and location of the values: a value X is denoted X^(b) if it was generated at the base, and X^(s) if generated at the signer.
In our construction we actively utilize secret sharing (we use simple XOR sharing). Thus, for a value X, b[X] denotes its share stored at the base, and s[X] its share stored at the signer; so b[X] ⊕ s[X] = X. Note that b[X^(s)] can be computed at the base independently, and even before the generation of X^(s) at the signer: in this case, when the signer generates X^(s), it can use b[X^(s)] to compute s[X^(s)] = X^(s) ⊕ b[X^(s)]. Then the signer deletes both X^(s) and b[X^(s)]; this results in generating and sharing X^(s) between the base and signer without violating the one-directional communication restriction of our model.

Signatures. Each gSiBIR signature during period t consists of the S_t signature, plus the certification path of S_t.PK. The certification path consists of the C signatures, each parent in the tree certifying its children; C_λ.PK is the root certification authority's public key and is thus the whole scheme's public key.

Key Management. At time t, gSiBIR maintains S_t.SK in the signer, and all C_v.SK values for v ∈ ρ(t) distributed between signer and base (i.e., as s[C_v.SK] and b[C_v.SK], respectively). In addition, the signer stores both the public key S_t.PK and its certification path. The details of the update procedure are discussed below.
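The 2-out-of-2 XOR sharing used above, together with its refresh, can be sketched as follows (an illustrative fragment; the function names are ours):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(x: bytes):
    """Split X into shares b[X] (base) and s[X] (signer): b[X] xor s[X] = X."""
    b_share = secrets.token_bytes(len(x))
    return b_share, xor(x, b_share)

def reconstruct(b_share: bytes, s_share: bytes) -> bytes:
    return xor(b_share, s_share)

def refresh(b_share: bytes, s_share: bytes):
    """Base picks a random r and both sides XOR it in; the shared value is unchanged,
    but old shares become useless to an attacker who saw only one of them."""
    r = secrets.token_bytes(len(b_share))
    return xor(b_share, r), xor(s_share, r)
```

Note that the base can pick b[X] before the signer ever generates X: the signer later computes s[X] = X xor b[X] itself, which is what allows the one-directional (base-to-signer) communication.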


Certification Mechanism. gSiBIR uses certification, just as [BM99,MMM02], to allow keys for future periods to be generated at later times as needed, but still be tied to the root, i.e., the scheme's public key. However, in the case of forward security, these future-enabling certification keys are not considered a security risk at break-ins: forward-secure schemes protect only the past keys, while the keys obtained at a break-in can, by design and necessity, generate all future periods' signatures. This is not the case for intrusion-resilient schemes. Here, a break-in at any one module must not expose any secret keys useful outside the current period. So a natural solution is to distribute the secret keys which will be used in the future between the signer and base (using proactively secure secret sharing). This protects the future keys from break-ins. However, the problem remains: how to use these certification keys without compromising not only the next period (which requires the use of such a key) but also all other future periods that may rely on the same certification keys?

We solve this problem by making sure that the certification keys can be reconstructed from the shares in such a way that the reconstructed (partial) secret key is good only for certifying the proper child certificates. The one-time signatures of [Lam79,Mer89] (reviewed below) provide a simple certification mechanism allowing just such a "targeted" reconstruction. Still, even this is not enough: the above requires each secret key to be generated in only one of the modules (computing the public key requires evaluating the one-way function on the secret key9). This makes the secret key vulnerable to exposure by a break-in into a single module.
To address this issue, we use two similar mechanisms C^(b), C^(s): one using keys originally generated at the base (and then shared with the signer), and the other generating its secret keys at the signer (these too are shared upon generation between the signer and the base, as outlined above). The secrets for C^(b), C^(s) are generated simultaneously at the base and the signer, respectively. As a result, to subvert the certification mechanism, both signer and base must be simultaneously compromised by the adversary: to either collect all shares of the certification secrets, or to expose them immediately at creation, before they are shared. But such a simultaneous exposure forfeits the security of all future signatures anyway (again by design and necessity). So our security goal is achieved.

One-Time Signatures. C, signing k bits, randomly generates its secret key C.SK = x : ⟨x_{i.β} | 0 ≤ i ≤ k−1; β = 0, 1⟩. The public key is then computed as C.PK = y : ⟨y_{i.β} = f(x_{i.β}) | 0 ≤ i ≤ k−1; β = 0, 1⟩. The signature for a message m = m_0 m_1 ... m_{k−1} is then σ : σ_i = x_{i.m_i}. It is verified by checking that f(σ_i) = y_{i.m_i} for all 0 ≤ i ≤ k−1. Forging such a signature is equivalent to inverting f on one of the 2k public values y_{i.β}. For convenience, we occasionally denote the above signature σ of m as σ = C.SK(m).

9 In principle, it might be possible to generate the secrets distributively and customize general two-party function evaluation [Yao82] (and other general secure multiparty computation methods) to suit our model and purpose. But we do not explore this direction, as our goal is much more restricted and we seek much more efficient solutions.

Generating, Sharing and Refreshing C-Secrets. Putting together the above methods, we achieve sharing and reconstruction of the C secret keys satisfying the requirements of our model. For a node v in the certification tree, when C_v must be created, the base generates at random C^(b)_v.SK, b[C^(b)_v.SK], and b[C^(s)_v.SK]; computes C^(b)_v.PK from C^(b)_v.SK; and sends to the signer b[C^(s)_v.SK] and s[C^(b)_v.SK] = C^(b)_v.SK ⊕ b[C^(b)_v.SK]. The secret C^(b)_v.SK (which was present only at the base) and the local copy of s[C^(b)_v.SK] are then destroyed.

At the same time, the signer similarly generates at random C^(s)_v.SK; computes from it C^(s)_v.PK and s[C^(s)_v.SK] = C^(s)_v.SK ⊕ b[C^(s)_v.SK]; and destroys C^(s)_v.SK.

Now, for node v the base has only C^(b)_v.PK, b[C^(b)_v.SK] and b[C^(s)_v.SK]; similarly, the signer has C^(s)_v.PK, s[C^(b)_v.SK] and s[C^(s)_v.SK].

To refresh any of the secrets SK, the base generates a random value r, changes its share b[SK] ← b[SK] ⊕ r, and sends r to the signer. Upon receiving r, the signer changes his share of SK: s[SK] ← s[SK] ⊕ r. The value r is destroyed at both modules immediately upon completion of the above local operations.

Key Certification and Pro-certification. The public keys C^(b)_v.PK and C^(s)_v.PK will each be used for the certification C^(∗)_v.SK(h(C^(b)_{v0}.PK) || h(C^(s)_{v0}.PK) || h(C^(b)_{v1}.PK) || h(C^(s)_{v1}.PK)) of C^(b)_{v0}.PK, C^(s)_{v0}.PK, C^(b)_{v1}.PK, and C^(s)_{v1}.PK, whenever these are generated. Since all the public keys are of the same length and are used to sign a much shorter string, we use the standard technique of compressing the keys being certified with a cryptographic hash function10 h. In our case, we compress each of the keys to be signed separately.
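The one-time signature mechanism C described above admits a direct sketch, with SHA-256 standing in for the black-box one-way function f (an illustrative choice, not mandated by the scheme; the function names are ours):

```python
import hashlib
import secrets

def f(x: bytes) -> bytes:
    # The one-way function; SHA-256 is used here purely as a stand-in.
    return hashlib.sha256(x).digest()

def ots_gen(k: int):
    """C.SK = {x_{i.b}}, C.PK = {y_{i.b} = f(x_{i.b})} for 0 <= i < k, b in {0,1}."""
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(k)]
    pk = [[f(pair[0]), f(pair[1])] for pair in sk]
    return sk, pk

def ots_sign(sk, bits):
    """sigma_i = x_{i.m_i}: reveal the secret-key half selected by each message bit."""
    return [sk[i][b] for i, b in enumerate(bits)]

def ots_verify(pk, bits, sig) -> bool:
    """Check f(sigma_i) = y_{i.m_i} for all i."""
    return all(f(sig[i]) == pk[i][b] for i, b in enumerate(bits))
```

Forging a signature on a different message requires inverting f on at least one unrevealed y_{i.β}, which is exactly the hardness the mechanism rests on.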
Thus, to each public key of a child corresponds a portion of the public (and thus secret) key of the parent. When a child's public key is generated, only one half of the parent's corresponding key portion (the secret-key halves selected by the bits of the child key's hash) is required for the certification. Thus the shares of the other half of that portion, which are not needed for the certification, can be destroyed; doing this also guarantees that at a later time no other public key can be certified.

Specifically, as soon as C^(b)_v.PK is generated at the base (as described above), the base destroys its shares not needed for the certification of C^(b)_v.PK: namely, the base securely erases the shares b[C^(b)_par(v).SK(... h(C^(b)_v.PK) ...)] and b[C^(s)_par(v).SK(... h(C^(b)_v.PK) ...)] corresponding to the complements of the bits of h(C^(b)_v.PK). The base keeps the remaining shares b[C^(b)_par(v).SK(... h(C^(b)_v.PK) ...)] and b[C^(s)_par(v).SK(... h(C^(b)_v.PK) ...)], to be used at a later time for the certification.

The signer similarly destroys s[C^(b)_par(v).SK(... h(C^(s)_v.PK) ...)] and s[C^(s)_par(v).SK(... h(C^(s)_v.PK) ...)] for the complementary bits, and keeps s[C^(b)_par(v).SK(... h(C^(s)_v.PK) ...)] and s[C^(s)_par(v).SK(... h(C^(s)_v.PK) ...)].

10 Such cryptographic hash functions can be constructed from any one-way function, e.g., using the classical construction of [NY89].


Note that after the above is done, no key C.PK′ with h(C.PK′) ≠ h(C^(s)_v.PK) can be certified in place of C^(s)_v.PK, even after a total and simultaneous exposure of all the secrets of both the signer and the base; similarly for C^(b)_v.PK. In order to be able to perform such a certification, the adversary would need to either obtain both C^(b)_par(v).SK and C^(s)_par(v).SK when these are created, or obtain their shares at the same time; in both cases this requires a simultaneous exposure of the signer and the base. We say that these keys are now pro-certified.

Updates and Refreshes. The update for period t securely erases S_t.SK in the signer and initiates updating the right path successor v = ρ+(t) of t: UpdateNode(v). Prior to UpdateNode(v), no keys for any nodes in the subtree of v had existed. For v itself, the signer already contains C_v.PK and its certification path; and C_v.SK is shared between signer and base, for both C^(b) and C^(s) (i.e., the signer already contains s[C^(b)_v.SK], s[C^(s)_v.SK], and the base has b[C^(b)_v.SK], b[C^(s)_v.SK]). UpdateNode(v) generates, shares, and pro-certifies, as described above, the keys for the children of v and all its left descendants: i.e., nodes of the form v0*, up to length lg T (the longest of these is t+1). For the leaf t+1, its certification path is reconstructed from the shares (no internal nodes' keys can be modified in this path, due to pro-certification): the base sends its shares to the signer, which does the reconstruction. The base also sends b[C_{t+1}.SK] to the signer, and the signer generates S_{t+1} and uses C_{t+1}.SK to certify its public key.
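The right path ρ(t) and its successor ρ+(t), which drive the update, are simple manipulations of node labels; a minimal sketch (the function names are ours, not from the scheme):

```python
def right_path(v: str) -> list[str]:
    """All nodes obtained by replacing a 0 in v's label with 1
    and truncating everything to the right of it."""
    return [v[:i] + "1" for i, bit in enumerate(v) if bit == "0"]

def rho_plus(v: str) -> str:
    """Right path successor: the longest element of the right path."""
    return max(right_path(v), key=len)
```

On the examples from the text, right_path("0101010") returns ["1", "011", "01011", "0101011"] and rho_plus("1010111") returns "1011".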
The outline of the node update is as follows:

UpdateNode(v)
  if |v| < lg T then                % if not at a leaf yet
    in Base:
      C^(b)_{v0}.(PK, SK) ← C.Gen;  C^(b)_{v1}.(PK, SK) ← C.Gen;
      Randomly generate b[C^(b)_{v0}.SK], b[C^(b)_{v1}.SK], b[C^(s)_{v0}.SK], b[C^(s)_{v1}.SK];
      Set s[C^(b)_{v0}.SK] ← C^(b)_{v0}.SK ⊕ b[C^(b)_{v0}.SK],
          s[C^(b)_{v1}.SK] ← C^(b)_{v1}.SK ⊕ b[C^(b)_{v1}.SK];
      Securely erase C^(b)_{v0}.SK, C^(b)_{v1}.SK,
          b[C^(∗)_v.SK(... h(C^(b)_{v0}.PK) ... h(C^(b)_{v1}.PK) ...)];  % shares not needed for certification
    Base sends to Signer: C^(b)_{v0}.PK, C^(b)_{v1}.PK; b[C^(s)_{v0}.SK], b[C^(s)_{v1}.SK];
          s[C^(b)_{v0}.SK], s[C^(b)_{v1}.SK];
    in Signer:
      C^(s)_{v0}.(PK, SK) ← C.Gen;  C^(s)_{v1}.(PK, SK) ← C.Gen;
      Set s[C^(s)_{v0}.SK] ← C^(s)_{v0}.SK ⊕ b[C^(s)_{v0}.SK],
          s[C^(s)_{v1}.SK] ← C^(s)_{v1}.SK ⊕ b[C^(s)_{v1}.SK];
      Store s[C^(b)_{v0}.SK], s[C^(b)_{v1}.SK];
      Securely erase C^(s)_{v0}.SK, C^(s)_{v1}.SK, b[C^(s)_{v0}.SK], b[C^(s)_{v1}.SK],
          s[C^(∗)_v.SK(... h(C^(s)_{v0}.PK) ... h(C^(s)_{v1}.PK))];  % shares not needed for certification
  else                              % if at a leaf
    Base sends to Signer: shares of the certification path from v to the root
        (i.e. the remaining shares of C^(s,b)_w.SK for all prefixes w of v)
        % no C.PK outside this path can be certified by the recoverable secrets
    in Signer:
      securely erase S_t.SK;  S_{t+1}.(SK, PK) ← S.Gen;
      recover (from the available shares) the certification path of S_{t+1}.PK

algorithm gSiBIR.Gen(l, T)
  Generate C_λ.(SK, PK) ← C.Gen(l);  PK ← (C_λ.PK, T)
  Share C_λ.SK between signer and base;
  UpdateNode(λ);
      % SKS 0 now contains the certification path for S_0.PK
      % and all C_v.PK for v ∈ ρ(0^{lg T})
      % Also, SKS 0 and SKB 0 contain the corresponding shares
      % of C_v.SK for v ∈ ρ(0^{lg T})
  return (SKS 0, SKB 0, PK)

Fig. 1. Key generation algorithm. The refresh index on the keys is omitted to simplify notation

algorithm gSiBIR.Sign(M, SK t)
  return (σ = S_t.Sign(M), t, S_t.PK, ψ_t = certification path of S_t.PK)

algorithm gSiBIR.Ver(M, PK, (σ, t, PK′, ψ))
  return Σ.Ver(M, PK′, σ) ∧ (ψ is a correct certification path for period t of PK′)

Fig. 2. Signing and verifying algorithms

Refreshes are straightforward: the base generates a random value R_t.r for each secret SK t.r whose share it is currently holding, XORs those random values with the corresponding shares to obtain the new share values SK t.(r+1) ← SK t.r ⊕ R_t.r, and sends all the R_t.r to the signer (who refreshes his shares the same way).

Proof Sketch. The idea of the proof is to force the adversary to either forge a new signature for some S_t used by the simulator, or to diverge from the simulator's certification tree, resulting in the adversary inverting a one-way function used for the certification. In either case, the simulator must guess where the adversary's success will take place. The case of the adversary forging a signature for some S_t is straightforward. Suppose the adversary diverges from the simulator's certification tree at node v (meaning that the C_v.PK′ used by the adversary is not equal to the C_v.PK of the simulator, while for the parent w of v both adversary and simulator use the same value C_w.PK). Then, in order to certify C_v.PK′, the adversary must find inverses of some values in C^(b)_w.PK and C^(s)_w.PK. The adversary can find one of these from the simulator, by exposing the appropriate keys. But in order to find both from the simulator, the adversary would require a simultaneous compromise (which would eliminate the adversary's success). Without such a compromise, the simulator can simulate the queries for one of these inverses (randomly guessed by the simulator), thus forcing the adversary to invert the corresponding one-way function.

3.2 Unbounded T Scheme

Now we extend the above scheme to the case where no bound on the number of periods is preset.

Prefixless Notation. The main idea is that, instead of using the binary representation of t as the path to the proper leaf in the tree, we use t̃, a monotone prefixless (or self-delimiting) encoding of t: for no two distinct strings t, t′ is t̃ a prefix of t̃′ (i.e., t̃′ ∉ t̃·{0,1}*). Requiring all strings to have the same length is one way to guarantee prefixlessness, albeit only for a finite set of strings. This is indeed the way the prefixless property is assured in [BM99] and in the above construction.

Other prefixless encodings abound. The simplest one handling infinite sets of inputs is probably to view each string t as an integer represented in unary (using only 1s) and delimited with a single 0: t̃ = 1^t 0. Of course, such an encoding is very inefficient: |t̃| = 2^{|t|}. However, a simple bootstrapping strategy can make it much more efficient: t̃ can consist of the length |t| of t, encoded in prefixless notation, followed by t itself. In this case |t̃| = 2|t| = 2 lg t. Iterating this one more time, we can get a prefixless encoding for which |t̃| = lg t + 2 lg lg t = (1+o(1)) lg t. For practical purposes it probably makes sense to stop here, though further iteration may yield better asymptotics (e.g., iterating lg* t times, we can get |t̃| = lg t + lg lg t + ... + lg^{(lg* t)} t + lg* t).

Another approach to achieving a prefixless encoding (instead of using unary notation at the base step) can represent each bit as two bits, 0 as 00 and 1 as 01, and use 11 to mark the end of the string. Clearly this encoding can be easily improved by encoding t in ternary and using 10 to represent the ternary digit 2. Then |t̃| = 2 log_3 t < 1.6 lg t. Further reductions are possible: essentially, for any monotone function f such that Σ_t 1/f(t) ≤ 1, it is possible to achieve an encoding such that |t̃| ≤ lg f(t).
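The bootstrapped encoding described above (the length |t|, itself delimited by a simpler prefixless code, followed by t) can be sketched as follows; the code and its function names are illustrative, and here the length of the length is given in unary, yielding |t̃| = lg t + 2 lg lg t + O(1):

```python
def encode(t: int) -> str:
    """Self-delimiting encoding of t >= 1:
    unary |lb|, then a 0 delimiter, then lb = |b| in binary, then b = t in binary."""
    b = format(t, "b")            # binary representation of t
    lb = format(len(b), "b")      # binary representation of |t|
    return "1" * len(lb) + "0" + lb + b

def decode(s: str) -> int:
    """Inverse of encode; reads exactly one codeword from the front of s."""
    n = s.index("0")              # unary part gives |lb|
    lb = s[n + 1 : n + 1 + n]     # |b| in binary
    blen = int(lb, 2)
    b = s[n + 1 + n : n + 1 + n + blen]
    return int(b, 2)
```

No codeword is a prefix of another, so a stream of concatenated codewords parses unambiguously; this is exactly the property needed to use t̃ as a leaf label in an unbounded tree.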
In particular, for any ε > 0 there exists a prefixless encoding such that |t̃| ≤ (1 + ε) lg t + O(1). These methods can also be used in the same bootstrapping strategy.

Future-Enabling Keys. Representing the leaves in a prefixless notation helps only in part. In addition, at any time we need to maintain (distributively) some future-enabling keys, which are used to extend the certification hierarchy to future time periods. In the fixed T scheme above, at time t these are the secret keys corresponding to the nodes of the right path ρ(t). Using a generic prefixless encoding, e.g. the second one in the paragraphs above, may not provide us with a small future-enabling key set. The first encoding scheme, using unary encoding as the base for bootstrapping, however, does this very well. In fact, just as before, the same right path provides exactly the desired future-enabling key set. And the number of nodes in the right path is |ρ(t̃)| = (1 + o(1)) lg t, i.e., it increases only marginally compared to the fixed T scheme above (much less than by a factor of 2, or by additive o(lg t) nodes,

116

Gene Itkis

more precisely). At the same time, the early periods t with (1 + o(1)) lg t < lg T (i.e., t with t^(1+o(1)) < T) would actually even benefit from using the unbounded T scheme, compared to the fixed T scheme above.

Other methods for constructing prefixless encodings with small right path sets exist as well. The two-level construction in [MMM02] can be viewed as such. It is somewhat in between the unbounded and fixed T approaches: it uses a fixed string length to achieve prefixlessness of the length of t (listed in logarithmic scale). As such, it still handles only bounded T, though the bound can easily be arranged to include all T that might be even remotely realistic in our universe (or match the security parameter requirements). Indeed, from the practical point of view it achieves a good compromise and offers good performance. Still, our methods can achieve almost a factor of two improvement in some cases. For example, for t = 300, using the [MMM02] construction, |t̃| = 21, while using methods such as the above we can easily achieve |t̃| = 13 bits (see footnote 11). (The cost of the [MMM02] method depends on the upper bound on T, which the authors proposed to be T ≈ 2^128; we use a more favorable limit of ≈ 2^32, since the next smaller T, 2^16, seems a bit too low for at least some cases; using the higher bound of T ≈ 2^128 would increase |t̃| by an additional 2 bits for all t in the [MMM02] constructions.) Finding other prefixless encodings which would keep the future-enabling key set small is a possible future research direction.

3.3 Optimizations

There are a number of improvements one can achieve for the above scheme. For example, for the sake of simplicity we have used the original Diffie-Lamport one-time signature scheme, which requires 2k secrets (and an equally long public key) to sign k bits. An obvious improvement is to use more efficient one-time signature schemes such as the Merkle scheme [Mer89], requiring k + lg k + 1 values. Other, even more efficient schemes are also possible, but they may

11 The prefixless encoding we use in these calculations encodes t as: t in binary with the leading bit dropped (it is always 1 anyway); prefixed by the length of t minus 1 (for the dropped bit), with its leading bit dropped as well; that prefixed with the length of the previous segment, again with the leading bit dropped; and finally all that prefixed with the length of the previous segment, this time in unary, terminated with 0, and again with the leading 1 dropped. So, t = 300 is encoded as 10 11 1000 100101100, with the dropped 1s shown for clarity (crossed out in print).

Other comparisons with [MMM02]: for t in approximately the 65,000 to 10^9 range (i.e., up to a billion), [MMM02] gives |t̃| = 37 bits for all t vs. 20-37 bits using our method (with the smallest t̃ using 20 bits and the encoding length increasing, roughly 1 bit per doubling of t). At the smaller end, for t in the approximate range of 25-250, the [MMM02] construction results in |t̃| being 13 for all of them, while our constructions would give 8 for t in the 16-31 range; 9 for 32-63; 10 for 64-127; and 11 for 128-255. In comparing these bit counts, two considerations should be kept in mind: on the one hand, each bit in the discussion above results in one extra C signature and/or key in our gSiBIR costs; on the other hand, it may be possible to take a bit or two off either encoding by further fine-tuning of the corresponding methods.
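The four-segment encoding of this footnote can be sketched concretely. Below is a hypothetical Python illustration (not from the paper) of encoding and decoding; it is valid for t ≥ 16, matching the ranges quoted above:

```python
def encode(t: int) -> str:
    """Four-level prefixless encoding sketched in footnote 11 (valid for t >= 16)."""
    assert t >= 16
    a = bin(t)[3:]                 # t in binary, leading 1 dropped
    b = bin(len(a))[3:]            # length of t minus 1, leading 1 dropped
    c = bin(len(b))[3:]            # length of previous segment, leading 1 dropped
    u = "1" * (len(c) - 1) + "0"   # that length in unary, 0-terminated, leading 1 dropped
    return u + c + b + a

def decode(s: str):
    """Decode one prefixless code from the front of s; return (t, bits consumed)."""
    k = s.index("0") + 1           # unary segment recovers len(c)
    pos = k
    c = s[pos:pos + k]; pos += k
    len_b = int("1" + c, 2)        # re-attach the dropped leading 1
    b = s[pos:pos + len_b]; pos += len_b
    len_a = int("1" + b, 2)
    a = s[pos:pos + len_a]; pos += len_a
    return int("1" + a, 2), pos
```

For example, encode(300) is a 13-bit string, as claimed above, and concatenated encodings decode unambiguously one after another, which is exactly the prefixless property.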

Generic Intrusion-Resilient Signatures

117

involve some trade-offs between computation and signature/key sizes (e.g., the Winternitz modification, also described in [Mer89]) and/or may conflict with pro-certification. It is also easy to significantly reduce the size of the secret keys: since C.SK are just random values, they can be replaced with pseudo-random ones (using a seed per node, so that they can be deleted easily; or using alternative techniques, e.g., based on storing the right path of the classic pseudo-random function tree [GGM86]). This technique, however, can easily be applied only to C(s).SK; for C(b).SK it conflicts with pro-certification. Other optimizations are less obvious: it may be possible to reuse some of the keys using collusion-resilience, set-cover, and/or frameproof code techniques. Finally, it may be possible to apply the Merkle certification tree, just as in [MMM02], but without losing the intrusion-resilience.
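For concreteness, the "2k secrets to sign k bits" cost of the Lamport-style one-time signature discussed above can be sketched as follows. This is a toy illustration with hypothetical parameters, using SHA-256 as a stand-in one-way function:

```python
import os, hashlib

def H(x: bytes) -> bytes:
    # Stand-in one-way function.
    return hashlib.sha256(x).digest()

k = 16  # number of message bits (toy parameter)

# Secret key: 2k random values, two per message bit.
sk = [(os.urandom(32), os.urandom(32)) for _ in range(k)]
# Public key: the 2k one-way images (equally long, as noted above).
pk = [(H(s0), H(s1)) for (s0, s1) in sk]

def sign(bits):
    # One-time signature: reveal one preimage per bit position.
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(bits, sig) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))
```

Forging a signature on a different message would require inverting H at some position, which is what the security argument of the scheme reduces to.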

Acknowledgments The author thanks Leo Reyzin and Jonathan Katz for very helpful discussions.

References

And97. Ross Anderson. Invited lecture. Fourth Annual Conference on Computer and Communications Security, ACM (see http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/forwardsecure.pdf), 1997.
AR00. Michel Abdalla and Leonid Reyzin. A new forward-secure digital signature scheme. In Tatsuaki Okamoto, editor, Advances in Cryptology - ASIACRYPT 2000, volume 1976 of Lecture Notes in Computer Science, pages 116-129, Kyoto, Japan, 3-7 December 2000. Springer-Verlag. Full version available from the Cryptology ePrint Archive, record 2000/002, http://eprint.iacr.org/.
BM99. Mihir Bellare and Sara Miner. A forward-secure digital signature scheme. In Michael Wiener, editor, Advances in Cryptology - CRYPTO '99, volume 1666 of Lecture Notes in Computer Science, pages 431-448. Springer-Verlag, 15-19 August 1999. Revised version available from http://www.cs.ucsd.edu/~mihir/.
DKXY02. Yevgeniy Dodis, Jonathan Katz, Shouhuai Xu, and Moti Yung. Key-insulated public key cryptosystems. In Knudsen [Knu02].
GGM86. Oded Goldreich, Shafi Goldwasser, and Silvio Micali. How to construct random functions. Journal of the ACM, 33(4):792-807, October 1986.
GQ88. Louis Claude Guillou and Jean-Jacques Quisquater. A "paradoxical" identity-based signature scheme resulting from zero-knowledge. In Shafi Goldwasser, editor, Advances in Cryptology - CRYPTO '88, volume 403 of Lecture Notes in Computer Science, pages 216-231. Springer-Verlag, 1990, 21-25 August 1988.
HJJ+97. Amir Herzberg, Markus Jakobsson, Stanislaw Jarecki, Hugo Krawczyk, and Moti Yung. Proactive public key and signature systems. In Fourth ACM Conference on Computer and Communications Security, pages 100-110. ACM, April 1-4 1997.
IL89. G. Itkis and L. A. Levin. Power of fast VLSI models is insensitive to wires' thinness. In 30th Annual Symposium on Foundations of Computer Science, pages 402-407, Research Triangle Park, North Carolina, 30 October-1 November 1989. IEEE.
IR01. Gene Itkis and Leonid Reyzin. Forward-secure signatures with optimal signing and verifying. In Joe Kilian, editor, Advances in Cryptology - CRYPTO 2001, volume 2139 of Lecture Notes in Computer Science, pages 332-354. Springer-Verlag, 19-23 August 2001.
IR02. Gene Itkis and Leonid Reyzin. Intrusion-resilient signatures, or towards obsoletion of certificate revocation. In Moti Yung, editor, Advances in Cryptology - CRYPTO 2002, Lecture Notes in Computer Science. Springer-Verlag, 18-22 August 2002. Available from http://eprint.iacr.org/2002/054/.
Knu02. Lars Knudsen, editor. Advances in Cryptology - EUROCRYPT 2002, Lecture Notes in Computer Science. Springer-Verlag, 28 April-2 May 2002.
Kra00. Hugo Krawczyk. Simple forward-secure signatures from any signature scheme. In Seventh ACM Conference on Computer and Communications Security. ACM, November 1-4 2000.
Lam79. Leslie Lamport. Constructing digital signatures from a one way function. Technical Report CSL-98, SRI International, October 1979.
Lev73. Leonid A. Levin. On the concept of a random sequence (in Russian). Doklady Akademii Nauk SSSR (Proceedings of the National Academy of Sciences of the USSR), 5(14):1413-1416, 1973.
Lev74. Leonid A. Levin. Laws of information conservation (non-growth) and aspects of the foundations of probability theory (in Russian). Problemy Peredachi Informatsii, 3(10):206-210, 1974.
LV93. Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer-Verlag, 1993.
Mer87. Ralph C. Merkle. A digital signature based on a conventional encryption function. In Carl Pomerance, editor, Advances in Cryptology - CRYPTO '87, volume 293 of Lecture Notes in Computer Science, pages 369-378. Springer-Verlag, 1988, 16-20 August 1987.
Mer89. Ralph C. Merkle. A certified digital signature. In G. Brassard, editor, Advances in Cryptology - CRYPTO '89, volume 435 of Lecture Notes in Computer Science, pages 218-238. Springer-Verlag, 1990, 20-24 August 1989.
MMM02. Tal Malkin, Daniele Micciancio, and Sara Miner. Efficient generic forward-secure signatures with an unbounded number of time periods. In Knudsen [Knu02].
NY89. Moni Naor and Moti Yung. Universal one-way hash functions and their cryptographic applications. In Proceedings of the Twenty-First Annual ACM Symposium on Theory of Computing (May 15-17 1989, Seattle, WA, USA), pages 33-43. ACM Press, 1989.
OY91. Rafail Ostrovsky and Moti Yung. How to withstand mobile virus attacks. In 10th Annual ACM Symposium on Principles of Distributed Computing, pages 51-59, 1991.
Yao82. A. C. Yao. Protocols for secure computations. In 23rd Annual Symposium on Foundations of Computer Science, pages 160-164, Chicago, Illinois, 3-5 November 1982. IEEE.

Efficient Re-keying Protocols for Multicast Encryption

Giovanni Di Crescenzo (1) and Olga Kornievskaia (2)

(1) Telcordia Technologies, Morristown, NJ 07960, USA ([email protected])
(2) CITI, University of Michigan, MI, USA ([email protected])

Copyright 2002, Telcordia Technologies, Inc. All Rights Reserved.

Abstract. The never-halting growth of the Internet has influenced the development and use of multicast communication, which is proving to be an effective method for delivering data to multiple recipients. A vast number of applications benefit from this efficient means of communication. Given the existing security threats in the Internet, it has become imperative to look into multicast security. One of the challenges in securing multicast communication is to efficiently establish and manage shared keys in large and dynamic groups. In this paper we propose very efficient re-keying protocols for multicast encryption. One of our protocols has complexity at most logarithmic in all measures considered in the literature, namely, communication, number of keys stored by the user and by the center, and time complexity per update. We then analyze the performance of the family of tree-based re-keying protocols for multicast encryption with respect to sets of multiple update operations, and show that for a particular class of such updates we can modify these schemes so that they guarantee essentially optimal performance. Specifically, while performing m update operations one at a time would guarantee a complexity of O(m log n), we show that for a specific (but large) class of instances, we can modify the schemes so that they guarantee only O(log n) communication complexity, while keeping the user storage complexity O(log n), n being the maximum size of the multicast group.

1 Introduction

Multicast communication over the Internet is proving to be an effective method for delivery of data to multiple recipients. Its many applications include audio and video conferencing, news feeds, stock quotes, collaborative applications, and pay TV. Yet, with the existing security threats in the Internet, it is imperative to consider multicast security.

* Part of this work was done while an intern at Telcordia.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 119–132, 2003. c Springer-Verlag Berlin Heidelberg 2003 

120

Giovanni Di Crescenzo and Olga Kornievskaia

Securing multicast communications poses challenges of several types, such as algorithmic problems, system and communication design, and secure implementation. Even when restricted to security analysis, it still presents several problems, such as access control, authentication, and availability. In this paper we focus on access control. A common technique uses symmetric key encryption, distributing a cryptographic key to the group members. This group key is used to encrypt and decrypt messages exchanged between members of the group. The key management problem is then defined by asking that the following invariant be maintained: all group members, and only them, have knowledge of the group key, given that the set of group members changes over time.

Key management protocols in the literature can be divided into two groups: collaborative and non-collaborative. Collaborative key management refers to methods of managing the group key among the group members without outside assistance. Many collaborative key management protocols have been proposed (see, e.g., [13,20,3,2,17,21,14]). Non-collaborative key management, which we consider in this paper, relies on the existence of a central entity, or center, that is responsible for generating and distributing a group key. The main problem in non-collaborative key management then becomes constructing efficient re-keying protocols that manage the distributed keys after updates to the multicast group, while keeping efficiency in at least some of the known measures. Many non-collaborative key management protocols have been proposed (see, e.g., [11,6,4,5,23,22,8,10,1,19,16,12]), some studying multicast encryption and others broadcast encryption. It has been remarked that in multicast encryption no bound is assumed on the coalition of removed users, as is done in broadcast encryption.
The main focus of this paper is on studying non-collaborative key management, presenting new re-keying protocols for multicast encryption, and paying attention to all complexity measures, such as communication complexity, storage complexity for the user and the center, time complexity, and storage information assumptions.

Our Results and Comparison with Previous Results. We present two efficient re-keying protocols for multicast encryption. The first has both communication and storage complexity O(log n), n being an upper bound on the size of the group, but linear time complexity. The second has complexity O(log n) in all measures considered in the literature, namely: communication, number of keys stored by the user and by the center, and time complexity per update. This solves a problem posed in [8], where it was noted that improving the server storage complexity to less than quasi-linear (while keeping all other security properties unchanged) was at the time an unsolved problem.

We also analyze the performance of tree-based re-keying protocols (e.g., [22,23]) with respect to multiple update operations. For this family of protocols, handling a single update operation requires complexity O(log n), and multiple update operations can clearly be handled by processing one at a time, thus giving complexity O(m log n), where m is the number of updates. A protocol proposed in [16] improves the communication complexity to O(m) but requires user storage


complexity O(log^2 n); this storage complexity is improved in [12] to O(log^(1+ε) n). Both methods are not tree-based. In this paper we give a specific (but large) class of update operations for which a natural modification of all tree-based protocols guarantees very small communication complexity per update and keeps logarithmic user storage complexity. Specifically, while performing m update operations one at a time guarantees a complexity of O(m log n), we show that for this specific class of update operations we can modify the schemes so that they guarantee only O(log n) communication complexity and still keep O(log n) user storage complexity.

Organization of the Paper. In Section 2 we recall definitions for multicast encryption, key-management protocols, and re-keying protocols, and some protocols proposed in the literature. In Section 3 we present our first re-keying protocol for multicast encryption, having efficient storage and communication complexity. In Section 4 we present our second re-keying protocol, which is efficient in all complexity measures considered in the literature. In Section 5 we consider the extension of our protocols to handling multiple update operations efficiently.

2 Definitions and Background

In this section we recall the notions of multicast encryption, key-management protocols and re-keying protocols, and review some re-keying protocols from the literature.

Multicast Encryption Schemes. Informally, multicast encryption schemes are methods for performing secure communication among a group of users. In such schemes the group structure changes dynamically, as a consequence of additions and deletions of members. Moreover, they are divided into two phases: an off-line initialization phase, in which a center communicates privately with each group member, and an on-line communication phase, in which the only communication means between the center and group members is a multicast channel, used by the center (or any other party) to broadcast messages to all others. Given that the communication is secured through private-key encryption, the main goal of a multicast encryption scheme is to guarantee that a random key, called the group key, is shared among the parties, even after several join or leave operations (since this key can then be used to encrypt and decrypt messages exchanged between members of the group). This reduces the design of a multicast encryption scheme to that of a key management scheme, which is supposed to maintain the following invariant: all group members, and only them, have knowledge of the group key, where the group membership is not fixed but varies. The invariant is maintained using a re-keying protocol. Moreover, as done in the literature so far, we only study the design of the security module underlying a multicast encryption scheme; other issues, such as the maintenance of any data structure used to represent the current group members, are ignored and assumed to be taken care of by a separate module.


More formally, a key management protocol for multicast encryption is defined as a triple made of an initialization protocol I and two re-keying algorithms J and L, with the following syntax. The protocol I is executed between a center and all group members and results in a string si sent to each group member Ui (typically containing a set of keys). The algorithm J (resp., L), on input a join request (resp., leave request) from some user, is executed by the center and results in a string s sent by multicast to the group, and, possibly, a string su sent to the joining (resp., leaving) user u. The triple of algorithms satisfies at any time the following three requirements:

1. Correctness: all current group members, using the string si received at the end of protocol I and all strings su received through multicast after join or leave updates, can compute the group key.
2. Forward security: any efficient adversary corrupting all users who have left the group can distinguish the current and future group keys from random and independent ones with at most negligible probability.
3. Backward security: any efficient adversary corrupting a user who joins the group can distinguish the previous group keys from random and independent ones with at most negligible probability.

Complexity Measures. Several parameters are used to compare key-management protocols for multicast encryption. Perhaps the most important is the communication complexity, which we define to be the maximum length of the messages sent through multicast after a join or leave operation. Also important is the user storage, which we define to be the maximum number of random keys that a user needs to store. Analogously, the center storage is defined as the maximum number of random keys that the center needs to store. Finally, another measure is the time complexity, which we define to be the maximum running time of users and server while executing the update operations.
We note that a more general definition of user and center storage (namely, referring to the total storage rather than the number of stored keys) would be somewhat misleading with respect to the real storage complexity of the scheme. In particular, all protocols in the literature would require center storage linear in the number of participants, since the center needs to differentiate among users and to keep them in an appropriate data structure, such as a tree. Instead, as also done in other areas of cryptography, the focus here is on the random keys, which are the most expensive resource, both from a security standpoint (since they have to be kept private) and from a storage standpoint (since data structure information usually requires less storage).

Re-keying Protocols in the Literature. We briefly review some re-keying protocols for multicast encryption given in the literature. In particular, we review two main families: minimal storage schemes (e.g., [8]) and tree-based schemes (e.g., [22,23,8,18]), since they will be used in our constructions.

Minimal Storage Schemes. A typical scheme in this family works as follows. Each user Ui holds two keys: a group key Kg and a unique secret key Ki shared


between this user and the center. The center holds two keys: a group key Kg and a secret key l. This secret key is used to generate the keys for all the users in the group by using it with a pseudo-random function f, for instance, as Ki = fl(i). When a new user joins the group, the center chooses a new group key Kg', sends it to the new user encrypted with Ki, and sends it to all other users through the multicast channel, encrypted with the old group key. When a user leaves the group, the center chooses a new group key Kg', encrypts this key with Ki for each i, and transmits the result to each user Ui. The communication cost of this scheme is linear in the number of clients in the group. The storage cost of both user and center is constant.

Tree-Based Schemes. These schemes reduce the communication cost of key update to logarithmic but increase the storage cost. The center creates a balanced tree. For simplicity, we assume that for n members, n = 2^t for some natural number t. Each node in the tree represents a key. Leaf nodes represent individual keys, each known to a particular user. Each user is given all the keys on the path from its leaf node to the root. Thus, each user Ui holds log n + 1 keys. The root key is the group key, because it is known to all the users. The center stores all the keys in the tree, so its storage cost is 2n - 1. When a user leaves the group, the center needs to change all the keys on the path from the removed leaf to the root. The center generates log n new keys. Each of these keys is encrypted using some keys in the tree and multicast to all users. Overall, each update requires messages of total size O(log n) to be sent. The case of a user joining the group can be handled similarly. Several variations of this basic scheme have been considered in the literature.
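The bookkeeping behind the tree-based schemes can be made concrete with heap-style node numbering (a hypothetical indexing choice for illustration, not prescribed by the schemes above): the root is node 1, and leaf i of an n-leaf tree is node n + i.

```python
import math

def path_to_root(leaf_index: int, n: int) -> list:
    """Node ids held by the user at leaf_index (heap numbering: root = 1,
    leaves are n .. 2n-1, for n a power of two)."""
    node = n + leaf_index
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

n = 16                              # group size, n = 2^t
keys_held = path_to_root(5, n)      # node ids of the keys stored by user U5
# Each user holds log2(n) + 1 keys; on a leave, exactly these path keys
# are replaced, each new key multicast under keys below it in the tree.
assert len(keys_held) == int(math.log2(n)) + 1
```

Since the keys replaced on a leave are exactly those on the departing leaf's path, both the user storage and the per-update communication are O(log n), as stated above.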

3 A Storage-Efficient Re-keying Protocol

In this section we present a storage-efficient re-keying protocol for multicast encryption. The minimal storage schemes give a constant storage cost but require a linear communication cost. The tree-based schemes reduce the communication cost at the expense of some increase in the storage cost. We propose a scheme that gives us the best of both worlds. More formally, we achieve the following:

Theorem 1. There exists (constructively) a key-management protocol for multicast encryption, with communication complexity O(log n), user storage complexity O(log n), center storage complexity O(1), and time complexity O(n), making storage assumptions such as access to user indices and to a tree-based data structure for the group of users.

We note that the storage assumptions needed for our construction, as stated in the theorem, are comparable to those made in all previous schemes in the literature. In the rest of the section we prove the above theorem.

An Informal Description. The storage cost in the tree-based schemes is linear because the center needs to store all the keys in the tree. We would like to remove that requirement by reconstructing all the keys in the tree in a unique way from


the keys in the leaves only. First of all, the center uniformly chooses a key k and keeps it secret. Then, the key of the leaf associated with a certain user is generated as the output of an execution of a block cipher F on input the user's index, using key k (consequently, the center does not need to store the keys associated with the leaves, but only key k). Moreover, the key of each internal node is obtained in a unique way from the keys of that node's children, as the output of block cipher F on input the bitwise xor of the children's keys, using k as the key (consequently, keys associated with internal nodes need not be stored, since they can be recovered from the keys at the leaves). This equality between each internal node and its two children is an invariant satisfied by the tree during the lifetime of the scheme.

When a user joins the group, the tree is extended to replace one previous leaf with two new ones (one associated with the joining user and one associated with an old group member), and the center computes a new key associated with the joining user's leaf and recomputes all the keys on the path from that leaf to the root, so that they satisfy the above invariant. The center sends the new leaf key to the associated user, who stores it, and the other keys on that path to the appropriate subsets of users, by encrypting each key under both children's keys on the multicast channel. When a user leaves the group, his sibling is moved up one level in the tree, and the center recomputes all the keys on the path from this node to the root, so that they satisfy the above invariant. These keys are then sent to the appropriate subsets of users, by encrypting each key under both children's keys on the multicast channel.

A More Formal Description. Let m be the initial size of the group, let U1, ..., Um be the group members, and let ski be the secret key shared between user Ui and the center.
Also, denote by l(j) and r(j) the left and right children of node j and by F a block cipher, and assume, for the sake of a simplified description, that F has equal key and block length (e.g., κ = l = 128). We now describe the protocol I and the algorithms J, L forming our scheme.

Protocol I goes as follows:

1. the center constructs a binary tree with m leaves such that each leaf has a sibling, and associates leaf i with user Ui;
2. the center uniformly chooses a key k and computes keys ki = Fk(i), for each leaf i of the tree;
3. the center computes key kj = Fk(kl(j) ⊕ kr(j)), for each internal node j of the tree;
4. the center sends to each member Ui a private-key encryption, using shared key ski, of the keys associated with all nodes on the path from leaf i to the root.

Algorithm J, run by the center, using key k, goes as follows:

1. replace one leaf of the tree with an internal node having two new leaves as its children, call them a and b; associate the joining user with b and the user associated with the previous leaf with a;
2. generate a key sk to be shared between the center and the joining user;
3. compute key kb = Fk(b) and keys ki = Fk(i), for each leaf i of the tree;
4. compute key kj = Fk(kl(j) ⊕ kr(j)), for each internal node j of the tree;
5. for each node j on the path from leaf b to the root, send key kj to the joining user by encrypting it using key sk, and send kj through the multicast channel by encrypting it twice, once using key kl(j) and once using key kr(j).
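The center's key derivation (steps 2-3 of protocol I, and the analogous recomputations during updates) can be sketched as follows. This is a hypothetical illustration: HMAC-SHA256 stands in as a PRF for the block cipher F of the scheme.

```python
import hmac, hashlib

def F(key: bytes, data: bytes) -> bytes:
    """Stand-in PRF for the block cipher F."""
    return hmac.new(key, data, hashlib.sha256).digest()

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

def root_key(k: bytes, leaf_ids: list) -> bytes:
    """Recompute the whole tree bottom-up from the single center key k:
    k_i = F_k(i) for leaves, k_j = F_k(k_l(j) XOR k_r(j)) for internal nodes.
    Center key storage is O(1) (just k); time is linear in the group size."""
    level = [F(k, i.to_bytes(8, "big")) for i in leaf_ids]
    while len(level) > 1:
        level = [F(k, xor(level[j], level[j + 1]))
                 for j in range(0, len(level), 2)]
    return level[0]
```

Changing any leaf (as happens after a join or leave) changes the recomputed root key, i.e., the group key, which is what the forward and backward security arguments below rely on.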

Algorithm L, run by the center, using key k, goes as follows:

1. remove the leaf associated with the leaving user and bring its sibling, renamed with a new name a, up one level in the tree;
2. compute key ki = Fk(i), for each leaf i of the tree;
3. compute key kj = Fk(kl(j) ⊕ kr(j)), for each internal node j of the tree;
4. for each node j on the path from leaf a to the root, send kj through the multicast channel by encrypting it twice, once using key kl(j) and once using key kr(j).

Properties of (I, J, L). The correctness requirement is satisfied since at any time each group member is given the current key associated with the root of the tree. Note that this invariant is true after the execution of I and remains so after each execution of J and L.

To see that the forward security requirement is also satisfied, it is enough to observe two facts. First, during an execution of algorithm L, all keys on the path from the root to the leaf of a user leaving the group are always replaced by new ones (specifically, the parent of the leaving user gets a new key ka = Fk(a), and consequently all its ancestors are updated by the equation kj = Fk(kl(j) ⊕ kr(j))). Second, assuming that the block cipher used behaves like a pseudo-random permutation, with all but negligible probability (specifically, the probability that all inputs kl(j) ⊕ kr(j) to Fk are distinct), all keys in the tree, including the new root key and the future ones, are indistinguishable from random and independent keys by any adversary obtaining all keys of leaving users.

To see that the backward security requirement is also satisfied, it is enough to observe two facts.
First, during an execution of algorithm J, all keys on the path from the root to the leaf of a user joining the group are fresh and replace the previous ones (specifically, the joining user gets a new key kb = Fk(b), and all keys associated with its ancestors are updated by the equation kj = Fk(kl(j) ⊕ kr(j))). Second, assuming that the block cipher used behaves like a pseudo-random permutation, with all but negligible probability (specifically, the probability that all inputs kl(j) ⊕ kr(j) to Fk are distinct), all keys in the tree, including all previous root keys, are indistinguishable from random and independent keys by any adversary corrupting the joining user.

Performance of (I, J, L). The communication complexity for each update operation is O(log n), since it consists of sending twice all keys in a path from


a leaf to the root through the multicast channel. The user storage complexity is O(log n), since each user is given all keys on the path from the root to the leaf associated with the user herself. The center storage complexity is O(1), since only one key is stored by the center and all keys in the tree are computed using this key. The time complexity is O(n), since a linear number of block cipher applications are necessary in steps 4 of algorithm J and 3 of algorithm L. The scheme only uses an index to distinguish each user and implements a tree data structure over the group.

4 A Storage-Efficient and Computation-Efficient Scheme

The time complexity of the scheme described in the previous section is linear in the number of current group members. Here we propose a scheme that improves this computational complexity at the price of slightly increasing center storage, which still remains quite efficient (i.e., logarithmic). This scheme is the first to achieve complexity at most logarithmic in all four measures. More formally, we achieve the following: Theorem 2. There exists (constructively) a key-management protocol for multicast encryption, with communication complexity O(log n), user storage complexity O(log n), center storage complexity O(1), and time complexity O(log n), and making storage assumptions such as access to user indices, to a tree-based data structure for the group of users and to counters for each node in the tree. As for Theorem 1, we observe that the storage assumptions needed for our construction, stated in the statement of Theorem 2, are comparable to those made in all previous schemes in the literature. In the rest of the section we prove the above theorem. An Informal Description. In this scheme the center uniformly chooses a single key k and uses it to generate one intermediate key for each node of the tree. Then a counter is associated with each node of the tree, and the final key associated with a node is obtained as the output of block cipher F on input the current value of the counter, and using the intermediate key associated with that node. (Consequently, intermediate keys associated with internal nodes need not be stored since they can be recovered from key k; counters do need to be stored but they are significantly shorter than the keys.) When a user joins the group, a new leaf is added to the tree, and the center computes a new key associated with the joining user’s index and leaf and recomputes all the keys on the path from that leaf to the root. The center sends the new leaf key and the other keys on that path to the joining user who stores them. 
When a user leaves the group, the center increments all counters on the path from the leaf associated with this index to the root, and computes new keys for these nodes. These keys are then sent to the appropriate subsets of users, using the siblings' keys on the multicast channel.
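The counter-based derivation just described can be sketched as follows. This is our own code: HMAC-SHA256 again stands in for the block cipher F, and the 4-byte/8-byte encodings of node index and counter are our choice.

```python
import hmac
import hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    # Stand-in for the block cipher F with equal key and block length.
    return hmac.new(key, msg, hashlib.sha256).digest()[:16]

def node_key(k: bytes, j: int, c_j: int) -> bytes:
    """Final key of node j under center key k and counter c_j:
    q_j = F_k(j) is recomputed on demand (never stored), k_j = F_{q_j}(c_j)."""
    q_j = prf(k, j.to_bytes(4, "big"))
    return prf(q_j, c_j.to_bytes(8, "big"))

def leave(counters: dict, path: list) -> list:
    """A leave touches only the nodes on one root path: bump their counters;
    the affected keys are then rederived with node_key (O(log n) PRF calls)."""
    for j in path:
        counters[j] = counters.get(j, 0) + 1
    return path
```

Bumping a node's counter yields a fresh key without the center storing anything per node beyond the (short) counter itself, which is what buys the O(log n) time of Theorem 2.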

Efficient Re-keying Protocols for Multicast Encryption

127

A More Formal Description. Let m be the initial size of the group, let U1, ..., Um be the group members, and let ski be the secret key shared between user Ui and the center. Also, denote by cj a counter associated with node j, and by F a block cipher with equal key and block length (e.g., κ = l = 128). We now describe the protocol I and the algorithms J, L forming our scheme.

Protocol I goes as follows:
1. the center constructs a binary tree with m leaves and associates leaf i with user Ui
2. the center uniformly chooses and stores a key k
3. for each node j of the tree, the center stores cj = 0, computes the intermediate key qj = Fk(j), and associates key kj = Fqj(cj) with node j
4. the center sends to each member Ui, using shared key ski, an encryption of the keys associated with all nodes on the path from leaf i to the root.

Algorithm J, run by the center, using key k, goes as follows:
1. add one leaf to the tree, call it a, and associate it with the joining user
2. set counter ca = 0 and qa = Fk(a)
3. increment (by 1) counters cj for each node j in the path from a to the root
4. for each node j on this path, recompute intermediate key as qj = Fk(j), update final key as kj = Fqj(cj), and give key kj to the joining user
5. for each node j on this path, recompute intermediate keys ql(j) = Fk(l(j)) and qr(j) = Fk(r(j)) and final keys kl(j) = Fql(j)(cl(j)) and kr(j) = Fqr(j)(cr(j)), and send kj through the multicast channel by encrypting it twice, first using key kl(j) and then using key kr(j).

Algorithm L, run by the center, using key k, goes as follows:
1. remove the leaf associated with the leaving user and bring its sibling, renamed with a new name a, up one level in the tree
2. increment (by 1) counters cj for each node j in the path from a to the root
3. for each node j on this path, recompute intermediate keys as qj = Fk(j) and update final keys as kj = Fqj(cj)
4. for each node j on this path, recompute intermediate keys ql(j) = Fk(l(j)) and qr(j) = Fk(r(j)) and final keys kl(j) = Fql(j)(cl(j)) and kr(j) = Fqr(j)(cr(j)), and send kj through the multicast channel by encrypting it twice, first using key kl(j) and then using key kr(j).

Properties of (I, J, L). The correctness requirement is satisfied since at any time each group member is given the current key associated with the root of the tree. Note that this invariant is true after the execution of I and remains so after each execution of J and L. To see that the forward security requirement is also satisfied, it is enough to observe two facts. First, during an execution of algorithm L, all keys on the path from the root to the leaf of a user leaving the group are always replaced


Table 1. Complexity orders of the multicast encryption protocols discussed in this paper. The first three columns show results from other schemes in the literature with user storage at most logarithmic (resp. [8], [23,22], and [8]). The last two columns show the results of this paper (resp. Sections 3 and 4).

Evaluation parameters   Minimal storage   Basic tree         Improved tree      Our first          Our second
                        scheme            scheme             scheme             scheme             scheme
user storage            1                 log n              log n              log n              log n
center storage          1                 n                  n/log n            1                  1
communication           n                 log n              log n              log n              log n
computation             n                 log n              log n              n                  log n
storage assumptions     user index        user index, tree   user index, tree   user index, tree   user index, tree, counters

by new ones (specifically, the parent of the leaving user gets a new key ka = Fqa(ca), where qa = Fk(a), and consequently all its ancestors are updated by the equations cj = cj + 1, qj = Fk(j) and kj = Fqj(cj)). Second, by assuming that the block cipher used behaves like a pseudo-random permutation, with all but negligible probability (specifically, the probability that all values Fqj(cj) are distinct), all keys in the tree, including the new root key and the future ones, are indistinguishable from random and independent keys by any adversary obtaining all keys of leaving users. To see that the backward security requirement is also satisfied, it is enough to observe two facts. First, during an execution of algorithm J, all keys on the path from the root to the leaf of a user joining the group are fresh and replace the previous ones (specifically, the joining user gets a new key ka = Fqa(ca), where qa = Fk(a), and consequently all its ancestors are updated by the equations cj = cj + 1, qj = Fk(j) and kj = Fqj(cj)). Second, by assuming that the block cipher used behaves like a pseudo-random permutation, with all but negligible probability (specifically, the probability that all values Fqj(cj) are distinct), all keys in the tree, including all previous root keys, are indistinguishable from random and independent keys by any adversary corrupting the joining user. Performance of (I, J, L). The communication complexity for each update operation is O(log n), since it consists of sending twice all keys in a path from a leaf to the root through the multicast channel. The user storage complexity is O(log n), since each user is given all keys on the path from the root to the leaf associated with the user herself. The center storage complexity is O(1), since only one key is stored by the center and all keys in the tree are computed using this key.
The time complexity is O(log n), since at most a logarithmic number of block cipher applications are necessary in steps 3–5 of algorithm J and steps 2–4 of algorithm L. The scheme uses an index to distinguish each user, implements a tree data structure over the group, and keeps a counter for each node in the tree. We note that although these storage assumptions are quite reasonable in practice, they are somewhat less desirable than those in the scheme of Section 3.


5 Extension to Multiple Update Operations

As seen in previous sections, group membership is not static and varies as members join and leave. In this section we consider the situation when multiple members join or leave at the same time or within a sufficiently short period of time. (In fact, we will restrict without loss of generality to multiple leave operations only.) An immediate solution to handle multiple update operations would be to handle them one at a time. Here we note that in some special cases it is possible to obtain significantly more efficient procedures by simple modifications to the original scheme. In particular, we consider the case of subsets that are of interval shape, namely, an arbitrary subset of consecutive leaves in the tree. For such subsets, we show that processing m updates can be performed at about twice the communication and computation costs of a single update. (We also note that a similar result can be obtained for subsets that can be written as the union of a constant number of disjoint intervals.) Now we proceed more formally. Some Definitions. Let T be a binary tree and let LT be a list containing all leaves of T, from the leftmost to the rightmost. We define an interval I as a subset of consecutive nodes from list LT. For any subtree S of T, define list LS as above, and define a border interval I as a subset of consecutive nodes from list LS such that either the first node of I is the first node of LS or the last node of I is the last node of LS (in the former case, we say this is a left border interval, while in the latter we say it is a right border interval). For any interval I, define t(I) as equal to the communication complexity necessary to perform leave operations for all group members associated with the leaves in I. Our Contribution. We show the following two facts. Fact 1 For each interval I, it holds that t(I) ≤ 2t(J) + 1, where J is a border interval.
Fact 2 For each border interval J, it holds that t(J) ≤ log n, where n is the number of leaves of tree T. Clearly, by combining these two facts, we obtain that t(I) ≤ 2 log n + 1 for each interval I. (Note that this is a significant improvement over the t(I) = m log n performance of the immediate approach of handling each leave operation separately.) Analysis. We start by showing Fact 1. First of all, note that any interval I can be extended to an interval J such that J can be written as the concatenation of two intervals J1 and J2 such that J1 is a right border interval for the left subtree of T and J2 is a left border interval for the right subtree of T. By observing that communicating updates for the root key may require sending at most one key, we obtain that t(I) ≤ t(J1) + t(J2) + 1. Since both J1 and J2 are border intervals, and by the symmetry properties of binary trees, we obtain that t(Ja) ≤ t(J3−a), for some a ∈ {1, 2}, from which Fact 1 follows.


We now proceed to show Fact 2. First of all, we make two basic observations (a) and (b). First, consider the case in which the interval constitutes an entire subtree S of T; we see that in this case it becomes useless to update the keys associated with the internal nodes of S, and therefore the number of keys to be updated is equal to the distance between the root of S and the root of T (call this observation (a)). Now, consider the case in which the interval constitutes two subtrees S1 and S2 of T; by applying observation (a), we might derive that the number of keys to be updated is equal to the distance between the root of S1 and the root of T plus the distance between the root of S2 and the root of T. However, note that we would actually be overcounting, since the two paths share some nodes and it is enough to change the keys associated with the shared nodes only once rather than twice. Therefore, the keys to be updated are the keys on the path from the root of S1 to the root of T plus the keys on the path from the root of S2 to the root of T that have not been counted in the previous path (call this observation (b)). Now we go on toward the proof of Fact 2. Assume, for simplicity, that J is a left border interval. We now associate a list of subtrees to border interval J, as follows. Start with the subtree S1 of largest possible level that contains any prefix P1 of the nodes in J, and remove these nodes from J (i.e., set J1 = J \ P1). Then continue the same operation with J1 until no nodes are left. Let S1, . . . , Sk be the list of subtrees thus obtained. By the structure of binary trees, we derive that rl(Si) = rl(Si+1) − 1, for i = 1, . . . , k − 1, where by rl(S) we denote the distance of the root of subtree S from the root of tree T.
Now, note that by observation (a), processing the updates of leaves in subtree S1 requires at most one path of keys to be updated (precisely, the path from the root of S1 to the root of T); this amounts to at most rl(S1) ≤ log n keys to be transmitted. A similar bound would hold for all other subtrees S2, . . . , Sk; however, we can use observation (b) and note that each subtree Si requires only 1 additional key to be updated, given that the subtrees S1, . . . , Si−1 have already been processed. Therefore, the total communication is log n + k ≤ 2 log n, the last inequality following by observing that there can be at most log n subtrees defined as above.
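The counting argument behind Facts 1 and 2 can be checked by brute force. The sketch below is our own formulation: a node's key must be refreshed exactly when its subtree contains both leaving and surviving leaves (keys of fully vacated subtrees are useless, as in observation (a)).

```python
import math

def keys_to_update(n: int, leaving: set) -> int:
    """Number of internal-node keys to refresh when the leaves in `leaving`
    (indices 0..n-1, n a power of two) leave the group: a key needs updating
    iff its subtree contains both a leaving leaf and a surviving one."""
    depth = int(math.log2(n))
    cnt = 0
    for d in range(depth):               # internal levels; root at d = 0
        span = 1 << (depth - d)          # leaves under one node at level d
        for idx in range(1 << d):
            sub = set(range(idx * span, (idx + 1) * span))
            if sub & leaving and sub - leaving:
                cnt += 1
    return cnt
```

For n = 64 and an interval of m = 18 leaving leaves, the count stays within the 2 log n + 1 = 13 bound, versus m log n = 108 key updates for one-at-a-time processing.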

6 Conclusion

In this paper we proposed new re-keying protocols for multicast encryption that process one update with logarithmic complexity in all measures considered in the literature, namely, the number of keys stored by the user and by the center, and the communication and time complexity per update. We analyzed the performance of the family of tree-based multicast schemes with respect to multiple update operations, and showed that for a particular family of such updates, we can modify these schemes so that they guarantee essentially optimal communication complexity while keeping the user storage complexity small. Our protocols process m updates from a particular family with communication complexity and user storage complexity O(log n). Other protocols in the literature ([12]) process m updates of any type with communication complexity O(m) and user storage complexity O(log^{1+ε} n). We believe that closing this gap is an interesting open problem.


References

1. J. Anzai, N. Matsuzaki, and T. Matsumoto. A quick group key distribution scheme with entity revocation. In Proceedings of "Advances in Cryptology – ASIACRYPT'99", Lecture Notes in Computer Science, Springer Verlag, 1999.
2. K. Becker and U. Wille. Communication complexity of group key distribution. In Proceedings of the 5th ACM Conference on Computer and Communication Security, San Francisco, CA, November 1998.
3. M. Burmester and Y. Desmedt. A secure and efficient conference key distribution system. In Proceedings of "Advances in Cryptology – EUROCRYPT'94", Lecture Notes in Computer Science, 1994.
4. C. Blundo and A. Cresti. Space requirements for broadcast encryption. In Proceedings of "Advances in Cryptology – EUROCRYPT'94", Lecture Notes in Computer Science, 1994.
5. C. Blundo, L. Frota Mattos, and D. Stinson. Trade-offs between communication and storage in unconditionally secure systems for broadcast encryption and interactive key-distribution. In Proceedings of "Advances in Cryptology – CRYPTO'96", Lecture Notes in Computer Science, 1996.
6. C. Blundo, A. De Santis, A. Herzberg, S. Kutten, U. Vaccaro, and M. Yung. Perfectly-secure key distribution for dynamic conferences. In Proceedings of "Advances in Cryptology – CRYPTO'92", Lecture Notes in Computer Science, 1993.
7. R. Canetti, J. Garay, G. Itkis, D. Micciancio, M. Naor, and B. Pinkas. Multicast security: A taxonomy and efficient authentication. In IEEE INFOCOM, 1999.
8. R. Canetti, T. Malkin, and K. Nissim. Efficient communication-storage tradeoffs for multicast encryption. In Proceedings of "Advances in Cryptology – EUROCRYPT'99", Lecture Notes in Computer Science, Springer Verlag, 1999.
9. R. Canetti and B. Pinkas. A taxonomy of multicast security issues. Internet Draft, August 2000.
10. I. Chang, R. Engel, D. Kandlur, D. Pendarakis, and D. Saha. Key management for secure internet multicast using boolean function minimization techniques. In IEEE INFOCOM, 1999.
11. A. Fiat and M. Naor. Broadcast encryption. In Proceedings of "Advances in Cryptology – CRYPTO'93", Lecture Notes in Computer Science, Springer Verlag, 1994.
12. D. Halevy and A. Shamir. The LSD broadcast encryption scheme. In Proceedings of "Advances in Cryptology – CRYPTO'02", Lecture Notes in Computer Science, Springer Verlag, 2002.
13. I. Ingemarsson, D. Tang, and C. Wong. A conference key distribution system. IEEE Transactions on Information Theory, 28(5):714–720, September 1982.
14. Y. Kim, A. Perrig, and G. Tsudik. Simple and fault-tolerant key agreement for dynamic collaborative groups. In Proceedings of the 7th ACM Conference on Computer and Communication Security, CCS'00, November 2000.
15. M. Luby and J. Staddon. Combinatorial bounds for broadcast encryption. In Proceedings of "Advances in Cryptology – EUROCRYPT'98", Lecture Notes in Computer Science, Springer Verlag, 1998.
16. D. Naor, M. Naor, and J. Lotspiech. Revocation and tracing schemes for stateless receivers. In Proceedings of "Advances in Cryptology – CRYPTO'01", Lecture Notes in Computer Science, Springer Verlag, 2001.
17. A. Perrig. Efficient collaborative key management protocols for secure autonomous group communication. In CryptTEC, 1999.


18. R. Poovendran and J. Baras. An information theoretic analysis of rooted-tree based secure multicast key distribution. In Proceedings of "Advances in Cryptology – CRYPTO'99", Lecture Notes in Computer Science, Springer Verlag, 1999.
19. R. Safavi-Naini and H. Wang. New constructions for multicast re-keying schemes using perfect hash families. In Proceedings of the 7th ACM Conference on Computer and Communication Security, CCS'00, Athens, Greece, 2000.
20. D. Steer, L. Strawczynski, W. Diffie, and M. Wiener. A secure audio teleconference system. In Proceedings of "Advances in Cryptology – CRYPTO'88", Lecture Notes in Computer Science, Springer Verlag, Santa Barbara, CA, August 1988.
21. M. Steiner, G. Tsudik, and M. Waidner. Key agreement in dynamic peer groups. IEEE Transactions on Parallel and Distributed Systems, 2000.
22. D. Wallner, E. Harder, and R. Agee. Key management for multicast: Issues and architectures. RFC 2627, June 1999.
23. C. Wong, M. Gouda, and S. Lam. Secure group communication using key graphs. In Proceedings of ACM SIGCOMM'98, September 1998.

On a Class of Key Agreement Protocols Which Cannot Be Unconditionally Secure

Frank Niedermeyer and Werner Schindler

Bundesamt für Sicherheit in der Informationstechnik (BSI)
Godesberger Allee 185–189, 53175 Bonn, Germany
{Frank.Niedermeyer,Werner.Schindler}@bsi.bund.de

Abstract. In [5] a new key agreement protocol called CHIMERA was introduced which was supposed to be unconditionally secure. In this paper an attack against CHIMERA is described which needs little memory and computational power and is successful almost with probability 1. The bug in the security proof in [5] is explained. Further, it is shown that a whole class of CHIMERA-like key agreement protocols cannot be unconditionally secure.

Keywords: Key agreement protocol, unconditional security, CHIMERA

1 Introduction

Exchanging and storing session keys has been one of the fundamental problems in cryptography. Various approaches are possible:

– Classical Method. The session keys are stored on a punched tape, floppy disk, or smart card. The storing device is delivered by a courier.
– Common Method. The keys are exchanged over an insecure channel using a public key protocol. Only the authorized receiver can read (RSA-based), resp. derive (Diffie-Hellman-like protocols), the session keys (cf. [4] for various protocols).
– Futuristic Method 1. The key bits are transmitted as a quantum bit-stream. An eavesdropper's measurement of the bit-stream will influence the bits by the uncertainty principle of quantum physics. Thus the key exchange protocol will fail in the end, which in turn makes the knowledge gained by the eavesdropper worthless (cf. e.g. [1]).
– Futuristic Method 2. Random bits are broadcast by a satellite with very low power. Thus it is only possible to receive a noisy version of these bits. Potential communication partners exchange messages depending on their received versions of the bit-stream sent by the satellite, which finally yield a common key. An attacker who also receives a noisy version of the random bits shall not be able to determine the key (cf. [2]).

Each method has particular drawbacks. Key delivery by a courier is involved, costly, and not applicable to open networks. Moreover, the courier may not be trustworthy. Public key cryptography requires a public key infrastructure. Public key algorithms might be broken in the future (e.g. by unexpected breakthroughs in mathematics or with quantum computers), and the transmitted


messages might still be relevant then (consider electronic voting, for instance). Smart cards storing and handling secret public key parameters may be broken. The quantum method is unconditionally secure if the fundamental assumptions of quantum theory are correct. However, quantum cryptography is still in its infancy and it will be costly. Apart from other problems (cf. Example 4), it seems to be a minimal condition that the authority who operates a broadcasting satellite (Futuristic Method 2) must be trustworthy, as she knows the correct bit sequences. Recently, a new key agreement protocol called CHIMERA was proposed (cf. [5]) which should ensure privacy. Alice and Bob have no common secret but independently choose random start values x and y. Then they exchange a number of messages which depend on x and y. Finally, Alice and Bob use their start values and the exchanged messages to derive a session key k. Moreover, in [5] a proof was given that even an eavesdropper with unlimited computational power who has observed all messages is not able to compute k, or to guess k with a better chance than without knowing the exchanged messages (which implied that CHIMERA was unconditionally secure). Though its performance is not efficient, at least for very sensitive information CHIMERA seemed to provide a perfect solution to the key exchange problem, as far as secrecy is concerned. CHIMERA could be implemented as an 'emergency protocol' to be used if the standard protocol turns out to be broken. Of course, authentication would have to be ensured using appropriate mechanisms in this scenario, for example by exchanging hash values encrypted with the session key k, where the hash values depend on a secret passphrase, the actual key k, and the user's identity. As an important advantage, no cryptographic key needs to be stored in the cryptographic devices. Unfortunately, CHIMERA is not unconditionally secure but rather totally insecure.
In this paper an attack is described with which CHIMERA can be broken with little computational power and little memory. The paper is organized as follows. In Subsect. 2.1 CHIMERA is described. The theoretical foundations of the attack and the results of practical experiments are given in Subsects. 2.3 and 2.4. Consequently, the proof in [5] which should ensure unconditional security must be false. The bug is explained in Subsect. 2.5. Moreover, in Sect. 3 very general classes of CHIMERA-like key agreement protocols are defined. It will be shown that none of these key agreement protocols is unconditionally secure.

2 CHIMERA

2.1 Description

Referring to [5] we describe the key agreement protocol CHIMERA. Alice and Bob generate independently random bit sequences (a0i )i

Let k > 1, let r be a prime factor of Φ_k(ℓ), and let d be an integer satisfying 1 ≤ d ≤ deg Φ_k/2. Set n = hr for some h, q = n + ℓ^d, and t = ℓ^d + 1. It follows from lemma 1 that r satisfies r | ℓ^{kd} − 1, and in general r ∤ ℓ^{id} − 1 for 0 < i < k. The restriction d ≤ deg Φ_k/2 is imposed to ensure the Hasse bound is satisfied.

Theorem 1. The choice of parameters proposed above leads to curves containing a subgroup of order r with embedding degree at most k.

Proof. From the condition q = hr + ℓ^d it follows that q^k − 1 ≡ ℓ^{dk} − 1 (mod r). Since Φ_k(ℓ) | ℓ^{dk} − 1 [17, theorem 2.45(i)], the restriction r | Φ_k(ℓ) implies r | ℓ^{dk} − 1, that is, ℓ^{dk} − 1 ≡ 0 (mod r). Therefore, q^k − 1 ≡ 0 (mod r), that is, r | q^k − 1. □

An important observation here is that we still must verify that r ∤ q^i − 1 for 0 < i < k, even for such a special case as r = Φ_k(ℓ). Since ℓ^i − 1 = ∏_{u|i} Φ_u(ℓ), it is obvious that Φ_k(ℓ) ∤ ℓ^i − 1, and hence, apparently r ∤ ℓ^i − 1. However, this reasoning is wrong: the relation Φ_k(x) ∤ x^i − 1 only holds for the polynomials themselves, not necessarily for some specific argument ℓ.

In contrast to the original work by Miyaji et al., the above criteria are not exhaustive, and it is not difficult to find other conditions leading to perfectly valid parameters. For instance, an obvious generalization is q = n + Φ_k(ℓ)g(ℓ) + ℓ^d, for any polynomial g(ℓ). This does not help much, because in general Φ_k(ℓ)g(ℓ) makes the trace t too big to satisfy the Hasse bound, except when the term ℓ^d cancels the term of highest degree in Φ_k(ℓ)g(ℓ) and the remaining terms are of suitably low degree (for example, for k = 9, by picking an appropriate g, one can obtain q = n − ℓ^3 − 1).

Also, if d is even, k is even, and k/2 is odd, it can be verified that setting t = −ℓ^d + 1 and q = hr − ℓ^d is equally possible, that is, r | q^k − 1 (the restriction to even k such that k/2 is odd ensures that q^{k/2} − 1 ≡ −(ℓ^{dk/2} + 1) ≢ 0 (mod r)). Sometimes, fortuitous solutions barely resembling (but related to) the MNT criteria can be found for particular choices of k. However, for simplicity we will focus on the parameters considered in theorem 1; extending the discussion below to other parameters should not be difficult, and would be hardly necessary in practice.
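Theorem 1 is easy to check numerically. The sketch below is our own: Φ_k is evaluated through the Möbius-function product formula, and the toy values k = 6, ℓ = 4, h = 1, d = 1 are illustrative choices, not parameters from the paper.

```python
def mobius(n: int) -> int:
    # Moebius function by trial division (adequate for small n).
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            res = -res
        p += 1
    return -res if n > 1 else res

def cyclotomic_at(k: int, l: int) -> int:
    """Evaluate Phi_k(l) via Phi_k(x) = prod_{d | k} (x^d - 1)^{mu(k/d)}."""
    num = den = 1
    for d in range(1, k + 1):
        if k % d == 0:
            m = mobius(k // d)
            if m == 1:
                num *= l**d - 1
            elif m == -1:
                den *= l**d - 1
    return num // den

# Toy parameters in the shape of Theorem 1: k = 6, l = 4, h = 1, d = 1.
r = cyclotomic_at(6, 4)   # Phi_6(4) = 13 (prime), so take r = 13
q = 1 * r + 4             # q = h*r + l^d = 17, t = l^d + 1 = 5
assert pow(q, 6, r) == 1                            # r divides q^6 - 1 ...
assert all(pow(q, i, r) != 1 for i in range(1, 6))  # ... and no smaller power
```

The last two checks are exactly the point of the surrounding remark: r | q^k − 1 must be complemented by verifying r ∤ q^i − 1 for 0 < i < k.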

3 Solving the CM Equation

The strategy to build curves given the above criteria seems straightforward: choose ℓ and h,² find a prime q and the corresponding trace t according to the

² Miyaji et al. consider only h = 1, that is, curves of prime order.

260

Paulo S.L.M. Barreto, Ben Lynn, and Michael Scott

proposed relations, solve for the CM discriminant D (and for V) the CM equation DV^2 = 4q − t^2, or equivalently, DV^2 = 4n − (t − 2)^2, and use the CM method to compute the curve equation coefficients. Since n = hr and r | Φ_k(ℓ), we can write n = mΦ_k(ℓ) for some m. Thus, the CM equation for the parameter criteria given in section 2.1 has the form:

    DV^2 = 4mΦ_k(ℓ) − (ℓ^d − 1)^2.    (1)

Unfortunately, this approach is not practical, because in general the CM discriminant D is too large (comparable to q), and cryptographically significant parameters would have q ≈ 2^160 at least. It is possible to find toy examples using this direct approach, though. For instance, the curve E : y^2 = x^3 − 3x + 183738738969463 over F_449018176625659 has 4r points, where r = 112254544155601. The subgroup of order r has embedding degree k = 12. This curve satisfies m = 4, r = Φ_k(ℓ), t = ℓ + 1, and q = 4Φ_k(ℓ) + ℓ for ℓ = 3255. The CM equation is DV^2 = 4q − t^2, where D = 13188099 and V = 11670, and the class number is 2940. Miyaji et al. solve this problem for k = 3, 4, 6 by noticing that the CM equation leads, in these cases, to a quadratic Diophantine equation reducible to Pell's equation, whose solution is well known [26]. The case of arbitrary k is much harder, since no general method is known to solve Diophantine equations of degree deg(Φ_k) ≥ 4. However, Tzanakis [28] describes how to solve quartic elliptic Diophantine equations of the form V^2 = aℓ^4 + bℓ^3 + cℓ^2 + dℓ + e^2, where a > 0 (notice the squared independent term). For k = 5, 8, 10, 12, the degree of Φ_k is 4, so that equation 1 has the form DV^2 = aℓ^4 + bℓ^3 + cℓ^2 + dℓ + f. If a solution to this equation in small integers is known (as can often be found by exhaustive search), this equation reduces to Tzanakis form by multiplying both sides by D and applying a linear transformation involving the known solution, so that the independent term of the transformed equation is a perfect square. Unfortunately again, this approach has proven unsuccessful in practice. Using the Magma implementation of Tzanakis' method, we were not able to find any cryptographically significant examples of curves with k = 5, 8, 10, 12, the only such cases being those where D is too large for traditional CM methods.
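The direct approach amounts to splitting 4q − t² into a square-free discriminant D times a square V². A naive trial-division sketch (our own code; usable only at toy sizes, since in general D is comparable to q):

```python
def cm_discriminant(q: int, t: int):
    """Write 4q - t^2 as D * V^2 with D square-free, by stripping every
    square divisor through trial division (toy sizes only)."""
    m = 4 * q - t * t
    V, d = 1, 2
    while d * d <= m:
        while m % (d * d) == 0:
            m //= d * d
            V *= d
        d += 1
    return m, V
```

For the toy curve above one can check directly, without running the slow factorization, that D = 13188099 and V = 11670 indeed satisfy D·V² = 4q − t² for q = 449018176625659 and t = 3256.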
We have not tried more recent variants of the CM method like that of [1], so there is still hope that solutions of equation 1 with large D are actually practical. But even if this direct approach remains out of reach, there are successful ways to generate suitable field and curve parameters, as we show next.

3.1 A Particular Case

Let p be a prime (not to be confused with the finite field size q). We describe how to find algebraic solutions of equation 1 for the special case D = 3, d = 1, and k = 3^i 2^j p^s, for certain exponents i, j, s and prime p > 3. In principle, this method enables the ratio m/r to get arbitrarily small for large k if s = 0.

Constructing Elliptic Curves with Prescribed Embedding Degrees


The cyclotomic polynomials are known [21] to satisfy the following properties. If v is a prime dividing u, then Φ_{uv}(x) = Φ_u(x^v). On the other hand, if v ∤ u, then Φ_{uv}(x) = Φ_u(x^v)/Φ_u(x). Using these properties, it is easy to show by induction that for all i > 0,

    Φ_{3^i}(ℓ) = ℓ^{2·3^{i−1}} + ℓ^{3^{i−1}} + 1   and   Φ_{2^i·3}(ℓ) = ℓ^{2^i} − ℓ^{2^{i−1}} + 1.

Restrict ℓ so that ℓ ≡ 1 (mod 3). Thus, 4Φ_{3^i}(ℓ)/3 − 1 = 3[(2ℓ^{3^{i−1}} + 1)/3]^2 and 4Φ_{2^i·3}(ℓ) − 3 = (2ℓ^{2^{i−1}} − 1)^2. In the first case, multiplying both sides by (ℓ − 1)^2 leads to 4·(ℓ − 1)^2 Φ_{3^i}(ℓ)/3 − (ℓ − 1)^2 = 3[(ℓ − 1)(2ℓ^{3^{i−1}} + 1)/3]^2, which gives the solution

    k = 3^i,  r = Φ_{3^i}(ℓ)/3,  t = ℓ + 1,  m = (ℓ − 1)^2,  V = (ℓ − 1)(2ℓ^{3^{i−1}} + 1)/3.

In the second case, multiplying both sides by (ℓ − 1)^2/3 leads to 4·[(ℓ − 1)^2/3]Φ_{2^i·3}(ℓ) − (ℓ − 1)^2 = 3[(ℓ − 1)(2ℓ^{2^{i−1}} − 1)/3]^2, which gives the solution

    k = 2^i·3,  r = Φ_{2^i·3}(ℓ),  t = ℓ + 1,  m = (ℓ − 1)^2/3,  V = (ℓ − 1)(2ℓ^{2^{i−1}} − 1)/3.

In both cases, we assume that q = mr + ℓ is prime. Similarly, one can show by induction that, for any prime p > 3 and for all i, j > 0,

    Φ_{3^i p^j}(ℓ) = [(2ℓ^{3^{i−1}p^j} + 1)^2 + 3] / [(2ℓ^{3^{i−1}p^{j−1}} + 1)^2 + 3].

Multiplying both sides by 12z^2[(2ℓ^{3^{i−1}p^{j−1}} + 1)^2 + 3] for any z leads to 4·3z^2[(2ℓ^{3^{i−1}p^{j−1}} + 1)^2 + 3]Φ_{3^i p^j}(ℓ) − (6z)^2 = 3[2z(2ℓ^{3^{i−1}p^j} + 1)]^2. Choosing r to be any large factor of Φ_{3^i p^j}(ℓ), this gives the solution

    k = 3^i p^j,  ℓ = 6z + 1,  n = 3z^2[(2ℓ^{3^{i−1}p^{j−1}} + 1)^2 + 3]Φ_{3^i p^j}(ℓ),  V = 2z(2ℓ^{3^{i−1}p^j} + 1).

It is also straightforward (but tedious) to show that

    Φ_{3^i 2^j p^s}(ℓ) = [(2ℓ^{3^{i−1}2^{j−1}p^s} − 1)^2 + 3] / [(2ℓ^{3^{i−1}2^{j−1}p^{s−1}} − 1)^2 + 3],

hence 4·3z^2[(2ℓ^{3^{i−1}2^{j−1}p^{s−1}} − 1)^2 + 3]Φ_{3^i 2^j p^s}(ℓ) − (6z)^2 = 3[2z(2ℓ^{3^{i−1}2^{j−1}p^s} − 1)]^2, which gives the solution

    k = 3^i 2^j p^s,  ℓ = 6z + 1,  m = 3z^2[(2ℓ^{3^{i−1}2^{j−1}p^{s−1}} − 1)^2 + 3],  V = 2z(2ℓ^{3^{i−1}2^{j−1}p^s} − 1).


In all cases, it is necessary to ensure that q = mΦ_k(ℓ) + ℓ is prime. Appendix A contains a detailed example of this method. It is unclear whether this strategy can be extended to more general k, since the corresponding expressions get very involved. Moreover, this method only produces solutions for D = 3, which could potentially have a lower security level, even though no specific vulnerability based on the small D value is known at the time of this writing [4, section VIII.2]. The next method we describe is suitable for much more general D.

3.2  A General Method

The general form of the criteria we have been considering is n = mΦ_k(ℓ), t = ℓ^d + 1, q = n + t − 1 = mΦ_k(ℓ) + ℓ^d, where 1 ≤ d ≤ deg Φ_k/2. Usually one wants m to be small and Φ_k(ℓ) to contain a large prime factor r (the best case is then Φ_k(ℓ) itself being prime). However, finding solutions under these conditions is hard for any k such that deg Φ_k > 2. Therefore, we relax the restrictions on m, say, by allowing m to be comparable to r. It turns out that obtaining suitable parameters then becomes quite easy.

Consider the CM equation DV^2 = 4mΦ_k(ℓ) − (ℓ^d − 1)^2. We assume that both D and ℓ are chosen; we want to find a solution m (and V) to this equation. For convenience, let A = 4Φ_k(ℓ) and B = (ℓ^d − 1)^2, so that the equation reads DV^2 = Am − B.

Initially, find the smallest m0 ≥ 0 such that D | Am0 − B, that is, Am0 − B = z0·D for some z0. If A is invertible modulo D, then m0 = B/A (mod D) and z0 = (Am0 − B)/D. If A is not invertible modulo D but B is a multiple of D, then m0 = 0 and z0 = −B/D. Otherwise, there is no solution for this choice of ℓ and D. Thus we ensure that m0 is never larger than D. Define mi = m0 + iD and zi = z0 + iA. Substituting these into the CM equation gives Dzi = Ami − B. This means that any solution to this equation involves a zi that is a perfect square. Therefore, solve V^2 = z0 + iA for V and i, and pick the smallest solution i such that q = mi·Φ_k(ℓ) + ℓ^d is prime. This requires z0 to be a quadratic residue (QR) modulo A. If so, all solutions iα can be written as iα = i0 + αA, where i0 = (V0^2 − z0)/A and V0 = √z0 (mod A).

A neat strategy to obtain a tight ratio between log q and log r is to restrict the search to i0 alone and vary only ℓ. Experiments we conducted showed that, in practice, m tends to be close to r. Nevertheless, such solutions are perfectly suitable for most pairing-based cryptosystems, the only exception being the short signature scheme of [6]. Appendix B contains examples of this method.
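A minimal sketch of this search, under assumed toy parameters (k = 7, D = 3, ℓ = 2, d = 1) and with the primality condition on q skipped; integer-square detection via isqrt stands in for the quadratic-residue computation:

```python
# Toy run of the general method above: fix D and l, let A = 4*Phi_k(l) and
# B = (l - 1)^2 (the d = 1 case), find the smallest m0 >= 0 with
# D | A*m0 - B, then walk the progression m_i = m0 + i*D until
# z_i = (A*m_i - B)/D is a perfect square V^2, so that the CM equation
# D*V^2 = 4*m_i*Phi_k(l) - (l - 1)^2 holds. (Sketch only: tiny numbers,
# and the primality check on q = m*Phi_k(l) + l is omitted.)
from math import gcd, isqrt

def phi_7(l):
    # Phi_7(x) = x^6 + x^5 + ... + x + 1 (embedding degree k = 7)
    return sum(l ** e for e in range(7))

def find_cm_solution(l, D, max_steps=5000):
    A, B = 4 * phi_7(l), (l - 1) ** 2
    if gcd(A, D) == 1:
        m0 = B * pow(A, -1, D) % D
    elif B % D == 0:
        m0 = 0
    else:
        return None                  # no solution for this l and D
    for i in range(max_steps):       # search the progression m_i = m0 + i*D
        m = m0 + i * D
        z = (A * m - B) // D
        if z < 0:
            continue
        V = isqrt(z)
        if V * V == z:
            return m, V
    return None

l, D = 2, 3
m, V = find_cm_solution(l, D)
assert D * V * V == 4 * m * phi_7(l) - (l - 1) ** 2   # CM equation satisfied
print(m, V)
```

For these values the very first progression element already works (m = 1, V = 13, since z0 = 169 = 13^2).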

Constructing Elliptic Curves with Prescribed Embedding Degrees

4  Implementation Issues

Since curves with medium-sized embedding degree k can be effectively constructed as described above, the natural question to ask is how to efficiently implement the underlying arithmetic and, particularly, the Tate pairing. We restrict the discussion to even k.

Let even(F_{q^k}) denote the subset of F_{q^k} consisting of polynomials whose component monomials are all of even degree, i.e. even(F_{q^k}) = {u ∈ F_{q^k} : u(x) = a_{k−2} x^{k−2} + a_{k−4} x^{k−4} + · · · + a_0}. Similarly, let odd(F_{q^k}) denote the subset of F_{q^k} consisting of polynomials whose component monomials are all of odd degree, i.e. odd(F_{q^k}) = {u ∈ F_{q^k} : u(x) = a_{k−1} x^{k−1} + a_{k−3} x^{k−3} + · · · + a_1 x}.

We propose representing F_{q^k} as F_q[x]/R_k(x) with a reduction polynomial of the form R_k(x) = x^k + x^2 + ω for some ω ∈ F_q. This choice is motivated by the following analysis.

Lemma 2. If R(x) = x^k + x^2 + ω is irreducible over F_q, then r(x) = x^{k/2} + x + ω is irreducible over F_q.

Proof. By contradiction. If r(x) = f(x)g(x) for some f, g ∈ F_q[x], then R(x) = r(x^2) = f(x^2)g(x^2), against the hypothesis that R(x) is irreducible. □

This establishes that the mapping ψ : F_q[x]/r(x) → F_q[x]/R(x), ψ(f) = F such that F(x) = f(x^2), induces an isomorphism between F_{q^{k/2}} and even(F_{q^k}). Notice that this lemma would remain valid if R contained more even-degree monomials, but a trinomial is better suited for efficient implementations. However, it is possible that an irreducible binomial R(x) = x^k + ω exists, in which case it would be an even better choice.

Lemma 3. Let Q = (u, v) ∈ E(F_{q^k}) where E : v^2 = f(u), u ∈ even(F_{q^k}), and f(u) is a quadratic nonresidue. Then v ∈ odd(F_{q^k}).

Proof. Notice that f(u) ∈ even(F_{q^k}). Let v(x) = α(x) + xβ(x), where α, β ∈ even(F_{q^k}). Then v^2 = (α^2 + x^2 β^2) + x(2αβ) ∈ even(F_{q^k}), so that either α = 0 or β = 0. But β = 0 would mean that f(u) = α^2 is a quadratic residue, against the hypothesis. Therefore, α = 0, that is, v ∈ odd(F_{q^k}). □

We are now prepared to address the main theorem:

Theorem 2.
Let Q = (u, v) ∈ E(F_{q^k}) where E : v^2 = f(u), u ∈ even(F_{q^k}), and f(u) is a quadratic nonresidue. If S = (s, t) ∈ ⟨Q⟩, then s ∈ even(F_{q^k}) and t ∈ odd(F_{q^k}).

Proof. This is a consequence of the elliptic addition rules [25, algorithm 2.3]. It is straightforward but tedious to show that the point addition formula and the doubling formula do satisfy the theorem. Thus, we only have to let S = mQ, and proceed by induction on m. □

This way, curve points Q = (u, v) ∈ E(F_{q^k}), where f(u) ∈ even(F_{q^k}) is a quadratic nonresidue, are not only suitable for the computation of the Tate pairing e(P, Q) for any P ∈ E(F_q) (because obviously Q is linearly independent from P); they also have the nice property that the technique of denominator elimination [3, section 5.1] is applicable, thus nearly doubling the performance of Miller's algorithm.
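The even/odd bookkeeping above can be illustrated in a few lines of code. This is a sketch with assumed toy parameters (q = 11, k = 4, ω = 7, chosen for illustration only, with no claim that R(x) is irreducible; the parity-preservation argument only uses the reduction rule x^k → −(x^2 + ω)):

```python
# Demo of the even/odd structure: reducing x^k to -(x^2 + w) preserves degree
# parity (k is even), so even * even and odd * odd land in even(F_{q^k}),
# while even * odd lands in odd(F_{q^k}).
q, k, w = 11, 4, 7                    # toy parameters (assumed, not from the paper)

def mul(u, v):
    """Multiply coefficient lists (low degree first) modulo x^k + x^2 + w over F_q."""
    prod = [0] * (2 * k - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            prod[i + j] = (prod[i + j] + a * b) % q
    for d in range(2 * k - 2, k - 1, -1):
        c = prod[d]
        if c:                         # x^d = x^(d-k) * x^k = -x^(d-k) * (x^2 + w)
            prod[d] = 0
            prod[d - k + 2] = (prod[d - k + 2] - c) % q
            prod[d - k] = (prod[d - k] - c * w) % q
    return prod[:k]

def is_even(u):                       # only even-degree monomials present
    return all(c == 0 for d, c in enumerate(u) if d % 2 == 1)

def is_odd(u):                        # only odd-degree monomials present
    return all(c == 0 for d, c in enumerate(u) if d % 2 == 0)

e1, e2 = [3, 0, 5, 0], [1, 0, 9, 0]   # elements of even(F_{q^k})
o1 = [0, 4, 0, 2]                     # an element of odd(F_{q^k})
print(is_even(mul(e1, e2)), is_even(mul(o1, o1)), is_odd(mul(e1, o1)))
```

The odd-times-odd case is exactly the computation in the proof of Lemma 3: the square of an odd element is even.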

5  Conclusion

We have shown how to effectively solve the problem of constructing elliptic curves with prescribed embedding degree, and suggested ways to efficiently implement the resulting curves so that pairing-based cryptosystems are practical. A natural line of further research is to investigate whether curves of any given embedding degree can be constructed for general abelian varieties. We also point out that the problem of generating elliptic curves of prime or near-prime order remains open; such curves are important in certain cryptosystems, like the BLS signature scheme [6]. We are grateful to Steven Galbraith and Frederik Vercauteren for their valuable comments and feedback during the preparation of this work.

References

1. A. Agashe, K. Lauter, R. Venkatesan, "Constructing elliptic curves with a given number of points over a finite field," Cryptology ePrint Archive, Report 2001/096, http://eprint.iacr.org/2001/096/.
2. R. Balasubramanian, N. Koblitz, "The improbability that an elliptic curve has subexponential discrete log problem under the Menezes–Okamoto–Vanstone algorithm," Journal of Cryptology, Vol. 11, No. 2, 1998, pp. 141–145.
3. P.S.L.M. Barreto, H.Y. Kim, B. Lynn, M. Scott, "Efficient algorithms for pairing-based cryptosystems," Cryptology ePrint Archive, Report 2002/008, http://eprint.iacr.org/2002/008/.
4. I. Blake, G. Seroussi and N. Smart, "Elliptic Curves in Cryptography," Cambridge University Press, 1999.
5. D. Boneh and M. Franklin, "Identity-based encryption from the Weil pairing," Advances in Cryptology – Crypto 2001, Lecture Notes in Computer Science 2139, pp. 213–229, Springer-Verlag, 2001.
6. D. Boneh, B. Lynn, and H. Shacham, "Short signatures from the Weil pairing," Asiacrypt 2001, Lecture Notes in Computer Science 2248, pp. 514–532, Springer-Verlag, 2002.
7. R. Crandall and C. Pomerance, "Prime Numbers: A Computational Perspective," Springer-Verlag, 2001.
8. R. Dupont, A. Enge, F. Morain, "Building curves with arbitrary small MOV degree over finite prime fields," Cryptology ePrint Archive, Report 2002/094, http://eprint.iacr.org/2002/094/.
9. G. Frey, M. Müller, and H. Rück, "The Tate pairing and the discrete logarithm applied to elliptic curve cryptosystems," IEEE Transactions on Information Theory, 45(5), pp. 1717–1719, 1999.
10. G. Frey and H. Rück, "A remark concerning m-divisibility and the discrete logarithm in the divisor class group of curves," Mathematics of Computation, 62 (1994), pp. 865–874.
11. S.D. Galbraith, K. Harrison, D. Soldera, "Implementing the Tate pairing," Algorithmic Number Theory – ANTS V, 2002, to appear.
12. F. Hess, "Exponent group signature schemes and efficient identity based signature schemes based on pairings," Cryptology ePrint Archive, Report 2002/012, http://eprint.iacr.org/2002/012/.
13. IEEE Std 1363-2000, "Standard Specifications for Public-Key Cryptography," 2000.
14. A. Joux, "A one round protocol for tripartite Diffie–Hellman," Algorithmic Number Theory Symposium – ANTS IV, Lecture Notes in Computer Science 1838, pp. 385–394, Springer-Verlag, 2000.
15. A. Joux and K. Nguyen, "Separating Decision Diffie–Hellman from Diffie–Hellman in cryptographic groups," Cryptology ePrint Archive, Report 2001/003, http://eprint.iacr.org/2001/003/.
16. G.J. Lay, H.G. Zimmer, "Constructing elliptic curves with given group order over large finite fields," Algorithmic Number Theory Symposium – ANTS I, Lecture Notes in Computer Science 877 (1994), pp. 250–263.
17. R. Lidl and H. Niederreiter, "Introduction to Finite Fields and Their Applications," Cambridge University Press, 1986.
18. A. Menezes, T. Okamoto and S. Vanstone, "Reducing elliptic curve logarithms to logarithms in a finite field," IEEE Transactions on Information Theory, 39 (1993), pp. 1639–1646.
19. A. Miyaji, M. Nakabayashi, and S. Takano, "New explicit conditions of elliptic curve traces for FR-reduction," IEICE Trans. Fundamentals, Vol. E84-A, No. 5, May 2001.
20. F. Morain, "Building cyclic elliptic curves modulo large primes," Advances in Cryptology – Eurocrypt '91, Lecture Notes in Computer Science 547 (1991), pp. 328–336.
21. T. Nagell, "Introduction to Number Theory," 2nd reprint edition, Chelsea Publishing, 2001.
22. K.G. Paterson, "ID-based signatures from pairings on elliptic curves," Cryptology ePrint Archive, Report 2002/004, http://eprint.iacr.org/2002/004/.
23. R. Sakai, K. Ohgishi and M. Kasahara, "Cryptosystems based on pairing," 2000 Symposium on Cryptography and Information Security (SCIS 2000), Okinawa, Japan, Jan. 26–28, 2000.
24. O. Schirokauer, D. Weber and T. Denny, "Discrete logarithms: the effectiveness of the index calculus method," ANTS, pp. 337–361, 1996.
25. J.H. Silverman, "Elliptic curve discrete logarithms and the index calculus," Workshop on Elliptic Curve Cryptography (ECC '98), September 14–16, 1998.
26. N.P. Smart, "The Algorithmic Resolution of Diophantine Equations," London Mathematical Society Student Texts 41, Cambridge University Press, 1998.
27. N. Smart, "An identity based authenticated key agreement protocol based on the Weil pairing," Cryptology ePrint Archive, Report 2001/111, http://eprint.iacr.org/2001/111/.
28. N. Tzanakis, "Solving elliptic diophantine equations by estimating linear forms in elliptic logarithms. The case of quartic equations," Acta Arithmetica 75 (1996), pp. 165–190.
29. E. Verheul, "Self-blindable credential certificates from the Weil pairing," Advances in Cryptology – Asiacrypt 2001, Lecture Notes in Computer Science 2248 (2002), pp. 533–551.

A  An Example of the Closed-Form Construction

This simple construction implements the method of Section 3.1 and quickly yields a curve and a point of large prime order r, with embedding degree k = 12.

1. Choose z of an appropriate size at random.
2. Set w = 3·z, t = w + 2.
3. Set r = w^4 + 4w^3 + 5w^2 + 2w + 1. If r is not prime, return to step 1.
4. Set q = (w^6 + 4w^5 + 5w^4 + 2w^3 + w^2 + 3w + 3)/3. If q is not prime, return to step 1.
5. Use the CM method to find the curve of the form y^2 = x^3 + B with discriminant D = 3 of order n = q + 1 − t. Find a point of order r on the curve using the method described in [13, section A11.3].

Note that n = mr and m = 3z^2. Rather than using the CM method in step 5, in practice small values of B can be tested to find the correct curve [7]. An example run of this algorithm yields

z = 67749197969
r = 1706481765729006378056715834692510094310238833
q = 23498017525968473690296083113864677063688317873484513641020158425447
n = 23498017525968473690296083113864677063688317873484513640816910831539

Here r is a 151-bit prime, and q is a 224-bit prime. The curve is quickly found as E : y^2 = x^3 + 4 over F_q.
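The five steps above can be sketched as runnable code (an illustration, not the authors' implementation: Miller–Rabin stands in for the primality tests, the size of z is a toy value, and step 5, the CM/small-B curve search, is omitted):

```python
# Sketch of the k = 12 parameter search above. Toy bit length for z; a
# realistic run would use something like 80-bit z. Step 5 (finding the
# actual curve) is not implemented here.
import random

def is_prime(n, rounds=25):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13):
        if n % sp == 0:
            return n == sp
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def find_k12_params(bits=12):
    while True:
        z = random.getrandbits(bits) | 1
        w = 3 * z
        t = w + 2
        r = w**4 + 4*w**3 + 5*w**2 + 2*w + 1   # Phi_12(w + 1)
        if not is_prime(r):
            continue
        q = (w**6 + 4*w**5 + 5*w**4 + 2*w**3 + w**2 + 3*w + 3) // 3
        if is_prime(q):
            return z, r, q, q + 1 - t          # curve order n = q + 1 - t

z, r, q, n = find_k12_params()
assert n == 3 * z * z * r                      # n = m*r with cofactor m = 3z^2
print(z, r, q)
```

The final assertion checks the algebraic identity n = mr with m = 3z^2 noted in the text; it holds for every z, independently of the primality tests.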

B  An Example of the General Construction

Let s be the approximate desired size (in bits) of the subgroup order r, let D be the chosen CM discriminant, and let k be the desired embedding degree. The following procedure implements a simplified subset of the general construction method described in Section 3.2, and yields a suitable field size q, a prime subgroup order r, and the curve order n (it also indirectly provides the cofactor m, which it seeks to minimize, and the trace of Frobenius t).

1. Choose ℓ ≈ 2^{s/g} at random, where g = deg(Φ_k).
2. Compute r ← Φ_k(ℓ), t ← ℓ + 1, A ← 4r, and B ← (ℓ − 1)^2.
3. Check that r is prime and also that A is invertible mod D and not a perfect square. If these conditions fail, choose another ℓ in step 1.
4. Find the smallest m0 ≥ 0 such that Am0 − B = z0·D for some z0, namely, set m0 ← B/A (mod D) and z0 ← (Am0 − B)/D.
5. Check that z0 is a QR mod r, and then compute V ← √z0 (mod r). If z0 is not a QR mod r, or if V^2 − z0 ≢ 0 (mod 4), choose another ℓ in step 1. This ensures that V^2 − z0 ≡ 0 (mod A).
6. Let i0 ← (V^2 − z0)/A, m ← m0 + i0·D, n ← mr, and q ← n + t − 1. If q is not prime, restart with another ℓ at step 1. Otherwise, we have the solution.


An example run of this algorithm for k = 7 and D = 500003 yields q = 125070141847460013396986527273692733814291536913611095852428963052461\ 4109630975056367228761343097

r = 93161485761743186136191195699326539602148725131 m = 13425090940189806839398998187415093504886695170332 n = 125070141847460013396986527273692733814291536913611095852428963052461\ 4109630975056367228694013492

t = 67329606

Here r is a 157-bit prime, and q is a 320-bit prime. The curve is quickly found as E : y 2 = x3 − 3x + b over Fq , where b = 315283565391589690418903185062076693159181569566876474809008162248459\ 256213526466473404332175506.

Another example, this time for k = 11 and D = 500003: q = 645793306563485513812965048035098778963201537968134813236427213716936\ 868831560525236938964558029767656530135495362724707835601050941159

r = 33237721806329292477733472892286817383477632299281817794659481922677 m = 19429529807320017250929519781158178098446838731085667916658667094871 n = 645793306563485513812965048035098778963201537968134813236427213716936\ 868831560525236938964558029767656530135495362724707835601045289667

t = 5651493

Here r is a 225-bit prime, q is a 448-bit prime, and the curve is E : y^2 = x^3 + x + b over F_q, where

b = 114943390928260306683630109134459805121604770378075777246396805900505\ 342675782177132483241141214345727335963156146267084855055515119955.

A Signature Scheme with Efficient Protocols

Jan Camenisch^1 and Anna Lysyanskaya^2

1 IBM Research, Zurich Research Laboratory, CH-8803 Rüschlikon, [email protected]
2 Computer Science Department, Brown University, Providence, RI 02912, USA, [email protected]

Abstract. Digital signature schemes are a fundamental cryptographic primitive, of use both in its own right, and as a building block in cryptographic protocol design. In this paper, we propose a practical and provably secure signature scheme and show protocols (1) for issuing a signature on a committed value (so the signer has no information about the signed value), and (2) for proving knowledge of a signature on a committed value. This signature scheme and corresponding protocols are a building block for the design of anonymity-enhancing cryptographic systems, such as electronic cash, group signatures, and anonymous credential systems. The security of our signature scheme and protocols relies on the Strong RSA assumption. These results are a generalization of the anonymous credential system of Camenisch and Lysyanskaya.

1  Introduction

Digital signature schemes are a fundamental cryptographic application, invented together with public-key cryptography by Diffie and Hellman [20] and first constructed by Rivest, Shamir and Adleman [32]. They give the electronic equivalent of the paper-based idea of having a document signed. A digital signature scheme consists of (1) a key generation algorithm that generates a public key PK and a secret key SK; (2) a signing algorithm that takes as inputs a message m and a secret key SK and generates a signature σ; and (3) a verification algorithm that tests whether some string σ is a signature on message m under the public key PK. Signature schemes exist if and only if one-way functions exist [30,33]. However, the efficiency of these general constructions, and also the fact that these signature schemes require the signer's secret key to change between invocations of the signing algorithm, makes these solutions undesirable in practice.

Using an ideal random function (this is the so-called random-oracle model), several, much more efficient signature schemes were shown to be secure. Most notably, those are the RSA [32], the Fiat-Shamir [21], and the Schnorr [34] signature schemes. However, ideal random functions cannot be implemented in the plain model [12], and therefore in the plain model, these signature schemes are not provably secure. Gennaro, Halevi, and Rabin [24] and Cramer and Shoup [16] proposed the first signature schemes whose efficiency is suitable for practical use and whose security analysis does not assume an ideal random function. Their schemes are secure under the so-called Strong RSA assumption.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 268–289, 2003.
© Springer-Verlag Berlin Heidelberg 2003

In contrast to these stand-alone solutions, our goal is to construct signature schemes that are suitable as building blocks for other applications. Consider the use of signature schemes for constructing an anonymous credential system [13,14,15,19,27,7]. While such a system can be constructed from any signature scheme using general techniques for cryptographic protocol design, we observe that doing it in this fashion is very inefficient. Let us explain this point in more detail.

In a credential system, a user can obtain access to a resource only by presenting a credential that demonstrates that he is authorised to do so. In the paper-based world, examples of such credentials are passports that allow us to prove citizenship and authorise us to vote, driver's licenses that prove our ability to drive cars, etc. In the digital world, it is reasonable to imagine that such a credential will be in the form of a digital signature. Let us imagine that the user's identity is his secret key SK. Let us think of a credential as a signature on this secret key.

A credential system is anonymous if it allows users to demonstrate such credentials without revealing any additional information about their identity. In essence, when the user shows up before the verifier and demonstrates that he has a credential, the verifier can infer nothing about who the user is other than that the user has the right credential. Additionally, an anonymous credential system allows the user to obtain a credential anonymously.
Using general zero-knowledge proofs, it is possible to prove statements such as "I have a signature," without saying anything more than that (i.e., without disclosing what this credential looks like). However, doing so requires that the problem at hand be represented as, for example, a Boolean circuit, and then the proof that the statement is true requires a proof that the circuit has a satisfying assignment. This method, however, requires expensive computations beyond what is considered practical.

An additional complication is obtaining credentials in a secure way. The simple solution where the user reveals his identity SK to the credential granting organization, who in turn grants him the credential, is ruled out: we want to allow the user to be anonymous when obtaining credentials as well; we also want to protect the user from identity theft, and so the user's secret key SK should never be leaked to any other party. Here, general techniques of secure two-party computation save the day: the user and the organization can use a secure two-party protocol such that the user's output is a signature on his identity, while the organization learns nothing. But this is also very expensive: general secure two-party computation also represents the function to be computed as a Boolean circuit, and then proceeds to evaluate it gate by gate.


So far, we have come up with a construction of an anonymous credential system from a signature scheme using zero-knowledge proofs of knowledge and general two-party computation. We have also observed that this solution is not satisfactory as far as efficiency is concerned. A natural question therefore is whether we can create a signature scheme which will easily yield itself to an efficient construction of an anonymous credential system. In other words, we need signature schemes for which proving a statement such as "I have a signature," is a much more efficient operation than representing this statement as a circuit and proving something about such a circuit. We also need to be able to have the user obtain a signature on his identity without compromising his identity.

In this paper, we construct a signature scheme that meets these very general requirements. The generality of our approach enables the use of this signature scheme for other cryptographic protocols in which it is desirable to prove statements of the form "I have a signature," and to obtain signatures on committed values. Our signature scheme is based on the Strong RSA assumption, and of all provably secure signatures known, it runs a close second in efficiency to the signature scheme of Cramer and Shoup [16]. Our results can be thought of as a generalization of the anonymous credential system due to Camenisch and Lysyanskaya [7], and can also be viewed as a generalization of the Ateniese et al. [1] group signature scheme.

In Section 2 we give the basic signature scheme, for which we give a proof of security in Section 3. In Section 4 we generalize this signature scheme to one that allows signing a block of messages instead of a single message. Such a signature scheme can, for example, be used to encode public-key attributes as proposed by Brands [5], or to issue signatures that can only be used once (see Section 6.3).
In Section 5 we mention known protocols that are needed for our purposes, and then in Section 6 we give protocols for signing a committed value and for proving knowledge of a signature. To be more accessible to a non-specialist reader, our exposition also includes two extra sections. In Appendix A, we explain our notation, give the definition of a secure signature scheme, and give a definition of a proof of knowledge. We also cover a few number-theoretic basics in Appendix B.

2  The Basic Signature Scheme

2.1  Preliminaries

Let us recall some basic definitions (see Appendix B for a more in-depth discussion).

Definition 1 (Safe primes). A prime number p is called safe if p = 2p′ + 1, where p′ is also a prime number. (The corresponding number p′ is known as a Sophie Germain prime.)

Definition 2 (Special RSA modulus). An RSA modulus n = pq is special if p = 2p′ + 1 and q = 2q′ + 1 are safe primes.


Notation: By QR_n ⊆ Z*_n we will denote the set of quadratic residues modulo n, i.e., elements a ∈ Z*_n such that there exists b ∈ Z*_n with b^2 ≡ a (mod n).

2.2  The Scheme

Key Generation. On input 1^k, choose a special RSA modulus n = pq, p = 2p′ + 1, q = 2q′ + 1 of length ℓ_n = 2k. Choose, uniformly at random, a, b, c ∈ QR_n. Output PK = (n, a, b, c) and SK = p.

Message Space. Let ℓ_m be a parameter. The message space consists of all binary strings of length ℓ_m. Equivalently, it can be thought of as consisting of integers in the range [0, 2^{ℓ_m}).

Signing Algorithm. On input m, choose a random prime number e of length ℓ_e ≥ ℓ_m + 2, and a random number s of length ℓ_s = ℓ_n + ℓ_m + l, where l is a security parameter. Compute the value v such that

    v^e ≡ a^m b^s c (mod n).

Verification Algorithm. To verify that the tuple (e, s, v) is a signature on message m in the message space, check that v^e ≡ a^m b^s c (mod n), and check that 2^{ℓ_e} > e > 2^{ℓ_e − 1}.

Efficiency Analysis. In practice, ℓ_n = 1024 is considered secure. As for the message space, if the signature scheme is intended to be used directly for signing messages, then ℓ_m = 160 is good enough, because, given a suitable collision-resistant hash function, such as SHA-1, one can first hash a message to 160 bits, and then sign the resulting value. Then ℓ_e = 162. The parameter l guides the statistical closeness of the simulated distribution to the actual distribution; hence in practice l = 160 is sufficient, and therefore ℓ_s = 1024 + 160 + 160 = 1344. Therefore, this signature scheme requires one short (160-bit) and two long (1024- and 1344-bit) exponentiations for signing, while verification requires two short (162- and 160-bit) and one long (1344-bit) exponentiations. This is not as efficient as the Cramer–Shoup signature, which requires three short (i.e., 160-bit) and one long (i.e., 1024-bit) exponentiations for signing, and four short exponentiations for verifying. However, our signature scheme has other redeeming qualities.
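As an illustration only (not the paper's implementation), the scheme can be sketched with insecure toy parameters; for simplicity this sketch keeps φ(n) as the signing trapdoor instead of SK = p:

```python
# Toy sketch of the basic scheme: keygen picks a special RSA modulus from
# safe primes, signing picks a random prime e and random s and extracts
# v = (a^m b^s c)^(1/e) mod n via the trapdoor, verification checks both
# v^e = a^m b^s c (mod n) and the length bound on e. Parameter sizes are
# deliberately tiny and offer no security.
import random

def is_probable_prime(n, rounds=25):
    """Miller-Rabin primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13):
        if n % sp == 0:
            return n == sp
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def safe_prime(bits):
    """Return p = 2*p' + 1 with both p' and p prime (Definition 1)."""
    while True:
        pp = random.getrandbits(bits - 1) | (1 << (bits - 2)) | 1
        if is_probable_prime(pp) and is_probable_prime(2 * pp + 1):
            return 2 * pp + 1

def keygen(bits=48):                      # toy size; the paper suggests l_n = 1024
    p, q = safe_prime(bits), safe_prime(bits)
    n = p * q
    a, b, c = (pow(random.randrange(2, n), 2, n) for _ in range(3))  # random QR_n
    return (n, a, b, c), (p - 1) * (q - 1)

def sign(pk, phi, m, lm=16, l=16):
    n, a, b, c = pk
    while True:                           # random prime e of length l_e = l_m + 2
        e = random.getrandbits(lm + 2) | (1 << (lm + 1)) | 1
        if is_probable_prime(e) and phi % e != 0:
            break
    s = random.getrandbits(n.bit_length() + lm + l)        # l_s = l_n + l_m + l
    u = pow(a, m, n) * pow(b, s, n) % n * c % n
    v = pow(u, pow(e, -1, phi), n)        # e-th root of a^m b^s c via the trapdoor
    return e, s, v

def verify(pk, m, sig, le=18):            # l_e = l_m + 2 with l_m = 16
    n, a, b, c = pk
    e, s, v = sig
    if not ((1 << (le - 1)) < e < (1 << le)):
        return False
    return pow(v, e, n) == pow(a, m, n) * pow(b, s, n) * c % n

pk, phi = keygen()
sig = sign(pk, phi, m=12345)
print(verify(pk, 12345, sig), verify(pk, 54321, sig))   # valid vs. wrong message
```

Verification succeeds for the signed message and, with overwhelming probability, fails for any other message under the same (e, s, v).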

3  Proof of Security

We will show that the signature scheme in Section 2 is secure. Note that a forgery (m, s, e, v) may have a value e of length ℓ_e that is not necessarily a prime number. We prove security under the Strong RSA assumption [2,22]:

Assumption 1 (Strong RSA Assumption). The Strong RSA assumption states that it is hard, on input an RSA modulus n and an element u ∈ Z*_n, to compute values e > 1 and v such that v^e ≡ u (mod n). More formally, we assume that for all polynomial-time circuit families {A_k}, there exists a negligible function ν(k) such that

    Pr[n ← RSAmodulus(1^k); u ← QR_n; (v, e) ← A_k(n, u) : e > 1 ∧ v^e ≡ u (mod n)] = ν(k).


The tuple (n, u) generated as above is called an instance of the flexible RSA problem. The following lemma (proved in Appendix B, Lemma 14) is important for the proof of security:

Lemma 1. Let a special RSA modulus n = pq, p = 2p′ + 1, q = 2q′ + 1, be given. Suppose we are given the values u, v ∈ QR_n and x, y ∈ Z, gcd(x, y) < x, such that v^x ≡ u^y (mod n). Then values z, w > 1 such that z^w ≡ u (mod n) can be computed efficiently.

Suppose that a forger's algorithm F is given. Suppose F makes K signature queries in expectation before it outputs a successful forgery. By the Markov inequality, half the time, F needs 2K or fewer queries. Without loss of generality, we may assume that K is known (because if it is not, then it can be estimated experimentally). Suppose that the output of F is (m, s, e, v). Let us distinguish three types of outputs:

– Type 1: The forger's exponent e is relatively prime to all the exponents output by the signature oracle.
– Type 2: The forger's value e is not relatively prime to some previously seen exponent e_i, and the root v is different from the corresponding root v_i.
– Type 3: The forger's value e is not relatively prime to some previously seen exponent e_i and v = v_i, but the tuple (m, s) is new.

By F_1 (resp., F_2, F_3) let us denote the forger who runs F but then only outputs the forgery if it is of Type 1 (resp., Type 2, Type 3). The following lemma is clear by the standard hybrid argument:

Lemma 2. If the forger F's success probability is ε, then one of F_1, F_2, or F_3 succeeds with probability at least ε/3.

Next, we will show that under the Strong RSA assumption, the success probability of each of F_1, F_2, F_3 is negligible.

Lemma 3. Under the Strong RSA assumption, the success probability of F_1 is negligible. More strongly, if F_1 succeeds with probability ε_1 in expected time O(p_1(k)), using an expected number Q(k) of queries, then there is an algorithm that runs in time O(p_1(k)) and solves the flexible RSA problem with probability ε_1/4.

Proof.
Suppose F_1 succeeds with non-negligible probability. Let us show that this contradicts the Strong RSA assumption. Recall that Q is the expected number of queries and, by the Markov inequality, half the time F_1 will need at most 2Q = O(p_1(k)) queries. Let us construct an algorithm A such that, on input a Strong RSA instance (n, u), and with access to F_1, A will output (v, e) such that e > 1 and v^e ≡ u (mod n). Now A plays the role of the signer, as follows:


Key Generation: Choose 2Q prime numbers e_1, . . . , e_2Q. Choose, at random, values r_1 ∈ Z_{n^2} and r_2 ∈ Z_{n^2}. Let a = u^E, where E = ∏_{i=1}^{2Q} e_i, b = a^{r_1}, c = a^{r_2}. Let (n, a, b, c) be the public key.

Signing: Upon receiving the i-th signature query m_i, choose a value s_i of length ℓ_s uniformly at random. Output (e_i, s_i, v_i), where v_i = a_i^{m_i} b_i^{s_i} c_i, with a_i = u^{E/e_i}, b_i = a_i^{r_1}, c_i = a_i^{r_2}. (In other words, a_i, b_i and c_i are values such that a_i^{e_i} = a, b_i^{e_i} = b, and c_i^{e_i} = c.) Note that (e_i, s_i, v_i) constitutes a valid signature:

    v_i^{e_i} = (a_i^{m_i} b_i^{s_i} c_i)^{e_i} = a^{m_i} b^{s_i} c.

Note that the public key and all the signatures have the same distribution as in the original scheme. Suppose the forgery is (e, s, v) on message m. This gives us v^e = a^m b^s c = u^{E(m + r_1 s + r_2)}. Observe that if it is the case that e and E(m + r_1 s + r_2) are relatively prime, then by Shamir's trick (Lemma 13) we are done. We will not necessarily have this nice case, but we will show that things will be good enough. First, by assumption on F_1 we have gcd(e, E) = 1. Second, we will use the following claim:

Claim. With probability at least 1/2 over the random choices made by the reduction, it is the case that either gcd(e, m + r_1 s + r_2) < e, or, on input e, one can efficiently factor n.

The claim immediately gives us the conditions of Lemma 1, which gives us values z, w > 1 such that z^w ≡ u (mod n). We must now prove the claim. Suppose that it is false. Then the probability that gcd(e, m + r_1 s + r_2) < e is less than 1/2. Suppose r_2 = x + φ(n)z. Note that z is independent of the view of the adversary, and its distribution is statistically indistinguishable from the uniform distribution over Z_n. Therefore, if with probability greater than 1/2, e | m + r_1 s + r_2, this holds for greater than 1/2 of all the possible choices for z. Then there exists a value z_0 ∈ Z_n such that e | (m + r_1 s + x + φ(n)z_0) and e | (m + r_1 s + x + φ(n)(z_0 + 1)). Then it holds that e | φ(n). Therefore, with probability 1/2, we either factor n (by Lemma 11) or γ = gcd(e, m + r_1 s + r_2) < e.

Therefore, A succeeds in breaking the Strong RSA assumption whenever the number of queries does not exceed 2Q (which happens with probability 1/2 by the Markov inequality) and whenever the conditions of the claim are satisfied (which happens with probability 1/2, independently of the number of queries), and so the success probability of A is ε_1/4, while its running time is O(p_1(k)).

Lemma 4. Under the Strong RSA assumption, the success probability of F_3 is negligible. More strongly, if F_3 succeeds with probability ε_3 in expected time O(p_3(k)), using an expected number Q(k) of queries, then there is an algorithm that runs in time O(p_3(k)) and succeeds with probability ε_3/2.


Proof. The input to the reduction is (n, e), an instance of the flexible RSA problem. The key generation and signatures in this case are as in the proof of Lemma 3. The signatures (m, s, e, v) and (mi , si , ei , v) give us the following: Note that from gcd(ei , e) = 1 and the fact that 2e > e, ei > 2e −1 it follows that ei = e and thus am bs ≡ ami bsi . This implies that m + r1 s ≡ mi + r1 si mod φ(n). As (m, s) = (mi , si ), and r1 > m, r1 > mi , m + r1 s = mi + r1 si . Therefore, φ(n) | m + r1 s − mi + r1 si = 0, and so by Corollary 3, we are done. The probability that A succeeds in breaking the Strong RSA assumption is at least /2 because with probability at least 1/2, the forger F3 will need at most 2Q queries. Lemma 5. Under the Strong RSA assumption, the success probability of F2 is negligible. More strongly, if F2 succeeds with probability 2 in expected time O(p2 (k)), using expected number Q(k) queries, then there is an algorithm that runs in time O(p2 (k)) and succeeds with probability (k)/8Q(k). Proof. The input to the reduction is (n, u), an instance of the flexible RSA problem. Choose a value 1 ≤ i ≤ 2Q(k) uniformly at random. With probability 1/2Q(k), the exponent e will be the same as in the i’th query to the signature oracle. The idea of this reduction is that we will set up the public key in such a way that, without knowledge of the factorization of n, it is possible to issue correctly distributed signatures on query j = i, in almost the same manner as in the proof of Lemma 3. For query i, the answer will be possible only for a specific, precomputed value of t = mα + s, where α = logb a. !2Q Key Generation: Choose 2Q(k) random primes {ej }. Let E = ( j=1 ej )/ei . Choose α, β ∈ Z2n uniformly at random. Choose a random value t of length s . Let b = uE mod n. Let a = bα mod n, and c = bei β−t . Let (n, a, b, c) be the public key. Signing: Upon receiving the j th signature query mj , j = i, choose a value sj m s of length s uniformly at random. 
Output (e_j, s_j, v_j), where v_j = a_j^{m_j} b_j^{s_j} c_j, with b_j = u^{E/e_j}, a_j = b_j^α, and c_j = b_j^{e_i β − t}. (In other words, a_j, b_j and c_j are values such that a_j^{e_j} = a, b_j^{e_j} = b, and c_j^{e_j} = c.) Note that (e_j, s_j, v_j) constitutes a valid signature. Upon receiving the i-th signature query m_i, compute the value s_i = t − α m_i and set v_i = b^β. Note that v_i^{e_i} = b^{e_i β − t + t} = b^t c = a^{m_i} b^{s_i} c. Note that if ℓ_s is sufficiently long (e.g., ℓ_n + ℓ_m + l, where ℓ_n is the length of the modulus n, ℓ_m is the length of a message, and l is a security parameter), then s_i is distributed statistically close to uniform over all strings of length ℓ_s.

Suppose the forgery gives (m, s, e, v) such that e is not relatively prime to e_i. Then e = e_i, because e is of length ℓ_e. Also, as the verification succeeds for the forgery as well as for the i-th query, v^{e_i} = a^m b^s c = b^{mα + s + e_i β − t} = b^{(m−m_i)α + (s−s_i) + e_i β}. Let α = α_1 φ(n) + α_2. Note that with probability very close to 1, the value α_1 ∈ {0, 1}. Also note that α_1 is independent of the adversary's view. Therefore, if the probability that e_i | (m − m_i)α + (s − s_i) + e_i β were non-negligibly higher than 1/2, then it would be the case that both e_i | (m − m_i)α_2 + (s − s_i) + e_i β and e_i | (m − m_i)(φ(n) + α_2) + (s − s_i) + e_i β, and therefore e_i | (m − m_i) (note that by definition of a forgery m ≠ m_i). If the length ℓ_e is at least two bits longer than the length ℓ_m of a valid message, this is impossible. Therefore, with probability at least 1/2 the following condition is met: e_i ∤ (mα + s + e_i β − t). Let γ = mα + s + e_i β − t. Then we have v^{e_i} ≡ u^{Eγ}, where gcd(e_i, γ) = 1, and so by Lemma 13 we can compute an e_i-th root of u. Therefore, A succeeds in solving an instance of the flexible RSA problem when 2Q is a sufficient number of queries (with probability at least 1/2 this is the case, by the Markov inequality), when i is chosen correctly (with probability 1/2Q), and when the condition above is met (with probability 1/2, independently of everything else). Therefore, A succeeds with probability ε_2/8Q in time p_2(k). As the number of signature queries is upper-bounded by the number of steps performed by the adversary (i.e., by the running time), we have shown the following theorem:

Theorem 1. Our signature scheme is secure under the strong RSA assumption. More precisely, if a forger breaks our signature scheme in time p(k) with probability ε(k), then the strong RSA assumption can be broken in time O(p(k)) with probability Ω(ε(k)/p(k)).
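The root-extraction step used in the reductions above is the standard "Shamir trick": from v^e ≡ u^γ (mod n) with gcd(e, γ) = 1, a Bézout relation αe + βγ = 1 yields an e-th root of u as u^α v^β. A minimal sketch, with a toy modulus and all numeric values chosen purely for illustration (here v is fabricated using the secret group order, which a real reduction of course does not know):

```python
# Sketch of the root-extraction ("Shamir's trick") step: given
# v^e = u^gamma (mod n) with gcd(e, gamma) = 1, compute an e-th root of u.
# Toy modulus; all concrete values are illustrative assumptions.

def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

n = 11 * 23                     # toy special RSA modulus
u = pow(2, 2, n)                # a quadratic residue mod n
gamma, e = 15, 7                # gcd(7, 15) = 1
d = pow(e, -1, 5 * 11)          # |QR_n| = p'q' = 55; used only to fabricate v
v = pow(u, gamma * d, n)        # now v^e = u^gamma (mod n)

g, alpha, beta = ext_gcd(e, gamma)
assert g == 1
# (u^alpha v^beta)^e = u^{alpha e} u^{gamma beta} = u^{alpha e + beta gamma} = u
root = pow(u, alpha, n) * pow(v, beta, n) % n   # negative exponents OK (Python 3.8+)
assert pow(root, e, n) == u
```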

4

The Scheme for Blocks of Messages

Suppose that, instead of signing a single message m, one wants to sign a block of L messages m_1, ..., m_L. For most applications, these two problems are equivalent: in order to sign such a block, it is sufficient to hash it to a short message m using a collision-resistant hash function, and then to sign m using a one-message signature scheme. However, if the application we have in mind involves proving knowledge of a signature and proving relations among certain components of m_1, ..., m_L, it may be helpful to have a signature scheme especially designed to handle blocks of L messages. Here, we propose one such scheme.

Key Generation. On input 1^k, choose a special RSA modulus n = pq, p = 2p' + 1, q = 2q' + 1. Choose, uniformly at random, a_1, ..., a_L, b, c ∈ QR_n. Output PK = (n, a_1, ..., a_L, b, c) and SK = p. Let ℓ_n denote the length of the modulus n; ℓ_n = 2k.

Message Space. Let ℓ_m be a parameter. The message space consists of all binary strings of length ℓ_m. Equivalently, it can be thought of as consisting of integers in the range [0, 2^{ℓ_m}).

Signing Algorithm. On input m_1, ..., m_L, choose a random prime number e > 2^{ℓ_m+1} of length ℓ_e ≥ ℓ_m + 2, and a random number s of length ℓ_s = ℓ_n + ℓ_m + l, where l is a security parameter. Compute the value v such that

v^e ≡ a_1^{m_1} ··· a_L^{m_L} b^s c (mod n)
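The block scheme above can be sketched in a few lines of Python. The safe primes, bit lengths, and the fixed prime e below are toy assumptions for illustration only; a real instantiation needs a ~1024-bit special RSA modulus and randomly chosen primes e of the prescribed length.

```python
# Toy sketch of the block-message scheme (Section 4). Tiny parameters,
# for illustration only; not secure.
import math
import random

pp, qq = 5, 11                  # Sophie Germain primes p', q'
p, q = 2 * pp + 1, 2 * qq + 1   # safe primes 11 and 23
n = p * q                       # special RSA modulus (253)
order = pp * qq                 # |QR_n| = p'q' = 55

def random_qr():
    # a random quadratic residue mod n, excluding the identity
    while True:
        x = random.randrange(2, n)
        if math.gcd(x, n) == 1 and pow(x, 2, n) != 1:
            return pow(x, 2, n)

L = 2
a = [random_qr() for _ in range(L)]   # bases a_1 .. a_L
b, c = random_qr(), random_qr()

def sign(msgs):
    e = 59                            # prime, coprime to the group order
                                      # (fixed here; the scheme picks it at random)
    s = random.randrange(2 ** 8)      # random s (length l_s in the text)
    d = pow(e, -1, order)             # secret: e-th root exponent (Python 3.8+)
    base = c
    for ai, mi in zip(a, msgs):
        base = base * pow(ai, mi, n) % n
    base = base * pow(b, s, n) % n
    return e, s, pow(base, d, n)

def verify(msgs, e, s, v):
    # check v^e == a_1^{m_1} ... a_L^{m_L} b^s c (mod n)
    rhs = c
    for ai, mi in zip(a, msgs):
        rhs = rhs * pow(ai, mi, n) % n
    rhs = rhs * pow(b, s, n) % n
    return pow(v, e, n) == rhs

sig = sign([3, 7])
assert verify([3, 7], *sig)
assert not verify([4, 7], *sig)
```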


Verification Algorithm. To verify that the tuple (e, s, v) is a signature on the messages m_1, ..., m_L, check that v^e ≡ a_1^{m_1} ··· a_L^{m_L} b^s c (mod n), and check that e > 2^{ℓ_e−1}.

Theorem 2. The signature scheme above is secure for blocks of messages, under the strong RSA assumption.

Proof. We will show a reduction from a forger F for this signature scheme to an algorithm that breaks the basic signature scheme from Section 2. The reduction receives as input the public key of the basic scheme, (n, a, b, c), and has oracle access to the basic signing oracle.

Key Generation. Choose an index I ∈ [1, L] at random. Let a_I = a. For each j ∈ [1, L], j ≠ I, choose an integer r_j from Z_{2^{ℓ_n}} uniformly at random, and let a_j = b^{r_j}. Output the public key (n, a_1, ..., a_L, b, c).

Signature Generation. On input a signature query (m_1, ..., m_L), ask the signing oracle for a signature on the value m_I. Upon receiving the value (s, v, e), let s' = s − Σ_{j≠I} r_j m_j. The resulting signature is (s', v, e). It is easy to see that it is distributed statistically close to the right distribution, provided that ℓ_s = ℓ_n + ℓ_m + l, where l is sufficiently long.

Suppose the adversary's forgery is (m_1, ..., m_L, s, v, e). Then (s + Σ_{j≠I} r_j m_j, v, e) is a valid signature in the basic scheme on the message m_I. Suppose the signature's e is different from any previously seen e_i. Then we are done: we have created a forger F_1 and contradicted Lemma 3. So suppose e = e_i. But (m_1, ..., m_L) ≠ (m_{i,1}, ..., m_{i,L}). With probability at least 1/L, it is the case that m_I ≠ m_{i,I}, and we have created a forged signature on the message m_I. Overall, if the adversary F succeeds in time p(k) with probability ε using Q(k) queries, then the reduction succeeds against the basic scheme in time p(k) with probability at least ε/2L, also using Q(k) queries.
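The key-embedding trick in this reduction can be checked concretely: with a_j = b^{r_j} for every j ≠ I, a basic-scheme signature on m_I becomes a block signature after adjusting the s-component. A toy sketch (parameters are illustrative assumptions; the basic signing oracle is simulated here with the known factorization):

```python
# Sketch of the reduction in the proof of Theorem 2: embed the basic base
# at index I and convert an oracle signature on m_I into a block signature
# via s' = s - sum_{j != I} r_j m_j. Toy parameters assumed.
import random

pp, qq = 5, 11
n = (2 * pp + 1) * (2 * qq + 1)   # toy special RSA modulus, 253
order = pp * qq                   # |QR_n| = 55
b = pow(3, 2, n)
c = pow(5, 2, n)
a_basic = pow(2, 2, n)            # base a of the basic scheme

L, I = 3, 1                       # embed the basic base at index I
r = [random.randrange(2 ** 8) for _ in range(L)]
a = [a_basic if j == I else pow(b, r[j], n) for j in range(L)]

msgs = [4, 9, 2]

# basic-scheme signing "oracle" on m_I: v^e = a^{m_I} b^s c (mod n)
e, s = 59, random.randrange(2 ** 8)
d = pow(e, -1, order)             # simulates the oracle (Python 3.8+)
v = pow(pow(a_basic, msgs[I], n) * pow(b, s, n) * c % n, d, n)

# convert to a block signature: s' = s - sum_{j != I} r_j m_j
s_block = s - sum(r[j] * msgs[j] for j in range(L) if j != I)
lhs = pow(v, e, n)
rhs = c
for aj, mj in zip(a, msgs):
    rhs = rhs * pow(aj, mj, n) % n
rhs = rhs * pow(b, s_block, n) % n   # negative s_block is fine (Python 3.8+)
assert lhs == rhs
```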

5

Preliminary Protocols

In this section, we will show how to construct a secure protocol for signing a committed message, as described in Section 1, under our basic signature scheme described in Section 2.

5.1 Commitment Scheme

The following commitment scheme is due to Fujisaki and Okamoto [23] and was elaborated on by Damgård and Fujisaki [18]. Its security relies on the hardness of factoring.

Key Generation. The public key consists of a special RSA modulus n of length ℓ_n, and h ← QR_n, g ← ⟨h⟩, where ⟨h⟩ is the group generated by h.

Commitment. The commitment Commit(PK, x, r), for an input x of length ℓ_x and randomness r ∈ Z_n, is defined as follows: Commit((n, g, h), x, r) = g^x h^r mod n.
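The commitment above is a one-liner once the key is set up. A minimal sketch with an assumed toy modulus (a real key uses a ~1024-bit special RSA modulus generated by a trusted party, or proved well-formed as discussed below):

```python
# Sketch of the Fujisaki-Okamoto integer commitment (Section 5.1).
# Toy modulus; parameters are illustrative assumptions only.
import random

n = 11 * 23                                  # toy special RSA modulus
h = pow(3, 2, n)                             # h in QR_n
g = pow(h, random.randrange(1, n * n), n)    # g in <h>

def commit(x, r):
    # Commit((n, g, h), x, r) = g^x h^r mod n
    return pow(g, x, n) * pow(h, r, n) % n

x, r = 42, random.randrange(n)
C = commit(x, r)
# the opening (x, r) verifies by recomputation
assert C == commit(x, r)
```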


Lemma 6. The commitment scheme described above is statistically hiding, and computationally binding if factoring is hard.

Remark 1. Note that for the hiding property, the only requirement on the public key is that g ∈ ⟨h⟩, and so the public key can be generated adversarially, provided that this requirement holds. We want to guarantee the security of this commitment scheme even when the commitment scheme's public key is provided by the adversary. To ensure this, we add the following key-certification procedure: the party that generated the commitment keys (n, g, h) must prove the existence of the discrete logarithm log_h g. This can be done using well-known techniques that we elaborate on in the sequel (cf. [18]).

5.2 Known Discrete-Logarithm-Based Protocols

All protocols assume that the public parameters were fixed after the adversary was fixed (otherwise a non-uniform adversary may know secret keys). The following known protocols are secure under the strong RSA assumption:

– Proof of knowledge of a discrete-logarithm representation modulo a composite [22]. That is to say, on common inputs (1) (n, g_1, ..., g_m), where n is a special RSA modulus and g_i ∈ QR_n for all 1 ≤ i ≤ m, and (2) a value C ∈ QR_n, the prover proves knowledge of values α_1, ..., α_m such that C = ∏_{i=1}^m g_i^{α_i} mod n.

– Proof of knowledge of equality of representations modulo two (possibly different) composite moduli [9]. That is to say, on common inputs (n_1, g_1, ..., g_m) and (n_2, h_1, ..., h_m) and values C_1 ∈ QR_{n_1} and C_2 ∈ QR_{n_2}, the prover proves knowledge of values α_1, ..., α_m such that

C_1 = ∏_{i=1}^m g_i^{α_i} mod n_1   and   C_2 = ∏_{i=1}^m h_i^{α_i} mod n_2.
– Proof that a committed value is the product of two other committed values [8]. That is to say, on common inputs (1) a commitment key (n, g, h) as described in Section 5.1, and (2) values C_a, C_b, C_ab ∈ QR_n, the prover proves knowledge of integers α, β, ρ_1, ρ_2, ρ_3 such that C_a = g^α h^{ρ_1} mod n, C_b = g^β h^{ρ_2} mod n, and C_ab = g^{αβ} h^{ρ_3} mod n.

– Proof that a committed value lies in a given integer interval. That is to say, on common inputs (1) a commitment key (n, g, h) as described in Section 5.1, (2) a value C ∈ QR_n, and (3) integers a and b, the prover proves knowledge of integers α and ρ such that C = g^α h^ρ mod n and a ≤ α ≤ b. An efficient protocol that implements such proofs is due to Boudot [4]. However, if the integer α the prover knows in fact satisfies a more restrictive bound, then there are more efficient protocols (these protocols are very similar to a simple proof of knowledge of a discrete logarithm; see, e.g., [9]). More precisely, these protocols are applicable if the prover's secret α satisfies (b−a)/2 − (b−a)/2^ℓ ≤ α ≤ (b−a)/2 + (b−a)/2^ℓ, where ℓ is a security parameter derived from the size of the challenge sent by the verifier as the second message of the protocol and from the security parameter that governs the zero-knowledge property. We note that in many settings this restriction can easily be dealt with.

Notation: In the sequel, we will sometimes use the notation introduced by Camenisch and Stadler [10] for various proofs of knowledge of discrete logarithms and proofs of the validity of statements about discrete logarithms. For instance,

PK{(α, β, γ) : y = g^α h^β ∧ ỹ = g̃^α h̃^γ ∧ (u ≤ α ≤ v)}

denotes a "zero-knowledge Proof of Knowledge of integers α, β, and γ such that y = g^α h^β and ỹ = g̃^α h̃^γ hold, where u ≤ α ≤ v," where y, g, h, ỹ, g̃, and h̃ are elements of some groups G = ⟨g⟩ = ⟨h⟩ and G̃ = ⟨g̃⟩ = ⟨h̃⟩. The convention is that Greek letters denote the quantities knowledge of which is being proved, while all other parameters are known to the verifier. Using this notation, a proof protocol can be described by just pointing out its aim while hiding all details.
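The first building block above, the proof of knowledge of a representation C = g^α h^ρ modulo a composite, is a Schnorr-style sigma protocol in which the responses are computed over the integers (no reduction modulo the unknown group order). A minimal honest-verifier sketch, with a toy modulus and parameter values that are illustrative assumptions:

```python
# Honest-verifier sigma-protocol sketch of PK{(alpha, rho): C = g^alpha h^rho}
# in a hidden-order group (Section 5.2). Toy modulus; not secure as written.
import random

n = 11 * 23
h = pow(3, 2, n)
g = pow(h, 17, n)          # g in <h>; 17 plays the role of log_h g
alpha, rho = 5, 29         # prover's secrets
C = pow(g, alpha, n) * pow(h, rho, n) % n

B = n * n                  # masking bound (statistically hides the secrets)
lc = 8                     # challenge length (l_c in the text)

# Prover: commitment
r1, r2 = random.randrange(B), random.randrange(B)
t = pow(g, r1, n) * pow(h, r2, n) % n
# Verifier: challenge
c = random.randrange(2 ** lc)
# Prover: responses over the integers (no reduction mod the group order)
s1, s2 = r1 + c * alpha, r2 + c * rho
# Verifier: check g^{s1} h^{s2} == t * C^c (mod n)
assert pow(g, s1, n) * pow(h, s2, n) % n == t * pow(C, c, n) % n
```

The same response-over-the-integers idea is what makes the cheap interval proofs mentioned above possible: the size of s1 statistically bounds the size of α.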

6

Protocols for the Signature Scheme

In the sequel, we rely on honest-verifier zero-knowledge proof-of-knowledge protocols for representations; they can be converted into general zero-knowledge proofs using standard techniques based on trapdoor commitment schemes (cf. [17]).

6.1 Protocol for Signing a Committed Value

Informally, in a protocol for signing a committed value, we have a signer with public key PK and the corresponding secret key SK, and a user who queries the signer for a signature. The common input to the protocol is a commitment C for which the user knows the opening (m, r). At the end of the protocol, the user obtains a signature σ_PK(m), and the signer learns nothing about m.

Lemma 7. Figure 1 describes a secure protocol for signing a committed value.

Proof. Let the extractor E be as follows: it runs the extractor for the proof of knowledge in the first step of the protocol, and discovers x and r, of lengths ℓ_m and ℓ_n respectively. It then queries the signature oracle for a signature on x, obtaining the signature (s, e, v). Note that the value s is a random integer of length ℓ_s. The extractor then sets r' = s − r. If ℓ_s ≥ ℓ_n + ℓ, where ℓ is a security parameter governing the statistical closeness, then the value r' obtained this way will be statistically close to the value r' generated by S, and so, statistically, the adversary's view is the same whether it talks to the real signer S or to the extractor E.


Common Inputs: Public key (n, a, b, c) of the signature scheme, with parameters ℓ_m, ℓ_s, ℓ_e. Commitment public key (n_C, g_C, h_C), commitment C.
User's Input: x and r_C such that C = g_C^x h_C^{r_C} mod n_C.
Signer's Input: Factorization of n.
U ←→ S: Form the commitment C_x = a^x b^r mod n, and prove that C_x is a commitment to the same value as C. Prove knowledge of x, r. Prove that x ∈ (0, 2^{ℓ_m}) and r ∈ (0, 2^{ℓ_n}).
U ←− S: Choose a random r' of length ℓ_s and a prime number e of length ℓ_e. Compute v = (C_x b^{r'} c)^{1/e}. Send (r', e, v) to the user.
User's Output: Let s = r + r'. The user outputs a signature (s, e, v) on x.

Fig. 1. Signature on a Committed Value
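The arithmetic of Figure 1 can be sketched directly, with the zero-knowledge subproofs omitted and toy parameters assumed (the point is only that s = r + r' makes (s, e, v) verify against the user's hidden x):

```python
# Sketch of the "signature on a committed value" flow (Figure 1),
# with the ZK proofs omitted. Toy parameters assumed; not secure.
import random

pp, qq = 5, 11
n = (2 * pp + 1) * (2 * qq + 1)   # toy special RSA modulus, 253
order = pp * qq                   # |QR_n| = 55

def qr(x):
    return pow(x, 2, n)

a, b, c = qr(2), qr(3), qr(5)

# User: commit to x with randomness r (proofs of knowledge omitted)
x, r = 9, random.randrange(2 ** 8)
Cx = pow(a, x, n) * pow(b, r, n) % n

# Signer: blind issue on the committed value, v = (Cx b^{r'} c)^{1/e}
e = 59                            # prime, coprime to the group order
d = pow(e, -1, order)             # (Python 3.8+)
r_prime = random.randrange(2 ** 8)
v = pow(Cx * pow(b, r_prime, n) * c % n, d, n)

# User: complete the signature with s = r + r'
s = r + r_prime
assert pow(v, e, n) == pow(a, x, n) * pow(b, s, n) * c % n
```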

6.2

Proof of Knowledge of a Signature

In this section, we give a zero-knowledge proof-of-knowledge protocol for showing that a committed value is a signature on another committed value. The protocol is meant to be comprehensible rather than optimized; an optimized protocol is provided in Appendix C. The protocol provided in this section, as well as the one provided in Appendix C, applies proofs that a secret lies in a given interval. If one wants to use the more efficient protocols to implement these proofs (cf. Section 5), then the primes e used to issue signatures should lie in the interval [2^{ℓ_e−1} + 2^{ℓ_e−2} − 2^{ℓ_e−4−ℓ_c−ℓ_z}, 2^{ℓ_e−1} + 2^{ℓ_e−2} + 2^{ℓ_e−4−ℓ_c−ℓ_z}], where ℓ_c and ℓ_z are security parameters (ℓ_c is the size of the challenges sent by the verifier, and ℓ_z controls the statistical zero-knowledge property). Similarly, only messages in the interval [0, 2^{ℓ_m−4−ℓ_c−ℓ_z}] should be signed. Typical values are ℓ_c = ℓ_z = 80. Thus, when wanting an effective message space of 160 bits, one must set ℓ_m = 160 + 80 + 80 + 4 = 324, and hence ℓ_e = ℓ_m + 2 = 326 and ℓ_s = 1024 + 160 + 324 = 1508.

Lemma 8. The protocol in Figure 2 is a zero-knowledge proof of knowledge of values (x, r_x, s, e, v) such that C_x = g_C^x h_C^{r_x} mod n_C and Verify_PK(x, s, e, v) = 1.

Proof. The completeness property is clear. The zero-knowledge property follows because the simulator can form the commitments at random (as, statistically, they reveal nothing), and then invoke the simulator for the zero-knowledge proofs of knowledge of representations and of membership in an interval. We must now show that there exists an extractor that computes a valid signature. Our extractor invokes the extractors for the proof protocols we use as building blocks. If our extractor fails, then we can set up a reduction that breaks the strong RSA assumption. Suppose the extractor succeeds and computes (x, s, e, w, z, r_x, r_s, r_e, r_w, r_z, r) such that C = (C_v)^e h^r and C = a^x b^s c g^z h^r, and C_z = g^{ew} h^{r_z}. Note that this implies that C_v^e = a^x b^s c g^z. Let v = C_v / g^w. Note that v^e = C_v^e / g^{ew} = C_v^e / g^z = a^x b^s c. Because we also know that x is of length ℓ_m, s is of length ℓ_s, and e is of length ℓ_e, we are done: we output (s, e, v), which is a signature on the message x.


Common Inputs: Public key (n, a, b, c) of the signature scheme, with parameters ℓ_m, ℓ_s, ℓ_e. Public key (g, h) of a commitment scheme modulo n. Commitment public key (n_C, g_C, h_C), commitment C_x.
User's Input: Values (x, r_x) such that C_x = g_C^x h_C^{r_x} mod n_C, where x is of length ℓ_m and r_x is of length ℓ_n. Values (s, e, v), a valid signature on the message x.
U ←→ V: Choose, uniformly at random, values w, r_s, r_e, r_w, r_z and r of length ℓ_n. Compute the following quantities: C_s = g^s h^{r_s} mod n, C_e = g^e h^{r_e} mod n, C_v = v g^w mod n, C_w = g^w h^{r_w} mod n, z = ew, C_z = g^z h^{r_z} mod n, C = (C_v)^e h^r mod n. It is important to note that C = (C_v)^e h^r = v^e g^{we} h^r = (a^x b^s c) g^z h^r mod n. Send the values (C_s, C_e, C_v, C_w, C_z, C) to the verifier and carry out the following zero-knowledge proofs of knowledge:
1. Show that C is a commitment in bases (C_v, h) to the value committed to in the commitment C_e: PK{(ε, ρ, ρ_e) : C = (C_v)^ε h^ρ ∧ C_e = g^ε h^{ρ_e}}
2. Show that C/c is also a commitment, in bases (a, b, g, h), to the values committed to by the commitments C_x, C_s, C_z, and that the power of h here is the same as in C: PK{(ξ, σ, ζ, ε, ρ_x, ρ_s, ρ_z, ρ) : C/c = a^ξ b^σ g^ζ h^ρ ∧ C_x = g^ξ h^{ρ_x} ∧ C_s = g^σ h^{ρ_s} ∧ C_z = g^ζ h^{ρ_z} ∧ C = (C_v)^ε h^ρ}
3. Show that C_z is a commitment to the product of the values committed to by C_w and C_e: PK{(ζ, ω, ε, ρ_z, ρ_w, ρ_e, ρ') : C_z = g^ζ h^{ρ_z} ∧ C_w = g^ω h^{ρ_w} ∧ C_e = g^ε h^{ρ_e} ∧ C_z = (C_w)^ε h^{ρ'}}
4. Show that C_x is a commitment to an integer of length ℓ_m, and that C_e is a commitment to an integer in the interval (2^{ℓ_e−1}, 2^{ℓ_e}).

Fig. 2. Proof of Knowledge of a Signature
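The algebraic identity that the subproofs of Figure 2 tie together, C = (C_v)^e h^r = v^e g^{we} h^r = (a^x b^s c) g^z h^r with z = ew, can be checked numerically. A toy sketch (the ZK subprotocols themselves are omitted; all parameters are illustrative assumptions):

```python
# Checking the blinding identity underlying Figure 2. Toy parameters.
import random

pp, qq = 5, 11
n = (2 * pp + 1) * (2 * qq + 1)   # 253
order = pp * qq                   # |QR_n| = 55

def qr(x):
    return pow(x, 2, n)

a, b, c, g, h = qr(2), qr(3), qr(5), qr(6), qr(7)

# a valid signature (s, e, v) on x: v^e = a^x b^s c (mod n)
x, s, e = 9, 21, 59
v = pow(pow(a, x, n) * pow(b, s, n) * c % n, pow(e, -1, order), n)

# prover's blinding values
w, r = random.randrange(2 ** 8), random.randrange(2 ** 8)
Cv = v * pow(g, w, n) % n         # C_v = v g^w
z = e * w
C = pow(Cv, e, n) * pow(h, r, n) % n   # C = (C_v)^e h^r

# the identity the subproofs establish: C = (a^x b^s c) g^z h^r (mod n)
rhs = pow(a, x, n) * pow(b, s, n) * c * pow(g, z, n) * pow(h, r, n) % n
assert C == rhs
```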

6.3

Protocols for Signatures on Blocks of Messages

We note that straightforward modifications of the protocols above give us protocols for signing blocks of committed values, and for proving knowledge of a signature on blocks of committed values. We also note that any protocol for proving relations among the components of a discrete-logarithm representation of a group element [11,8,5] can be used to demonstrate relations among the components of a signed block of messages. We highlight this point by showing a protocol that allows an off-line double-spending test.


In order to enable an off-line double-spending test, a credential is a signature on a tuple (SK, N_1, N_2), and in the spending protocol (i.e., the credential showing) the user reveals the value b = a·SK + N_1 mod q, for some prime number q, together with N_2, for a value a ∈ Z_q chosen by the verifier. The user must then prove that the revealed values correspond to the signed block of messages (SK, N_1, N_2). Here, we describe a mechanism that, combined with our signature scheme for blocks of messages, achieves this. We note that these types of techniques are similar to those due to Brands [5]. We assume that we are given, as public parameters, values (p = 2q + 1, g_p), where p and q are primes and g_p ∈ Z*_p has order q. We require that q > max(2^{ℓ_m}, 2^{ℓ_N}), where ℓ_N is the length of the nonce N_1. We also assume that we are given another commitment public key (n_C, g_C, h_C), the same as in the rest of this chapter. In order to prove that the values (b, N_2) correspond to the signature on his secret key committed to in the commitment C_SK, the user forms commitments C_1 to N_1 and C_2 to N_2. He then shows that he has a signature on the block of values committed to by this block of commitments, and that the value committed to in C_1 is of length ℓ_N. He reveals N_2 and shows that C_2 is a valid commitment to N_2. He also carries out the following proof-of-knowledge protocol:

PK{(α, β, ρ_α, ρ_β) : g_p^b = (g_p^a)^α g_p^β ∧ C_SK = g_C^α h_C^{ρ_α} ∧ C_1 = g_C^β h_C^{ρ_β}} .

Lemma 9. The protocol described above is a proof of knowledge of a block of values (SK, N_1) such that (SK, N_1, N_2) is signed, C_SK is a commitment to SK, and b = a·SK + N_1 mod q.

Proof. (Sketch) The completeness and zero-knowledge properties follow in the usual way. We must now address the question of extraction. The fact that we can extract (SK, N_1, N_2) and a signature on them follows as for the protocol in Figure 2. The only thing to worry about is that b = a·SK + N_1 mod q. Using a knowledge extractor for the proofs of knowledge of equal representations, we extract values α and β of lengths ℓ_m and ℓ_N such that log_{g_p}(g_p^b) = b ≡ aα + β (mod q) (as the order of g_p is q), where α is the value committed to in C_SK and β is the value committed to in C_1. Under the strong RSA assumption, it follows that α = SK and β = N_1, and so b = a·SK + N_1 mod q, as we require.
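Why revealing b = a·SK + N_1 mod q enables the double-spending test: two showings of the same credential answer two different challenges a with the same (SK, N_1), and the two resulting linear equations leak SK. A minimal sketch (the toy prime q and the concrete values are assumptions for illustration):

```python
# Off-line double-spending detection from b = a*SK + N_1 (mod q):
# two answers under different challenges determine SK. Toy prime assumed.
q = 2 ** 13 - 1          # toy prime modulus (8191)
SK, N1 = 1234, 567       # user's secret key and spend-specific nonce

def spend(a):
    # value revealed during a credential showing
    return (a * SK + N1) % q

a1, a2 = 42, 99                        # the verifiers' random challenges
b1, b2 = spend(a1), spend(a2)
# detection: solve the two linear equations for SK (Python 3.8+ modular inverse)
recovered = (b1 - b2) * pow(a1 - a2, -1, q) % q
assert recovered == SK
```

A user who spends each credential only once answers only one challenge per (N_1, N_2), so no such pair of equations ever exists.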

References

1. G. Ateniese, J. Camenisch, M. Joye, and G. Tsudik. A practical and provably secure coalition-resistant group signature scheme. In M. Bellare, editor, Advances in Cryptology — CRYPTO 2000, volume 1880 of Lecture Notes in Computer Science, pages 255–270. Springer Verlag, 2000.
2. N. Barić and B. Pfitzmann. Collision-free accumulators and fail-stop signature schemes without trees. In W. Fumy, editor, Advances in Cryptology — EUROCRYPT '97, volume 1233 of Lecture Notes in Computer Science, pages 480–494. Springer Verlag, 1997.
3. M. Bellare and O. Goldreich. On defining proofs of knowledge. In E. F. Brickell, editor, Advances in Cryptology — CRYPTO '92, volume 740 of Lecture Notes in Computer Science, pages 390–420. Springer-Verlag, 1992.
4. F. Boudot. Efficient proofs that a committed number lies in an interval. In B. Preneel, editor, Advances in Cryptology — EUROCRYPT 2000, volume 1807 of Lecture Notes in Computer Science, pages 431–444. Springer Verlag, 2000.
5. S. Brands. Rethinking Public Key Infrastructure and Digital Certificates — Building in Privacy. PhD thesis, Eindhoven Institute of Technology, Eindhoven, The Netherlands, 1999.
6. G. Brassard, D. Chaum, and C. Crépeau. Minimum disclosure proofs of knowledge. Journal of Computer and System Sciences, 37(2):156–189, Oct. 1988.
7. J. Camenisch and A. Lysyanskaya. Efficient non-transferable anonymous multi-show credential system with optional anonymity revocation. In B. Pfitzmann, editor, Advances in Cryptology — EUROCRYPT 2001, volume 2045 of Lecture Notes in Computer Science, pages 93–118. Springer Verlag, 2001.
8. J. Camenisch and M. Michels. Proving in zero-knowledge that a number n is the product of two safe primes. In J. Stern, editor, Advances in Cryptology — EUROCRYPT '99, volume 1592 of Lecture Notes in Computer Science, pages 107–122. Springer Verlag, 1999.
9. J. Camenisch and M. Michels. Separability and efficiency for generic group signature schemes. In M. Wiener, editor, Advances in Cryptology — CRYPTO '99, volume 1666 of Lecture Notes in Computer Science, pages 413–430. Springer Verlag, 1999.
10. J. Camenisch and M. Stadler. Efficient group signature schemes for large groups. In B. Kaliski, editor, Advances in Cryptology — CRYPTO '97, volume 1294 of Lecture Notes in Computer Science, pages 410–424. Springer Verlag, 1997.
11. J. L. Camenisch. Group Signature Schemes and Payment Systems Based on the Discrete Logarithm Problem. PhD thesis, ETH Zürich, 1998. Diss. ETH No. 12520, Hartung Gorre Verlag, Konstanz.
12. R. Canetti, O. Goldreich, and S. Halevi. The random oracle methodology, revisited. In Proc. 30th Annual ACM Symposium on Theory of Computing (STOC), pages 209–218, 1998.
13. D. Chaum. Security without identification: Transaction systems to make big brother obsolete. Communications of the ACM, 28(10):1030–1044, Oct. 1985.
14. D. Chaum and J.-H. Evertse. A secure and privacy-protecting protocol for transmitting personal information between organizations. In A. M. Odlyzko, editor, Advances in Cryptology — CRYPTO '86, volume 263 of Lecture Notes in Computer Science, pages 118–167. Springer-Verlag, 1987.
15. L. Chen. Access with pseudonyms. In E. Dawson and J. Golić, editors, Cryptography: Policy and Algorithms, volume 1029 of Lecture Notes in Computer Science, pages 232–243. Springer Verlag, 1995.
16. R. Cramer and V. Shoup. Signature schemes based on the strong RSA assumption. In Proc. 6th ACM Conference on Computer and Communications Security, pages 46–52. ACM Press, Nov. 1999.
17. I. Damgård. Efficient concurrent zero-knowledge in the auxiliary string model. In B. Preneel, editor, Advances in Cryptology — EUROCRYPT 2000, volume 1807 of Lecture Notes in Computer Science, pages 431–444. Springer Verlag, 2000.
18. I. Damgård and E. Fujisaki. An integer commitment scheme based on groups with hidden order. http://eprint.iacr.org/2001, 2001.
19. I. B. Damgård. Payment systems and credential mechanism with provable security against abuse by individuals. In S. Goldwasser, editor, Advances in Cryptology — CRYPTO '88, volume 403 of Lecture Notes in Computer Science, pages 328–335. Springer Verlag, 1990.
20. W. Diffie and M. E. Hellman. New directions in cryptography. IEEE Trans. on Information Theory, IT-22(6):644–654, Nov. 1976.
21. A. Fiat and A. Shamir. How to prove yourself: Practical solutions to identification and signature problems. In A. M. Odlyzko, editor, Advances in Cryptology — CRYPTO '86, volume 263 of Lecture Notes in Computer Science, pages 186–194. Springer Verlag, 1987.
22. E. Fujisaki and T. Okamoto. Statistical zero knowledge protocols to prove modular polynomial relations. In B. Kaliski, editor, Advances in Cryptology — CRYPTO '97, volume 1294 of Lecture Notes in Computer Science, pages 16–30. Springer Verlag, 1997.
23. E. Fujisaki and T. Okamoto. A practical and provably secure scheme for publicly verifiable secret sharing and its applications. In K. Nyberg, editor, Advances in Cryptology — EUROCRYPT '98, volume 1403 of Lecture Notes in Computer Science, pages 32–46. Springer Verlag, 1998.
24. R. Gennaro, S. Halevi, and T. Rabin. Secure hash-and-sign signatures without the random oracle. In J. Stern, editor, Advances in Cryptology — EUROCRYPT '99, volume 1592 of Lecture Notes in Computer Science, pages 123–139. Springer Verlag, 1999.
25. S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof systems. SIAM Journal on Computing, 18(1):186–208, Feb. 1989.
26. S. Goldwasser, S. Micali, and R. Rivest. A digital signature scheme secure against adaptive chosen-message attacks. SIAM Journal on Computing, 17(2):281–308, Apr. 1988.
27. A. Lysyanskaya, R. Rivest, A. Sahai, and S. Wolf. Pseudonym systems. In H. Heys and C. Adams, editors, Selected Areas in Cryptography, volume 1758 of Lecture Notes in Computer Science. Springer Verlag, 1999.
28. S. Micali. 6.875: Introduction to cryptography. MIT course taught in Fall 1997.
29. G. L. Miller. Riemann's hypothesis and tests for primality. Journal of Computer and System Sciences, 13:300–317, 1976.
30. M. Naor and M. Yung. Universal one-way hash functions and their cryptographic applications. In Proc. 21st Annual ACM Symposium on Theory of Computing (STOC), pages 33–43, Seattle, Washington, May 1989. ACM.
31. M. O. Rabin. Probabilistic algorithm for testing primality. Journal of Number Theory, 12:128–138, 1980.
32. R. L. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2):120–126, Feb. 1978.
33. J. Rompel. One-way functions are necessary and sufficient for secure signatures. In Proc. 22nd Annual ACM Symposium on Theory of Computing (STOC), pages 387–394, Baltimore, Maryland, 1990. ACM.
34. C. P. Schnorr. Efficient signature generation by smart cards. Journal of Cryptology, 4(3):239–252, 1991.
35. A. Shamir. On the generation of cryptographically strong pseudorandom sequences. ACM Transactions on Computer Systems, 1(1):38–44, 1983.

A On the Notation and Definitions Used

A.1 Notation

Let A be an algorithm. By A(·) we denote that A has one input (resp., by A(·, . . . , ·) we denote that A has several inputs). By A_(·) we denote that A is an indexed family of algorithms. y ← A(x) denotes that y was obtained by running A on input x. If A is deterministic, then this y is unique; if A is probabilistic, then y is a random variable. If S is a set, then y ← S denotes that y was chosen from S uniformly at random. Let A and B be interactive Turing machines. By (a ← A(·) ↔ B(·) → b) we denote that a and b are random variables that correspond to the outputs of A and B as a result of their joint computation. Let b be a boolean function. The notation (y ← A(x) : b(y)) denotes the event that b(y) is true after y was generated by running A on input x. Finally, the statement Pr[{x_i ← A_i(y_i)}_{1≤i≤n} : b(x_n)] = α means that the probability that b(x_n) is TRUE after the value x_n was obtained by running algorithms A_1, . . . , A_n on inputs y_1, . . . , y_n is α. (The above notation is from the cryptography course taught by Silvio Micali at MIT [28].)

By A^(·)(·) we denote a Turing machine that makes oracle queries; i.e., this machine has an additional (read/write-once) query tape, on which it writes its queries in binary; once it is done writing a query, it inserts a special symbol "#". By external means, once the symbol "#" appears on the query tape, an oracle is invoked and its answer appears on the query tape adjacent to the "#" symbol. By Q = Q(A^O(x)) ← A^O(x) we denote the contents of the query tape once A terminates, with oracle O and input x. By (q, a) ∈ Q we denote the event that q was a query issued by A and a was the answer received from oracle O. By A^{1(·)}(·) we denote a Turing machine that makes just one oracle query. Analogously, by A^{ℓ(·)}(·) we denote a Turing machine that makes ℓ oracle queries.

By A→B we denote that algorithm A has black-box access to algorithm B, i.e., it can invoke B with an arbitrary input and reset B's state. We say that ν(k) is a negligible function if, for all polynomials p(k), for all sufficiently large k, ν(k) < 1/p(k).

A.2 Signature Scheme

First, we cite the definition of a secure signature scheme.

Definition 3 (Signature scheme [26]). Probabilistic polynomial-time algorithms (G(·), Sign_(·)(·), Verify_(·)(·, ·)), where G is the key generation algorithm,


Sign is the signature algorithm, and Verify the verification algorithm, constitute a digital signature scheme for an efficiently samplable (in the length of its index) message space M_(·) if the following hold:

Correctness: If a message m is in the message space for a given PK, and SK is the corresponding secret key, then the output of Sign_SK(m) will always be accepted by the verification algorithm Verify_PK. More formally, for all values m:

Pr[(PK, SK) ← G(1^k); σ ← Sign_SK(m) : m ∈ M_PK ∧ ¬Verify_PK(m, σ)] = 0

Security: Even if an adversary has oracle access to the signing algorithm, which provides signatures on messages of the adversary's choice, the adversary cannot create a valid signature on a message not explicitly queried. More formally, for all families of probabilistic polynomial-time oracle Turing machines {A_k^(·)}, there exists a negligible function ν(k) such that

Pr[(PK, SK) ← G(1^k); (Q, m, σ) ← A_k^{Sign_SK(·)}(1^k) : Verify_PK(m, σ) = 1 ∧ ¬(∃σ' such that (m, σ') ∈ Q)] = ν(k)

A.3 Proof of Knowledge

We also give a definition of a zero-knowledge black-box proof-of-knowledge protocol. Zero-knowledge proofs of knowledge were introduced independently by Brassard, Chaum, and Crépeau [6] and by Goldwasser, Micali, and Rackoff [25], and further refined by Bellare and Goldreich [3]. The definition we give here is both stronger and simpler than some other, more general definitions, for example the one due to Bellare and Goldreich. As our main goal here is to exhibit protocols, this is good enough, because our protocols satisfy this stronger definition. In particular, our knowledge-extraction property implies very strong soundness, and so some protocols that the literature considers to be proofs of knowledge would not be so under our definition. Let x be an input, and let R be a polynomially computable relation. A zero-knowledge proof of knowledge of a witness w such that R(x, w) = 1 is a probabilistic polynomial-time protocol between a prover P and a verifier V such that there exist an expected polynomial-time simulator S and a polynomial-time extractor E for which the following hold:

Completeness: For all (x, w) ∈ R, Pr[P(1^k, x, w) ↔ V(1^k, x) → b : b = 1] = 1

Black-Box Zero-Knowledge: For all (x, w) ∈ R, for all polynomial-length auxiliary inputs a, for all probabilistic polynomial-time families of Turing machines {V_k}, there exists a negligible function ν(k) such that


| Pr[P(1^k, x, w) ↔ V_k(a, x) → b : b = 1] − Pr[S(1^k, x) → V_k(a, x) → b : b = 1] | = ν(k)

Black-Box Extraction: For all x, for all {P_k}, for all auxiliary inputs a: if for some function ε(k), Pr[P_k(a, x) ↔ V(1^k, x) → b : b = 1] ≥ ε(k), then there exist a constant c and a negligible function ν(k) such that Pr[P_k(a, x) ← E(1^k, x) → w : R(x, w) = 1] ≥ cε(k) − ν(k). In other words, if the prover convinces the verifier with noticeable probability, then the extractor computes the witness almost as often. Note that, as the relation R is polynomial-time computable, the extractor can always test whether its output satisfies (x, w) ∈ R. Let E' denote the extractor that runs E with black-box access to P_k until w is computed.

Definition 4 (Proof of Knowledge). A protocol that satisfies the above description for relation R is called a proof of knowledge protocol for relation R.

B  Number-Theoretic Crash Course

Definition 5 (RSA Modulus). A 2k-bit number n is called an RSA modulus if n = pq, where p and q are k-bit prime numbers.

Definition 6 (Euler Totient Function). Let n be an integer. The Euler totient function φ(n) measures the cardinality of the group Z*_n.

Fact 1. If n = pq is an RSA modulus, then φ(n) = (p − 1)(q − 1).

Notation: We say that an RSA modulus n = pq is chosen uniformly at random with security k if p and q are random k-bit primes. Let RSAmodulus denote the algorithm that, on input 1^k, outputs a random RSA modulus with security k. Let RSApk(1^k) denote the algorithm that, on input 1^k, chooses two random k-bit primes, p and q, and outputs n = pq and a random e ∈ Z*_n such that gcd(e, φ(n)) = 1. Such a pair (n, e) is also called an RSA public key; the corresponding secret key is the value d ∈ Z_φ(n) such that ed ≡ 1 (mod φ(n)).

Assumption 2 (RSA Assumption). The RSA assumption is that, given an RSA public key (n, e) and a random element u ∈ Z*_n, it is hard to compute the value v such that v^e ≡ u (mod n). More formally, we assume that for all polynomial-time circuit families {A_k}, there exists a negligible function ν(k) such that

Pr[(n, e) ← RSApk(1^k); u ← Z*_n; v ← A_k(n, e, u) : v^e ≡ u (mod n)] = ν(k).

The strong RSA assumption was introduced by Barić and Pfitzmann [2] and by Fujisaki and Okamoto [22].
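A toy sketch of the RSApk key-generation algorithm in Python may help fix the notation. This is illustrative only: the Miller-Rabin test, bit lengths, and the choice of e are simplifications, and real RSA key generation has many more requirements.

```python
import random
from math import gcd

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(k):
    """A random k-bit (probable) prime."""
    while True:
        p = random.getrandbits(k) | (1 << (k - 1)) | 1
        if is_probable_prime(p):
            return p

def rsa_pk(k):
    """Sketch of RSApk(1^k): output n = pq, e, and d with ed = 1 mod phi(n)."""
    p = random_prime(k)
    q = random_prime(k)
    while q == p:
        q = random_prime(k)
    phi = (p - 1) * (q - 1)
    while True:
        e = random.randrange(3, phi)
        if gcd(e, phi) == 1:
            break
    return p * q, e, pow(e, -1, phi)
```

The RSA assumption says that computing the e-th root v of u modulo n is hard given only (n, e); with the secret d, of course, v = u^d mod n.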

A Signature Scheme with Efficient Protocols


Assumption 3 (Strong RSA Assumption). The strong RSA assumption is that it is hard, on input an RSA modulus n and an element u ∈ Z*_n, to compute values e > 1 and v such that v^e ≡ u (mod n). More formally, we assume that for all polynomial-time circuit families {A_k}, there exists a negligible function ν(k) such that

Pr[n ← RSAmodulus(1^k); u ← Z*_n; (v, e) ← A_k(n, u) : e > 1 ∧ v^e ≡ u (mod n)] = ν(k).

The tuple (n, u), generated as above, is called a general instance of the flexible RSA problem.

Notation: By QR_n ⊆ Z*_n we denote the set of quadratic residues modulo n, i.e., elements a ∈ Z*_n such that there exists b ∈ Z*_n with b^2 ≡ a (mod n). If (n, u) is a general instance of the flexible RSA problem and u ∈ QR_n, then (n, u) is a quadratic instance of the flexible RSA problem. Note that, as one quarter of all the elements of Z*_n are quadratic residues, the strong RSA assumption implies that it is hard to solve quadratic instances of the flexible RSA problem. In the sequel, by an instance of the flexible RSA problem we will mean a quadratic, rather than a general, instance.

Corollary 1. Under the strong RSA assumption, it is hard, on input a flexible RSA instance (n, u), to compute integers e > 1 and v such that v^e ≡ u (mod n).

Definition 7 (Safe Primes). A prime number p is called safe if p = 2p′ + 1, where p′ is also a prime number. (The corresponding number p′ is known as a Sophie Germain prime.)

Definition 8 (Special RSA Modulus). An RSA modulus n = pq is special if p = 2p′ + 1 and q = 2q′ + 1 are safe primes.

Notation: We say that a special RSA modulus n = pq was chosen at random with security k if p and q are random k-bit safe primes.

Theorem 3 (Chinese Remainder Theorem). If n = pq and gcd(p, q) = 1, then Φ : Z_n → Z_p × Z_q, Φ(x) = (x mod p, x mod q), is an efficiently computable and invertible ring isomorphism.

Lemma 10.
If n = pq, with p = 2p′ + 1 and q = 2q′ + 1, is a special RSA modulus, then QR_n is a cyclic group under multiplication of size p′q′, in which all but p′ + q′ of the elements are generators.

Corollary 2. If (n, u) is an instance of the flexible RSA problem and n is a special RSA modulus, we may assume that u is a generator of QR_n.
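Lemma 10 can be sanity-checked on a toy special RSA modulus; the safe primes 23 = 2·11 + 1 and 47 = 2·23 + 1 used below are illustrative choices, far too small for cryptography.

```python
from math import gcd

# Toy special RSA modulus: p' = 11 and q' = 23 are Sophie Germain primes,
# p = 23 and q = 47 the corresponding safe primes.
p1, q1 = 11, 23
p, q = 2 * p1 + 1, 2 * q1 + 1
n = p * q                                   # n = 1081

qr = {pow(a, 2, n) for a in range(1, n) if gcd(a, n) == 1}
assert len(qr) == p1 * q1                   # |QR_n| = p'q'

def order(a):
    """Multiplicative order of a in QR_n (always a divisor of p'q')."""
    for d in (1, p1, q1, p1 * q1):          # divisors of p'q'
        if pow(a, d, n) == 1:
            return d

orders = [order(a) for a in qr]
assert max(orders) == p1 * q1               # QR_n is cyclic of order p'q'
non_gens = sum(1 for o in orders if o < p1 * q1)
assert non_gens <= p1 + q1                  # all but (at most) p' + q' generate
```

Because almost every element of QR_n generates the whole group, a random quadratic residue u is a generator with overwhelming probability, which is what Corollary 2 exploits.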


Common Inputs: The public key (n, a, b, c) of the signature scheme with parameters ℓ_m, ℓ_s, ℓ_e; the public key (g, h) of a commitment scheme modulo n; the commitment public key (n_C, g_C, h_C); a commitment C_x.
User's Input: Values (x, r_x) such that C_x = g_C^x h_C^{r_x} mod n_C, where x is of length ℓ_m and r_x is of length ℓ_n; values (s, e, v), a valid signature on message x.
U ←→ V: Choose, uniformly at random, values w and r_w of length ℓ_n. Compute the quantities C_v = v·g^w mod n and C_w = g^w h^{r_w}. Send the values (C_v, C_w) to the verifier and carry out the following zero-knowledge proof of knowledge:

PK{(α, β, γ, δ, ξ, σ, ν, φ) : c = C_v^δ (1/a)^ξ (1/b)^σ (1/g)^φ ∧ C_w = g^ν h^α ∧ 1 = C_w^δ (1/g)^φ (1/h)^β ∧ C_x = g^ξ h^γ ∧ 2^{ℓ_e − 1} < δ < 2^{ℓ_e} ∧ 2^{ℓ_m − 1} < ξ < 2^{ℓ_m}}

Fig. 3. Proof of Knowledge of a Signature

The following lemma originates from the analysis of the primality test due to Miller [29] and Rabin [31].

Lemma 11. Let a composite integer n be given. Given any value x such that φ(n) | x, one can find a non-trivial divisor of n in probabilistic polynomial time.

Corollary 3. Let n be an integer, and let e with gcd(e, φ(n)) = 1 and u ∈ Z*_n be given. Given any value x such that φ(n) | x, one can efficiently compute v such that v^e ≡ u (mod n).

For special RSA moduli, it is also true that given x > 4 such that x | φ(n), one can factor n. The following lemma and proof are straightforward:

Lemma 12. Let n = pq be a special RSA modulus, i.e., p = 2p′ + 1 and q = 2q′ + 1, where p′ and q′ are primes. Suppose a value x > 4 is given such that x | φ(n) = 4p′q′. Then it is possible to efficiently factor n.

Proof. Suppose x = 2^I p′, for I ∈ {0, 1, 2}. Then factoring n is immediate. The more difficult case is x = 2^I p′q′. This immediately gives us φ(n). By Lemma 11, we are done.

Corollary 4. Given a special RSA modulus n and an integer x such that gcd(φ(n), x) > 4, one can efficiently factor n.
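The procedure behind Lemma 11 can be sketched in Python. The idea is the standard Miller-Rabin-style one: given x with φ(n) | x, a random base a leads with good probability to a non-trivial square root of 1 modulo n, whose gcd with n reveals a factor (details simplified for illustration).

```python
import random
from math import gcd

def factor_from_phi_multiple(n, x):
    """Lemma 11 sketch: given composite n = pq and any x with phi(n) | x,
    find a non-trivial divisor of n in probabilistic polynomial time."""
    d, s = x, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1          # write x = 2^s * d with d odd
    while True:
        a = random.randrange(2, n - 1)
        g = gcd(a, n)
        if g > 1:
            return g                  # a already shares a factor with n
        b = pow(a, d, n)
        if b in (1, n - 1):
            continue                  # this base reveals nothing; retry
        for _ in range(s):
            c = pow(b, 2, n)
            if c == 1:
                # b is a non-trivial square root of 1 mod n:
                # gcd(b - 1, n) is then a proper factor
                return gcd(b - 1, n)
            if c == n - 1:
                break                 # chain ends harmlessly; retry
            b = c
```

Since a^x ≡ 1 (mod n) for every unit a, the squaring chain must reach 1, and each random base succeeds with probability at least 1/2.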


The following lemma is due to Shamir [35]:

Lemma 13. Let an integer n be given. Suppose that we are given values u, v ∈ Z*_n and x, y ∈ Z with gcd(x, y) = 1 such that v^x ≡ u^y (mod n). Then there is an efficient procedure to compute a value z such that z^x ≡ u (mod n).

The following lemma is a generalization of Lemma 13, in that we are not restricted to the case gcd(x, y) = 1. Note, however, that this generalization applies only to special RSA moduli.

Lemma 14. Let a special RSA modulus n = pq, with p = 2p′ + 1 and q = 2q′ + 1, be given. Suppose we are given values u, v ∈ QR_n and x, y ∈ Z with gcd(x, y) < x such that v^x ≡ u^y (mod n). Then values z, w > 1 such that z^w ≡ u (mod n) can be computed efficiently.

Proof. Let c = gcd(x, y). If gcd(4c, φ(n)) > 4, then by Corollary 4 we can factor n. Otherwise, it must be the case that gcd(c, p′q′) = 1. Therefore, there exists a value d ∈ Z*_{p′q′} such that cd ≡ 1 (mod p′q′). By the extended GCD algorithm, find integers a and b such that ax + by = c. Note that

v^{x/c} ≡ (v^x)^d ≡ (u^y)^d ≡ u^{y/c} (mod n).

Then, by Lemma 13, we find an element z such that z^{x/c} ≡ u (mod n), and we obtain w = x/c and z.
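Lemma 13 has a short constructive proof: with integers a, b such that ax + by = 1, the value z = u^a v^b satisfies z^x = u^{ax} (v^x)^b = u^{ax} (u^y)^b = u^{ax+by} = u. A Python sketch (the test instance below is fabricated for illustration):

```python
def egcd(x, y):
    """Extended gcd: returns (g, a, b) with a*x + b*y = g = gcd(x, y)."""
    if y == 0:
        return x, 1, 0
    g, a, b = egcd(y, x % y)
    return g, b, a - (x // y) * b

def shamir_trick(u, v, x, y, n):
    """Lemma 13: given v^x = u^y (mod n) with gcd(x, y) = 1,
    return z with z^x = u (mod n)."""
    assert pow(v, x, n) == pow(u, y, n)
    g, a, b = egcd(x, y)
    assert g == 1
    def mpow(base, e):               # modular power allowing negative e
        return pow(base, e, n) if e >= 0 else pow(pow(base, -1, n), -e, n)
    # z = u^a * v^b; then z^x = u^{ax} (v^x)^b = u^{ax + by} = u (mod n)
    return (mpow(u, a) * mpow(v, b)) % n
```

A fabricated instance: pick any w and coprime x, y, and set u = w^x and v = w^y, so that v^x ≡ w^{xy} ≡ u^y holds by construction.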

C  A More Efficient Proof of Knowledge of a Signature

Lemma 15. The protocol in Figure 3 is a zero-knowledge proof of knowledge of values (x, r_x, s, e, v) such that C_x = g_C^x h_C^{r_x} mod n_C and Verify_PK(x, s, e, v) accepts. The proof of this lemma is similar to that of Lemma 8.

Efficient Zero-Knowledge Proofs for Some Practical Graph Problems

Yvo Desmedt 1,2 and Yongge Wang 3

1 Computer Science, Florida State University, Tallahassee, FL 32306-4530, USA, [email protected]
2 Dept. of Mathematics, Royal Holloway, University of London, UK
3 Department of Software and Information Systems, University of North Carolina at Charlotte, 9201 University City Blvd, Charlotte, NC 28223, USA, [email protected]

Abstract. From a cryptographic point of view, the zero-knowledge protocols for graph isomorphism, graph non-isomorphism, and graph coloring address artificial problems, although they have received a lot of attention. Due to recent work on network security in a broadcast setting, it seems important to design efficient zero-knowledge protocols for the following graph problems: the independent set problem, the neighbor independent set problem, and the disjoint broadcast lines problem. In this paper we introduce the new concept of the k-independent set problem, which generalizes both the independent set and the neighbor independent set problems, and we present efficient zero-knowledge protocols for these problems. At the end of the paper we give some cryptographic applications of k-independent sets; in particular, we point out applications to the concept of "threshold" and to appropriate access structures. Note that k-independent sets also have applications outside cryptography, e.g., in biology, the methodology of scientific research, and ethics, which are beyond the scope of this paper.

Keywords: Zero-knowledge, graph theory, secret sharing, key-escrow, complexity.

1  Introduction

The notion of zero-knowledge proof systems was introduced in the seminal paper of Goldwasser, Micali, and Rackoff [14]. Since their introduction, zero-knowledge proofs have proven to be very useful as a building block in the construction of cryptographic protocols, especially after Goldreich, Micali, and Wigderson [12] showed that all languages in NP have zero-knowledge proofs assuming the existence of secure encryption functions. The zero-knowledge protocols [12] for general NP problems are extremely complicated. Due to their importance, the efficiency of zero-knowledge protocols has received considerable attention. Several 

This author is partially supported by DARPA F30602-97-1-0205.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 290–302, 2003. c Springer-Verlag Berlin Heidelberg 2003 


efficient zero-knowledge protocols for some graph problems, such as graph isomorphism, graph non-isomorphism, and graph coloring, have been introduced in the literature. However, from a cryptographic point of view these graph problems are rather artificial. The study of secure communication and secure computation in multi-recipient (broadcast) models was initiated by Goldreich, Goldwasser, and Linial [11], Franklin and Yung [9], and Franklin and Wright [8]. A "broadcast channel" (such as Ethernet) enables one participant to send the same message, simultaneously and privately, to a fixed subset of participants. Franklin and Yung [9], Franklin and Wright [8], and Wang and Desmedt [4,19] have used hypergraph theory to build efficient, reliable, and private protocols for broadcast communications. Most of their results concern the interplay of network connectivity and secure communication. Their results also show that if broadcast channels are used in a communication network, then better reliability and privacy can be achieved. For example, Dolev [5] and Dolev et al. [6] showed that, in the case of k Byzantine faults, reliable communication is achievable only if the system's network is 2k + 1 connected. Hadzilacos [16] has shown that, even in the absence of malicious failures, connectivity k + 1 is required to achieve reliable communication in the presence of k faulty participants. Franklin and Wright [8] and Wang and Desmedt [19] have shown that if broadcast channels are used, then in the case of k Byzantine faults, probabilistic reliable communication is achievable even if the system's network is only strongly k + 1 connected. This recent research on broadcast channels poses the challenge of a complete study of the following kind of problem: when constructing a (hyper)graph, one may want to prove, without revealing the trapdoor, that the graph has certain properties. For example: what is the size of the maximum independent set? What is the size of the maximum neighborhood independent set? And so on.

In this paper we design efficient zero-knowledge protocols for the k-independent set problem and for the strong connectivity problem. These problems are important when designing protocols for broadcast channels. The main reason why we need neighborhood independent sets and strong connectivity in broadcast channels is as follows: a vertex broadcast channel means that if a vertex broadcasts some message, then all its neighbors (or the receivers who share the same frequency) will get this message with privacy and with authenticity. Privacy means that vertices which are not neighbors of the sender learn nothing about the message. Authenticity means that if a receiver gets some message, it knows who broadcast that message. It is clear that if two paths between two vertices are neighborhood disjoint, then in order to intercept the message on both lines the adversary has to control two vertices; one is not sufficient. However, if the two paths have a common neighbor v, then the adversary can control the vertex v and hear everything communicated on the two paths. Sometimes efficient zero-knowledge interactive proof systems are needed for these problems. For example, when one designs a network and a protocol over it for broadcast communication, the designer may, for some reason, decline


to let all customers know all neighborhood disjoint paths between two vertices. But he has to prove to the customers that the protocol really works and is secure. Our efficient zero-knowledge interactive proof systems suffice for him to achieve this goal. At the end of the paper we give some cryptographic applications of k-independent sets. In particular, we point out applications to the concept of "threshold" and to appropriate access structures. Note that k-independent sets also have applications outside cryptography, e.g., in biology, the methodology of scientific research, and ethics, which are beyond the scope of this paper.

2  Notations and Preliminary Results

In this section we give basic notations and recall the notions of interactive proof systems and zero-knowledge proof systems in the two-party model. We also give the definitions of several problems in graph theory related to broadcast channels.

Interactive Protocols. Following [14], an interactive Turing machine is a Turing machine with a public input tape, a public communication tape, a private random tape, and a private work tape. An interactive protocol is a pair of interactive Turing machines sharing their public input tape and communication tape. The transcript of an execution of an interactive protocol (Peggy, Vic) is a sequence containing the random tape of Vic and all messages appearing on the communication tape of Peggy and Vic.

Interactive Proof Systems. An interactive proof system for a language L is an interactive protocol in which, on an input string x, a computationally unbounded prover Peggy convinces a polynomial-time bounded verifier Vic that x belongs to L. There are two requirements: completeness and soundness. Informally, completeness states that for any input x ∈ L, the prover convinces the verifier with very high probability. Soundness states that for any x ∉ L and any prover, the verifier is convinced with very small probability. A formal definition can be found in [14].

Zero-Knowledge Proof Systems in the Two-Party Model. A zero-knowledge proof system for a language L is an interactive proof system for L in which, for any x ∈ L and any possibly malicious probabilistic polynomial-time verifier Vic′, no information is revealed to Vic′ that it could not compute alone before running the protocol. This is formalized by requiring, for each Vic′, the existence of an efficient simulator S_{Vic′} which outputs a transcript "indistinguishable" from the view of Vic′ in the protocol. There exist three notions of zero-knowledge, according to the level of indistinguishability: computational, statistical, and perfect. The reader is referred to [14] for the definitions of computational, statistical, and perfect zero-knowledge proof systems. In this paper we will only deal with computational zero-knowledge proof systems.

Let G(V, E) be a graph with V = {v_1, . . . , v_n}. The adjacency matrix (a_{i,j})_{n×n} of G is defined as follows: a_{i,j} = 1 if (v_i, v_j) ∈ E and a_{i,j} = 0 otherwise. Note that the adjacency matrix of a graph is always symmetric. The following properties of graphs play an important role in the design of broadcast protocols (see [9,8,19]).

Efficient Zero-Knowledge Proofs for Some Practical Graph Problems

293

An independent set in a graph G(V, E) is a subset V′ of V such that no two vertices in V′ are joined by an edge in E. A vertex subset V′ ⊆ V of G is neighborhood independent if for any u, v ∈ V′ there is no w ∈ V such that both (u, w) and (v, w) are edges in E.

Independent Set Problem (see, e.g., [10]): Given a graph G(V, E) and an integer k, does there exist a size-k independent set in G?

Neighborhood Independent Set Problem (see, e.g., [19]): Given a graph G(V, E) and an integer k, does there exist a size-k neighborhood independent set in G?

The above two problems can be generalized to the k-independent set problem (a new concept which we introduce in this paper). Let u and v be two vertices of a graph G(V, E), and let u-v_1-v_2-···-v_{k−1}-v be a shortest path between u and v in G. Then we say that the distance between u and v is k. For two vertices which are not connected by any path, the distance between them is defined to be ∞. A vertex subset V′ ⊆ V of G is k-independent if for any u, v ∈ V′, the distance between u and v in G is at least k. Note that V′ is independent if and only if it is 2-independent, and V′ is neighborhood independent if and only if it is 3-independent.

k-Independent Set Problem: Given a graph G(V, E) and an integer k′, does there exist a size-k′ k-independent set in G?

Let A and B be two vertices in a graph G(V, E). We say that A and B are strongly k-connected if there are k neighborhood disjoint paths p_1, . . . , p_k between A and B, that is, for any i ≠ j (i, j ≤ k), p_i and p_j have no common neighbor (except A and B). In other words, for any vertex v ∈ V \ {A, B}, if there is a vertex u_1 on p_i such that (v, u_1) ∈ E, then there is no u_2 on p_j such that (v, u_2) ∈ E.

Strong Connectivity Problem (see, e.g., [19]): Given an integer k, a graph G(V, E), and two vertices A and B, are A and B strongly k-connected?
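The distance-based definitions above can be made concrete with a short BFS-based checker (an illustrative sketch; the adjacency-list representation is our own choice):

```python
from collections import deque

def distances(adj, s):
    """BFS distances from s in an undirected graph given as adjacency lists."""
    dist, dq = {s: 0}, deque([s])
    while dq:
        u = dq.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                dq.append(v)
    return dist

def is_k_independent(adj, subset, k):
    """True iff every pair of distinct vertices in subset is at distance >= k.
    Unreachable pairs are at distance infinity, hence always far enough."""
    vs = list(subset)
    for i, u in enumerate(vs):
        d = distances(adj, u)
        for v in vs[i + 1:]:
            if d.get(v, float("inf")) < k:
                return False
    return True

# Path graph 0-1-2-3-4: {0, 2, 4} is independent (2-independent),
# {0, 3} is neighborhood independent (3-independent), {0, 2} is not.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

On the path graph, 2-independence coincides with ordinary independence and 3-independence with neighborhood independence, exactly as the text states.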
These problems are extensively used in the design of efficient, reliable, and private protocols over broadcast channels (see, e.g., [8,9,19]). The following theorem shows that all of them are NP-complete. We first give a definition.

Definition 1. A vertex v in a graph G(V, E) is isolated if there is no edge adjacent to v, i.e., for all w ∈ V, (v, w) ∉ E.

Theorem 1. 1. For each k > 1, the k-independent set problem is NP-complete. 2. The strong connectivity problem is NP-complete.

Proof. It is straightforward that both problems are in NP. In the following we show that they are NP-hard.

1. For k = 2, this is the independent set problem. Hence it suffices to reduce the independent set problem to the k-independent set problem for k ≥ 3. The input G(V, E) to the independent set problem consists of a set of vertices V = {v_1, . . . , v_n} and a set of edges E. In the following we construct a graph


f(G) = GKI(V_G, E_G) such that there is an independent set of size k′ in G if and only if there is a k-independent set of size k′ in GKI. Without loss of generality, we may assume that G has no isolated vertices; otherwise we can consider them separately. Let V_G = V ∪ V′, where

V′ = {v_{i,j,t} : (v_i, v_j) ∈ E, i < j; t = 1, . . . , k − 2} ∪ {u_1, . . . , u_{k−2}}

and

E_G = {(v_i, v_{i,j,1}), (v_{i,j,1}, v_{i,j,2}), . . . , (v_{i,j,k−2}, v_j) : i < j, (v_i, v_j) ∈ E} ∪ {(u_1, u_2), . . . , (u_{k−3}, u_{k−2}), (u_{k−2}, u_1)} ∪ {(v_{i,j,t}, u_t) : (v_i, v_j) ∈ E; t = 1, . . . , k − 2}.

By a straightforward analysis it can be verified that, for any k-independent set V_1 ⊆ V_G, if V_1 ∩ V′ ≠ ∅ then |V_1| = 1. It is also clear that for any two vertices u, v ∈ V, the distance between u and v in GKI is at least k if and only if (u, v) ∉ E. Hence there is a k-independent set of size k′ in GKI if and only if there is an independent set of size k′ in G.

2. We reduce the neighborhood independent set problem to the strong connectivity problem. The input G(V, E) to the neighborhood independent set problem consists of a set of vertices V = {v_1, . . . , v_n} and a set of edges E. Let A and B be two new vertices and let f(G) = GSC(V_G, E_G) be the graph defined as follows: V_G = {A, B} ∪ V and E_G = E ∪ {(A, v), (v, B) : v ∈ V}. It is clear that two paths P_1 = (A, v_i, B) and P_2 = (A, v_j, B) are vertex disjoint and have no common neighbor (except A and B) in GSC if and only if v_i and v_j have no common neighbor in G(V, E). Hence there is a size-k neighborhood independent set in G if and only if GSC is strongly k-connected. This completes the proof of the theorem. Q.E.D.

Efficient computational zero-knowledge interactive protocols for these problems will be presented in the next two sections. Since these problems are NP-complete, by the results of Fortnow [7] they cannot have perfect zero-knowledge interactive protocols unless the polynomial hierarchy collapses.

Some notations: Let C be a set.
By sym(C) we denote the set of permutations over C. When writing x ∈_R C, we mean an element chosen at random with uniform distribution from the set C. A bit-commitment function f is an encryption scheme, secure in the sense of [13]: a probabilistic polynomial-time algorithm that, on an input x and a random string r ∈ {0, 1}^n, outputs an encryption f(x, r). Decryption is unique, that is, f(x, r) = f(y, s) implies x = y.
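A minimal stand-in for such a commitment function, using a hash in place of the encryption-based f of [13] (an assumption made for illustration only; hiding and binding then rest on the properties of the hash function):

```python
import hashlib
import os

def commit(bit, r=None):
    """Commit to a bit with random string r. Returns (commitment, r)."""
    r = os.urandom(16) if r is None else r
    return hashlib.sha256(r + bytes([bit])).hexdigest(), r

def open_commitment(c, bit, r):
    """Verify an opening of commitment c to the claimed bit."""
    return hashlib.sha256(r + bytes([bit])).hexdigest() == c
```

The prover later reveals (bit, r); without r, the commitment hides the bit, and a binding commitment cannot be opened to two different values.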

3  Efficient Interactive Zero-Knowledge Protocol for the k-Independent Set Problem

In the following protocol, the common inputs are a graph G(V, E) with |V| = n and an integer k. The prover tries to convince the verifier that there is a size-k independent set in G. Let V′ ⊆ V be an independent set of size k.

Protocol 1. The following four steps are executed n times, each time using independent coin tosses.

Efficient Zero-Knowledge Proofs for Some Practical Graph Problems

295

1. The prover Peggy chooses a secret random permutation π ∈_R sym({1, . . . , n}) and commits to each entry of the adjacency matrix in which the rows and columns are permuted according to π. More specifically, for each entry a_{ij} of the matrix (after the permutation), the prover chooses a random r_{ij} and sends the new matrix (f(a_{ij}, r_{ij}))_{n×n} to the verifier.
2. The verifier Vic chooses a random bit q ∈_R {0, 1} and sends it to the prover.
3. If q = 0, then Peggy reveals π and opens all commitments; else Peggy opens the k(k − 1)/2 commitments corresponding to the entries (v_{π(i)}, v_{π(j)}) of the permuted adjacency matrix, where v_i, v_j ∈ V′.
4. Vic checks whether the openings provided by the prover in step (3) are correct. That is:
– when q = 0, Vic checks that the commitments are correct and that the committed matrix is obtained from the adjacency matrix according to π;
– when q = 1, Vic checks that all the k(k − 1)/2 opened commitments correspond to 0.
If any check fails, the verifier rejects and stops. Otherwise the verifier continues to the next iteration. If the verifier has completed all iterations, then it accepts.

Theorem 2. Protocol 1 is a zero-knowledge interactive proof system for the independent set problem, assuming that the commitment function f is secure.

Sketch of Proof. It is straightforward that the protocol is complete and sound. That is, when there is a size-k independent set in G and both prover and verifier follow the protocol, the verifier always accepts. When the graph does not have a size-k independent set and the verifier follows the protocol, then no matter how the prover plays, the verifier will reject with probability at least 1 − 2^{−n}. Thus the above protocol constitutes an interactive proof system for the independent set problem. We need to show that the above protocol is zero-knowledge. It is clear that the above prover conveys no knowledge to the specified verifier.
We need, however, to show that our prover conveys no knowledge to all possible verifiers, including ones that deviate arbitrarily from the protocol. Let Vic′ be an arbitrary fixed probabilistic polynomial-time Turing machine interacting with the prover Peggy. In the following we present a probabilistic polynomial-time simulator S_{Vic′} that generates a probability distribution which is polynomially indistinguishable from the probability distribution induced on Vic′'s tapes during its interaction with the prover Peggy. It suffices to generate the distribution on the random tape and the communication tape. In the following we use Vic′ to construct the simulator S_{Vic′}. The machine S_{Vic′} monitors the execution of Vic′. In particular, S_{Vic′} chooses the random tape of Vic′, reads messages from Vic′'s communication tape, and writes messages to Vic′'s communication tape. Typically, S_{Vic′} tries to guess which question Vic′ will ask, and encrypts an illegal adjacency matrix such that it can answer Vic′ in case it is lucky. The case in which S_{Vic′} fails will be

ignored: S_{Vic′} will just rewind Vic′ to the last success and try its luck again. It is crucial that, from the point of view of Vic′, the case which leads to S_{Vic′}'s success and the case which leads to S_{Vic′}'s failure are polynomially indistinguishable. The details are standard, as in the simulators of Goldreich, Micali, and Wigderson [12] for the zero-knowledge proof systems for 3-colorability, and we omit them. Q.E.D.

Next we consider zero-knowledge interactive proof systems for the k-independent set problem. For an integer k ≥ 3 and a graph G(V, E), we first construct another graph G_k(V_G, E_G) as follows. Let V_G = V and E_G = {(u, v) : the distance between u and v in G is at most k − 1}. Obviously, the graph G_k can be constructed from G efficiently.

Lemma 1. Given a graph G(V, E) and an integer k ≥ 3, let G_k(V_G, E_G) be defined as above. Then a vertex set V′ ⊆ V is k-independent in G if and only if it is independent in G_k.

Proof. Straightforward. Q.E.D.

By Lemma 1, the zero-knowledge interactive proof system for an independent set of G_k is trivially a zero-knowledge interactive proof system for a k-independent set of G.
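The construction of G_k in Lemma 1 is easy to implement directly (an illustrative sketch; the adjacency-list representation is our own choice):

```python
from collections import deque

def graph_power(adj, k):
    """Build G_k of Lemma 1: same vertex set, with (u, v) an edge iff
    the distance between u and v in G is at most k - 1."""
    def bfs(s):
        dist, dq = {s: 0}, deque([s])
        while dq:
            u = dq.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    dq.append(w)
        return dist
    gk = {u: set() for u in adj}
    for u in adj:
        for v, dv in bfs(u).items():
            if v != u and dv <= k - 1:
                gk[u].add(v)
                gk[v].add(u)
    return gk

# Path 0-1-2-3-4: in G_3, vertex 0 is joined to 1 and 2 but not to 3,
# so {0, 3} (3-independent in G) is independent in G_3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
g3 = graph_power(path, 3)
```

Running Protocol 1 on G_k then proves k-independence in G without any change to the protocol itself.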

4  Efficient Interactive Zero-Knowledge Protocol for the Strong Connectivity Problem

Given a graph G(V, E) with V = {v_1, . . . , v_n}, a directed flow function on G is a function F : E′ → {0, 1}, where E′ = {u → v, v → u : (u, v) ∈ E}; that is, E′ is the bidirectional version of E. For any (u, v) ∈ E with F(u → v) = 1 we say that there is a flow from u to v (but not a flow from v to u). Alternatively, a flow F on a graph G can be described by the matrix (F_{ij})_{n×n}, where F_{ij} = 1 if F(v_i → v_j) = 1 and F_{ij} = 0 otherwise. We call this matrix the flow matrix. Note that a flow matrix is not symmetric (cf. the adjacency matrix). Assume that A = v_1 and B = v_n, and let P = {p_1, . . . , p_k} be k neighborhood disjoint paths between A and B satisfying the following minimality property:
– For each path p_i = (A, v_{i,1}, . . . , v_{i,m}, B), if j_2 > j_1 and (v_{i,j_1}, v_{i,j_2}) ∈ E, then j_2 ≤ j_1 + 2.
A flow F_P on G is defined as follows: for each path p_i = A v_{i_1} v_{i_2} . . . v_{i_t} B, let F_P(A → v_{i_1}) = F_P(v_{i_1} → v_{i_2}) = . . . = F_P(v_{i_t} → B) = 1 and F_P(v_{i_1} → A) = F_P(v_{i_2} → v_{i_1}) = . . . = F_P(B → v_{i_t}) = 0. For any edge (u, v) ∈ E which is not on any of the paths, let F_P(u → v) = F_P(v → u) = 0. By the neighborhood independence of P, the flow F_P is well defined.
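The flow matrix F_P can be built mechanically from the paths; the boundary and intermediate-vertex conditions checked by the verifier in the protocol that follows then hold by construction. An illustrative sketch, with A taken as vertex 0 and B as vertex n − 1:

```python
def flow_matrix(n, paths):
    """Build the flow matrix F_P: F[i][j] = 1 iff some path in `paths`
    (each a list of vertex indices from A = 0 to B = n-1) sends a flow
    from vertex i to vertex j."""
    F = [[0] * n for _ in range(n)]
    for p in paths:
        for a, b in zip(p, p[1:]):
            F[a][b] = 1
    return F

# Two toy paths from A = 0 to B = 5.
F = flow_matrix(6, [[0, 1, 5], [0, 3, 5]])
# Boundary conditions: k ones in the first row and last column,
# none in the last row or first column.
assert sum(F[0]) == 2 and sum(F[5]) == 0
assert sum(r[0] for r in F) == 0 and sum(r[5] for r in F) == 2
# At an intermediate vertex on a path: exactly one in, one out.
assert sum(F[1]) == 1 and sum(r[1] for r in F) == 1
```

Note the asymmetry of F, in contrast to the adjacency matrix: a flow from 0 to 1 does not imply a flow from 1 to 0.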


In the following protocol, the common inputs are a graph G(V, E) with |V| = n and an integer k. The prover tries to convince the verifier that there are k neighborhood disjoint paths between A and B. Let P = {p_1, . . . , p_k} be k neighborhood disjoint paths between A and B, and let F_P be the corresponding flow function.

Protocol 2. The following three steps are executed sufficiently many times, each time using independent coin tosses (Peggy is the prover and Vic is the verifier).

1. Peggy chooses a secret random permutation π ∈_R sym({1, . . . , n}) such that π(1) = 1 and π(n) = n. Peggy commits to each entry of the adjacency matrix and each entry of the flow matrix in which the rows and columns are permuted according to π. More specifically, let (f_{ij})_{n×n} and (a_{ij})_{n×n} be the flow matrix and the adjacency matrix (after the permutation π), respectively. For each a_{ij} and f_{ij} the prover chooses random r_{ij}, r′_{ij} and computes a′_{ij} = f(a_{ij}, r_{ij}) and f′_{ij} = f(f_{ij}, r′_{ij}). Peggy sends the new matrices (a′_{ij})_{n×n} and (f′_{ij})_{n×n} to the verifier.
2. Vic chooses a random question q ∈_R {0, 1, 2, 3} and sends it to the prover.
3. According to the value of q, we distinguish the following cases:
(a) q = 0. Peggy reveals π and opens all commitments corresponding to the adjacency matrix. Vic verifies that the commitments are correct and that the committed adjacency matrix is obtained from the original adjacency matrix according to π.
(b) q = 1. Peggy reveals all entries in the first row, the last row, the first column, and the last column of the committed flow matrix. The verifier checks that in the committed flow matrix:
– there are exactly k 1's in the first row,
– there is no 1 in the last row,
– there is no 1 in the first column, and
– there are exactly k 1's in the last column.
(c) q = 2. Vic chooses a random question i ∈_R {2, . . . , n − 1} and sends it to the prover. Peggy opens the i-th row and the i-th column of the committed flow matrix. If there are 1's in this row and column, Peggy also opens the corresponding edge entries in the committed adjacency matrix. For example, if the entries f_{ij_1} and f_{j_2 i} are 1's, then Peggy also opens the entries a_{ij_1} and a_{j_2 i} of the committed adjacency matrix. Vic verifies that one of the following cases is true:
– Neither the i-th row nor the i-th column of the committed flow matrix contains an entry 1.
– There is exactly one entry 1 in the i-th row and exactly one entry 1 in the i-th column of the committed flow matrix. In this case, Vic also verifies that the corresponding edges exist (by checking that the entries from the adjacency matrix opened by Peggy are 1's).
(d) q = 3. Vic chooses a random question (i, j, t) ∈_R {2, . . . , n − 1}^3 and sends it to the prover. Peggy opens the i-th, j-th, and t-th rows and the i-th,


j-th, and t-th columns of the committed flow matrix. Peggy also opens the entries a_{ij}, a_{it}, and a_{jt} of the committed adjacency matrix.
– If any entry among a_{ij}, a_{it}, and a_{jt} equals 1, say a_{ij} = 1, then Vic verifies that all flows passing through the i-th and the j-th vertices (after the permutation π) are consistent with the neighborhood disjointness property. That is, if there is any flow passing through the i-th vertex (after the permutation π) but not through the j-th vertex, then there is no flow through the j-th vertex at all. Similarly, if there is any flow passing through the j-th vertex (after the permutation π) but not through the i-th vertex, then there is no flow through the i-th vertex at all. Specifically, the following conditions hold:
• If f_{ij} = 1, then exactly one entry in the i-th column of the committed flow matrix equals 1 and exactly one entry in the j-th row of the committed flow matrix equals 1.
• If f_{ji} = 1, then exactly one entry in the j-th column of the committed flow matrix equals 1 and exactly one entry in the i-th row of the committed flow matrix equals 1.
• If f_{ij} = 0, f_{ji} = 0, and f_{ij′} = 1 for some j′ ≠ j, then f_{jk} = 0 and f_{kj} = 0 for all k.
• If f_{ij} = 0, f_{ji} = 0, and f_{i′j} = 1 for some i′ ≠ i, then f_{ik} = 0 and f_{ki} = 0 for all k.
– If at least two entries among a_{ij}, a_{it}, and a_{jt} equal 1, then Vic verifies that the committed flow matrix is consistent with the neighborhood independence property (we omit the details, which are similar to those in the previous item). For example, assume that a_{ij} = a_{it} = 1 and there is a flow passing through the i-th vertex (after the permutation π) but not through the j-th or the t-th vertex; then there is no flow through the j-th or the t-th vertex at all.
If any of the above verifications fails, then Vic rejects and stops the protocol. Otherwise, go to the next iteration.
If the verifier has completed all these iterations then it accepts.

Theorem 3. Protocol 2 is an interactive proof system for the strong connectivity problem assuming that the commitment function f is secure.

Sketch of Proof. First we note the following facts. Step 3a in the protocol is used to check that the commitments are correct and that the submitted adjacency matrix is obtained from the original adjacency matrix according to the permutation π. Step 3b is used to check that there are k outgoing flows from A and k incoming flows to B, but that there is neither incoming flow to A nor outgoing flow from B. Steps 3c and 3d are used to check that the flows are legal and that the flows correspond to k neighborhood disjoint paths from A to B. Specifically, step 3c is used to check that for each vertex (excluding A and B) there is either

Efficient Zero-Knowledge Proofs for Some Practical Graph Problems


no flow through it at all or there is exactly one incoming flow and one outgoing flow through it. Step 3d checks that the k paths from A to B corresponding to the flows are neighborhood disjoint. Now it is straightforward that the above protocol is complete and sound. Q.E.D.

Theorem 3 shows that Protocol 2 is sound and complete. However, this protocol is not zero-knowledge. Indeed, step 3d may leak the following information: some path in the set of neighborhood disjoint paths has length at least 3 or 4. Let P = {p1, ..., pk} be a set of neighborhood disjoint paths between A and B. Then in general the malicious verifier Vic does not know l = max1≤i≤k |pi|. If l = 2, then during the execution of step 3d in Protocol 2, Vic will never have the chance to see fij = 1 (or fji = 1) in his view, while if l > 2 then Vic will have the chance to see fij = 1 (or fji = 1) in his view. Whence, from Vic alone we cannot construct a simulator that generates a view indistinguishable from that of Vic interacting with the prover. It follows that Protocol 2 is not zero-knowledge.

This problem can easily be fixed by first converting the graph G into a new graph G′ and then applying the protocol. Let G(V, E) be a graph with V = {v1, ..., vn}. Define a new graph G′(VG′, EG′) by letting VG′ = V ∪ {vij : (vi, vj) ∈ E, i < j} and EG′ = E ∪ {(vi, vij), (vij, vj) : i < j, vij ∈ VG′}. It is straightforward that there are k neighborhood disjoint paths between v1 and vn in G if and only if there are k neighborhood disjoint paths between v1 and vn in G′, and that there exists a set of k neighborhood disjoint paths P = {p1, ..., pk} between v1 and vn in G′ such that max1≤i≤k |pi| ≥ 4.

Now we can present a zero-knowledge proof system for the strong connectivity problem. The common inputs to the following protocol are: a graph G(V, E) with V = {v1, ..., vn}, two vertices A = v1, B = vn, and an integer k.
The prover tries to convince the verifier that there are k neighborhood disjoint paths between A and B.

Protocol 3.
1. Peggy constructs the graph G′ from G as above. Peggy computes k neighborhood disjoint paths P = {p1, ..., pk} between A and B in G′ such that max1≤i≤k |pi| ≥ 4, and lets FP be the corresponding flow function.
2. Peggy notifies Vic of the graph G′ as the common input. Vic verifies that G′ is obtained from G properly.
3. Peggy and Vic execute Protocol 2 on the common inputs: the graph G′, the two vertices A, B, and the integer k.

Theorem 4. Protocol 3 is a zero-knowledge interactive proof system for the strong connectivity problem assuming that the commitment function f is secure.

Sketch of Proof. By Theorem 3, Protocol 3 is complete and sound. The proof of the zero-knowledge property of the protocol follows from the discussion before Protocol 3 and is similar to the proof of Theorem 2. The details are omitted here. Q.E.D.
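The construction of G′ in step 1 adds a fresh midpoint vertex vij for every edge (vi, vj) of G, keeping the original edges as well. A minimal sketch of the transformation (0-based vertex numbering and the function name are our own, not from the paper):

```python
def build_g_prime(n, edges):
    """Build G' from G as in the paper: V' = V plus one midpoint v_ij per
    edge (v_i, v_j) with i < j, and E' = E plus (v_i, v_ij), (v_ij, v_j).
    Vertices of G are 0..n-1; midpoints are numbered n, n+1, ...
    Returns (n_prime, edges_prime)."""
    n_prime = n
    edges_prime = list(edges)        # the original edges E are kept in E'
    for (i, j) in edges:
        i, j = min(i, j), max(i, j)
        mid = n_prime                # fresh midpoint vertex v_ij
        n_prime += 1
        edges_prime.append((i, mid))
        edges_prime.append((mid, j))
    return n_prime, edges_prime
```

For a triangle on three vertices this yields six vertices and nine edges: the three original edges plus two subdivision edges per original edge.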


5


Applications of k-Independent Set

Distributed Computation. There have been several applications of the concept of k-independent set in distributed computation. For example, in Franklin and Wright [8] and Wang and Desmedt [19] (see also Franklin and Yung [9]), neighbor independent paths have been used to achieve private message transmissions over broadcast networks. For further discussion of this topic see Section 1.

Access Structures. The concept of k-independent set also challenges the concept of “threshold” and has applications to computing an appropriate access structure. Indeed, given a number of participants, a graph can be used to identify relationships (e.g. potential conflicts of interest: boss, family member, etc.), and a secret sharing scheme based on a threshold may then be inappropriate. Although general access structures have been defined by Ito, Saito, and Nishizeki [17] and have been heavily studied (see, e.g., Blundo, De Santis, Stinson, and Vaccaro [2]), no scientific way has been suggested to construct such an access structure. We suggest that the access structure be based on k-independent sets (where k is large enough); e.g., the minimal access structure could contain only k-independent sets of the graph. We also give a few other potential access-structure applications of k-independent set in the following. Let P = {p1, p2, ..., pn} be a set of participants. Then the access structure could be defined as follows.

Proposal 1. Define the Relationship Graph G(P, E) by letting E = {(pi, pj) : pi is a friend/child/parent of pj}. Let k be sufficiently large and P′ be a k-independent set of G. Then an access structure could be defined on P′ instead of on P.

Proposal 2. Let the Relationship Graph be defined as in Proposal 1 together with a distance function d : E → [1, ∞). The access structure ΓP is a monotone subset of

{S = {pi1, pi2, ..., pim} : Σj,l (d(pj, pl) − 1) ≥ k′},

where k′ is large enough.

Proposal 3. Let k and m be large enough integers, and let the Relationship Graph be defined as in Proposal 2. The access structure ΓP is a monotone subset of {S : there exists a k-independent set P′ of size m with P′ ⊆ S}.

Similarly, the k-independent set problem will have applications in key-escrow protocols [18]. Note that at present the number of trusted agencies in a key-escrow system is very limited (e.g., 2 or 3). However, in the future the number of trusted agencies may increase considerably, whence the problem of choosing trusted agencies for a key-escrow system needs to be solved. If we choose several agencies who often collude, it will not help us to get privacy: they may collude to recover our private keys without


the official approval. The k-independence property provides a good solution for this problem: the trusted agencies for a key-escrow system should be chosen to be as independent as possible.

Classification of Scientists. Indeed, the properties of k-independence of vertices have many other applications which may not be cryptographic ones. One may first note the close relationship between the distance of vertices and Erdős numbers [15]. In each graph, a distinguished vertex p is the outstanding, prolific, and venerable Paul Erdős. The distance from a vertex u to p is known as u's Erdős number. Thus, for example, Paul Erdős's co-authors have Erdős number 1. Those people with finite Erdős number constitute the Erdős component of the graph. Whence, if we extend the notion of Erdős number by allowing several unknown Paul Erdőses in a graph, then the problem of finding the Paul Erdőses in a graph is related to the problem of finding a k-independent set for some k. Note that this will be an important problem for classifying the literature (e.g., classifying the literature of cryptography) and for finding new disciplines. The Erdős number is mostly useful in the mathematical literature. A natural question that arises is: how can we define Erdős numbers for other areas outside mathematics? As a first step, we need to find an equivalent of Paul Erdős for each area of interest. The equivalent of Paul Erdős should be one of the sufficiently prolific scientists (directly or indirectly). That is, we can simply count the number of papers written by a scientist (a potential Paul Erdős equivalent), or compute for each scientist the quantity (number of papers written by co-co-co...-authors)/(1 + distance from the Erdős equivalent); the potential Paul Erdős equivalent should have a higher rate.
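The Erdős number of each author is just the breadth-first-search distance from the distinguished vertex in the collaboration graph; authors outside the Erdős component have infinite distance. A small illustration (the graph and function name are ours):

```python
from collections import deque

def erdos_numbers(adj, source):
    """BFS distances from `source` in the collaboration graph `adj`
    (a dict mapping each author to the set of co-authors).
    Authors outside the Erdos component get no entry (infinite distance)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```

For instance, a co-author of a co-author of the distinguished vertex gets distance 2, and an isolated author gets no finite Erdős number at all.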
The problem of k-independence may also find applications in the reviewing process for papers or proposals, by avoiding Byzantine humans who may conspire; it makes sense that a formal study of ethics uses tools and definitions from cryptography, e.g. k-independent set.

Classification of Animals/Plants. Today the classification of animals into families (e.g. cat-like) is arbitrary. We suggest the use of a graph with a distance function based on the genetic relationship graph. The concept of k-independent set could also be used to define concepts such as family.

Acknowledgments The authors would like to thank the anonymous referees for their valuable comments on the last section of this paper.

References

1. M. Blum. How to prove a theorem so no one else can claim it. In: Proc. of the International Congress of Mathematicians, pages 1444–1451, Berkeley, California, U.S.A., 1987.


2. C. Blundo, A. De Santis, D. Stinson, and U. Vaccaro. Graph decompositions and secret sharing schemes. In: Advances in Cryptology, Proc. of Eurocrypt '92, LNCS 658, pages 1–24, Springer Verlag, 1992.
3. A. De Santis, G. Di Crescenzo, O. Goldreich, and G. Persiano. The graph clustering problem has a perfect zero-knowledge interactive proof. Information Processing Letters, 69(4):201–206, 1999.
4. Y. Desmedt and Y. Wang. Approximation hardness and secure communication in broadcast channels. In: Advances in Cryptology, Proc. of Asiacrypt '99, LNCS 1716, pages 247–257, Springer Verlag, 1999.
5. D. Dolev. The Byzantine generals strike again. J. of Algorithms, 3:14–30, 1982.
6. D. Dolev, C. Dwork, O. Waarts, and M. Yung. Perfectly secure message transmission. J. of the ACM, 40(1):17–47, 1993.
7. L. Fortnow. The complexity of perfect zero-knowledge. In: Proc. ACM STOC '87, pages 204–209, ACM Press, 1987.
8. M. Franklin and R. Wright. Secure communication in minimal connectivity models. Journal of Cryptology, 13:9–30, 2000.
9. M. Franklin and M. Yung. Secure hypergraphs: privacy from partial broadcast. In: Proc. ACM STOC '95, pages 36–44, ACM Press, 1995.
10. M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, San Francisco, 1979.
11. O. Goldreich, S. Goldwasser, and N. Linial. Fault-tolerant computation in the full information model. SIAM J. Comput., 27(2):506–544, 1998.
12. O. Goldreich, S. Micali, and A. Wigderson. Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems. J. of the ACM, 38(3):690–728, 1991.
13. S. Goldwasser and S. Micali. Probabilistic encryption. J. of Comp. and Sys. Sci., 28(2):270–299, 1984.
14. S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof systems. SIAM J. Comp., 18(1):186–208, 1989.
15. J. Grossman and P. Ion. On a portion of the well-known collaboration graph. Congressus Numerantium, 108:129–131, 1995.
16. V. Hadzilacos. Issues of Fault Tolerance in Concurrent Computations. PhD thesis, Harvard University, Cambridge, MA, 1984.
17. M. Ito, A. Saito, and T. Nishizeki. Secret sharing schemes realizing general access structures. In: Proc. IEEE Global Telecommunications Conf., Globecom '87, pages 99–102, IEEE Communications Soc. Press, 1987.
18. S. Micali. Fair public-key cryptosystems. In: Advances in Cryptology, Proc. of Crypto '92, LNCS 740, pages 113–138, Springer Verlag, 1992.
19. Y. Wang and Y. Desmedt. Secure communication in multicast channels. Journal of Cryptology, 14:121–135, 2001.

Reduction Zero-Knowledge

Xiaotie Deng¹, C.H. Lee¹, Yunlei Zhao¹,², and Hong Zhu²

¹ Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
{CSDENG,CHLEE}@cityu.edu.hk
² Department of Computer Science, Fudan University, Shanghai, P. R. China
{990314,hzhu}@fudan.edu.cn

Abstract. In this paper we re-examine the nature of zero-knowledge. We present evidence that the classic simulation-based definitions of zero-knowledge (simulation zero-knowledge) may be somewhat too strong to include some “nice” protocols in which the malicious verifier seems to learn nothing but for which we do not know how to construct a zero-knowledge simulator. We overcome this problem by introducing reduction zero-knowledge. We show that reduction zero-knowledge lies between simulation zero-knowledge and witness indistinguishability. That is, any simulation zero-knowledge protocol is also reduction zero-knowledge, and reduction zero-knowledge implies witness indistinguishability, but the opposite directions are not guaranteed to be true. There are two major contributions of reduction zero-knowledge. One is that it introduces reduction between different protocols and extends the approaches to characterizing the nature of zero-knowledge; note that reduction is a widely used paradigm in the field of computer science. The other is that, in contrast to normal simulation zero-knowledge, reduction zero-knowledge can be made more efficient (especially for the verifier) and can be constructed under weaker assumptions while losing little security compared with a corresponding simulation zero-knowledge protocol. In this paper a 4-round public-coin reduction zero-knowledge proof system for NP is presented, and in practice this protocol works in 3 rounds since the first verifier's message can be fixed once and for all.

Keywords: zero-knowledge, non-interactive zero-knowledge, witness indistinguishability, zap, bit commitment

1

Introduction

The notion of zero-knowledge (ZK) was first put forward by Goldwasser, Micali and Rackoff [18] to illustrate situations where a prover reveals nothing other 

This research is supported by research grants of City University of Hong Kong (No. 7001232, 7001232 and N-CityU 102/01).

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 303–317, 2003.
© Springer-Verlag Berlin Heidelberg 2003


than the verity of a given statement even to a malicious verifier. The definitions of zero-knowledge were extensively investigated by Goldreich and Oren in [20], and the generality of this notion was demonstrated by Goldreich, Micali and Wigderson [19] by showing that any NP-statement can be proven in zero-knowledge provided that commitment schemes exist. Subsequently, related notions have been proposed, in particular zero-knowledge arguments [3], witness indistinguishability [14], zero-knowledge proofs of knowledge [12], and non-interactive zero-knowledge [4,5]. For a good survey of this field, the reader is referred to [15]. By now, zero-knowledge plays a central role in the field of cryptography and is the accepted methodology to define and prove security of various cryptographic tasks.

Informally speaking, a protocol < P, V > for a language L is zero-knowledge if the view of an even malicious verifier in his interaction with the honest prover on input x ∈ L can be simulated by a simulator on x alone, without any interaction. In the rest of this paper we refer to the classic simulation-based ZK as simulation zero-knowledge. The view of the even malicious verifier in his real interaction with the honest prover is a transcript including all the messages exchanged between the honest prover and the malicious verifier, that is, all the messages sent by the honest prover and all the messages sent by the even malicious verifier¹.

We remark that, as a “privacy” criterion, zero-knowledge is a quite strong requirement. For some specific applications we may not need such a strong “privacy” requirement, or we may need a protocol with some extra properties which are not guaranteed to be preserved for zero-knowledge protocols (for example, we may wish the “privacy” property of our protocol to be preserved under parallel composition, which does not hold for zero-knowledge protocols [16]).
For this purpose, a relaxation of zero-knowledge, witness indistinguishability (WI), was put forward by Feige and Shamir [14]. Loosely speaking, a witness indistinguishable protocol is a system in which the view of any polynomial-size verifier is “computationally independent” of the witness used by the (honest) prover as auxiliary private input. WI is a relaxation of zero-knowledge: all ZK protocols are also WI protocols. We remark that although WI is preserved under parallel composition, in general it is believed that ZK is a significantly stronger notion of security than WI [6]; that is, a WI protocol may lose much security in comparison with a ZK protocol.

According to the definition of ZK, to simulate the view of a malicious verifier the simulator needs to generate all the messages sent by the honest prover and all the messages sent by this malicious verifier. To simulate all the messages sent by the even malicious verifier, the simulator incorporates this verifier into its code as a subroutine and normally treats it as an oracle. The difficulty occurs when simulating the messages sent by the honest prover, since the simulator does not know the prover's auxiliary private input. The normal method employed by the simulator, especially for constant-round zero-knowledge protocols, is the rewinding technique developed by Goldreich and Kahan [17]. To facilitate the rewinding technique it is normally required that there is a “determining” verifier step which determines some or all of his subsequent behavior in an execution of the protocol. This “determining” verifier step is typically implemented by commitment schemes and can be viewed as a “promise” made by the verifier that he will learn nothing in his interactions with the prover. Normally, without this determining step we do not know how to construct a zero-knowledge simulator. But in some cases this determining message seems only to facilitate the simulator, and without it the malicious verifier does not seem to gain any advantage. This is most obvious in the Feige-Lapidot-Shamir (FLS) paradigm [13], which was introduced in the context of non-interactive zero-knowledge (NIZK) and has been extensively used in recent advances of zero-knowledge, such as concurrent zero-knowledge [10,26,9,23], resettable zero-knowledge [7,6,24,25] and non-black-box zero-knowledge [2]. We will describe the FLS paradigm and discuss the above phenomenon in detail in Section 4.

¹ In normal definitions, the verifier's view in a real interaction includes all the messages sent by the honest prover and the verifier's random tape. The messages sent by the verifier are implicitly implied, since they are determined by the verifier's random tape and the messages sent by the honest prover.

1.1

Our Contributions

In this paper we overcome the above problems by introducing another relaxation of zero-knowledge, which we call reduction zero-knowledge. Our approach is based on the belief that the messages sent by the honest prover play a much more important role in the malicious verifier's ability to learn “knowledge” from his interactions with the honest prover. Informally speaking, a protocol 1 < P1, V1 > is reduced to another protocol 2 < P2, V2 > if all the messages the malicious verifier V1∗ can extract from the honest prover P1 (that is, all the messages sent by the honest prover) in protocol 1 can also be extracted by a malicious verifier V2∗ from the honest prover P2 in protocol 2. We say protocol 1 is reduction zero-knowledge if protocol 1 can be reduced to protocol 2 and protocol 2 is simulation zero-knowledge. Note that we do not require V2∗ to get the whole view of V1∗: we just focus on the messages from the honest prover and ignore the messages from the malicious verifier. We remark that this treatment is a reasonable way to compare the “knowledge”-extracting abilities of different malicious verifiers in different protocols. On one hand, as discussed above, we believe that the honest prover's messages are critical for the malicious verifier to learn “knowledge”. On the other hand, zero-knowledge is a property of the honest prover, and it is hard or infeasible to compare the behaviors of different malicious verifiers in different protocols. Reduction zero-knowledge lies between classic simulation zero-knowledge and witness indistinguishability.

There are two major contributions of our reduction zero-knowledge. One is that it introduces reduction between different protocols and extends the approaches to characterizing the nature of zero-knowledge; note that reduction is a widely used paradigm in the field of computer science.
The other is that reduction zero-knowledge allows us to design more efficient protocols, in particular to simplify the verifier greatly; although for such protocols we do not know how to construct a zero-knowledge simulator, it is believed that they lose little security compared with a corresponding simulation zero-knowledge protocol.


1.2


Organization of this Paper

In section 2, we recall the definitions and the tools we will use in this paper. In section 3, we introduce our new notion, reduction zero-knowledge, and clarify the relationship among ZK, WI and our new notion. In section 4, we construct a 4-round public-coin reduction zero-knowledge proof system for N P using the FLS paradigm. We conclude with discussions in section 5.

2

Preliminaries

Let us quickly recall the definitions and the tools used in this paper.

Definition 1 (negligible function). A function f : N → [0, 1] is negligible if for all polynomials p(·) and for all sufficiently large k, f(k) < 1/p(k).

Definition 2 (probability ensembles). A probability ensemble X is a family X = {Xn}n≥1 in which Xn is a probability distribution on some finite domain.

Definition 3 (computational indistinguishability). Two probability ensembles {Xn}n∈N and {Yn}n∈N are computationally indistinguishable if for every probabilistic polynomial-time algorithm A and for all sufficiently large n, |Pr(A(Xn) = 1) − Pr(A(Yn) = 1)| is negligible in n.

Definition 4 (statistical indistinguishability). The statistical difference between two distributions, X and Y, is defined by

Δ(X, Y) = (1/2) · Σα |Pr[X = α] − Pr[Y = α]|.

Two probability ensembles {Xn}n∈N and {Yn}n∈N are statistically indistinguishable if for all sufficiently large n, Δ(Xn, Yn) is negligible in n. Note that if {Xn}n∈N and {Yn}n∈N are statistically indistinguishable then there is no algorithm (even with unbounded computational power) which can distinguish them with non-negligible probability.

Definition 5 (interactive proof system). A pair of probabilistic machines, < P, V >, is called an interactive proof system for a language L if V is polynomial-time and the following conditions hold:
– Completeness. For every x ∈ L, Pr(< P, V > (x) = 1) = 1.
– Soundness. For all sufficiently large n and every x ∉ L of length n and every interactive machine B (even with unbounded computational power), Pr(< B, V > (x) = 1) is negligible in n.
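The statistical difference of Definition 4 can be evaluated directly for small explicit distributions; a quick illustration (the helper name is ours):

```python
def statistical_difference(X, Y):
    """Delta(X, Y) = 1/2 * sum over alpha of |Pr[X = alpha] - Pr[Y = alpha]|,
    where X and Y are dicts mapping outcomes to probabilities."""
    support = set(X) | set(Y)
    return 0.5 * sum(abs(X.get(a, 0.0) - Y.get(a, 0.0)) for a in support)
```

Identical distributions have difference 0, distributions with disjoint support have difference 1, and anything in between measures how far apart the two distributions are.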


An interactive proof system is called a public-coin proof system if at each round the prescribed verifier can only toss coins (a random string) and send their outcome to the prover.

In this paper we will use two types of commitment schemes, defined below. Commitment schemes are a basic ingredient in many cryptographic protocols, especially in zero-knowledge protocols. They are used to enable a party to commit itself to a value while keeping it secret. At a later stage the commitment is “opened”, and it is guaranteed that the “opening” can yield only a single value, determined in the committing phase. Commitment schemes can be viewed as the digital analogue of nontransparent sealed envelopes.

Definition 6 (one-round perfect-binding bit commitment). A one-round perfect-binding bit commitment scheme is a pair of probabilistic polynomial-time (PPT) algorithms, denoted (S, R), satisfying:
– Completeness. ∀k, ∀v, let c = Csv(1k, v) and d = (v, sv), where C is a PPT commitment algorithm using sv as its randomness and d is the corresponding decommitment to c; it holds that Pr[(c, d) ← S(1k, v) : R(1k, c, v, d) = YES] = 1, where k is the security parameter.
– Computational hiding. For every v, u of equal p(k)-length, where p is a positive polynomial and k is the security parameter, the random variables Csv(1k, v) and Csu(1k, u) are computationally indistinguishable.
– Perfect binding. For every v, u of equal p(k)-length, the random variables Csv(1k, v) and Csu(1k, u) have disjoint support. That is, for every v, u and m, if Pr[Csv(1k, v) = m] and Pr[Csu(1k, u) = m] are both positive, then u = v and sv = su.

A one-round commitment scheme as above can be constructed based on any one-way function [15].

Definition 7 (hash-based commitment). A hash-based commitment scheme is a pair of PPT algorithms (S, R) satisfying:
– Completeness. ∀k, ∀v, Pr[(c, d) ← S(1k, v) : R(1k, c, v, d) = YES] = 1, where k is the security parameter, c = HC(1k, v), and HC is a PPT hash-based commitment algorithm used by S.
– Statistical hiding. For every v, u of equal p(k)-length, where p is a positive polynomial and k is the security parameter, the random variables HC(1k, v) and HC(1k, u) are statistically indistinguishable.
– Computational binding. For every PPT algorithm ADV and all sufficiently large k,

Pr[(c, v1, v2, d1, d2) ← ADV(1k) : v1 ≠ v2 and R(1k, c, v1, d1) = YES = R(1k, c, v2, d2)]

is negligible in k.
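A common folklore way to realize the commit/verify interface in practice is to hash the value together with fresh randomness. We stress this is only an illustration: it is neither the perfect-binding scheme of Definition 6 nor a provably statistically hiding scheme as in Definition 7, and its hiding/binding rests on heuristic assumptions about the hash function:

```python
import hashlib
import os

def commit(v: bytes):
    """Commit to v; returns (c, d) where d = (v, s) opens c."""
    s = os.urandom(32)                     # fresh commitment randomness s_v
    c = hashlib.sha256(s + v).digest()     # c = H(s || v)
    return c, (v, s)

def verify(c, d):
    """Check that decommitment d = (v, s) opens commitment c."""
    v, s = d
    return hashlib.sha256(s + v).digest() == c
```

The randomness s hides v (two commitments to different values look alike without s), while finding a second opening would require a hash collision.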


The reader is referred to [11,21] for constructions of such schemes, which are based on the assumption that collision-resistant hash functions exist.

Protocol 2.1 (coin flipping over the telephone) [1]. Coin flipping over the telephone is a direct application of commitment schemes and is a way to generate a random string via interaction. Suppose Alice and Bob want a fair, common random string r in {0, 1}k. Using commitment schemes, the approach is as follows. Alice uniformly selects a string v in {0, 1}k and sends y = Cs(1k, v) to Bob, where C is the one-round perfect-binding commitment using s as its randomness. Then Bob uniformly selects a string u in {0, 1}k and sends u in the clear to Alice. At last Alice decommits to y (that is, she sends (v, s) to Bob), and the final common random string is defined as r = v ⊕ u. We remark that in the above protocol, if Bob is honest then r is a truly random string; if Bob is malicious while Alice is honest then r is pseudorandom (that is, no PPT algorithm can distinguish r from a truly random string in {0, 1}k with non-negligible probability).

Definition 8 (zero-knowledge). Let < P, V > be an interactive proof system for a language L. We denote by view_{V∗}^P(x) a random variable describing the transcript of messages exchanged between the honest P and the (malicious) verifier V∗ in an execution of the protocol on common input x (that is, all the messages sent by the honest prover and all the messages sent by the (malicious) verifier²). Then we say that < P, V > is zero-knowledge if for every probabilistic polynomial-time interactive machine V∗ there exists a probabilistic (expected) polynomial-time machine S such that the following two probability distributions are computationally indistinguishable: {view_{V∗}^P(x)}x∈L and {S(x)}x∈L. Machine S is called a zero-knowledge simulator for < P, V >. In the rest of this paper we refer to zero-knowledge in this definition as simulation zero-knowledge.

Informally, we may also say that an interactive proof system < P, V > is zero-knowledge for a language L if whatever can be efficiently computed by the verifier after interacting with P on input x ∈ L can also be efficiently computed from x without any interaction. We emphasize that zero-knowledge is a property of the honest prover P, and the above holds with respect to any efficient way of interacting with P, not necessarily the way defined by the verifier program V. That is, V∗ characterizes all the potential (even dishonest) efficient verifiers. Recall that the definition of simulation zero-knowledge requires that for every V∗ there exists a PPT simulator S such that for every x ∈ L the distribution ensembles {view_{V∗}^P(x)}x∈L and {S(x)}x∈L are computationally indistinguishable.
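Protocol 2.1 above can be sketched end-to-end, using a hash-based commitment as a stand-in for the perfect-binding commitment Cs(1k, v) (all names here are illustrative, not from the paper):

```python
import hashlib
import os

def coin_flip(k=16):
    """One honest run of coin flipping over the telephone (Protocol 2.1),
    with k the byte length of the resulting common random string."""
    # Alice commits to a uniformly random v.
    v = os.urandom(k)
    s = os.urandom(32)                       # commitment randomness s
    y = hashlib.sha256(s + v).digest()       # y = commitment to v, sent to Bob

    # Bob replies with a uniformly random u in the clear.
    u = os.urandom(k)

    # Alice decommits (sends (v, s)); Bob checks the opening against y.
    assert hashlib.sha256(s + v).digest() == y
    # Both parties output r = v XOR u.
    return bytes(a ^ b for a, b in zip(v, u))
```

Because Alice is bound to v before seeing u, and Bob sees only the commitment before choosing u, neither party can bias r on their own when the other is honest.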
A stronger version of the normal simulation zero-knowledge is black-box zero-knowledge introduced in [20]. Definition 9 (black-box zero-knowledge). Let < P, V > be an interactive proof system for a language L. We say that < P, V > is black-box zero-knowledge 2

² As noted previously, in normal definitions the verifier's view in a real interaction includes all the messages sent by the honest prover and the verifier's random tape. The messages sent by the verifier are implicitly implied, since they are determined by the verifier's random tape and the messages sent by the honest prover.


if there exists a probabilistic (expected) polynomial-time oracle machine S such that for every probabilistic polynomial-time interactive machine V∗ and for every x ∈ L, the following two probability distributions are computationally indistinguishable: {view_{V∗}^P(x)}x∈L and {S^{V∗}(x)}x∈L. Machine S is called a black-box simulator for < P, V >.

Definition 10 (non-interactive zero-knowledge, NIZK). Let NIP and NIV be two interactive machines, where NIV is probabilistic polynomial-time, and let NIσLen be a positive polynomial. We say that < NIP, NIV > is an NIZK proof system for an NP language L if the following conditions hold:
– Completeness. For any x ∈ L of length n, any σ of length NIσLen(n), and any NP-witness w for x, it holds that Pr[Π ← NIP(σ, x, w) : NIV(σ, x, Π) = YES] = 1.
– Soundness. ∀x ∉ L of length n, Pr[σ ← {0, 1}^{NIσLen(n)} : ∃ Π s.t. NIV(σ, x, Π) = YES] is negligible in n.
– Zero-knowledgeness. There exists a PPT simulator NIS such that, for all sufficiently large n, all x ∈ L of length n and NP-witness w for x, the following two distributions are computationally indistinguishable: [(σ′, Π′) ← NIS(x) : (σ′, Π′)] and [σ ← {0, 1}^{NIσLen(n)}; Π ← NIP(σ, x, w) : (σ, Π)].

Non-interactive zero-knowledge proof systems for NP can be constructed based on any one-way permutation [13]. An efficient implementation based on any one-way permutation can be found in [22]. For more recent advances in NIZK the reader is referred to [8].

Definition 11 (witness indistinguishability, WI). Let < P, V > be an interactive proof system for a language L ∈ NP, and let RL be the fixed NP witness relation for L; that is, x ∈ L if there exists a w such that (x, w) ∈ RL. We denote by view_{V∗(z)}^{P(w)}(x) a random variable describing the transcript of all messages exchanged between V∗ and P in an execution of the protocol on common input x, when P has auxiliary input w and V∗ has auxiliary input z.
We say that < P, V > is witness indistinguishable for RL if for every PPT interactive machine V∗ and every two sequences W^1 = {w^1_x}x∈L and W^2 = {w^2_x}x∈L, such that (x, w^1_x) ∈ RL and (x, w^2_x) ∈ RL, the following two probability distributions are computationally indistinguishable: {x, view_{V∗(z)}^{P(w^1_x)}(x)}x∈L, z∈{0,1}∗ and {x, view_{V∗(z)}^{P(w^2_x)}(x)}x∈L, z∈{0,1}∗.

WI is a relaxation of ZK: all ZK protocols are also WI protocols, but the opposite direction is not guaranteed to be true. Actually, it is believed that ZK is a significantly stronger notion of security than WI [6]; that is, a WI protocol may lose much security in comparison with a ZK protocol.

Protocol 2.2 (Dwork and Naor's 2-round WI proof for NP) [9] (where they named it a zap).


Let L be an NP-complete language and RL its corresponding NP relation. On a security parameter k, let p be a positive polynomial, let x ∈ L be the common input, and let w be the corresponding NP-witness.
– Step 1. The verifier V uniformly selects (and fixes once and for all) p(k) random strings RV = r1, ..., rp(k), each of length NIσLen(k), and sends them to the prover P.
– Step 2. The prover P uniformly selects a random string r of length NIσLen(k). Then for each i, 1 ≤ i ≤ p(k), P computes an NIZK proof with respect to the common random string r ⊕ ri. At last, P sends r and all these p(k) NIZK proofs {NIZK(x, r ⊕ ri)}_{i=1}^{p(k)} to the verifier.

A very interesting property of Dwork and Naor's 2-round WI proof is that the RV selected by the verifier is fixed once and for all: once RV is selected, it is fixed for all subsequent executions of the protocol. This means that RV can be viewed as a public key of the verifier.
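The message pattern of the zap can be sketched as follows. Here nizk_prove is only a placeholder stub standing in for a real NIZK prover, and all names and parameter sizes are our own; the point is the structure: one set of verifier strings RV fixed once and for all, and a prover-chosen r combined with each ri by XOR to form the per-proof common random string:

```python
import os

SIGMA_LEN = 32  # NIsigmaLen(k), in bytes for this sketch
P_K = 4         # p(k): number of verifier strings

def nizk_prove(x, w, crs):
    """Placeholder for a real NIZK prover; returns a dummy 'proof'
    tagged with the common random string it was run under."""
    return ("proof-for", x, crs)

def zap_round1():
    """Verifier: fix R_V = r_1, ..., r_{p(k)} once and for all."""
    return [os.urandom(SIGMA_LEN) for _ in range(P_K)]

def zap_round2(x, w, RV):
    """Prover: pick r, then prove x under each CRS r XOR r_i."""
    r = os.urandom(SIGMA_LEN)
    proofs = [nizk_prove(x, w, bytes(a ^ b for a, b in zip(r, ri)))
              for ri in RV]
    return r, proofs
```

Since round 1 is independent of the statement, RV can indeed be published ahead of time as a verifier public key, collapsing the interaction to a single prover message in practice.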

3 Definition of Reduction Zero-Knowledge

In this section we introduce another relaxation of simulation zero-knowledge, which we name reduction zero-knowledge, and discuss the relationships among simulation ZK, WI, and our new notion. Roughly speaking, a protocol 1 < P1 , V1 > is reduced to another protocol 2 < P2 , V2 > if the malicious verifier V2∗ in protocol 2 has the "same" "knowledge"-extracting ability as that of V1∗ in protocol 1. In introducing reductions between different protocols, the difficulty is how to compare the knowledge-extracting abilities of different malicious verifiers in different protocols. Note that the view of a (malicious) verifier V∗ in an execution of the protocol < P, V > is a transcript consisting of two parts: one is all the messages sent by the honest prover P, and the other is all the messages sent by the verifier himself in response to messages received from the honest prover. Our approach is to ignore the influence of the messages sent by the malicious verifier on his "knowledge"-extracting ability from the honest prover. This is justified on the following grounds. On one hand, intuitively, the messages sent by the honest prover should play the more critical role in the malicious verifier's extraction of "knowledge" from the honest prover. On the other hand, zero-knowledge is a property of the honest prover, and it is hard to compare the behaviors of different malicious verifiers in different protocols. The formal definition is given below.

Definition 12 (knowledge-extraction reduction). Let protocol 1 < P1 , V1 > and protocol 2 < P2 , V2 > be two protocols for the same language L ∈ NP. We say protocol 1 is knowledge-extraction reduced to protocol 2 if for every malicious verifier V1∗ there exists a malicious verifier V2∗ such that for each x ∈ L and its witness w, all the messages received by the malicious verifier V2∗ from the honest prover P2 (which uses w as his auxiliary private input) in an execution of protocol 2 on common input x and all the messages received by the malicious verifier V1∗

Reduction Zero-Knowledge


from the honest prover P1 (which uses w as his auxiliary private input) in an execution of protocol 1 on common input x are computationally indistinguishable.

Definition 13 (reduction zero-knowledge). We say protocol 1 < P1 , V1 > is reduction zero-knowledge if it can be knowledge-extraction reduced to a protocol 2 < P2 , V2 > such that protocol 2 is simulation zero-knowledge.

Theorem 1. Any simulation zero-knowledge protocol is also reduction zero-knowledge, and reduction zero-knowledge implies witness indistinguishability. That is, as a security criterion, reduction zero-knowledge is weaker than simulation zero-knowledge and stronger than witness indistinguishability.

Proof. (sketch) First, it can easily be verified that any simulation zero-knowledge protocol is also reduction zero-knowledge, since any simulation zero-knowledge protocol is trivially knowledge-extraction reduced to itself.

Next, we show that reduction zero-knowledge implies witness indistinguishability. By the definition of witness indistinguishability, a protocol for an NP-language L is witness indistinguishable if for any malicious verifier V∗ the views of V∗ in two independent executions of the protocol on the same common input x ∈ L, while the honest prover uses different auxiliary witnesses, are computationally indistinguishable. We remark that to prove a protocol is witness indistinguishable, the messages sent by the malicious verifier himself in his view can be ignored. That is, if the messages received by the malicious verifier V∗ from the honest prover in two independent executions of the protocol on the same common input x ∈ L, while the honest prover uses different auxiliary witnesses, are computationally indistinguishable, then this protocol is witness indistinguishable for L. The reason is that we can view V∗ as a PPT next-message function NM_{V∗}, which computes his next message from the common input, his auxiliary input, and the messages he has received from the honest prover. Then, according to the definition of computational indistinguishability, if the messages received by V∗ from the honest prover in two independent executions of the protocol on the same common input x ∈ L, while the honest prover uses different auxiliary witnesses, are computationally indistinguishable, then the messages sent by the malicious verifier are also computationally indistinguishable.

Now we show that for a reduction zero-knowledge protocol (for an NP-language L) the messages received by the malicious verifier from the honest prover in two independent executions of the protocol on the same common input x ∈ L, while the honest prover uses different auxiliary witnesses, are computationally indistinguishable. Note that by the above discussion this is enough to establish the theorem. The reason is that for a reduction zero-knowledge protocol there exists a simulation zero-knowledge protocol (for the same NP-language L) to which the reduction zero-knowledge protocol can be knowledge-extraction reduced. Moreover, for any malicious verifier of the simulation zero-knowledge protocol, the messages received by this malicious verifier


from the honest prover of the simulation zero-knowledge protocol in two independent executions of the simulation zero-knowledge protocol on the same common input x ∈ L, while the honest prover uses different auxiliary witnesses, are computationally indistinguishable, since simulation zero-knowledge implies witness indistinguishability. Then, according to the definition of knowledge-extraction reduction, we conclude that the messages received by a malicious verifier of the reduction zero-knowledge protocol from the honest prover of the reduction zero-knowledge protocol in two independent executions of the reduction zero-knowledge protocol on the same common input x ∈ L, while the honest prover uses different auxiliary witnesses, are also computationally indistinguishable. □

We remark that although reduction zero-knowledge is a relaxed notion of the normal simulation zero-knowledge, in contrast to witness indistinguishability it is believed that a reduction zero-knowledge protocol loses little security compared with a corresponding simulation zero-knowledge protocol. This can be seen more clearly in the Feige-Lapidot-Shamir (FLS) paradigm, which is addressed in the next section.

4 Reduction Zero-Knowledge Proof System for NP

In this section, we first present the Feige-Lapidot-Shamir (FLS) paradigm and then give a 4-round public-coin reduction zero-knowledge proof system for NP using the FLS paradigm. Furthermore, in practice our protocol works in 3 rounds, since the first-round message sent by the verifier is fixed once and for all.

4.1 Feige-Lapidot-Shamir (FLS) Paradigm

The FLS paradigm is a scheme that converts a witness-indistinguishable protocol into a zero-knowledge protocol. It was introduced in the context of non-interactive zero-knowledge (NIZK) [13], but has been extensively used in recent advances in zero-knowledge, such as concurrent zero-knowledge [10,26,9,23], resettable zero-knowledge [7,6,24,25] and non-black-box zero-knowledge [2]. Let L be an NP language and denote by R_L the corresponding NP relation for L. Suppose the common statement is x ∈ L and the prescribed prover has an auxiliary private input w such that (x, w) ∈ R_L. The FLS paradigm consists of two phases and converts a witness-indistinguishable protocol into a zero-knowledge protocol as follows.

In the first phase, called the generation phase, the prover and the verifier generate a string, denoted τ, which can be viewed as the transcript generated in the first phase, and fix a relation R′ for it. In this phase, both the prover and the verifier ignore the inputs (the common input and their auxiliary inputs), so this phase can be performed even before knowing which theorem will be proven. This means that even a malicious verifier cannot learn "knowledge" from the honest prover in this phase, since up to this point the prover has made no use of his auxiliary private input. It is also required that even a malicious prover cannot obtain from his interactions with the honest verifier in this phase a witness w′ for τ such that (τ, w′) ∈ R′.


In the second phase, called the WI proof phase, the prover proves (using a WI protocol) that either there exists a w such that (x, w) ∈ R_L or there exists a w′ such that (τ, w′) ∈ R′. In practice, in the second phase the honest prover just uses his private input w as the witness to prove that (x, w) ∈ R_L. However, using the malicious verifier as a subroutine, a simulator can obtain a witness w′ for τ such that (τ, w′) ∈ R′ and then prove (using the WI protocol) that he knows a witness w′ such that (τ, w′) ∈ R′. The witness indistinguishability property ensures that the malicious verifier cannot tell the difference between the real interaction and the simulated one.

4.2 Reduction Zero-Knowledge for NP Using the FLS Paradigm

To provide a reduction zero-knowledge proof system for NP, according to the definition of reduction zero-knowledge we need to construct two protocols: one is a simulation zero-knowledge proof system for NP, and the other is a protocol that can be knowledge-extraction reduced to it. For an NP-complete language L, suppose R_L is the corresponding NP relation and the common input is x ∈ L of length n. The prover has an auxiliary private input w such that (x, w) ∈ R_L. Under security parameter n, the prover P uses a one-round perfectly-binding bit commitment C and the verifier V uses a one-round hash-based bit commitment HC. Using the FLS paradigm, the two protocols are presented below. For both protocols, the first three rounds constitute the generation phase of the FLS paradigm and the last round corresponds to the WI proof phase. For the two protocols presented below, we have the following theorems.

Theorem 2. Assuming collision-resistant hash functions and one-way permutations exist, Protocol 1 is a black-box zero-knowledge proof system for NP.

Proof. The completeness can easily be verified; we focus on soundness and zero-knowledgeness.

Soundness. For any common input x ∉ L, since the hash-based commitment used by the verifier V is statistically hiding, a malicious prover P∗ (even with unbounded computational power) still cannot learn v before he commits to the values t and o in Round 2 (except with negligible probability). Then, by the perfect-binding property of the commitment scheme used by the prover, P∗ also cannot decommit c_t to o ⊕ v in Round 4 (except with negligible probability). Also note that since x ∉ L there exists no witness for x ∈ L. This means P∗ cannot obtain a witness for y in Round 4 (except with negligible probability). The soundness of Protocol 1 then follows from the soundness of Dwork and Naor's two-round WI proof system.

Zero-Knowledgeness. We remark that the same techniques introduced in [17], i.e., standard "rewinding" and approximate estimation of the probability of the verifier's correct decommitments, can be applied to show that Protocol 1 is indeed black-box zero-knowledge. Roughly, with oracle access to V∗, the black-box simulator S^{V∗} works as follows. S^{V∗} first runs V∗ to get R_V and hc_v sent by


Protocol 1 < P1, V1 >

– Round 1. V1 first uniformly selects a string v in {0, 1}^n and computes hc_v = HC(1^n, v). V1 then uniformly selects (and fixes once and for all) p(n) random strings R_{V1} = r_1, · · · , r_{p(n)}, each of length NIσLen(n), just as in the first round of Dwork and Naor's two-round WI, where p is a positive polynomial. At last V1 sends (hc_v, R_{V1}) to the prover P1.
– Round 2. P1 uniformly selects two strings t and o in {0, 1}^n and two corresponding random strings s_t and s_o. Then P1 computes c_t = C_{s_t}(1^n, t) and c_o = C_{s_o}(1^n, o). P1 then uniformly selects a random string r of length NIσLen(n), just as in the second round of Dwork and Naor's two-round WI. Finally P1 sends (c_t, c_o, r) to V1.
– Round 3. V1 decommits to hc_v; that is, V1 reveals v to P1.
– Round 4. P1 decommits to c_o (that is, P1 sends (o, s_o) to V1). Then, with respect to (x, c_t, c_o, s_o, o ⊕ v), P1 proves the statement y = "there exists a w such that (x, w) ∈ R_L OR there exists a string s such that c_t = C_s(1^n, v ⊕ o)" using Dwork and Naor's two-round WI. That is, for each i, 1 ≤ i ≤ p(n), P1 computes an NIZK proof for y with respect to the common random string r ⊕ r_i. At last, P1 sends (o, s_o, {NIZK(y, r ⊕ r_i)}_{i=1}^{p(n)}) to V1.
– Verifier's decision. If all p(n) NIZK proofs are acceptable then V1 accepts x ∈ L; otherwise, V1 rejects.

Protocol 2 < P2, V2 >

– Round 1. V2 uniformly selects (and fixes once and for all) p(n) random strings R_{V2} = r_1, · · · , r_{p(n)}, each of length NIσLen(n), just as in the first round of Dwork and Naor's two-round WI, where p is a positive polynomial. Then V2 sends R_{V2} to the prover P2. Note that R_{V2} is fixed once and for all.
– Round 2. P2 uniformly selects two strings t and o in {0, 1}^n and two corresponding random strings s_t and s_o. Then P2 computes c_t = C_{s_t}(1^n, t) and c_o = C_{s_o}(1^n, o). P2 then uniformly selects a random string r of length NIσLen(n), just as in the second round of Dwork and Naor's two-round WI. Finally P2 sends (c_t, c_o, r) to V2.
– Round 3. V2 uniformly selects a random string v in {0, 1}^n and sends v to P2.
– Round 4. P2 decommits to c_o (that is, P2 sends (o, s_o) to V2). Then, with respect to (x, c_t, c_o, s_o, o ⊕ v), P2 proves the statement y = "there exists a w such that (x, w) ∈ R_L OR there exists a string s such that c_t = C_s(1^n, v ⊕ o)" using Dwork and Naor's two-round WI. That is, for each i, 1 ≤ i ≤ p(n), P2 computes an NIZK proof for y with respect to the common random string r ⊕ r_i. At last, P2 sends (o, s_o, {NIZK(y, r ⊕ r_i)}_{i=1}^{p(n)}) to V2.
– Verifier's decision. If all p(n) NIZK proofs are acceptable then V2 accepts x ∈ L; otherwise, V2 rejects.


V∗ in Round 1. Then, for two arbitrary "garbage" values t′ and o′ in {0, 1}^n and two corresponding random strings s_{t′} and s_{o′}, S sends c_t = C_{s_{t′}}(1^n, t′) and c_o = C_{s_{o′}}(1^n, o′) to V∗. Then V∗ decommits hc_v and reveals v to S. After getting v, S rewinds to Round 2, randomly selects a string o, and sets t = o ⊕ v. Then, for corresponding random strings s_t and s_o, S sends c_t = C_{s_t}(1^n, t) and c_o = C_{s_o}(1^n, o) to V∗. If V∗ decommits hc_v correctly, that is, if V∗ reveals v correctly again³, then S goes to Round 4 with s_t as his auxiliary witness and completes his simulation. The zero-knowledge property is ensured by the computational hiding of the commitments used by the prover P and the witness indistinguishability of Dwork and Naor's protocol. □

Now, we consider Protocol 2 in comparison with Protocol 1. We remark that we do not know how to construct a zero-knowledge simulator for Protocol 2, and it is certainly impossible to construct a black-box zero-knowledge simulator for Protocol 2 assuming NP ⊄ BPP, since Protocol 2 is public-coin and Goldreich and Krawczyk have shown that only languages in BPP have constant-round black-box zero-knowledge public-coin proof systems [16]. The major difference between Protocol 1 and Protocol 2 is that in Protocol 1 the value v in Round 3 is determined by hc_v (the "determining" message) in Round 1, while in Protocol 2 there is no such "determining" message, and so the value v in Round 3 of Protocol 2 may maliciously be a function of the messages sent by the honest prover in Round 2. This is just the reason that the standard "rewinding" technique fails in showing that Protocol 2 is also zero-knowledge.

But we argue that it is somewhat unfair to say that the malicious verifier V2∗ gains "knowledge" from P2 (merely because we cannot prove Protocol 2 is zero-knowledge) while the malicious verifier V1∗ learns nothing from P1, on the following grounds. On one hand, although the value v sent by V2∗ in Round 3 is not "determined" and may maliciously depend on the messages sent by the honest P2 in Round 2, up to Round 3 V2∗ has not learnt any knowledge by sending a malicious value v, since the first three rounds constitute the first phase of the FLS paradigm and we have argued that a malicious verifier cannot learn "knowledge" in the first phase of the FLS paradigm. On the other hand, based on the above discussion one may further argue that, although V2∗ cannot learn "knowledge" in the first three rounds, the malicious value v may affect the final distribution of (x, c_t, c_o, s_o, o ⊕ v) on which Dwork and Naor's two-round WI is finally applied in Round 4, and so V2∗ may maliciously extract "knowledge" from the messages sent by P2 in Round 4. Here, we argue that Blum's coin-flipping-over-the-telephone protocol is applied to efface the effect of such a malicious v. Formally, we show below that Protocol 2 is indeed reduction zero-knowledge.

Theorem 3. Assuming one-way permutations exist, Protocol 2 is a reduction zero-knowledge proof system for NP.

³ We remark that to make the simulator work in expected polynomial time, one needs to approximately estimate the probability that V∗ decommits correctly in Round 3; for more details the reader is referred to [17].


Proof. For each malicious verifier V2∗ of Protocol 2 we construct a malicious verifier V1∗ for Protocol 1 who, by running V2∗, sets R_{V1} in Round 1 to be R_{V2}. Then, for each common input x ∈ L and its corresponding witness w, we show that the messages received by V2∗ from P2 (which uses w as his auxiliary private input) and the messages received by V1∗ from P1 (which uses the same w as his auxiliary private input) are computationally indistinguishable. First, in both protocols the messages sent by the honest provers in Round 2 are computationally indistinguishable due to the computational hiding of the commitment scheme used by the honest prover. Second, for the messages sent by the honest provers P1 and P2 in Round 4 of the two protocols, the difference lies in the p(n) NIZK proofs. Note that the distributions of the common random strings used for these NIZK proofs in the two protocols are identical, since R_{V1} is set to be R_{V2}. Then the difference between the p(n) NIZK proofs sent by P2 and the p(n) NIZK proofs sent by P1 is just the distribution of o ⊕ v. However, by the property of Blum's coin-flipping-over-the-telephone protocol, the values o ⊕ v in both protocols are pseudorandom. This means the distributions of o ⊕ v in the two protocols are computationally indistinguishable. □

5 Conclusion and Contributions

In this paper, by introducing reductions between protocols we extend the approaches to defining secure protocols in which even a malicious verifier still cannot learn "knowledge" from the honest prover. Our new notion (reduction zero-knowledge) lies between simulation zero-knowledge and witness indistinguishability. In contrast to normal simulation zero-knowledge, reduction zero-knowledge protocols can be made more efficient (especially for the verifier) and can be constructed under weaker assumptions, as seen in Section 4. Although reduction zero-knowledge is weaker than simulation zero-knowledge, in contrast to witness indistinguishability it is believed to lose little security in comparison with a corresponding simulation zero-knowledge protocol, as also seen in Section 4. In many applications we believe reduction zero-knowledge makes a good substitute for normal simulation zero-knowledge, especially in settings in which stronger computational assumptions are hard to achieve or (verifier) efficiency takes priority over "perfect" privacy (e.g., the distributed client-server setting, in which the server's efficiency is often the bottleneck of the system).

Acknowledgements

We are grateful to Ning Chen and Shirley H. C. Cheung for their valuable help in forming this paper.

References

1. M. Blum. Coin Flipping by Telephone. In Proc. IEEE Spring COMPCON, pp. 133-137. IEEE, 1982.


2. B. Barak. How to Go Beyond the Black-Box Simulation Barrier. In FOCS 2001.
3. G. Brassard, D. Chaum and C. Crépeau. Minimum Disclosure Proofs of Knowledge. JCSS, Vol. 37, No. 2, pp. 156-189, 1988.
4. M. Blum, A. De Santis, S. Micali and G. Persiano. Non-interactive Zero-Knowledge. SIAM Journal on Computing, 20(6): 1084-1118, 1991.
5. M. Blum, P. Feldman and S. Micali. Non-interactive Zero-Knowledge and Its Applications. In STOC 1988, pp. 103-112.
6. B. Barak, O. Goldreich, S. Goldwasser and Y. Lindell. Resettably-Sound Zero-Knowledge and Its Applications. In FOCS 2001.
7. R. Canetti, O. Goldreich, S. Goldwasser and S. Micali. Resettable Zero-Knowledge. In STOC 2000.
8. A. De Santis, G. Di Crescenzo, R. Ostrovsky, G. Persiano and A. Sahai. Robust Non-Interactive Zero-Knowledge. In Crypto 2001, pp. 566-598.
9. C. Dwork and M. Naor. Zaps and Their Applications. In FOCS 2000.
10. C. Dwork, M. Naor and A. Sahai. Concurrent Zero-Knowledge. In STOC 1998.
11. I. B. Damgård, T. P. Pedersen and B. Pfitzmann. On the Existence of Statistically Hiding Bit Commitment Schemes and Fail-Stop Signatures. Journal of Cryptology, 10(3): 163-194, 1997.
12. U. Feige, A. Fiat and A. Shamir. Zero-Knowledge Proofs of Identity. Journal of Cryptology, Vol. 1, pp. 77-94, 1988.
13. U. Feige, D. Lapidot and A. Shamir. Multiple Non-Interactive Zero-Knowledge Proofs Under General Assumptions. SIAM Journal on Computing, 29, 1999.
14. U. Feige and A. Shamir. Witness Indistinguishability and Witness Hiding Protocols. In STOC 1990.
15. O. Goldreich. Foundations of Cryptography: Basic Tools. Cambridge University Press, 2001.
16. O. Goldreich and H. Krawczyk. On the Composition of Zero-Knowledge Proof Systems. SIAM Journal on Computing, Vol. 25, No. 1, pp. 1-32, 1994.
17. O. Goldreich and A. Kahan. How to Construct Constant-Round Zero-Knowledge Proof Systems for NP. Journal of Cryptology, Vol. 9, No. 2, pp. 167-189, 1996.
18. S. Goldwasser, S. Micali and C. Rackoff. The Knowledge Complexity of Interactive Proof Systems. SIAM J. Comput., Vol. 18, No. 1, pp. 186-208, 1989.
19. O. Goldreich, S. Micali and A. Wigderson. Proofs that Yield Nothing But Their Validity or All Languages in NP Have Zero-Knowledge Proof Systems. JACM, Vol. 38, No. 1, pp. 691-729, 1991.
20. O. Goldreich and Y. Oren. Definitions and Properties of Zero-Knowledge Proof Systems. Journal of Cryptology, 7(1): 1-32, 1994.
21. S. Halevi and S. Micali. Practical and Provably-Secure Commitment Schemes From Collision-Free Hashing. In Crypto 1996.
22. J. Kilian and E. Petrank. An Efficient Non-Interactive Zero-Knowledge Proof System for NP with General Assumptions. Journal of Cryptology, 11(2), 1998.
23. J. Kilian, E. Petrank and R. Richardson. Concurrent and Resettable Zero-Knowledge in Poly-logarithmic Rounds. In STOC 2001.
24. S. Micali and L. Reyzin. Soundness in the Public-Key Model. In Crypto 2001.
25. S. Micali and L. Reyzin. Min-Round Resettable Zero-Knowledge in the Public-Key Model. In EuroCrypt 2001.
26. R. Richardson and J. Kilian. On the Concurrent Composition of Zero-Knowledge Proofs. In EuroCrypt 1999.

A New Notion of Soundness in Bare Public-Key Model

Shirley H.C. Cheung, Xiaotie Deng, C.H. Lee, and Yunlei Zhao

Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
[email protected]
{csdeng,cschlee,csylzhao}@cityu.edu.hk

Abstract. A new notion of soundness in the bare public-key (BPK) model is presented. This new notion lies between one-time soundness and sequential soundness, and its reasonableness is justified in the context of resettable zero-knowledge when the resettable zero-knowledge prover is implemented by smart cards.

1 Introduction

The bare public-key (BPK) model was introduced by Canetti, Goldreich, Goldwasser and Micali in the context of resettable zero-knowledge (rZK) [3]. The BPK model simply assumes that all users have deposited a public key in a file that is accessible by all users at all times. The only assumption on this file is that all its entries are guaranteed to be deposited before any interaction among the users takes place. The BPK model is very simple, and it is in fact a weak version of the frequently used public-key infrastructure (PKI) model, which underlies any public-key cryptosystem or digital signature scheme. Despite its apparent simplicity, the BPK model is quite powerful. While rZK protocols exist both in the standard and in the BPK model, only in the latter case can they be constant-round, at least in a black-box sense [3,8,4].

Micali and Reyzin [8] first noted and clarified the soundness of protocols in the BPK model. In the BPK model, the verifier has a secret key SK corresponding to its public key PK. Thus, a malicious prover could potentially gain some knowledge about SK from an interaction with the verifier, and this gained knowledge might help him convince the verifier of a false theorem in a subsequent interaction. In [8] four notions of soundness in the public-key model were presented, each of which implies the previous one:

1. One-time soundness. A potential malicious prover is allowed a single interaction with the verifier per theorem statement.

This research is supported by research grants of City University of Hong Kong (Nos. 7001023 and 7001232) and a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CityU1056/01E).

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 318–325, 2003. © Springer-Verlag Berlin Heidelberg 2003


2. Sequential soundness. A potential malicious prover is allowed multiple but sequential interactions with the verifier.
3. Concurrent soundness. A potential malicious prover is allowed multiple interleaved interactions with the verifier.
4. Resettable soundness. A potential malicious prover is allowed to reset the verifier with the same random tape and interact with it concurrently.

The four notions above are not only meaningful, but also distinct. That is, for each soundness notion, there exists a protocol that satisfies this soundness condition but does not satisfy the next one [8].

1.1 Our Contribution

In this paper, we note that besides the above four soundness notions, there exists another soundness notion in the BPK model, which we name weak sequential soundness. We prove that weak sequential soundness is distinct from the above four soundness notions and justify its reasonableness in the context of resettable zero-knowledge when the resettable zero-knowledge prover is implemented by smart cards.

1.2 Organization of This Paper

In Section 2, we present a formal definition of weak sequential soundness and justify its reasonableness. In Section 3, we separate weak sequential soundness from the original four soundness notions presented by Micali and Reyzin. We conclude in Section 4 with some discussions of further investigations.

2 Weak Sequential Soundness in BPK Model

Roughly speaking, weak sequential soundness in the BPK model means that it is infeasible for a potential malicious prover to convince an honest verifier of a false statement, even if he is allowed to interact with the verifier an a-priori-bounded polynomial number of times per theorem statement. We present its formal definition in the following. For the formal definition of the original four soundness notions the reader is referred to [8].

2.1 Honest Players in BPK Model

Let
– F be a public file that consists of a polynomial-size collection of records (id, PK_id), where id is a string identifying a verifier and PK_id is its (alleged) public key.
– P be an honest prover (for a language L). It is an interactive deterministic polynomial-time Turing machine (TM) that is given as inputs (1) a security parameter 1^n, (2) an n-bit string x ∈ L, (3) an auxiliary input y, (4) a public file F, (5) a verifier identity id and (6) a random tape w.


– V be an honest verifier. It is an interactive deterministic polynomial-time TM that works in two stages. In stage one (the key-generation stage), on input a security parameter 1^n and a random tape r, V outputs a public key PK and the corresponding secret key SK. In stage two (the verification stage), on input SK, an n-bit string x and a random tape ρ, V performs an interactive protocol with a prover, and outputs either "accept x" or "reject x".

2.2 Weak Sequential Soundness

For an honest verifier V with public key PK and secret key SK, a (t, U)-weak sequential malicious prover P∗, for a positive polynomial t and an a-priori-bounded polynomial U, is a probabilistic polynomial-time TM that, on first inputs 1^n (the security parameter) and PK, runs sequentially in at most t(n) rounds as follows. In each round i (1 ≤ i ≤ t(n)), P∗ selects on the fly a common input x_i (which may be equal to x_j for 1 ≤ j < i) and interacts with the verification stage of V(SK, x_i, ρ_i), with the restriction that the same common input cannot be used by P∗ in more than U(n) rounds. We note that in different rounds V uses independent random tapes in his verification stage (that is, ρ_1, ρ_2, . . . , ρ_{t(n)} are independent random strings). We then say a protocol < P, V > satisfies weak sequential soundness if for any honest verifier V, all positive polynomials t, all a-priori-bounded polynomials U, and all (t, U)-weak sequential malicious provers P∗, the probability that there exists an i (1 ≤ i ≤ t(n)) such that V(SK, x_i, ρ_i) outputs "accept x_i" while x_i ∉ L is negligible in n.

2.3 Motivations, Implementations and Applications of Weak Sequential Soundness

As an extension and generalization of one-time soundness, roughly speaking, almost all the ways to implement one-time soundness presented in [8] can also be used to implement weak sequential soundness. A simple way is to let the honest verifier keep a counter for each common input on which he has been invoked. The upper bound of each counter is set to U(n), and the honest verifier refuses to take part in further interactions with respect to the same common input once the reading of the corresponding counter reaches its upper bound.

The BPK model was introduced in the literature on resettable zero-knowledge, and note that the major application of resettable zero-knowledge is that it makes it possible to implement the zero-knowledge prover on devices that may be maliciously reset to their initial state or cannot afford to generate fresh randomness for each invocation. A notable example of such devices is the widely used smart card. Resettable zero-knowledge also provides the basis for resettable identification protocols [2]. We then consider the distributed client/server setting in which the clients are smart cards. We remark that this setting is widely used in practice, especially in E-commerce over the Internet. When a resettable identification scheme is executed in this setting, we can view the identifier of each smart card as the common input. An adversary may hold many smart


cards, but according to the definition of weak sequential soundness we require that each smart card can be used by the adversary at most an a-priori-bounded polynomial number of times. Note that in practice each smart card has an expiration date, which can be viewed as corresponding to the a-priori-bounded polynomial required in weak sequential soundness. In this smart-card/server setting there is a central server, which may be located in a central bank or another organization, that plays the verifier's role. This central server keeps a record for each smart card and dynamically updates its information. It is easy for this central server to keep a counter in each record.
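The counter mechanism described above can be sketched in a few lines of Python; `verify` here is a hypothetical stand-in for the verification stage V(SK, x, ρ), which the paper leaves abstract.

```python
class CountingVerifier:
    """Wrapper enforcing the per-statement interaction bound U(n)
    described above; one counter per common input."""

    def __init__(self, verify, bound):
        self.verify = verify      # stand-in verification procedure
        self.bound = bound        # U(n): max interactions per statement
        self.counters = {}        # common input -> interactions so far

    def interact(self, x):
        used = self.counters.get(x, 0)
        if used >= self.bound:
            return "refuse"       # bound reached: refuse to interact
        self.counters[x] = used + 1
        # Fresh, independent randomness is assumed per round inside
        # `verify`, matching the independent tapes rho_1, ..., rho_t(n).
        return "accept" if self.verify(x) else "reject"

# Usage: allow at most 2 interactions per statement.
server = CountingVerifier(verify=lambda x: True, bound=2)
assert [server.interact("x1") for _ in range(3)] == ["accept", "accept", "refuse"]
```

In the smart-card setting, the key in `counters` would be the card identifier, and the bound would correspond to the card's expiration policy.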

3 Separation of the Various Soundness Notions in BPK Model

In this section we show that weak sequential soundness is not only meaningful but also distinct from the original four soundness notions presented by Micali and Reyzin. We remark that, as noted by Micali and Reyzin, separating the various soundness notions in the public-key model is conceptually important [8].

3.1 Previous Results

In [8] Micali and Reyzin proved the following theorems.

Theorem 1 ([8]). If one-way functions exist, there is a compiler-type algorithm that, for any language L and any interactive argument system for L satisfying one-time soundness, produces another interactive argument system for the same language L that satisfies one-time soundness but not weak sequential soundness.

Here by an interactive argument system we mean an interactive protocol whose soundness holds against probabilistic polynomial-time provers rather than computationally unbounded provers. We note that although weak sequential soundness is not considered in [8], in their proof the malicious prover in the compiled protocol can convince the honest verifier of a false statement (common input) with probability 1 if he can invoke the honest verifier on the same common input twice. This means that the compiled argument system does not satisfy weak sequential soundness.

Theorem 2 ([8]). If one-way functions exist, there is a compiler-type algorithm that, for any language L and any interactive argument system for L satisfying sequential soundness, produces another interactive argument system for the same language L that satisfies sequential soundness but not concurrent soundness.

Theorem 3 ([8]). There exists a compiler-type algorithm that, for any language L and any interactive proof (or argument) system for L satisfying concurrent soundness, produces another interactive proof (respectively, argument) system for the same language L that satisfies concurrent soundness but not resettable soundness.


Shirley H.C. Cheung et al.

3.2 Our Result

We first introduce the main tool used in our construction: pseudorandom functions.

Definition 1 (Pseudorandom Function PRF [6]). In this paper we use a stronger version of PRF in which the pseudorandomness is guaranteed against subexponential-size adversaries rather than polynomial-size ones. A function PRF: {0,1}^n × {0,1}^* → {0,1}^n is a pseudorandom function if there exists α > 0 such that for all sufficiently large k (the security parameter, which is polynomial in n) and all 2^{k^α}-gate adversaries ADV, the following difference is negligible in k:

| Pr[ PRFKey ←_R {0,1}^n : ADV^{PRF(PRFKey,·)} = 1 ] − Pr[ F ←_R ({0,1}^n)^{{0,1}^n × {0,1}^*} : ADV^{F(·)} = 1 ] |.

The value α is called the pseudorandomness constant. Such a PRF can be constructed assuming the security of RSA with large prime exponents against subexponentially strong adversaries.

The main result of this section is the following theorem.

Theorem 4. Assuming the security of RSA with large exponents against subexponentially strong adversaries, there is a compiler-type algorithm that, for any language L and any interactive argument system for L satisfying weak sequential soundness, produces another interactive argument system for the same language L that satisfies weak sequential soundness but not sequential soundness.

Proof. The proof uses the "complexity leveraging" technique introduced in [3]. Let γ be the following constant: for all sufficiently large n, the length of the NP-witness y for x ∈ L of length n is upper-bounded by n^γ. We then set ε > γ/α. For an NP-language L we construct an argument system for L on security parameter n while using a PRF with a (larger) security parameter k = n^ε. This ensures that one can enumerate all potential NP-witnesses y for x ∈ L in time 2^{n^γ}, which is still less than the time it would take to break the pseudorandomness of the PRF, because 2^{n^γ} < 2^{k^α}.
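The parameter choice behind complexity leveraging can be recorded as a one-line calculation (a restatement of the inequality above, not new material):

```latex
% Complexity leveraging: with PRF security parameter k = n^{\varepsilon},
% witness enumeration stays strictly below the PRF's security threshold.
\[
  \varepsilon > \frac{\gamma}{\alpha}
  \;\Longrightarrow\;
  n^{\gamma} < n^{\varepsilon\alpha} = \bigl(n^{\varepsilon}\bigr)^{\alpha} = k^{\alpha}
  \;\Longrightarrow\;
  2^{\,n^{\gamma}} < 2^{\,k^{\alpha}},
\]
% so a reduction running in time 2^{n^\gamma} cannot itself break the PRF.
```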
Let F be a pseudorandom function as in Definition 1 and let U be the a priori bounded polynomial guaranteed by the definition of weak sequential soundness. Denote by F_s(x) = R_1 . . . R_{U(|x|)+1} the output of F with seed s on input x, where |R_i| = |x| for each i, 1 ≤ i ≤ U(|x|) + 1. Let x, of length n, be the theorem that the prover is trying to prove to the verifier. Given any interactive argument system (P, V) for a language L in NP satisfying weak sequential soundness, we produce another interactive argument system (P′, V′) for the same language L that satisfies weak sequential soundness but not sequential soundness.

Add to Key Gen: Generate a random n-bit seed s and add s to the key SK.


Add P′ Step: Set β_1 = β_2 = · · · = β_{U(n)+1} = 0, and send (β_1 . . . β_{U(n)+1}) to the verifier V′.

Add V′ Step: If F_s(x) = β_1 . . . β_{U(n)+1}, accept and stop. Otherwise, randomly select i in {1, 2, . . . , U(n) + 1} and send (i, R_i) to the prover P′.

Note that a malicious prover can get V′ to accept a false statement x with overwhelming probability after (U(n) + 1)^2 sequential interactions with the honest verifier V′ on the same common input x. However, if the malicious prover is restricted to using the same common input x at most an a priori bounded polynomial number of times, specifically U(n) times, in its sequential interactions with the honest verifier, then, as we prove below, it is infeasible for the malicious prover to get the honest verifier to accept an x ∉ L.

First, assume the F used by V′ is a truly random function. Then it is easy to see that the malicious prover cannot feasibly convince the honest verifier of a false statement if each common input x is used by the malicious prover at most U(n) times in its interactions with the honest verifier. Now we deal with the real case, in which the honest verifier uses a pseudorandom function rather than a truly random function. The subtle problem here is that in our context, during the interactions between the malicious prover and the honest verifier, the malicious prover selects many common inputs on the fly. The problem is how to decide whether x ∈ L or x ∉ L for each common input selected by the malicious prover on the fly. In [1], to overcome this problem, they required their protocol to also be an argument of knowledge. What saves us here is "complexity leveraging": for each common input selected by the malicious prover on the fly, we just enumerate all its NP-witnesses in time 2^{n^γ} and decide whether x ∈ L or not.
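The attack on sequential soundness can be simulated concretely. In this toy sketch (ours, not from the paper) F_s(x) is modeled by a fixed random table of pads rather than an actual PRF, and `U` stands in for U(n).

```python
import random

random.seed(0)

U = 5                                                  # stands in for U(n)
pads = [random.getrandbits(16) for _ in range(U + 1)]  # models F_s(x) = R_1..R_{U+1}

def verifier_step(betas):
    """Accept if the prover's betas equal F_s(x); otherwise leak one
    random pad (i, R_i), as in the Add-V'-Step of the compiled protocol."""
    if betas == pads:
        return "accept", None
    i = random.randrange(U + 1)
    return "continue", (i, pads[i])

# Malicious prover: start with all-zero betas, record every leaked pad,
# and retry on the SAME common input until all U+1 pads are known.
known = [0] * (U + 1)
sessions = 0
while True:
    sessions += 1
    outcome, leak = verifier_step(known)
    if outcome == "accept":
        break
    i, r = leak
    known[i] = r

# By a coupon-collector argument, (U+1)^2 sequential sessions suffice with
# overwhelming probability; within only U(n) sessions some pad typically
# remains unknown, which is why the weak sequential bound blocks the attack.
```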
Now we claim that it is also infeasible for the malicious prover to convince the honest verifier of a false statement in its weak sequential interactions with the honest verifier even when the function F used by the honest verifier is a pseudorandom function. Otherwise, suppose the malicious prover can convince the honest verifier of a false statement with non-negligible probability in its weak sequential interactions when F is a pseudorandom function. Then we can construct a distinguisher of size less than 2^{k^α} that distinguishes F from a truly random function with non-negligible probability, as follows. The distinguisher runs the malicious prover while using its oracle to play the honest verifier, thereby simulating the real interactions between the malicious prover and the honest verifier. For each common input x selected by the malicious prover on the fly, the distinguisher decides whether x ∈ L in time 2^{n^γ}, and for each x ∈ L it also feeds the corresponding NP-witness to the malicious prover. If during the simulation the distinguisher finds that the honest verifier accepts an x ∉ L, then it declares that the oracle is a pseudorandom function. Note that the size of this distinguisher is at most poly(n) · 2^{n^γ} < 2^{k^α}, which violates the pseudorandomness of the PRF F. ⊓⊔

Write A < B if there exists a compiler-type algorithm that, for any language L and any interactive argument system for L satisfying soundness notion A, produces another interactive argument system for the same language L that satisfies soundness notion A but not soundness notion B. Then the five soundness notions in the BPK model can be ordered as follows: one-time soundness < weak sequential soundness < sequential soundness < concurrent soundness < resettable soundness.

4 Future Investigations

Canetti et al. first gave a 5-round resettable black-box zero-knowledge argument with sequential soundness for NP in the BPK model [3], and the round complexity was further reduced to four by Micali and Reyzin [8]. Micali and Reyzin also showed that any (resettable or not) auxiliary-input zero-knowledge protocol with sequential soundness for a language outside BPP needs at least three rounds. Note that black-box zero-knowledge implies auxiliary-input zero-knowledge [5]. In a stronger model, the upper-bounded public-key (UPK) model, in which the public key of an honest verifier may be used at most an a priori bounded polynomial number of times, Micali and Reyzin presented a 3-round black-box resettable zero-knowledge argument with sequential soundness for NP [7]. We note, however, that their protocol does not satisfy our weak sequential soundness. The reason is that the definition of weak sequential soundness does not restrict the number of times that the public key of an honest verifier can be used; the public key of an honest verifier can be used by a malicious prover any polynomial number of times. What weak sequential soundness restricts is that each common input selected by a malicious prover on the fly cannot be used more than an a priori bounded polynomial number of times. If the public key of an honest verifier can be used any polynomial number of times, then a malicious prover can easily cheat the honest verifier of Micali and Reyzin's protocol [7] with non-negligible probability. This raises the question of whether a 3-round resettable zero-knowledge argument with weak sequential soundness for NP exists in the BPK model. We conjecture that such a protocol does exist and have made some progress towards establishing this.

Acknowledgment

The fourth author is grateful to Leonid Reyzin for his kind encouragement and clarifications.

References

1. Boaz Barak, Oded Goldreich, Shafi Goldwasser, and Yehuda Lindell. Resettably-Sound Zero-Knowledge and its Applications. In IEEE Symposium on Foundations of Computer Science, pages 116–125, 2001.
2. Mihir Bellare, Marc Fischlin, Shafi Goldwasser, and Silvio Micali. Identification Protocols Secure against Reset Attacks. In B. Pfitzmann (Ed.): EUROCRYPT 2001, LNCS 2045, pages 495–511. Springer-Verlag, 2001.


3. Ran Canetti, Oded Goldreich, Shafi Goldwasser, and Silvio Micali. Resettable Zero-Knowledge. STOC 2000, pages 235–244, 2000. Available from http://www.wisdom.weizmann.ac.il/∼oded/.
4. Ran Canetti, Joe Kilian, Erez Petrank, and Alon Rosen. Black-Box Concurrent Zero-Knowledge Requires Ω̃(log n) Rounds. In ACM Symposium on Theory of Computing (33rd STOC 2001), pages 570–579, 2001.
5. O. Goldreich and Y. Oren. Definitions and Properties of Zero-Knowledge Proof Systems. Journal of Cryptology, 7(1):1–32, 1994.
6. Oded Goldreich, Shafi Goldwasser, and Silvio Micali. How to Construct Random Functions. Journal of the Association for Computing Machinery, 33(4):792–807, 1986.
7. Silvio Micali and Leonid Reyzin. Min-Round Resettable Zero-Knowledge in the Public-Key Model. In B. Pfitzmann (Ed.): EUROCRYPT 2001, LNCS 2045, pages 373–393. Springer-Verlag, 2001. Available from http://www.cs.bu.edu/∼reyzin/.
8. Silvio Micali and Leonid Reyzin. Soundness in the Public-Key Model. In J. Kilian (Ed.): CRYPTO 2001, LNCS 2139, pages 542–565. Springer-Verlag, 2001. Available from http://www.cs.bu.edu/∼reyzin/.

Robust Information-Theoretic Private Information Retrieval

Amos Beimel and Yoav Stahl

Computer Science Dept., Ben-Gurion University, Beer-Sheva 84105, Israel
{beimel,stahl}@cs.bgu.ac.il

Abstract. A Private Information Retrieval (PIR) protocol allows a user to retrieve a data item of its choice from a database, such that the servers storing the database do not gain information on the identity of the item being retrieved. PIR protocols have been studied in depth since the subject was introduced by Chor, Goldreich, Kushilevitz, and Sudan in 1995. The standard definition of PIR protocols raises a simple question – what happens if some of the servers crash during the operation? How can we devise a protocol which still works in the presence of crashing servers? Current systems do not guarantee availability of servers at all times for many reasons, e.g., server crashes or communication problems. Our purpose is to design robust PIR protocols, i.e., protocols which still work correctly even if only k out of ℓ servers are available during the protocols' operation (the user does not know in advance which servers are available). We present various robust PIR protocols giving different tradeoffs between the different parameters. These protocols are incomparable, i.e., for different values of n and k we get better results using different protocols. We first present a generic transformation from regular PIR protocols to robust PIR protocols; this transformation is important since any improvement in the communication complexity of regular PIR protocols immediately implies an improvement in the communication of the robust PIR protocols. We also present two specific robust PIR protocols. Finally, we present robust PIR protocols which can tolerate Byzantine servers, i.e., robust PIR protocols which still work in the presence of malicious servers or servers with corrupted or obsolete databases.

1 Introduction

A Private Information Retrieval (PIR) protocol allows a user to retrieve a data item of his choice from a database, such that the server storing the database does not gain information on the identity of the item being retrieved. For example, an investor might want to know the value of a certain stock in the stock market without revealing which stock she is interested in. The problem was introduced by Chor, Goldreich, Kushilevitz, and Sudan [1], and has attracted a considerable amount of attention. It is convenient to model the database by an n-bit string x, where the user, holding some retrieval index i, wishes to learn the i-th data bit x_i. This default setting can be easily extended to handle more general scenarios, e.g., larger data items, several users, or several retrieved items per user.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 326–341, 2003. © Springer-Verlag Berlin Heidelberg 2003


The definition of PIR protocols raises a simple question – what happens if one of the servers crashes during the operation? How can we devise a protocol which still works in the presence of crashing servers? Current systems do not guarantee availability of servers at all times for many reasons, e.g., server crashes or communication problems. Our purpose is to design robust PIR protocols. Given a database x which is replicated amongst ℓ servers, and a parameter k ≤ ℓ which specifies the minimal number of servers that are available at any moment, the user in our protocol can retrieve x_i by using the answers of any k servers. I.e., even if ℓ − k servers are unreachable while the protocol is being performed (e.g., they have crashed or they are disconnected), the user can still reconstruct x_i. The user does not need to know in advance which servers are online and which servers will be online during the process. A trivial solution to this problem is to execute an independent PIR protocol for each group of k servers. This yields a solution whose complexity is (ℓ choose k) times the complexity of the best known PIR protocol. Even for fairly small ℓ and k, the factor (ℓ choose k) can be too expensive. Another trivial solution is that the user first checks which servers are available and then executes a regular PIR protocol with these servers. The problem with this solution is that we need two rounds of communication. Another problem is that servers can crash between the first round and the second round. Our goal is to design robust protocols in which the dependency of the communication complexity on ℓ and k is polynomial. We present additional motivating examples. First, consider a database which is updated frequently. In this case, the servers might hold different versions of the database.
If the user and servers execute a robust PIR protocol, and each server sends the version number of its database, then as long as a big enough subset of the servers hold the latest version of the database, the user can recover the desired bit. Second, consider a system in which the servers do not have the same response time. Furthermore, the response time can vary according to a server's load at a specific moment. Using a robust protocol, the user needs only the first k answers that it receives, i.e., it need not wait for slow servers.

Related Work. Before proceeding, we give a brief overview of some relevant results on PIR. The simplest solution to the PIR problem is sending the entire database to the user. This solution is impractical for large databases. However, if the server is not allowed to gain any information about the retrieved bit, then the linear communication complexity of this solution is optimal [1]. To overcome this problem, [1] suggested that the user access replicated copies of the database kept on different servers, requiring that each server gain absolutely no information on the bit the user reads (thus, these protocols are called information-theoretic PIR protocols). The best information-theoretic PIR protocols known to date are summarized below: (1) a 2-server protocol with communication complexity of O(n^{1/3}) bits [1], (2) a k-server protocol, for any constant k > 1, with communication complexity of O(k^3 n^{1/(2k−1)}) bits [2] (improving on [1,3,4], see also [5]), (3) a k-server protocol, for any constant k > 1, with communication complexity of 2^{Õ(k)} · n^{2 log log k/(k log k)} bits [6], and (4) a protocol with O(log n) servers and communication complexity of O(log^2 n log log n) bits [7,8,1]. In all


these protocols it is assumed that the servers do not communicate with each other. t-private protocols, in which the user is protected against collusions of up to t servers, have been considered in [1,2,5]. Specifically, the best communication complexity of such protocols is O(n^{1/⌊(2k−1)/t⌋}) [5]. For a more extensive discussion of related PIR work the reader can consult, e.g., [9].

One of the main tools we use is perfect hash families, which were introduced by [10]. These families were first used in compiler design to prove lower bounds on the size of a computer program. In the last few years, perfect hash families have been applied to circuit complexity problems [11], derandomization of probabilistic algorithms [12], threshold cryptography [13,14], and other tasks in cryptography [15,16]. Perfect hash families have also been studied from a combinatorial point of view [17,18,19,20,21,22,23]. A comprehensive overview of perfect hashing can be found in [24].

Our Results. We present several protocols with various features which address the robust PIR problem. These protocols are incomparable, i.e., for different values of n and k we get better results using different protocols. Our first result is a generic transformation from k-out-of-k PIR protocols to k-out-of-ℓ PIR protocols: we show that if there exists a perfect hash family H_{ℓ,k} of size w_{ℓ,k} (the definition of a perfect hash family appears in Definition 3) and if there exists a k-out-of-k PIR protocol with communication complexity PIR_k(n) per server, then there exists a k-out-of-ℓ PIR protocol with communication complexity w_{ℓ,k} · PIR_k(n) per server. Since this is a generic transformation, any improvement in the communication complexity of k-out-of-k PIR protocols (e.g., the recent result of [6]) directly translates into improved robust PIR protocols. We also present a generic transformation from t-private k-out-of-k PIR protocols to t-private k-out-of-ℓ PIR protocols.
Our second result is a robust PIR protocol using the polynomial-interpolation-based PIR protocol of [7,8,1]. This protocol is a k-out-of-ℓ PIR protocol with communication complexity O(k n^{1/k} ℓ log ℓ). That is, the communication in this protocol is polynomial in ℓ and k; however, its dependency on n is worse than in the protocols that can be obtained via the generic transformation. Our third protocol combines Shamir's secret-sharing scheme with the 2-server protocol of [1]. This results in a 2-out-of-ℓ protocol with communication complexity O(n^{1/3} ℓ log ℓ), that is, the same communication complexity that can be achieved using the generic protocol. We present this protocol as it is a more direct approach; we hope that this approach will be used in the future to construct more efficient protocols for larger values of k. Finally, we extend our discussion to robust PIR protocols which can tolerate Byzantine servers. That is, we require that the user can reconstruct the correct value of x_i even if the answers of some servers are maliciously altered. We first show a generic transformation from robust PIR protocols to robust PIR protocols that tolerate Byzantine servers. We next show that there exists a robust k-out-of-ℓ PIR protocol in which the user can reconstruct the correct value of x_i as long as it receives at least k answers, of which at most k/3 are corrupted. The communication complexity of this protocol is O(k n^{1/⌊k/3⌋} ℓ log ℓ).


Organization. In Section 2 we provide the necessary definitions; in Section 3 we show generic transformations from PIR protocols to robust PIR protocols; in Section 4 we show a specific construction of a k-out-of-ℓ robust PIR protocol; and in Section 5 we construct a 2-out-of-ℓ robust PIR protocol using Shamir's secret-sharing scheme. Finally, in Section 6 we present robust PIR protocols tolerating Byzantine servers.

2 Preliminaries

We start with some notation. By [k] we denote the set {1, . . . , k}. Let GF(q) denote the finite field with q elements, where q is a prime power. Given a vector V, we denote by V[j] the j-th coordinate of V.

2.1 PIR Protocols

We define 1-round information-theoretic PIR protocols. A k-out-of-ℓ PIR protocol involves ℓ servers S_1, . . . , S_ℓ, each holding the same n-bit string x (the database), and a user who wants to retrieve a bit x_i of the database.

Definition 1 (Robust PIR). A k-out-of-ℓ PIR protocol P = (R, Q, A, C) consists of a probability distribution R and three algorithms: a query algorithm Q(·,·,·), an answering algorithm A(·,·,·), and a reconstruction algorithm C(·,·, . . . ,·) (C has k + 3 arguments). At the beginning of the protocol, the user picks a random string r according to the distribution R. For j = 1, . . . , ℓ, it computes a query q_j = Q(j, i, r) and sends it to server S_j. Each server responds with an answer a_j = A(j, q_j, x) (the answer is a function of the query and the database; without loss of generality, the servers are deterministic). Finally, the user, upon receiving any k answers a_{j_1}, . . . , a_{j_k}, computes the bit x_i by applying the reconstruction algorithm C(i, r, K, a_{j_1}, . . . , a_{j_k}), where K = {j_1, . . . , j_k}. A k-out-of-ℓ protocol as above is a t-private robust PIR protocol if it satisfies the following requirements:

Correctness. The user always computes the correct value of x_i from any k answers. Formally, for every i ∈ {1, . . . , n}, every random string r, every set K = {j_1, . . . , j_k} ⊆ {1, . . . , ℓ}, and every database x ∈ {0,1}^n it holds that C(i, r, K, A(j_1, Q(j_1, i, r), x), . . . , A(j_k, Q(j_k, i, r), x)) = x_i.

t-Privacy. Each collusion of up to t servers has no information about the bit that the user tries to retrieve: for every two indices i_1, i_2 ∈ {1, . . . , n} and for every {j_1, . . . , j_t} ⊆ {1, . . . , ℓ}, the distributions ⟨Q(j_1, i_1, r), . . . , Q(j_t, i_1, r)⟩_{r ∈ R} and ⟨Q(j_1, i_2, r), . . . , Q(j_t, i_2, r)⟩_{r ∈ R} are identical.

We refer to 1-private k-out-of-ℓ PIR protocols simply as k-out-of-ℓ PIR protocols. The main difference between the definitions of PIR and robust PIR is in the correctness requirement.
That is, regular PIR protocols are k-out-of-k robust PIR protocols.

Definition 2 (Communication Complexity). Given a k-out-of-ℓ PIR protocol, the communication per server is the number of bits communicated between
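The four-algorithm interface of Definition 1 can be illustrated with the trivial "send the whole database" protocol mentioned in the introduction: the query is empty (hence perfectly private), every answer is all of x, and C simply reads x_i from any single answer. The function names below are ours, and the communication is linear in n, which is exactly what the paper's protocols improve on.

```python
# Illustrative instance (not from the paper) of the (R, Q, A, C) interface of
# Definition 1: the trivial k-out-of-ell protocol in which every server sends
# the entire database. Queries are independent of i, so the protocol is
# perfectly private, but the answer complexity is linear in n.

def Q(j, i, r):
    return ()                     # empty query: reveals nothing about i

def A(j, q, x):
    return x                      # each server answers with the whole database

def C(i, r, K, *answers):
    return answers[0][i]          # any single answer suffices to read x_i

x = [0, 1, 1, 0, 1, 0, 0, 1]      # toy database, n = 8 bits
ell, k, i, r = 5, 3, 6, None
queries = [Q(j, i, r) for j in range(ell)]
answers = [A(j, queries[j], x) for j in range(ell)]
K = [0, 2, 4]                     # any k of the ell servers respond
bit = C(i, r, K, *(answers[j] for j in K))
# bit == x[6]
```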


the user and any single server on a database of size n, maximized over all choices of x ∈ {0,1}^n, i ∈ [n], and all random inputs. The total communication in the protocol is the number of bits communicated between the user and the ℓ servers. The query complexity is the maximal number of bits sent from the user to any single server, and the answer complexity is the maximal number of answer bits sent by any server.

2.2 Shamir's Secret Sharing

Secret-sharing schemes are an important tool in the construction of several PIR protocols. See [5] for a discussion of the role of secret sharing in PIR protocols. In general terms, a secret-sharing scheme enables a user to share a given secret amongst ℓ users such that only subsets of at least t users can reconstruct the secret, and any subset of fewer than t users gets no information on the secret. We next describe Shamir's secret-sharing scheme [25], which we use in our protocols.

Shamir's Scheme [25]. Let F be a finite field with q > ℓ elements, and let ω_1, . . . , ω_ℓ be distinct nonzero elements of F. In order to share a secret s ∈ F using a t-out-of-ℓ sharing scheme, the dealer chooses t − 1 random elements a_1, . . . , a_{t−1}, which together with the secret s define a univariate polynomial p(Y) = a_{t−1} Y^{t−1} + a_{t−2} Y^{t−2} + . . . + a_1 Y + s. Observe that p(0) = s. The share of the j-th player is p(ω_j). Each set of at least t players can recover p(Y) by interpolation, and hence can also reconstruct s = p(0). More formally, for every set {j_1, . . . , j_t} there exist constants α_{j_1}, . . . , α_{j_t} (independent of p(Y) and s) such that s = p(0) = Σ_{h=1}^{t} α_{j_h} p(ω_{j_h}), where α_{j_h} = Π_{d ≠ h} ω_{j_d} / (ω_{j_d} − ω_{j_h}). On the other hand, every set of t − 1 players learns nothing about s from their shares.

In the previous scheme we shared one element of the field; we extend this notion to sharing a vector of field elements. Given a vector V of length m, i.e., V ∈ F^m, we define the shares of the vector, denoted V^1, . . . , V^ℓ, where each V^j is a vector in F^m, as follows: for each element V[a], where 1 ≤ a ≤ m, the user executes Shamir's t-out-of-ℓ secret-sharing scheme independently over the field F, producing ℓ shares s^a_1, . . . , s^a_ℓ. We then define the vector V^j as ⟨s^1_j, . . . , s^m_j⟩, i.e., the j-th share out of each set of shares.
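Shamir's scheme as described can be sketched over a small prime field; the prime, evaluation points, and parameter values below are illustrative choices of ours.

```python
import random

P = 101  # a prime > ell; all arithmetic is in GF(P)

def share(secret, t, ell):
    """Shamir t-out-of-ell sharing: each share is (omega_j, p(omega_j)) for
    a random degree-(t-1) polynomial p with p(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def p(y):
        return sum(c * pow(y, e, P) for e, c in enumerate(coeffs)) % P
    return [(j, p(j)) for j in range(1, ell + 1)]   # omega_j = j, all nonzero

def reconstruct(shares):
    """Recover s = p(0) from any t shares via the Lagrange coefficients
    alpha_{j_h} = prod_{d != h} omega_{j_d} / (omega_{j_d} - omega_{j_h})."""
    s = 0
    for h, (wh, vh) in enumerate(shares):
        alpha = 1
        for d, (wd, _) in enumerate(shares):
            if d != h:
                alpha = alpha * wd * pow(wd - wh, -1, P) % P
        s = (s + alpha * vh) % P
    return s

shares = share(42, t=3, ell=5)
# Any 3 of the 5 shares recover the secret; fewer reveal nothing about it.
```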

3 Generic Transformations

In this section we present several generic transformations from PIR protocols to robust PIR protocols.

3.1 A Replication Solution for 2-out-of-ℓ Robust PIR

We start with a generic transformation from 2-out-of-2 PIR protocols to 2-out-of-ℓ PIR protocols, proving the next theorem:

Theorem 1. If there is a 2-out-of-2 PIR protocol with communication PIR_2(n) per server, then there is a robust 2-out-of-ℓ PIR protocol with communication PIR_2(n) log ℓ per server.


Proof. Let P be a 2-out-of-2 PIR protocol. Given the retrieval index i, the user executes the given PIR protocol P to produce log ℓ independent pairs of queries {Q_1, . . . , Q_{log ℓ}} for the retrieval of x_i; each pair comprises 2 queries, i.e., Q_j = ⟨Q_j[0], Q_j[1]⟩, where Q_j[a] is the query for server a. Each server S_1, . . . , S_ℓ receives one query out of each pair of queries and answers this query. (We describe the algorithm that chooses one query out of each pair below.) The queries received by each server guarantee that if the user receives correct answers from at least 2 servers, then there exists an index m such that the user receives answers for both queries Q_m[0] and Q_m[1], and thus it can reconstruct the bit x_i (this is done independently of the answers that the user receives or does not receive for the other queries). We next explain which queries each server receives. Given a server S_j, we look at the representation b^j_1 b^j_2 . . . b^j_{log ℓ} of j as a binary number. The user sends the following log ℓ queries to S_j: for each 1 ≤ a ≤ log ℓ, send the query Q_a[b^j_a] to S_j, i.e., if b^j_a = 0 send Q_a[0] and if b^j_a = 1 send Q_a[1]. Each server, upon receiving the queries, replies independently to each query according to the PIR protocol. Since we assume that at least 2 servers are reachable, the user receives answers from at least 2 servers, say server S_{j_1} and server S_{j_2} (j_1 ≠ j_2). The binary representations of j_1 and j_2 differ in at least one bit; let m be the index of the first bit in which j_1 and j_2 differ, and assume without loss of generality that b^{j_1}_m = 0 and b^{j_2}_m = 1. The user takes the answer received from server S_{j_1} for the query Q_m[0] and the answer received from server S_{j_2} for the query Q_m[1] and reconstructs the desired bit x_i. This scheme is secure since each server receives only one query out of each pair of queries, and these pairs of queries are independent.
In this protocol each server receives log ℓ queries and answers each one of them; thus the communication per server is the number of queries multiplied by PIR_2(n), i.e., the total communication is O(PIR_2(n) ℓ log ℓ). ⊓⊔

Plugging in the PIR protocol of [1] we get:

Corollary 1. There exists a 2-out-of-ℓ PIR protocol with total communication of O(n^{1/3} ℓ log ℓ).
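The transformation in the proof can be sketched end-to-end with the classical XOR-based scheme as the underlying 2-out-of-2 protocol (for clarity we use the linear-communication variant, in which one query is a random subset S and the other is S ⊕ {i}, rather than the O(n^{1/3}) protocol of [1]); all function names below are ours.

```python
import random

random.seed(1)

# Underlying 2-out-of-2 PIR: queries are subsets of [n] as 0/1 vectors with
# q0 XOR q1 = e_i; each answer is the inner product <q, x> mod 2, so
# a0 XOR a1 = x_i. (Linear communication; used here only for clarity.)
def make_pair(n, i):
    q0 = [random.randrange(2) for _ in range(n)]
    q1 = q0[:]
    q1[i] ^= 1
    return q0, q1

def answer(q, x):
    return sum(qb & xb for qb, xb in zip(q, x)) % 2

def retrieve(x, i, ell):
    n, logl = len(x), ell.bit_length() - 1          # assume ell a power of 2
    pairs = [make_pair(n, i) for _ in range(logl)]  # log ell independent pairs
    # Server j gets, from pair a, the query selected by the a-th bit of j.
    bits = lambda j: [(j >> a) & 1 for a in range(logl)]
    answers = {j: [answer(pairs[a][b], x) for a, b in enumerate(bits(j))]
               for j in range(ell)}
    # Any two distinct surviving servers j1, j2 differ in some bit m, so
    # together they answer both halves of pair m.
    j1, j2 = 2, 5                                   # pretend only these respond
    m = next(a for a in range(logl) if bits(j1)[a] != bits(j2)[a])
    a_for = {bits(j1)[m]: answers[j1][m], bits(j2)[m]: answers[j2][m]}
    return a_for[0] ^ a_for[1]

x = [1, 0, 0, 1, 1, 0, 1, 0]
# retrieve(x, i, ell=8) recovers x[i] from the two responding servers alone.
```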

3.2 A Generic k-out-of-ℓ Replication Solution

In this section we generalize the solution presented in the previous section, and show a generic transformation from k-out-of-k PIR protocols to k-out-of-ℓ PIR protocols. The idea is similar to the 2-out-of-ℓ PIR protocol; however, we need to be more careful in partitioning the queries. For this purpose we recall the following definition:

Definition 3 (Perfect Hash Family [10]). A perfect hash family is a family of functions H_{ℓ,k} = {h_1, . . . , h_{w_{ℓ,k}}} of the form h_a : {1, . . . , ℓ} → {1, . . . , k} such that for each subset A ⊆ {1, . . . , ℓ}, |A| = k, there exists an index a such that |h_a(A)| = k (that is, h_a restricted to A is one-to-one and onto). The size of the family is the number of functions in the family, denoted by w_{ℓ,k}.


We have three parameters for a perfect hash family – k, ℓ, and w_{ℓ,k}. The parameters k and ℓ are part of the specification of the problem. On the other hand, we would like w_{ℓ,k} – the number of functions in the perfect hash family – to be as small as possible, since w_{ℓ,k} directly affects our protocol's complexity.

Theorem 2. If there exists a perfect hash family H_{ℓ,k} of size w_{ℓ,k} and if there exists a k-out-of-k PIR protocol with communication PIR_k(n) per server, then there exists a k-out-of-ℓ PIR protocol with communication w_{ℓ,k} · PIR_k(n) per server, and thus total communication ℓ · w_{ℓ,k} · PIR_k(n).

Proof. Given a k-out-of-k PIR protocol P, we do the following. Given the retrieval index i, the user uses P to produce w_{ℓ,k} independent vectors of queries {Q_1, . . . , Q_{w_{ℓ,k}}} for the retrieval of x_i; each vector comprises k queries, i.e., Q_j = ⟨Q_j[1], . . . , Q_j[k]⟩. That is, the user executes the protocol P independently w_{ℓ,k} times and holds the resulting w_{ℓ,k} query vectors. Each server receives from the user one query out of each vector of queries and answers this query. Since each server receives w_{ℓ,k} PIR queries, which are completely independent, the server gains no knowledge on i. We show below how the user chooses which queries to send to each server. This choice of queries guarantees that if the user receives answers from at least k servers, then it can reconstruct x_i. Given a perfect hash family H_{ℓ,k}, the user sends the following w_{ℓ,k} queries to each server S_j: for each 1 ≤ a ≤ w_{ℓ,k}, let Δ = h_a(j); the user sends Q_a[Δ] to S_j, i.e., the user sends the Δ-th query out of the vector Q_a. In other words, the perfect hash family determines which queries we need to take from each vector of queries Q_a. Let S_{j_1}, . . . , S_{j_k} be k servers from which the user receives answers. By the definition of perfect hashing, there is an index a such that |h_a({j_1, . . . , j_k})| = k, i.e., {h_a(j_1), . . . , h_a(j_k)} is a permutation of {1, . . . , k}.
We consider the answers received from these servers to the queries {Q_a[h_a(j_1)], ..., Q_a[h_a(j_k)]}. Since these queries are distinct, we have k answers in a k-out-of-k PIR protocol, and the user can reconstruct x_i from the answers received for these queries. ⊓⊔

The communication in the above protocol depends on the size of the perfect hash family. The explicit hash family of [26] has size log ℓ · 2^{O(k)} (this is basically optimal [10]). Using the protocol of [1] and the hash family of [26] we get:

Corollary 2. There is a robust k-out-of-ℓ protocol with total communication 2^{O(k)} · n^{1/(2k−1)} · ℓ log ℓ.

Applying the protocol of [6] and the hash family of [26] we get:

Corollary 3. There is a robust k-out-of-ℓ protocol with total communication 2^{Õ(k)} · n^{(2 log log k)/(k log k)} · ℓ log ℓ.

We use the same approach taken in Theorem 2, only this time, instead of using a "regular" k-out-of-k PIR protocol, we use a t-private k-out-of-k PIR protocol to produce the w_{ℓ,k} independent query vectors.
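To make the query-assignment rule of Theorem 2 concrete, the following Python sketch (our illustration, not from the paper) brute-forces a tiny perfect hash family H_{ℓ,k} and checks the property the proof relies on: whichever k servers respond, some query vector Q_a has all k of its component queries answered by them. The helper names and the toy parameters ℓ = 4, k = 2 are assumptions made only for illustration.

```python
import itertools

def is_perfect(family, ell, k):
    # H is a perfect hash family for (ell, k) if every k-subset of the
    # ell servers is mapped injectively onto {1..k} by at least one function.
    return all(any(len({h[j] for j in A}) == k for h in family)
               for A in itertools.combinations(range(ell), k))

def brute_force_phf(ell, k, max_size=4):
    # Exhaustive search for a smallest perfect hash family (tiny cases only).
    funcs = list(itertools.product(range(k), repeat=ell))  # all h: [ell] -> [k]
    for w in range(1, max_size + 1):
        for family in itertools.combinations(funcs, w):
            if is_perfect(family, ell, k):
                return family
    return None

ell, k = 4, 2
family = brute_force_phf(ell, k)

# Theorem 2's assignment: server S_j answers query Q_a[h_a(j)] for every a.
# For any k responding servers there is an a whose vector Q_a is fully answered.
for A in itertools.combinations(range(ell), k):
    assert any(len({h[j] for j in A}) == k for h in family)
```

For ℓ = 4 and k = 2 a family of two functions already suffices (a single function cannot, by the pigeonhole principle), matching the point that w_{ℓ,k} can be far smaller than the naive number of functions.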

Robust Information-Theoretic Private Information Retrieval


Theorem 3. If there is a perfect hash family H_{ℓ,k} of size w_{ℓ,k} and if there is a t-private k-out-of-k PIR protocol with communication PIR_{k,t}(n) per server, then there is a t-private k-out-of-ℓ PIR protocol with communication w_{ℓ,k} · PIR_{k,t}(n) per server, thus total communication ℓ · w_{ℓ,k} · PIR_{k,t}(n).

Applying the protocol of [5] and the hash family of [26] we get:

Corollary 4. There is a t-private k-out-of-ℓ protocol with total communication O(ℓ · log ℓ · 2^{O(k)} · n^{1/⌊(2k−1)/t⌋}).

3.3 A Generalized Transformation

We now show a generalization of the previous transformation, where our goal is to reduce the dependency on k. As shown in [10], the size of every perfect hash family H_{ℓ,k} is at least 2^k; thus, we first generalize the notion of perfect hashing.

Definition 4. An α-perfect hash family H_{ℓ,k,α} = ⟨h_1, ..., h_{w_{ℓ,k,α}}⟩ (where α ≤ 1) is a family of functions h_a : {1, ..., ℓ} → {1, ..., αk} such that for each subset A ⊆ {1, ..., ℓ}, |A| = k, there exists an index a such that |h_a(A)| = αk.

Note that when α = 1 we get the standard definition of a perfect hash family. We now show how to use an α-perfect hash family in the construction of k-out-of-ℓ PIR protocols.

Theorem 4. If there is an α-perfect hash family H_{ℓ,k,α} of size w_{ℓ,k,α} and if there is an αk-out-of-αk PIR protocol with communication PIR_{αk}(n) per server, then there exists a k-out-of-ℓ PIR protocol with communication w_{ℓ,k,α} · PIR_{αk}(n) per server, thus total communication ℓ · w_{ℓ,k,α} · PIR_{αk}(n).

Proof. This proof is similar to the one shown in Theorem 2, only this time we use an αk-out-of-αk PIR protocol and an α-perfect hash family. Let S_{j_1}, ..., S_{j_k} be k servers from which the user receives answers. Using the α-perfect hash property, let a be an index such that |h_a({j_1, ..., j_k})| = αk. This means that the user has αk distinct answers of an αk-out-of-αk PIR protocol, and the user can reconstruct x_i from the answers received for these queries. ⊓⊔

In the last proof we used, as our building block to construct a k-out-of-ℓ PIR protocol, an αk-out-of-αk PIR protocol (as opposed to Theorem 2, where we used a k-out-of-k PIR protocol). Since the communication complexity of PIR protocols decreases as k gets bigger, and since we are using α < 1, we will get a PIR protocol that is less efficient in its dependency on n; our hope is that w_{ℓ,k,α} is considerably smaller, so that the dependency on k will be better. For α = 1/ln k we get a family whose size is small.

Claim. There exists a (1/ln k)-perfect hash family of size O(k log ℓ / log log k).

Proof. We prove the claim using a probabilistic argument. As a first step, let us consider a specific subset A ⊆ {1, ..., ℓ}, |A| = k, one hash function h chosen at random from the space of functions from {1, ..., ℓ} to {1, ..., αk}, and one index c ∈ {1, ..., αk}. We now look at the probability

Pr[∀ j ∈ A : h(j) ≠ c] = ((αk − 1)/(αk))^k = (1 − 1/(αk))^k ≤ e^{−1/α} = 1/k,   (1)

where the last equality uses α = 1/ln k. In the above analysis, Inequality (1) could have been derived from the so-called coupon collector problem; see, e.g., [27, pages 57–63]. The analysis of the coupon collector problem implies that if we try to take α ≥ 2/ln k, then for a given A of size k the probability that |h(A)| = αk would be exponentially small; thus the size of the family we would construct using the above proof would be exponential in k. With the current state of the art of PIR protocols we cannot describe a more efficient transformation to robust PIR protocols using the (1/ln k)-perfect hash family. If, for example, there exists a PIR protocol with communication poly(k) · n^{O(1/(k log k))}, then we will get a robust protocol with communication complexity poly(k, ℓ) · n^{1/2k}. Notice that the recent PIR protocols [6] are close to these requirements (however, they are not polynomial in k).
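The probabilistic argument above can be mirrored by a small Monte Carlo experiment (our illustration, not from the paper; the toy parameters and the rounding of αk to an integer r are assumptions): keep adding uniformly random functions h : [ℓ] → [αk] until every k-subset of servers is mapped onto αk distinct values by some function.

```python
import itertools, math, random

def is_alpha_perfect(family, ell, k, r):
    # r plays the role of alpha*k: every k-subset of [ell] must be mapped
    # onto r distinct values by at least one function in the family.
    return all(any(len({h[j] for j in A}) == r for h in family)
               for A in itertools.combinations(range(ell), k))

ell, k = 7, 4
r = max(2, round(k / math.log(k)))   # alpha = 1/ln k, so alpha*k ~ k/ln k

random.seed(0)
family = []
while not is_alpha_perfect(family, ell, k, r):
    family.append(tuple(random.randrange(r) for _ in range(ell)))

# A handful of random functions suffice at these tiny parameters, in line
# with the O(k log(ell) / log log k) bound of the claim.
assert len(family) < 40
```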

4 A k-out-of-ℓ Polynomial Interpolation Based PIR Protocol

In this section we construct a k-out-of-ℓ PIR protocol which uses the polynomial interpolation based PIR protocol of [7,8,1]. We start with a technical lemma, and then present the protocol.

Lemma 1. Let d and m be integers such that m ≥ d · n^{1/d}. There is a function E : {1, ..., n} → {0,1}^m and an m-variate degree-d polynomial P_x such that, over every field, P_x(E(i)) = x_i for each 1 ≤ i ≤ n.


Proof. Let E(1), ..., E(n) be n distinct binary vectors of length m and weight d (such vectors exist since (m choose d) ≥ n), let E(i)_a be the a-th bit of E(i), and define P_x(Z_1, ..., Z_m) := Σ_{i=1}^{n} x_i · Π_{a : E(i)_a = 1} Z_a. ⊓⊔

Lemma 2. There exists a robust k-out-of-ℓ PIR protocol with query complexity O(k n^{1/(k−1)} log ℓ) and answer complexity O(log ℓ) per server.

Proof. Let d = k − 1 and let P_x and E be as promised in Lemma 1. Given the retrieval index i, the user does the following: it calculates the vector E(i) = ⟨y_1, ..., y_m⟩, where y_j is the j-th bit of E(i). The user then uses Shamir's 2-out-of-ℓ scheme [25], over a finite field with O(ℓ) elements, to share E(i). That is, it chooses at random m polynomials {p_1, ..., p_m} (each of degree 1) such that p_a(0) = y_a for each 1 ≤ a ≤ m. Let ω_1, ..., ω_ℓ be distinct nonzero elements of the field; the user sends to server S_j the shares p_1(ω_j), ..., p_m(ω_j). We now consider the univariate polynomial R(Y) = P_x(p_1(Y), ..., p_m(Y)); the degree of this polynomial is d, since R is constructed from the polynomial P_x, whose degree is d, by replacing each variable with a degree-1 polynomial. Given these definitions, R(0) = P_x(p_1(0), ..., p_m(0)) = P_x(y_1, ..., y_m) = x_i. Furthermore, the server S_j can compute R(ω_j) without knowing R, since R(ω_j) = P_x(p_1(ω_j), ..., p_m(ω_j)) and p_1(ω_j), ..., p_m(ω_j) are the shares that server S_j receives from the user. Thus, S_j computes R(ω_j) and sends it to the user. The user, upon receiving any k answers R(ω_{j_1}), ..., R(ω_{j_k}) (from k different servers), reconstructs the polynomial R by interpolation (since the user has k points on a polynomial of degree d = k − 1) and computes R(0) = x_i. Server S_j does not gain any information on i, since S_j receives one share of the secret E(i) in a 2-out-of-ℓ secret sharing scheme. Thus, the protocol is private. The user sends m shares to each server, each share of size O(log ℓ). Each server sends an answer of length O(log ℓ), and thus the total communication is O(mℓ log ℓ) = O(k n^{1/(k−1)} ℓ log ℓ). ⊓⊔

In the above protocol, the query complexity is larger than the answer complexity. We will balance these complexities using the balancing technique of [1], yielding the following theorem:

Theorem 5. There exists a k-out-of-ℓ PIR protocol with total communication O(k n^{1/k} ℓ log ℓ).

Proof.
This communication complexity is achieved by balancing the complexities of the queries and answers of the protocol described in Lemma 2, i.e., reducing the query complexity and increasing the answer complexity. This is done by looking at the database as a matrix of size n/α(n) × α(n), where α(n) will be determined later. We consider each index i as a cell (i_1, i_2), where (i_1, i_2) is the natural mapping of i according to the size of the matrix. To achieve the balancing, the user executes the above PIR protocol with retrieval index i_2 and database of size α(n). Each server considers each row of the matrix as a database of size α(n), and sends the answer to the query it gets for each row.


The user then takes the answers that it got for the row i_1 and reconstructs x_{i_1,i_2}. The user sends one query to each server with database of size α(n); thus the query complexity is O(log ℓ · k·α(n)^{1/(k−1)}) per server. Each server sends one answer per row, each answer of length O(log ℓ); thus, the answer complexity is log ℓ · n/α(n) per server. To minimize the total communication complexity we require

log ℓ · n/α(n) = log ℓ · k·α(n)^{1/(k−1)}.

Taking α(n) = O(n^{(k−1)/k}) yields a protocol with total communication O(k n^{1/k} ℓ log ℓ). ⊓⊔
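Lemma 2's protocol can be sketched end-to-end in a few lines of Python. This is our illustration, not the paper's code: for simplicity it works over a large prime field rather than GF(2^⌈log ℓ⌉), and the field elements ω_j are taken to be 1, ..., ℓ.

```python
import itertools, random
from math import comb

p = 2_147_483_647   # prime field stand-in for the field with O(ell) elements

def encode(n, d):
    # E of Lemma 1: n distinct binary vectors of weight d; the length m is
    # the smallest value with C(m, d) >= n.
    m = d
    while comb(m, d) < n:
        m += 1
    E = []
    for pos in itertools.combinations(range(m), d):
        E.append([1 if a in pos else 0 for a in range(m)])
        if len(E) == n:
            return m, E

def P_x(x, E, z):
    # P_x(z) = sum_i x_i * prod_{a : E(i)_a = 1} z_a  (mod p)
    total = 0
    for xi, Ei in zip(x, E):
        term = xi
        for za, bit in zip(z, Ei):
            if bit:
                term = term * za % p
        total = (total + term) % p
    return total

def interp_at_zero(points):
    # Lagrange interpolation of R(0) from k points (omega_j, R(omega_j)).
    acc = 0
    for wj, rj in points:
        num = den = 1
        for wl, _ in points:
            if wl != wj:
                num = num * (-wl) % p
                den = den * (wj - wl) % p
        acc = (acc + rj * num * pow(den, p - 2, p)) % p
    return acc

n, k, ell = 30, 3, 6
d = k - 1
x = [random.randint(0, 1) for _ in range(n)]
m, E = encode(n, d)

i = 17  # retrieval index
# Degree-1 polynomials p_a with p_a(0) = E(i)_a; server j gets (p_1(j), ..., p_m(j)).
polys = [(ya, random.randrange(p)) for ya in E[i]]
queries = {j: [(ya + ca * j) % p for ya, ca in polys] for j in range(1, ell + 1)}
# Server j evaluates R(omega_j) = P_x(p_1(omega_j), ..., p_m(omega_j)); deg R = d.
answers = {j: P_x(x, E, q) for j, q in queries.items()}

# Robustness: any k of the ell answers recover x_i by interpolation at 0.
for chosen in itertools.combinations(sorted(answers), k):
    assert interp_at_zero([(j, answers[j]) for j in chosen]) == x[i]
```

The loop at the end checks every possible set of k surviving servers, which is exactly the robustness guarantee of the lemma.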

A similar construction works for t-private robust protocols.

Lemma 3. There exists a t-private k-out-of-ℓ PIR protocol with query complexity O((k/t) · n^{1/⌊(k−1)/t⌋} log ℓ) and answer complexity O(log ℓ) per server.

Proof. Let d = ⌊(k−1)/t⌋, m = Θ(d · n^{1/d}), and let P_x and E be as promised in Lemma 1. The protocol we construct is similar to the protocol described in the proof of Lemma 2, with the following differences: in the t-private protocol the user uses Shamir's (t+1)-out-of-ℓ scheme, that is, the degree of the polynomials p_1, ..., p_m is t. Thus, the degree of R is dt = ⌊(k−1)/t⌋ · t ≤ k − 1. The user, upon receiving k answers R(ω_{j_1}), ..., R(ω_{j_k}) (from k different servers), reconstructs the polynomial R by interpolation (since the user has k points on a polynomial of degree at most k − 1) and computes R(0) = x_i. Next we analyze the properties of the protocol. A coalition of t servers does not gain any information on i, since the user uses Shamir's (t+1)-out-of-ℓ secret sharing scheme. Thus, the protocol is t-private. As for the communication, the user sends m shares to each server, each share of size O(log ℓ). Each server sends an answer of length O(log ℓ); thus the query complexity is O(m log ℓ) = O((k/t) · n^{1/⌊(k−1)/t⌋} log ℓ) per server. ⊓⊔

Theorem 6. There is a t-private k-out-of-ℓ PIR protocol with total communication

O((k/t) · n^{1/(⌊(k−1)/t⌋+1)} · ℓ log ℓ) = O((k/t) · n^{t/k} · ℓ log ℓ).

The proof of the above theorem is omitted for lack of space.

5 Robust PIR Protocols Using Shamir's Secret Sharing

In this section we show how one can use Shamir's secret sharing in order to produce robust PIR protocols. We first construct a k-out-of-ℓ PIR protocol with total communication complexity O(nℓ log ℓ). This protocol is just a "warmup" (because the result is trivial); however, the ideas of this protocol are used to construct a 2-out-of-ℓ protocol whose complexity is O(n^{1/3} ℓ log ℓ). Given the retrieval index i, the user computes the vectors V^1, ..., V^ℓ – the shares of Shamir's scheme (as described in Section 2.2) over GF(2^{log ℓ}) of the unit vector e_i of length n – and sends V^j to server S_j for each 1 ≤ j ≤ ℓ. Server


The Two Server Protocol of [1]

1. The user selects three random vectors A^1_1, A^1_2, A^1_3 ∈ {0,1}^m, and computes A^2_r = A^1_r ⊕ e_{i_r} for r = 1, 2, 3. The user sends ⟨A^j_1, A^j_2, A^j_3⟩ to S_j for j = 1, 2.
2. Server S_j computes, for every b ∈ [n^{1/3}],

   a^j_{1,b} := A^j_2 · x_{b,∗,∗} · A^j_3,   a^j_{2,b} := A^j_1 · x_{∗,b,∗} · A^j_3,   and   a^j_{3,b} := A^j_1 · x_{∗,∗,b} · A^j_2,

   and sends the 3n^{1/3} bits ⟨a^j_{r,b} : r ∈ {1, 2, 3}, b ∈ [n^{1/3}]⟩ to the user.
3. The user outputs ⊕_{r=1,2,3} (a^1_{r,i_r} ⊕ a^2_{r,i_r}).

Fig. 1. The two server protocol of [1] with communication O(n^{1/3}).
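A direct transcription of Fig. 1 into Python (our illustration, with a toy cube of side m = 4; the helper names are assumptions) shows the cancellation argument in action:

```python
import random
from functools import reduce

m = 4   # cube side; the database has n = m**3 bits
random.seed(1)
x = [[[random.randint(0, 1) for _ in range(m)] for _ in range(m)] for _ in range(m)]

def bilinear(u, M, v):
    # u^T M v over GF(2)
    return sum(u[a] & M[a][b] & v[b] for a in range(m) for b in range(m)) & 1

def slab(r, b):
    # the m x m matrix x_{b,*,*} (r=0), x_{*,b,*} (r=1) or x_{*,*,b} (r=2)
    if r == 0:
        return [[x[b][s][t] for t in range(m)] for s in range(m)]
    if r == 1:
        return [[x[s][b][t] for t in range(m)] for s in range(m)]
    return [[x[s][t][b] for t in range(m)] for s in range(m)]

def server_answers(A):
    # Step 2: a^j_{1,b} = A_2 x_{b,*,*} A_3, etc.; 3*m = 3*n^(1/3) bits in total.
    others = [(1, 2), (0, 2), (0, 1)]
    return [[bilinear(A[others[r][0]], slab(r, b), A[others[r][1]])
             for b in range(m)] for r in range(3)]

i1, i2, i3 = 1, 3, 2   # retrieval index <i1, i2, i3>
A1 = [[random.randint(0, 1) for _ in range(m)] for _ in range(3)]
unit = [[1 if a == t else 0 for a in range(m)] for t in (i1, i2, i3)]
A2 = [[A1[r][a] ^ unit[r][a] for a in range(m)] for r in range(3)]  # A^2_r = A^1_r xor e_{i_r}

ans1, ans2 = server_answers(A1), server_answers(A2)
idx = (i1, i2, i3)
bit = reduce(lambda acc, r: acc ^ ans1[r][idx[r]] ^ ans2[r][idx[r]], range(3), 0)
assert bit == x[i1][i2][i3]   # Step 3: all other bits cancel in the XOR
```

Expanding the XOR in Step 3 by hand shows that every cross term appears twice and cancels over GF(2), leaving x_{i_1,i_2,i_3} exactly once, which is what the assertion checks.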

S_j, upon receiving V^j, sends back to the user the scalar product a^j := V^j · x, i.e., the server computes the scalar product of the database and the vector V^j and sends the result to the user. The user, upon receiving k answers a^{j_1}, ..., a^{j_k}, uses the appropriate constants α_{j_1}, ..., α_{j_k} (see Section 2.2) to perform the following computation (in the following, all additions and multiplications are done in GF(2^{log ℓ})):

Σ_{h=1}^{k} α_{j_h} a^{j_h} = Σ_{h=1}^{k} α_{j_h} (V^{j_h} · x) = (Σ_{h=1}^{k} α_{j_h} V^{j_h}) · x = e_i · x = x_i.

Thus, the user can reconstruct x_i from any k answers. We present a more efficient protocol that uses the above ideas combined with the 2-server protocol of [1]. This 2-out-of-ℓ protocol works with total communication O(n^{1/3} ℓ log ℓ). Let us first recall the protocol presented by [1]:

Original Protocol (variant of [1]). Let n = m^3 for some m, and consider the database as a 3-dimensional cube, i.e., every i ∈ [n] is represented as ⟨i_1, i_2, i_3⟩, where i_r ∈ [n^{1/3}] for r = 1, 2, 3. This is done using the natural mapping from [n] to [n^{1/3}]^3. In Fig. 1 we describe the protocol. It can be checked that each bit, except for x_{i_1,i_2,i_3}, appears an even number of times in the exclusive-or the user computes in Step 3, and thus cancels itself. Therefore, the user outputs x_{i_1,i_2,i_3} as required. Furthermore, the communication is O(n^{1/3}). We can look at A^2_r = A^1_r ⊕ e_{i_r} and A^1_r as two shares in a 2-out-of-2 sharing scheme of the unit vector e_{i_r}. We use a similar approach to construct a robust protocol, with one difference – we will use Shamir's 2-out-of-ℓ secret sharing scheme in order to share the unit vector e_{i_r}; these shares are used to generate the queries for the protocol. We next define some notation concerning cubes.

Definition 5. Let x be a three-dimensional cube of height m. We denote by x_{j_1,∗,∗} the matrix of all the elements of x whose first index is j_1.


Formally, we define x_{j_1,∗,∗} as the matrix A where A_{i_1,i_2} = x_{j_1,i_1,i_2}. We define x_{∗,j_1,∗} and x_{∗,∗,j_1} similarly. We denote by x_{j_1,j_2,∗} the vector obtained from the 3-dimensional cube x by taking all the elements of x whose first index is j_1 and whose second index is j_2. Formally, we define x_{j_1,j_2,∗} as the vector A where A_{i_1} = x_{j_1,j_2,i_1}. We define x_{∗,j_1,j_2} and x_{j_1,∗,j_2} similarly.

Theorem 7. There exists a 2-out-of-ℓ PIR protocol with total communication O(n^{1/3} ℓ log ℓ).

Proof. In this proof we consider the database x as a 3-dimensional cube and use Shamir's 2-out-of-ℓ secret sharing scheme to construct our queries. Given ℓ and the retrieval index i = ⟨i_1, i_2, i_3⟩, the user computes the vectors ⟨U^1, ..., U^ℓ⟩, ⟨V^1, ..., V^ℓ⟩, and ⟨W^1, ..., W^ℓ⟩ as the shares in Shamir's 2-out-of-ℓ scheme of the unit vectors e_{i_1}, e_{i_2}, and e_{i_3}, respectively. The user sends the query ⟨U^j, V^j, W^j⟩ to server S_j for each 1 ≤ j ≤ ℓ. Server S_j, upon receiving ⟨U^j, V^j, W^j⟩, sends back to the user the following 3n^{1/3} numbers: for each 1 ≤ a ≤ n^{1/3} the server sends the numbers V^j · x_{a,∗,∗} · W^j, U^j · x_{∗,a,∗} · W^j, and U^j · x_{∗,∗,a} · V^j, i.e., each number the server sends is the result of multiplying a two-dimensional matrix produced from the cube with the vectors sent by the user. As in the regular 2-out-of-2 scheme, the user takes one element out of each set of answers: upon receiving answers from two servers r and q, the user considers the following 6 numbers: V^r · x_{i_1,∗,∗} · W^r, U^r · x_{∗,i_2,∗} · W^r, U^r · x_{∗,∗,i_3} · V^r, V^q · x_{i_1,∗,∗} · W^q, U^q · x_{∗,i_2,∗} · W^q, U^q · x_{∗,∗,i_3} · V^q. The following claim is similar to the fact that in the protocol of [1] the user reconstructs the correct bit x_i. For lack of space the proof of the claim is omitted.

Claim. There exists a linear combination of these 6 numbers that computes to the desired bit x_{i_1,i_2,i_3}.

Each server gets one share of Shamir's 2-out-of-ℓ scheme. Since Shamir's scheme is secure, each server cannot gain any information about i from the share it received, and the protocol is secure. Each server sends and receives 3n^{1/3} elements of GF(2^{log ℓ}), and the total communication is O(n^{1/3} ℓ log ℓ). ⊓⊔

6 Dealing with Byzantine Servers

In previous sections we assumed that servers can crash; however, they cannot reply with wrong answers. We next show solutions for the robust PIR problem tolerating some Byzantine servers. That is, some servers might be malicious or have a corrupted or obsolete database; these servers can return any answer to the user's query. The user needs to be prepared for wrong answers from the servers and still reconstruct the right value of the desired bit x_i.


Definition 6 (Byzantine-Robust PIR). A b-Byzantine-robust k-out-of-ℓ PIR protocol P = (R, Q, A, C) is defined as in Definition 1, where the correctness requirement is replaced by the following requirement:

Correctness. The user always computes the correct value of x_i from any k answers, of which at least k − b are correct: for every i ∈ {1, ..., n}, every random string r, every set K = {j_1, ..., j_k} ⊆ {1, ..., ℓ}, every database x ∈ {0,1}^n, and every k answers {a_1, ..., a_k} such that |{a_w : a_w = A(j_w, Q(j_w, i, r), x)}| ≥ k − b, we have C(i, r, K, a_1, ..., a_k) = x_i.

The correctness holds even if the Byzantine servers cooperate. For the privacy we assume that the Byzantine servers do not cooperate. (Later we will discard this assumption.) Note that since we are discussing one-round PIR protocols, the definition is simple; for example, Byzantine servers will not learn any new information as a result of sending wrong answers. The first observation is that if the user receives answers from k servers, then, to enable the user to reconstruct the correct value of x_i, more than half of the answers must be correct. This condition is also sufficient, as shown by the following trivial protocol: each server sends the entire database to the user. Given k answers, out of which fewer than k/2 are Byzantine, the user takes the value of x_i which appears at least k/2 times. Next we show a generic transformation from robust protocols to robust protocols that tolerate Byzantine servers.

Theorem 8. Let a be a parameter, 0 < a ≤ k, and assume there is an a-out-of-ℓ robust PIR protocol with total communication PIR_a(n). Then there is a ⌊(k − a)/2⌋-Byzantine-robust k-out-of-ℓ PIR protocol with total communication PIR_a(n).

Proof. The user and the servers execute an a-out-of-ℓ robust PIR protocol. Assume that the user receives answers from a set B of servers of size at least k.
Now, for each subset of B of size a, the user reconstructs x_i (recall that the user can reconstruct x_i from any a answers). The user finds a maximal subset A ⊆ B such that for every subset of A of size a the user reconstructs the same value of x_i, and outputs this value as x_i. We next prove that the user reconstructs the correct value of x_i. Since there are at most ⌊(k − a)/2⌋ Byzantine servers and the size of B is at least k, there are at least k − (k − a)/2 = (k + a)/2 honest servers in B; for every subset of the honest servers of size a the user reconstructs the correct value of x_i. Hence, |A| ≥ (k + a)/2. Since there are at most (k − a)/2 Byzantine servers, A contains at least (k + a)/2 − (k − a)/2 = a honest servers; thus the value of x_i reconstructed for this set (and any other subset of A) is the correct value of x_i. ⊓⊔

In the previous protocol the user is required to reconstruct x_i for (k choose a) subsets. We show a construction which overcomes this problem; it is a k-out-of-ℓ robust PIR protocol in which at most k/3 of the servers are Byzantine. In the following protocol the communication complexity is worse than in the generic protocol.

Theorem 9. There is a ⌊k/3⌋-Byzantine-robust k-out-of-ℓ PIR protocol with total communication O(k n^{1/⌊k/3⌋} ℓ log ℓ).


Proof. We use the ⌊k/3⌋-out-of-ℓ protocol described in Section 4. In this protocol, the answers of the honest servers are points on a univariate polynomial R whose degree is ⌊k/3⌋ − 1. The user needs to interpolate R from the answers of the servers. Since some of the servers are Byzantine, not all of the answers are points on R. Nevertheless, we now show that the user can still reconstruct R, as it is the only polynomial on which at least 2/3 of the points (answers) reside. We know that at least ⌈2k/3⌉ of the servers are not Byzantine; thus all of these servers send points on R. Next, we show that at most 2⌊k/3⌋ − 1 points from the answers lie on a polynomial B of degree at most ⌊k/3⌋ − 1 which is not R: since B and R are different polynomials of degree at most ⌊k/3⌋ − 1, they can have at most ⌊k/3⌋ − 1 common points. Since the honest servers send points on R, they can contribute at most ⌊k/3⌋ − 1 points on B. Furthermore, the Byzantine parties can contribute at most ⌊k/3⌋ points on B; thus we get at most 2⌊k/3⌋ − 1 < ⌈2k/3⌉ points that lie on B. The user has to find ⌈2k/3⌉ points which reside on a polynomial of degree at most ⌊k/3⌋ − 1 (i.e., on R) and use this polynomial to reconstruct x_i. The user does this task using the decoding algorithm of Reed–Solomon error-correcting codes [28]. (For more information on error-correcting codes the reader can refer to, e.g., [29].) The communication complexity of the above protocol is the communication complexity of the ⌊k/3⌋-out-of-ℓ protocol described in Section 4, i.e., O(k n^{1/⌊k/3⌋} ℓ log ℓ). ⊓⊔

Since we consider Byzantine servers, the assumption that they do not cooperate is questionable; thus it might be more reasonable to consider b-private robust PIR protocols in the presence of b Byzantine servers, that is, robust PIR protocols where the privacy holds even if the b Byzantine servers cooperate. We next show a corollary in which we allow the Byzantine servers to cooperate.

Corollary 5. Let a be a parameter where k/3 < a ≤ k, and define b = ⌊(k − a)/2⌋. Assume there is a b-private a-out-of-ℓ robust PIR protocol with total communication PIR_{a,b}(n). Then there exists a b-private b-Byzantine-robust k-out-of-ℓ PIR protocol with total communication PIR_{a,b}(n).
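The subset-consistency reconstruction in the proof of Theorem 8 can be sketched as follows. This is our illustration, not the paper's code: Shamir sharing of a secret stands in for the underlying a-out-of-ℓ robust protocol's reconstruction, and the parameters are toys.

```python
import itertools, random

p = 101
random.seed(2)

def shamir_share(s, deg, ell):
    # Shares of s under a random polynomial of the given degree; server j
    # (with evaluation point j) can be asked for its share.
    coeffs = [s] + [random.randrange(p) for _ in range(deg)]
    return {j: sum(c * pow(j, e, p) for e, c in enumerate(coeffs)) % p
            for j in range(1, ell + 1)}

def interp_at_zero(points):
    # Lagrange interpolation at 0: reconstruction from any deg+1 honest shares.
    acc = 0
    for xj, yj in points:
        num = den = 1
        for xl, _ in points:
            if xl != xj:
                num = num * (-xl) % p
                den = den * (xj - xl) % p
        acc = (acc + yj * num * pow(den, p - 2, p)) % p
    return acc

ell, k, a = 10, 7, 3        # tolerates b = (k - a) // 2 = 2 Byzantine servers
secret = 42
shares = shamir_share(secret, a - 1, ell)         # any a honest answers suffice

answers = {j: shares[j] for j in range(1, k + 1)}  # the user hears from k servers
for j in (1, 2):                                   # two Byzantine servers lie
    answers[j] = (answers[j] + 1) % p

# Find a maximal subset A on which every a-subset reconstructs the same value.
value = None
for size in range(len(answers), a - 1, -1):
    for A in itertools.combinations(answers.items(), size):
        vals = {interp_at_zero(sub) for sub in itertools.combinations(A, a)}
        if len(vals) == 1:
            value = vals.pop()
            break
    if value is not None:
        break
assert value == secret
```

As in the proof, any fully consistent subset of size at least a must consist of honest answers only, so the value found is the correct one; the price is iterating over up to (k choose a) subsets, which motivates Theorem 9.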

Acknowledgements We thank Eyal Kushilevitz for helpful discussions.

References 1. Chor, B., Goldreich, O., Kushilevitz, E., Sudan, M.: Private information retrieval. In: 36th FOCS. (1995) 41–51. J. version: JACM, 45:965–981, 1998. 2. Ishai, Y., Kushilevitz, E.: Improved upper bounds on information theoretic private information retrieval. In: 31st STOC. (1999) 79 – 88 3. Ambainis, A.: Upper bound on the communication complexity of private information retrieval. In: 24th ICALP. Vol. 1256 of LNCS. (1997) 401–407 4. Itoh, T.: Efficient private information retrieval. IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences E82-A (1999) 11–20


5. Beimel, A., Ishai, Y.: Information-theoretic private information retrieval: A unified construction. In: 28th ICALP. Vol. 2076 of LNCS. (2001) 912–926
6. Beimel, A., Ishai, Y., Kushilevitz, E., Raymond, J.F.: Breaking the O(n^{1/(2k−1)}) barrier for information-theoretic private information retrieval. In: 43rd FOCS. (2002) To appear.
7. Beaver, D., Feigenbaum, J.: Hiding instances in multioracle queries. In: STACS '90. Vol. 415 of LNCS. (1990) 37–48
8. Beaver, D., Feigenbaum, J., Kilian, J., Rogaway, P.: Locally random reductions: Improvements and applications. J. of Cryptology 10 (1997) 17–36
9. Stahl, Y.: Robust information-theoretic private information retrieval. Master's thesis, Ben-Gurion University, Beer-Sheva (2002)
10. Mehlhorn, K.: Data Structures and Algorithms. Volume 1: Sorting and Searching. Springer-Verlag (1984)
11. Newman, I., Wigderson, A.: Lower bounds on formula size of Boolean functions using hypergraph entropy. SIAM J. on Discrete Mathematics 8 (1995) 536–542
12. Alon, N., Naor, M.: Derandomization, witnesses for Boolean matrix multiplication and construction of perfect hash functions. Algorithmica 16 (1996) 434–449
13. Blackburn, S.R.: Combinatorial designs and their applications. Research Notes in Mathematics 403 (1999) 44–70
14. Blackburn, S.R., Burmester, M., Desmedt, Y., Wild, P.R.: Efficient multiplicative sharing schemes. In: EUROCRYPT '96. Vol. 1070 of LNCS. (1996) 107–118
15. Fiat, A., Naor, M.: Broadcast encryption. In: CRYPTO '93. Vol. 773 of LNCS. (1994) 480–491
16. Stinson, D., van Trung, T., Wei, R.: Secure frameproof codes, key distribution patterns, group testing algorithms and related structures. J. of Statistical Planning and Inference 86(2) (2000) 595–617
17. Alon, N.: Explicit construction of exponential sized families of k-independent sets. Discrete Math. 58 (1986) 191–193
18. Atici, M., Magliveras, S.S., Stinson, D.R., Wei, W.D.: Some recursive constructions for perfect hash families. J. Combin. Des. 4 (1996) 353–363
19.
Blackburn, S.R.: Perfect hash families: Probabilistic methods and explicit constructions. J. of Combin. Theory – Series A 92 (2000) 54–60
20. Blackburn, S.R., Wild, P.R.: Optimal linear perfect hash families. J. Combinatorial Theory 83 (1998) 233–250
21. Fredman, M.L., Komlós, J.: On the size of separating systems and families of perfect hash functions. SIAM J. Alg. Discrete Methods 5 (1984) 61–68
22. Körner, J., Marton, K.: New bounds for perfect hashing via information theory. European J. Combin. 9 (1988) 523–530
23. Stinson, D.R., Wei, R., Zhu, L.: New constructions for perfect hash families and related structures using combinatorial designs and codes. J. of Combinatorial Designs 8 (2000) 189–200
24. Czech, Z.J., Havas, G., Majewski, B.S.: Perfect hashing. Theoretical Computer Science 182 (1997) 1–143
25. Shamir, A.: How to share a secret. CACM 22 (1979) 612–613
26. Slot, C., van Emde Boas, P.: On tape versus core; an application of space efficient perfect hash functions to the invariance of space. In: 16th STOC. (1984) 391–400
27. Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press (1995)
28. Reed, I.S., Solomon, G.: Polynomial codes over certain finite fields. J. SIAM 8 (1960) 300–304
29. MacWilliams, F.J., Sloane, N.J.A.: The Theory of Error-Correcting Codes. North-Holland Mathematical Library. North-Holland (1978)

Trading Players for Efficiency in Unconditional Multiparty Computation

B. Prabhu, K. Srinathan, and C. Pandu Rangan

Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai - 600036, India {prabhu,ksrinath}@cs.iitm.ernet.in, [email protected]

Abstract. In this paper, we propose a new player elimination technique and use it to design an efficient protocol for unconditionally secure multiparty computation tolerating generalized adversaries. Our protocol requires broadcast of O(nL^2 log(|F|)) bits (broadcast is simulated using Byzantine agreement), while the non-cryptographic linear secret sharing based protocols, without player elimination, invoke the Byzantine agreement sub-protocol for O(mL^3 log(|F|)) bits, where m is the number of multiplication gates in the arithmetic circuit, over the finite field F, that describes the functionality of the protocol, and L is the size of the underlying linear secret sharing scheme tolerating the given adversary structure.

Keywords: secure multiparty computation, generalized adversaries.

1 Introduction

Consider the scenario where there are n players (or processors) P = {P1 , P2 , . . . Pn }, each Pi having a local input xi chosen from a finite field F. These players do not trust each other. Nevertheless, they wish to correctly compute a function f (x1 , . . . , xn ) of their local inputs, whilst keeping their local data as private as possible, even if a subset of the players misbehave. This is the well-known problem of secure multiparty computation. We consider the problem of unconditionally secure multiparty computation over a synchronous complete network of n players in which a non-trivial subset of the players may be corrupted. Analogous to [BGW88], every two players are connected via a secure and reliable communication channel (secure channels setting) without broadcast capability; however, in line with the literature, broadcast is simulated via a sub-protocol for Byzantine agreement (BA) among the players. The faulty players’ behavior is modeled using a computationally unbounded generalized Byzantine adversary characterized by an adversary structure (as in [HM00]). An adversary structure denoted by AD is a monotone set of subsets of the player set1 . The adversary can corrupt the players in any one of the sets in the adversary structure. Once a 

⋆ This work was supported by the Defence Research and Development Organization, India, under project CSE01-02044DRDOHODX. Financial support from Infosys Technologies Limited, India, is acknowledged.
1 Throughout the paper we work only with the maximal basis of the adversary structure.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 342–353, 2003. © Springer-Verlag Berlin Heidelberg 2003


player is corrupted by the adversary, the player hands over all his private data to the adversary and gives the adversary control over all his subsequent moves. It was shown in [BGW88,RB89] that unconditional² synchronous secure multiparty computation, without a broadcast channel, is possible in the threshold setting iff t < n/3.³ Analogously, in the non-threshold setting, [HM00] proved that the necessary and sufficient condition for the existence of protocols for secure multiparty computation is that the union of no three sets in the adversary structure covers the player set (the Q(3) condition). Subsequently, more efficient protocols, based on linear secret sharing schemes, were designed in [CDM00]. However, the complicated exchanges of messages, zero-knowledge proofs and commitments in protocols like [BGW88,RB89,CDM00] might render them impractical. One of the most effective techniques to reduce the communication complexity is that of player elimination [HMP00,HM01]. This technique is based on detection of faulty players and eliminating them subsequently. Unfortunately, it is not always possible to pin-point the set of faulty players; the best that can be provably (always) achieved is a fault-localization of two players of which at least one is faulty. In the threshold setting, it is possible to simply eliminate both players from the subsequent protocol execution without affecting the "one-third" faulty condition. This results in a substantial reduction in the communication complexity of the resulting protocol. In fact, when combined with the circuit randomization technique of [Bea91], it is possible to tolerate active adversaries with the same order of communication as required for the passive case [HM01]. However, such a simple player elimination does not always work in the non-threshold setting, since the resulting adversary structure may violate Q(3). We propose a new player elimination technique that circumvents the above problem.
Our protocols heavily draw on the techniques of [Bea91,HMP00,CDM00,HM01]. We proceed in the lines of the best known threshold protocol of [HM01] using both the techniques of circuit randomization [Bea91] and player elimination [HMP00] and suitably modify/redesign the “ideas” to the non-threshold setting. We analyze the communication complexity of our protocols. The computation complexity is not analyzed because only the communication complexity is the bottleneck of such protocols. The considered protocols make extensive use of a BA primitive and hence their efficiency depends heavily on the communication complexity of the applied BA protocol. Therefore, we count the BA message complexity (BAC) and the message complexity (MC) separately.

2 The Protocol Construction

2 As defined in [RB89], by "unconditional security" we mean that the security is independent of any cryptographic assumptions, but the protocol may fail with a negligible probability of error. We use the same definition of negligibility as [RB89].
3 In case the underlying network had a broadcast capability, [RB89] describes an unconditionally secure protocol tolerating t < n/2 faults.

Definition 1 (MSP [KW93]). Let M be a matrix over a field F. The span of the matrix M, denoted by span(M), is the linear subspace generated by the

344

B. Prabhu, K. Srinathan, and C. Pandu Rangan

rows of M , that is the set of vectors which are linear combinations of the rows of M . A Monotone Span Program (MSP) is defined as a triple (F, M, ψ) where F represents a finite field, M is a d × e matrix over F, and ψ : {1 . . . d} → {P1 . . . Pn } is a labeling function assigning each row of the matrix a label from the player set P = {P1 , . . . , Pn }. For any subset A of the player set P (A ⊂ P), MA denotes the matrix that consists of all rows in M labeled by players from A. A fixed non-zero vector t ∈ F e is designated as the target vector. An MSP is said to accept the input A if the target vector t belongs to span(MA ). An MSP is said to correspond to the structure A if it accepts exactly the sets in A. The size of of an MSP is the number of rows d in its matrix M . We now show how an MSP (F, Md×e , ψ) corresponding to the adversary structure AD , with the target vector t = (1, 0, 0, . . .)1×e , is used in secret sharing. To distribute the secret s, the dealer chooses a random vector of the form v = (s, ρ1 , ρ2 , . . . , ρe−1 ). For each row i the dealer sends to the player ψ(i) the share < Mi , v T >, where Mi1×e denotes the ith row of Md×e . Reconstruction of s by any element X in AS follows from the properties of MSP since there exists a recombination vector λX such that < λX , SX >= s where SX denotes the vector of shares of the players in X. For multiplying values shared using MSPs we use a strongly multiplicative MSP [CDM00]. (3)
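As a concrete (and deliberately simple) instance of this sharing, the sketch below treats Shamir's scheme as an MSP over GF(P): M is a Vandermonde matrix, the target vector is (1, 0, ..., 0), and the recombination vector λ_X consists of Lagrange coefficients. The field size and parameters are illustrative choices, not taken from the paper.

```python
# Shamir's scheme viewed as a monotone span program (illustrative sketch only).
import random

P = 2**31 - 1  # a prime; the field F is GF(P)

def msp_row(x, e):
    # Row of the Vandermonde matrix M for evaluation point x: (1, x, x^2, ...).
    return [pow(x, j, P) for j in range(e)]

def share(secret, n, t):
    # v = (s, rho_1, ..., rho_t); player i (point i+1) receives <M_i, v>.
    v = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(m * c for m, c in zip(msp_row(i + 1, t + 1), v)) % P
            for i in range(n)]

def recombination_vector(points):
    # Lagrange coefficients at 0, so that <lambda_X, S_X> = s.
    lam = []
    for xi in points:
        num, den = 1, 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        lam.append(num * pow(den, P - 2, P) % P)
    return lam

n, t, s = 5, 2, 123456789
shares = share(s, n, t)
X = [1, 3, 5]                       # a qualified set (any t+1 players)
lam = recombination_vector(X)
recovered = sum(l * shares[x - 1] for l, x in zip(lam, X)) % P
assert recovered == s
```

Any other set of t+1 evaluation points works equally well, which is exactly the "recombination vector per qualified set" property the definition describes.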

Definition 2 (Q^(3)' condition). Let P be a player set and P' ⊂ P. Let A_D and A'_D be the adversary structures corresponding to P and P', respectively. A'_D is said to satisfy the Q^(3)' condition if, for any two sets in A'_D, the complement of their union with respect to P' is not in the original adversary structure A_D: for all A, B ∈ A'_D, P' \ (A ∪ B) ∉ A_D. Note that the Q^(3)' condition subsumes the Q^(3) condition.

2.1 Protocol Overview

In a nutshell, an agreed function is computed as follows. Let x_i be the input of player P_i. Let F be a finite field known to all parties with |F| > n, and let f : F^n → F be the function to be computed; that is, we restrict our discussion to functions where the input of each player, as well as the function value, are field elements. We assume that the players have an arithmetic circuit computing f; the circuit consists of linear combination gates and multiplication gates of in-degree 2. All computations in the sequel are done in F. In line with the protocol of [HM01], our protocol proceeds in two phases. In the first phase, called the preparation phase, the players share m random triples (a^(i), b^(i), c^(i)) with c^(i) = a^(i) b^(i), one for each multiplication gate i = 1, ..., m. The triples (a^(i), b^(i), c^(i)) are shared among the players in the current player set (which may vary during the preparation phase due to player elimination) according to the original adversary structure. The circuit is divided into blocks of ℓ multiplication gates each. The generated triples are verified for correctness only at the end of each block; if a fault is detected, a set of at most two players, of which at least one is faulty, is identified. Suitable action is then taken (player elimination or addition to a suspect list) while ensuring that the resulting adversary structure satisfies Q^(3)' with respect to the original adversary structure A_D. The actual evaluation of the function takes place in the second phase, called the computation phase, in which the function (circuit) is evaluated gate by gate, using the triples generated in the preparation phase for the multiplication gates.

Trading Players for Efficiency in Unconditional Multiparty Computation

2.2 Preparation Phase

In this phase the players jointly generate m shared triples (a^(i), b^(i), c^(i)) of the form c^(i) = a^(i) b^(i) such that the adversary gains no information about the triples. We adapt the player elimination framework of [HMP00] to the non-threshold setting for the generation of the triples. The triples are generated in blocks of ℓ = ⌈m/n²⌉ triples. Each block of triples is generated in parallel in a non-robust manner. At the end of each block the generated triples are tested for consistency. If any inconsistency is detected, then a set containing at least one faulty player is identified (fault localization), the players are suitably eliminated (as explained in the sequel) and the block is repeated. The verification process is carried out in two steps. In the first step the sharings of all triples (a, b, c) in the block are verified for consistency with the MSP used (fault detection I). If a fault is reported, the players localize the fault (fault localization I). Else, the players jointly verify that the triples (a, b, c) in the block were shared such that c = ab (fault detection II). If a fault is detected, then a set containing a faulty player is identified (fault localization II). Since no information about the triples may be revealed during the detection step, n blinding triples are used to maintain privacy. Therefore, in each block, ℓ + n triples are generated. The blinding triples, after being used to verify the consistency of the remaining ℓ triples, are destroyed. After identifying a set containing a faulty player, the set is eliminated if the resulting adversary structure does not violate Q^(3)' with respect to the new player set. Else the pair is added to a list of suspect pairs L.⁴ Since the number of pairs in the list L is bounded by O(n²), at most n² blocks of triples are repeated. We denote by P the original player set containing n players and by P' the current player set containing n' players. Similarly, A_D refers to the original adversary structure and A'_D to the current adversary structure. Without loss of generality we assume that P' = {P_1, ..., P_{n'}}. Let (F, M, ψ) be an optimal multiplicative MSP for the adversary structure A_D. Let the size of M be L, and let λ be the fixed recombination vector for the matrix M ([CDM00]). Let p_i be the number of shares received per secret by player P_i according to the MSP (F, M, ψ). L denotes the list of player pairs exhibiting mutual distrust, and L_i is the set of players distrusted by player P_i. Note that the sharing of the triples is done using the same MSP (F, M, ψ) in all blocks.⁵ The top-level protocol of the preparation phase is as follows:

⁴ It is ensured that a pair already present in the list L is not reported again.
⁵ This is possible because the satisfaction of Q^(3)' ensures that error correction is possible tolerating A'_D while sharing is done according to A_D.
Repeat until n² blocks of ℓ (= ⌈m/n²⌉) triples each are generated:

1. Generate ℓ + n triples in parallel using the protocol in Table 1.
2. Check whether the shares generated are consistent with the MSP (F, M, ψ) (fault detection protocol I in Table 3).
3. If a fault was reported in Step 2, identify the set of players D to be eliminated using the protocol in Table 4. Eliminate the set of players D using the protocol in Table 7 and discard the generated block of triples.
4. If no fault was detected in Step 2, verify that correct values were shared by all the players (that is, that the shares are consistent with c = ab) using the fault detection protocol in Table 5. If a fault is detected, then identify the set of player(s) D to be eliminated using the protocol in Table 6. Eliminate the set of players D using the protocol in Table 7 and discard the generated block of triples.
5. If both verification steps were successful (no fault was reported), then the first ℓ triples of the block can be used.

Triple Generation. In this section we give a protocol to generate, in a non-robust manner, a random triple (a, b, c) such that c = ab. At the top level, the triple generation protocol runs as follows. First the players jointly generate sharings of two random values a and b according to the original adversary structure A_D. To generate the sharing of a random value, every player shares a random value and the players then (locally) add the corresponding shares to obtain the required random sharing. To arrive at a sharing of c, each player (locally) multiplies the corresponding shares of a and b. However, this sharing of c corresponds to a much larger MSP. To reduce it to the original MSP, the players re-share each of their product shares; the shares of c are then obtained (locally) as a weighted sum of these sub-shares, where the weights belong to a fixed recombination vector (see [CDM00]). The triple generation protocol is given in Table 1. As explained earlier, a list L is maintained containing pairs of players who have complained about each other in previous blocks and who cannot be eliminated without violating Q^(3)'. To ensure that a pair (P_i, P_j) of players already in the list does not complain about each other in the verification step, every player recovers his shares of the secrets distributed by the players he distrusts from the shares of the rest of the players, using the share adjustment protocol given in Table 2. If player P_i has distributed faulty shares only to the players in L_i, then every P_j ∈ L_i can recover the correct shares using the share adjustment protocol, since the adversary structure satisfies Q^(3)'. If P_i has given a wrong share to some player P_k ∉ L_i, then this can be detected and a corresponding set localized. The privacy of the share adjustment protocol is guaranteed since the x_k's are chosen at random and are used to blind the original shares of the players. The protocol requires O(nL) field elements to be communicated, where L is the size of the MSP used. We now analyze the security of the triple generation protocol. In the generation of a and b, if at least one of the players has honestly selected random values, then the adversary gets no information about a and b. If the sharings of a and b and the product shares of c are consistent, then the shares of c can be correctly computed using the technique proposed in [CDM00], as given in Step 2d of the triple generation protocol. By the MSP property, the adversary does not get any information about a, b or c; hence privacy is assured. The triple generation protocol requires n²L² + L² + 2n³L + 2nL field elements to be communicated.

Table 1. Protocol for generating a shared triple (a, b, c) with c = ab.

1. The players jointly generate shared values a and b:
   (a) Every player P_i ∈ P' selects two random e × 1 column vectors f_i and g_i. Let a_i and b_i be the first elements of the two vectors, respectively. He then generates the shares a_{ik} and b_{ik} for k = 1, ..., L using the projection of the MSP (F, M, ψ) over P'.
   (b) Each player P_i communicates the generated shares according to the labeling function ψ; that is, player P_i sends the shares a_{ik} and b_{ik} to player P_j if ψ(M_k) = P_j, where M_k is the k-th row of the matrix M.
   (c) Every player P_i replaces his shares of a_j and b_j, for every P_j ∈ L_i, with the shares computed using the protocol in Table 2.
   (d) Each player P_j ∈ P' computes his shares of a as the sum of the corresponding shares of the a_i's received from the players P_i, i = 1, ..., n'; the shares of b are obtained in the same fashion. That is, a_{jo} = Σ_{i=1}^{n'} a_{iw} and b_{jo} = Σ_{i=1}^{n'} b_{iw}, where a_{jo} and b_{jo} are the shares of a and b respectively, for o = 1, ..., p_j and w = Σ_{q=1}^{j−1} p_q + o.

2. The players jointly compute a sharing of c:
   (a) Every player P_i ∈ P' computes his set of product shares c_{ik} = a_{ik} b_{ik} for k = 1, ..., p_i. He then generates shares c_{ikr} of each secret c_{ik}, for r = 1, 2, ..., L.
   (b) Each player P_i communicates the generated shares according to the labeling function ψ; that is, player P_i sends the share c_{ikr} to player P_j if ψ(M_r) = P_j, where M_r is the r-th row of the matrix M.
   (c) Every player P_i replaces his shares of c_{jk}, for every P_j ∈ L_i and for k = 1, ..., p_j, with the shares computed using the protocol in Table 2.
   (d) Each player P_j ∈ P' computes his shares c_{jk} of c, for k = 1, ..., p_j, as c_{jk} = Σ_{i=1}^{n'} Σ_{r=1}^{p_i} λ_w c_{irq}, where λ_w is the w-th element of the row vector λ, q = Σ_{v=1}^{j−1} p_v + k, and w = Σ_{v=1}^{i−1} p_v + r.

Table 2. Protocol for obtaining a player's share of a secret from the rest of the shares. Let P_i be the dealer and a_i the value shared. Let {a_{i1}, ..., a_{iL}} be the shares of a_i. Let P_j be the player whose shares of a_i have to be reconstructed.

1. Every player P_k ∈ P' \ (L_j ∪ L_i) shares a random value x_k such that the shares of player P_j are zero; that is, with generated shares x_{k1}, ..., x_{kL}, we have x_{kq} = 0 for q = Σ_{o=1}^{j−1} p_o + 1, ..., Σ_{o=1}^{j} p_o.
2. Player P_k communicates the generated shares to the players in P' \ (L_j ∪ L_i) according to the labeling function ψ.
3. Every player P_k ∈ P' \ (L_j ∪ L_i) computes a set of share vectors {[y_{qok} | P_q ∈ P' \ (L_i ∪ L_j)]} as y_{qok} = a_{io} + x_{qo} for o = Σ_{u=1}^{k−1} p_u + 1, ..., Σ_{u=1}^{k} p_u, and communicates the vector set to player P_j.
4. Player P_j obtains the values of a_{io} for o = Σ_{q=1}^{j−1} p_q + 1, ..., Σ_{q=1}^{j} p_q as follows:
   (a) Player P_j reconstructs the values of a_{io} from the corresponding components of the vectors he received; that is, he reconstructs values for a_{io} from the shares {y_{qok} | P_k ∈ P' \ (L_i ∪ L_j)} for each P_q ∈ P' \ (L_i ∪ L_j).
   (b) Player P_j finds a set of players G such that the values of a_{io} obtained from {y_{qok} | P_k ∈ P' \ (L_i ∪ L_j)} for each P_q ∈ G are the same. If more than one such set exists, then P_j selects the set which does not exist in the adversary structure A_D. The value of a_{io} computed from this set is taken as the correct value.
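The two steps of the triple generation protocol — summing locally shared randomness to obtain a and b, then locally multiplying shares and recombining reshared sub-shares with the recombination vector λ to obtain c — can be sketched with Shamir sharing standing in for the general MSP. This is an illustration under that assumption (threshold t, degree-2t products), with none of the paper's fault handling:

```python
import random

P = 2**31 - 1  # prime field GF(P)

def share(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def lagrange_at_zero(points):
    lam = []
    for xi in points:
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        lam.append(num * pow(den, P - 2, P) % P)
    return lam

def reconstruct(shares, points):
    lam = lagrange_at_zero(points)
    return sum(l * s for l, s in zip(lam, shares)) % P

n, t = 7, 2  # n >= 2t+1 so that degree-2t product polynomials are determined

# Step 1: every player contributes randomness; shares of a and b are sums.
a_contrib = [random.randrange(P) for _ in range(n)]
b_contrib = [random.randrange(P) for _ in range(n)]
a_shares = [sum(col) % P for col in zip(*[share(v, n, t) for v in a_contrib])]
b_shares = [sum(col) % P for col in zip(*[share(v, n, t) for v in b_contrib])]

# Step 2: local products lie on a degree-2t polynomial; each player reshares
# his product share and everyone takes the lambda-weighted sum of sub-shares.
lam = lagrange_at_zero(list(range(1, 2 * t + 2)))  # recombination vector
subshares = [share(a_shares[i] * b_shares[i] % P, n, t)
             for i in range(2 * t + 1)]
c_shares = [sum(lam[i] * subshares[i][j] for i in range(2 * t + 1)) % P
            for j in range(n)]

a = reconstruct(a_shares[:t + 1], list(range(1, t + 2)))
b = reconstruct(b_shares[:t + 1], list(range(1, t + 2)))
c = reconstruct(c_shares[:t + 1], list(range(1, t + 2)))
assert c == a * b % P
```

The final assertion checks exactly the triple property c = ab; the resulting c-shares again lie on a degree-t polynomial, i.e. the sharing has been reduced back to the original scheme.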

Verification. The verification procedure is carried out in two steps. In the first step the sharings of the ℓ triples are checked for consistency with the MSP used. The remaining n triples are used for blinding and are destroyed at the end of the verification process. In this step every player verifies a random linear combination

Table 3. Fault detection protocol I.

1. P_v selects a random vector [r_1, ..., r_ℓ] over the field F and sends it to each player P_j in P' \ L_v.
2. Each player P_j (P_j ∉ L_v) computes and sends to P_v the following values for every P_i ∈ P' \ L_v: a_{iw}^{(Σ)} = Σ_{q=1}^{ℓ} r_q a_{iw}^{(q)} + a_{iw}^{(ℓ+v)}, b_{iw}^{(Σ)} = Σ_{q=1}^{ℓ} r_q b_{iw}^{(q)} + b_{iw}^{(ℓ+v)}, and c_{ior}^{(Σ)} = Σ_{q=1}^{ℓ} r_q c_{ior}^{(q)} + c_{ior}^{(ℓ+v)}, for w, r = Σ_{q=1}^{j−1} p_q + 1, ..., Σ_{q=1}^{j} p_q and o = 1, ..., p_i.
3. P_v verifies for each player P_i ∈ P' \ L_v whether the shares {a_{iq}^{(Σ)} | q = 1, ..., L, q not belonging to any player P_k ∈ L_v} are valid for the MSP (F, M, ψ). The same verification is performed for the shares b_{i1}^{(Σ)}, ..., b_{iL}^{(Σ)} and c_{io1}^{(Σ)}, ..., c_{ioL}^{(Σ)} for o = 1, ..., p_i.
4. P_v broadcasts a bit indicating whether a fault has been detected or not.

of the value a shared by each player over all triples; similar verification is done for b and c. Note that a verifier P_v need not check the values of a player P_i whom he distrusts ((P_v, P_i) ∈ L): since every other player (who does not distrust P_i) will verify P_i's values, if P_i's sharings were inconsistent one of the other honest players will detect it. The detection protocol (see Table 3) proceeds as follows. Every verifier P_v distributes a random challenge vector of length ℓ over the field F. The corresponding linear combination of each of the values a, b and c for every originating player is reconstructed toward the challenging player. This is done by having every player communicate the linear combination (defined by the challenge vector) of the shares that he received from each player for each of the values a, b and c. The challenging player then checks the consistency of the received shares with the MSP used in the original sharing. If an inconsistency is found, the challenging player reports a fault. In order to preserve the privacy of the individual shares, a blinding value is added to each reconstruction toward a challenging player. While checking for consistency the challenging player ignores the shares communicated by the players he distrusts; that is, for each P_k ∈ L_v, the challenging player P_v does not check the consistency of the shares communicated by P_k. If the verifier P_v is honest, then the players in L_v are dishonest; since they cannot affect the shares communicated to the other honest players, they can be ignored while verifying consistency. This is possible because the adversary structure satisfies Q^(3)'. The probability that an honest player detects a fault, if one exists, is no less than 1 − 1/|F|. Let γ be the size of the largest set in the adversary structure. Then there are at least n − γ honest verifiers, and hence the overall error probability is at most |F|^{−(n−γ)}. This can be made arbitrarily small by choosing a correspondingly larger F. During the course of the protocol n(nℓ + 2nL + L²) = n²ℓ + 2n²L + nL² field elements are communicated. In addition, n bits are broadcast. Once a fault is detected, the protocol proceeds to localize the fault to a set containing at most two players. One of the verifiers who detected a fault in the previous step announces the sharing in which he detected the fault. The dealer of this sharing, P_i, then communicates the linear combination (defined by the challenge vector issued by the verifier in the detection step) of the vectors used for this sharing. The verifier P_v then finds a share such that

Table 4. Fault localization protocol I. Let P_v be the verifier who has detected a fault in the previous step.

1. P_v selects one of the players P_i for whom he detected a fault and broadcasts i and the element (a, b or c) in which the fault was detected. If P_i ∈ L_v then D is set to {P_v}. Let us assume, without loss of generality, that the fault has been detected in a.
2. P_i sends to P_v the linear combination of the vectors f_i (chosen in Step 1a of Table 1): f_i^{(Σ)} = Σ_{q=1}^{ℓ} r_q f_i^{(q)} + f_i^{(ℓ+v)}.
3. P_v finds the smallest index k such that the k-th share does not belong to any of the players in the set L_i and a_{ik}^{(Σ)} (received in Step 2 of the protocol in Table 3) is not consistent with the matrix M and the f_i^{(Σ)} received from P_i in the previous step. Let P_j be the player who communicated this share to P_v; that is, Σ_{q=1}^{j−1} p_q + 1 ≤ k ≤ Σ_{q=1}^{j} p_q, where P_j ∉ L_i. P_v broadcasts j among the players in P'. If P_j ∈ L_v, then D is set to {P_v}. If P_j ∈ L_i, then D is set to {P_v, P_j}. Both P_i and P_j send the list a_{ik}^{(1)}, ..., a_{ik}^{(ℓ)}, a_{ik}^{(ℓ+v)} to P_v.
4. P_v verifies whether the linear combination [r_1, ..., r_ℓ] applied to the values a_{ik}^{(1)}, ..., a_{ik}^{(ℓ)}, a_{ik}^{(ℓ+v)} received from P_i is equal to the k-th share of f_i^{(Σ)} according to the structure A_D: is Σ_{q=1}^{ℓ} r_q a_{ik}^{(q)} + a_{ik}^{(ℓ+v)} = M_k × f_i^{(Σ)}, where M_k is the k-th row of the matrix M? If not, P_v broadcasts the index i, and D is set to {P_i, P_v}. Similarly, P_v verifies the values a_{ik}^{(1)}, ..., a_{ik}^{(ℓ)}, a_{ik}^{(ℓ+v)} received from P_j to be consistent with the a_{ik}^{(Σ)} received in Step 2 of the protocol in Table 3, and if they are found inconsistent, broadcasts the index j; in this case, D is set to {P_j, P_v}.
5. P_v finds the smallest index q such that the values a_{ik}^{(q)} received from P_i and P_j differ, and broadcasts q and both values of a_{ik}^{(q)}. Both P_i and P_j broadcast their value of a_{ik}^{(q)}.
6. If the values of a_{ik}^{(q)} broadcast by P_i and P_j differ, then D is set to {P_i, P_j}. If the value broadcast by P_i differs from the value that P_v broadcast as P_i's value, then D is set to {P_i, P_v}. Else, D is set to {P_j, P_v}.

it is in disagreement with the vector obtained above, and identifies the owner P_j of this share. Both P_i and P_j send their versions of the disputed share, and the fault is localized to a pair of players among P_v, P_i and P_j (fault localization I, Table 4).⁶ If player P_i was faulty, then every player P_j ∈ L_i would have reconstructed his shares of the value distributed by P_i from the shares of the rest of the players (those not in L_i ∪ L_j). Hence if the share of any player P_j ∈ L_i is inconsistent with the vector obtained above, then a share belonging to another player P_k ∉ L_i ∪ L_j can be found that is inconsistent with this vector, and a pair of players among P_v, P_i and P_k can be localized as above. The localization step requires up to L + 2ℓ + 2 field elements to be communicated. In addition, a maximum of 2 log(n) + 2 + log(ℓ + 1) + 4 log(|F|) bits need to be broadcast. The second verification step checks whether every player shared the correct product shares in Step 2a of the triple generation protocol in Table 1. This step is performed only if the previous verification step was successful, that is, if no fault was detected. [CDM00] showed that from any multiplicative MSP (F, M, ψ), a new MSP (F, M_mul, ψ_mul) can be derived efficiently such that if secrets a and b are shared according to the MSP (F, M, ψ), then the product

⁶ Note that the verifier cannot report a player whom he already distrusts, since such a player has been excluded from consideration in the detection step. If a verifier does report this player again, then the verifier is identified as faulty and can be eliminated.
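The heart of fault detection I — checking a single blinded random linear combination of all ℓ sharings instead of checking each sharing individually — can be sketched as follows. Shamir consistency (degree ≤ t) stands in for MSP consistency, and all parameters are illustrative assumptions:

```python
import random

P = 2**31 - 1  # prime field GF(P)

def share(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def is_consistent(shares, t):
    # A sharing is consistent if the points (1, s_1), ..., (n, s_n) lie on a
    # polynomial of degree <= t: interpolate from t+1 points, check the rest.
    pts = list(range(1, t + 2))
    def eval_interp(x):
        total = 0
        for xi in pts:
            num = den = 1
            for xj in pts:
                if xj != xi:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + shares[xi - 1] * num * pow(den, P - 2, P)) % P
        return total
    return all(eval_interp(x) == shares[x - 1]
               for x in range(t + 2, len(shares) + 1))

n, t, ell = 5, 1, 8
sharings = [share(random.randrange(P), n, t) for _ in range(ell)]
blinder = share(random.randrange(P), n, t)   # one blinding sharing
sharings[3][2] = (sharings[3][2] + 1) % P    # corrupt one share of one sharing

# Verifier's random (nonzero, for the demo) challenge vector of length ell.
r = [random.randrange(1, P) for _ in range(ell)]
combined = [(sum(r[q] * sharings[q][j] for q in range(ell)) + blinder[j]) % P
            for j in range(n)]
assert not is_consistent(combined, t)  # the single check catches the fault
```

One consistency check thus replaces ℓ of them, and the blinder ensures the opened combination reveals nothing about the individual secrets; a fault escapes detection only if the challenge happens to cancel it, which occurs with probability about 1/|F|.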

Table 5. Fault detection protocol II.

1. P_v computes the product shares c_{ik}^{(Σ)} from the sub-shares c_{ikr}^{(Σ)} received in Step 2 of the protocol in Table 3, for i = 1, ..., n', k = 1, ..., p_i and r = 1, ..., L.
2. P_v verifies whether the product shares c_{ik}^{(Σ)}, for i = 1, ..., n' with P_i ∉ L_v and k = 1, ..., p_i, are consistent with the MSP (F, M_mul, ψ_mul).
3. P_v broadcasts a bit according to whether a fault has been detected or not.

Table 6. Fault localization protocol II.

1. Every player P_i ∈ P' sends to P_v his factor shares a_{ik}^{(1)}, ..., a_{ik}^{(ℓ)}, a_{ik}^{(ℓ+v)} and b_{ik}^{(1)}, ..., b_{ik}^{(ℓ)}, b_{ik}^{(ℓ+v)} for k = 1, ..., p_i.
2. P_v verifies for every q = 1, ..., ℓ, ℓ+v whether the shares {a_{io}^{(q)} | i = 1, ..., n', P_i ∉ L_v, o = 1, ..., p_i} are consistent with the MSP (F, M, ψ). If not, P_v finds an erroneous share a_{ik}^{(q)} such that P_i ∉ L_v and broadcasts i. (If more than one erroneous share exists, the share with the smallest player or share index is chosen.) If P_i ∈ L_v then D is set to {P_v}; else D is set to {P_i, P_v}. The same verification is performed for the shares {b_{io}^{(q)} | i = 1, ..., n', o = 1, ..., p_i} for q = 1, ..., ℓ, ℓ+v.
3. P_v verifies the shares c_{ik}^{(Σ)} computed in Step 1 of the protocol in Table 5 against the blinded product of the a_{ik}'s and b_{ik}'s (received in Step 1); that is, P_v checks whether the equation c_{ik}^{(Σ)} = Σ_{o=1}^{ℓ} r_o a_{ik}^{(o)} b_{ik}^{(o)} + a_{ik}^{(ℓ+v)} b_{ik}^{(ℓ+v)} holds for all i ∈ {1, ..., n'} with P_i ∉ L_v and k ∈ {1, ..., p_i}. This will fail for at least one i and k such that P_i ∉ L_v (for an honest verifier P_v). P_v broadcasts this index i. If P_i ∈ L_v then D is set to {P_v}; else D is set to {P_i, P_v}.
of the shares is consistent with the MSP (F, M_mul, ψ_mul). Hence each player verifies whether the product shares c_{ik}^{(Σ)}, i = 1, ..., n', k = 1, ..., p_i, which can be obtained from the sub-shares communicated in the first verification step, are consistent with this new MSP. If an inconsistency is found, the verifier reports a fault (fault detection II, Table 5). As in the first verification step, a verifier need not check the shares of the players he distrusts. This step does not require any communication of field elements, but n bits are broadcast during the protocol. If a fault was detected in the second verification step, the protocol in Table 6 is executed by the players to isolate the faulty player(s). One of the verifiers who reported a fault obtains the shares of a and b ({a_{ik}}, {b_{ik}}, i = 1, ..., n', k = 1, ..., p_i) and checks their consistency with the MSP used. If no fault is found, he checks their product (randomized using the challenge vector) against the product shares c_{ik}^{(Σ)}; he then finds a share for which these two are not equal, and the fault is localized to the verifier and the owner of this share. As in the previous verification step, the verifier need not check the shares of the players he already distrusts. This protocol has the same error probability as the first localization step. It requires 2L(ℓ + 1) field elements to be communicated and log(n) bits to be broadcast.

Player Elimination. In the verification step, if a fault is detected, a set D containing at most two players, of which at least one is faulty, is identified. It may not always be possible to eliminate this set of players, as this may cause a

Table 7. Player elimination protocol. Let D be the set of player(s) to be eliminated.

1. Initialization: Set B = A'_D. If |B_D| = 1, then set P' = P' \ B_D and stop. Else set B = B_D.
2. Do for every pair (P_q, P_r) ∈ L:
   (a) Set P'' = P' \ {P_q, P_r} and A''_D = B|_{{P_q, P_r}}.
   (b) If A''_D satisfies Q^(3)' with respect to P'' and P, then set P' = P'', A'_D = A''_D and L = L \ {(P_q, P_r)}. For all pairs (P_q, P_o) ∈ L, set L = L \ {(P_q, P_o)} and L_o = L_o \ {P_q}; similarly, for all pairs (P_r, P_o) ∈ L, set L = L \ {(P_r, P_o)} and L_o = L_o \ {P_r}. Else set P'' = P' \ {P_q} and A''_D = B|_{{P_q}}.
   (c) If A''_D satisfies Q^(3)' with respect to P'' and P, then set P' = P'' and A'_D = A''_D. For all pairs (P_q, P_o) ∈ L, set L = L \ {(P_q, P_o)} and L_o = L_o \ {P_q}. Else set P'' = P' \ {P_r} and A''_D = B|_{{P_r}}.
   (d) If A''_D satisfies Q^(3)' with respect to P'' and P, then set P' = P'' and A'_D = A''_D. For all pairs (P_r, P_o) ∈ L, set L = L \ {(P_r, P_o)} and L_o = L_o \ {P_r}.
3. If D = {P_i}, then set P' = P' \ {P_i} and A'_D = B|_{{P_i}}, and for all players P_k such that P_i ∈ L_k set L_k = L_k \ {P_i} and L = L \ {(P_i, P_k)}; stop.
4. If D is of the form {P_i, P_j}:
   (a) Set P'' = P' \ {P_i, P_j} and A''_D = B|_{{P_i, P_j}}.
   (b) If A''_D satisfies Q^(3)' with respect to P'' and P, then set P' = P'' and A'_D = A''_D. For all pairs (P_i, P_k) ∈ L, set L = L \ {(P_i, P_k)} and L_k = L_k \ {P_i}; for all pairs (P_j, P_k) ∈ L, set L = L \ {(P_j, P_k)} and L_k = L_k \ {P_j}; stop.
   (c) Set P'' = P' \ {P_i} and A''_D = B|_{{P_i}}.
   (d) If A''_D satisfies Q^(3)' with respect to P'' and P, then set P' = P'' and A'_D = A''_D. For all pairs (P_i, P_k) ∈ L, set L = L \ {(P_i, P_k)} and L_k = L_k \ {P_i}; stop.
   (e) Set P'' = P' \ {P_j} and A''_D = B|_{{P_j}}.
   (f) If A''_D satisfies Q^(3)' with respect to P'' and P, then set P' = P'' and A'_D = A''_D. For all pairs (P_j, P_k) ∈ L, set L = L \ {(P_j, P_k)} and L_k = L_k \ {P_j}; stop.
   (g) Set L = L ∪ {(P_i, P_j)}, L_i = L_i ∪ {P_j} and L_j = L_j ∪ {P_i}.

violation of the Q^(3)' property. In such cases, the set is added to a list L containing such pairs (in such a way that the list can be publicly agreed upon) and it is ensured that the pairs in L do not recur in the verification step. For a collection of sets A, let A_{{P_i,P_j}} denote the collection of those sets which contain player P_i or player P_j. We denote by A|_{{P_i,P_j}} the collection of sets formed by erasing players P_i and P_j from the sets in A; that is, A|_{{P_i,P_j}} = {A \ {P_i, P_j} | A ∈ A}. Once a set D containing a faulty player is identified, the adversary structure is projected onto the players in D, since the rest of the sets are now known to be free from corruption. If this projection yields a singleton set, then this set represents the remaining corrupted players and can be eliminated from the player set. Otherwise, any pair (P_q, P_r) ∈ L whose removal does not violate Q^(3)' is now eliminated. If D = {P_i, P_j}, the players P_i and P_j are eliminated if the adversary structure so formed does not violate Q^(3)'. Any pairs containing P_i or P_j, if present in L, are removed from it. If eliminating both players is not possible, then it is checked whether one of them can be eliminated without violating Q^(3)'. The player set, the adversary structure and the list L are updated accordingly. If neither of the players in D can be eliminated, then the pair is added to the list L. This protocol (Table 7) requires no communication or broadcast.
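The elimination rule can be sketched directly from Definition 2: a localized set D is removed only if the projected adversary structure still satisfies Q^(3)' with respect to the original one. The toy structure and the set representation below (monotone structures given by their maximal sets) are illustrative assumptions, not the paper's data structures:

```python
def in_structure(X, maximal_sets):
    # A monotone adversary structure is given by its maximal sets:
    # X is corruptible iff X is contained in some maximal set.
    return any(X <= A for A in maximal_sets)

def satisfies_q3_prime(adv_cur, players_cur, adv_orig):
    # Definition 2: for all A, B in the current structure A'_D, the set
    # P' \ (A u B) must not belong to the original structure A_D.
    Pc = frozenset(players_cur)
    return all(not in_structure(Pc - (A | B), adv_orig)
               for A in adv_cur for B in adv_cur)

def try_eliminate(D, players, adv, adv_orig):
    # Remove the localized set D only if Q^(3)' survives the projection;
    # otherwise the caller adds the pair to the suspect list L instead.
    new_players = players - D
    new_adv = {A - D for A in adv}
    if satisfies_q3_prime(new_adv, new_players, adv_orig):
        return new_players, new_adv
    return None

# Toy adversary structure on five players (maximal corruptible sets).
P = frozenset("ABCDE")
adv = {frozenset("A"), frozenset("BC"), frozenset("D"), frozenset("E")}

assert satisfies_q3_prime(adv, P, adv)                      # holds initially
assert try_eliminate(frozenset("B"), P, adv, adv) is not None
assert try_eliminate(frozenset("E"), P, adv, adv) is None   # would break Q^(3)'
```

In the example, dropping player E leaves {A} ∪ {B,C} with complement {D}, which is corruptible in the original structure, so E cannot be eliminated and the pair would go to the list L instead.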

Table 8. Complexity analysis of the protocol.

Protocol              | Message Complexity (field elements) | Broadcast Complexity (bits)                | # Calls (max)
Generate Triples      | n²L² + L² + 2n³L + 2nL              | —                                          | 2n²(ℓ + n)
Fault Detection I     | n²ℓ + 2n²L + nL²                    | n                                          | 2n²
Fault Localization I  | L + 2ℓ + 2                          | 2 log(n) + 2 + log(ℓ + 1) + 4 log(|F|)     | n²
Fault Detection II    | —                                   | n                                          | 2n²
Fault Localization II | 2L(ℓ + 1)                           | log(n)                                     | n²
Give Input (worst)    | 2L²                                 | 2n log(n) + 2n + 3n log(|F|) + L² log(|F|) | n_I
Multiply              | 2nL                                 | —                                          | m
Get Output            | L                                   | —                                          | n_O

2.3 Computation Phase

At the end of the preparation phase we are left with a new set of n' players P' = {P_1, ..., P_{n'}} and a corresponding adversary structure A'_D. In the computation phase every player first verifiably shares his input with all the players in the new player set P' according to the original adversary structure A_D, using the input-sharing protocol of [CDM00]. Then, using the triples generated in the previous phase, the arithmetic circuit is evaluated gate by gate. For a linear combination gate, local computation suffices, due to the linearity of the sharing. Multiplication gates can be evaluated using the technique of [Bea91]. In essence, we do the following: let x and y be the inputs to the multiplication gate, and let d_x = x − a and d_y = y − b. The players in the new player set P' reconstruct the values d_x and d_y. The product xy can be rewritten as xy = d_x d_y + d_x b + d_y a + c. This equation is linear in a, b and c and hence can be computed from the shares of a, b and c and the values d_x and d_y without any communication.⁷ Each multiplication gate requires two secret reconstructions. The protocol requires 2nL field elements to be communicated; no broadcast is required.
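Beaver's multiplication step can be sketched as follows, again with Shamir shares standing in for the MSP (an illustration under that assumption, not the paper's exact protocol): the players open d_x and d_y and then evaluate the linear expression on their shares locally.

```python
import random

P = 2**31 - 1  # prime field GF(P)

def share(secret, n, t):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]

def reconstruct(shares, points):
    # Lagrange interpolation at 0 over the given evaluation points.
    total = 0
    for i, xi in enumerate(points):
        num = den = 1
        for xj in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + shares[i] * num * pow(den, P - 2, P)) % P
    return total

n, t = 5, 2
x, y = 1234, 5678
a, b = random.randrange(P), random.randrange(P)
c = a * b % P  # a preprocessed triple from the preparation phase

xs, ys = share(x, n, t), share(y, n, t)
as_, bs, cs = share(a, n, t), share(b, n, t), share(c, n, t)

# Players open d_x = x - a and d_y = y - b (two reconstructions per gate).
pts = list(range(1, t + 2))
dx = reconstruct([(xs[i] - as_[i]) % P for i in range(t + 1)], pts)
dy = reconstruct([(ys[i] - bs[i]) % P for i in range(t + 1)], pts)

# xy = dx*dy + dx*b + dy*a + c is linear in the shared values, so every
# player computes his share of the product locally, with no interaction.
zs = [(dx * dy + dx * bs[i] + dy * as_[i] + cs[i]) % P for i in range(n)]
assert reconstruct(zs[:t + 1], pts) == x * y % P
```

Since a and b are uniformly random and unknown to the adversary, the opened values d_x and d_y reveal nothing about x and y, which is why the triple can safely be consumed in the clear.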

3 Complexity Analysis

The message and broadcast complexities of each of the sub-protocols are listed in Table 8. The table also specifies the number of times each sub-protocol is invoked in the worst case. Here, n_I is the total number of inputs to the function to be computed and n_O is the number of outputs of the function. The cumulative message complexity, when m sufficiently exceeds n, is O(n_I L² + mn²L² + n_O L) field elements; the corresponding broadcast complexity is O(n_I L² log(|F|) + n³) bits. In the absence of player elimination, each multiplication gate usually has to be evaluated with O(L) verifiable secret sharings, necessitating an overall communication of O(mL³) field elements and a broadcast of O(mL³ log(|F|)) bits. Since the broadcast of every bit requires Ω(n²) bits⁸ (if not more), the use of player elimination results in a gain by a factor of O(mL/n) in the total communication complexity.

⁷ The linear function can be applied directly to the shares since the inputs and the triples are all shared according to the same MSP (F, M, ψ).
⁸ The best known unconditional Byzantine agreement protocol tolerating generalized adversaries [FM98] communicates O(n⁵) bits for each bit broadcast.

4 Conclusion

In the threshold setting, there is repeated evidence that the player elimination technique results in a substantial improvement in the communication complexity of secure protocols. In this work, we have shown that a dexterous application of the player elimination technique yields protocols with improved efficiency tolerating generalized adversaries as well.

References

Bea91. D. Beaver. Efficient multiparty protocols using circuit randomization. In CRYPTO '91, volume 576 of LNCS, pages 420–432, 1991.
BGW88. M. Ben-Or, S. Goldwasser, and A. Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In 20th ACM STOC, pages 1–10, 1988.
CDM00. R. Cramer, I. Damgård, and U. Maurer. Efficient general secure multiparty computation from any linear secret sharing scheme. In EUROCRYPT 2000, volume 1807 of LNCS, 2000.
FM98. M. Fitzi and U. Maurer. Efficient Byzantine agreement secure against general adversaries. In DISC '98, volume 1499 of LNCS, pages 134–148, 1998.
HM00. M. Hirt and U. Maurer. Player simulation and general adversary structures in perfect multiparty computation. Journal of Cryptology, 13(1):31–60, April 2000. Preliminary version in 16th ACM PODC, pages 25–34, 1997.
HM01. M. Hirt and U. Maurer. Robustness for free in unconditional multi-party computation. In CRYPTO 2001, volume 2139 of LNCS, 2001.
HMP00. M. Hirt, U. Maurer, and B. Przydatek. Efficient multi-party computation. In ASIACRYPT 2000, volume 1976 of LNCS, 2000.
KW93. M. Karchmer and A. Wigderson. On span programs. In 8th IEEE Structure in Complexity Theory, pages 102–111, 1993.
RB89. T. Rabin and M. Ben-Or. Verifiable secret sharing and multiparty protocols with honest majority. In 21st ACM STOC, pages 73–85, 1989.

Secret Sharing Schemes on Access Structures with Intersection Number Equal to One

Jaume Martí-Farré and Carles Padró

Dept. Matemàtica Aplicada IV, Universitat Politècnica de Catalunya
C. Jordi Girona, 1-3, Mòdul C3, Campus Nord, 08034 Barcelona, Spain
{jaumem,matcpl}@mat.upc.es

Abstract. The characterization of ideal access structures and the search for bounds on the optimal information rate are two important problems in secret sharing. These problems are studied in this paper for access structures with intersection number equal to one, that is, access structures such that there is at most one participant in the intersection of any two minimal qualified subsets. Examples of such access structures are those defined by finite projective planes and those defined by graphs. In this work, ideal access structures with intersection number equal to one are completely characterized and bounds on the optimal information rate are provided for the non-ideal case. Keywords: Cryptography, secret sharing schemes, information rate, ideal schemes.

1 Introduction

A secret sharing scheme Σ is a method to distribute a secret value k ∈ K among a set of participants P in such a way that only the qualified subsets of P are able to reconstruct the value of k from their shares. Secret sharing was introduced by Blakley [1] and Shamir [15]. A comprehensive introduction to this topic can be found in [16]. A secret sharing scheme is said to be perfect if the non-qualified subsets cannot obtain any information about the value of the secret. We are going to consider only unconditionally secure perfect secret sharing schemes. The access structure of a secret sharing scheme is the family of qualified subsets, Γ ⊂ 2^P. In general, access structures are considered to be monotone, that is, any superset of a qualified subset must be qualified. Then, the access structure Γ is determined by the family of minimal qualified subsets, Γ0, which is called the basis of Γ. We assume that every participant belongs to at least one minimal qualified subset. The rank and the corank of Γ are, respectively, the maximum and the minimum number of participants in a minimal qualified subset. The intersection number of Γ is the maximum number of participants in the intersection of two minimal qualified subsets.
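The three parameters just defined can be computed mechanically from a basis. The following sketch (function and variable names are ours, not the paper's) evaluates them for the seven lines of the Fano plane, an access structure studied later in the paper:

```python
from itertools import combinations

def rank_corank_intersection(basis):
    """Rank, corank and intersection number of an access structure,
    computed from its basis Γ0 (the family of minimal qualified subsets)."""
    sizes = [len(A) for A in basis]
    inum = max((len(A & B) for A, B in combinations(basis, 2)), default=0)
    return max(sizes), min(sizes), inum

# Example: the seven lines of the Fano plane on points 1..7 have
# rank = corank = 3, and any two lines meet in exactly one point.
fano = [frozenset(s) for s in
        [{1,2,3}, {1,4,7}, {1,5,6}, {2,4,6}, {2,5,7}, {3,4,5}, {3,6,7}]]
print(rank_corank_intersection(fano))  # (3, 3, 1)
```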

This work was partially supported by the Spanish Ministerio de Ciencia y Tecnología under project TIC 2000-1044.

S. Cimato et al. (Eds.): SCN 2002, LNCS 2576, pp. 354–363, 2003. © Springer-Verlag Berlin Heidelberg 2003


The first works about secret sharing [1,15] considered only schemes with a (t, n)-threshold access structure, whose basis is formed by all subsets with exactly t participants from a set of n participants. Further works considered the problem of finding secret sharing schemes for more general access structures. Ito, Saito and Nishizeki [10] proved that there exists a secret sharing scheme for any access structure, and Brickell [5] introduced the vector space construction, which provides secret sharing schemes for a wide family of access structures, the vector space access structures. While in the threshold schemes proposed by Blakley [1] and Shamir [15] and in the vector space schemes given by Brickell [5] the shares have the same size as the secret, in the schemes constructed in [10] for general access structures the shares are, in general, much larger than the secret. Since the security of a system depends on the amount of information that must be kept secret, the size of the shares given to the participants is a key point in the design of secret sharing schemes. Therefore, one of the main parameters in secret sharing is the information rate ρ(Σ, Γ, K) of the scheme, which is defined as the ratio between the length (in bits) of the secret and the maximum length of the shares given to the participants. That is, ρ(Σ, Γ, K) = log |K| / max_{p∈P} log |S_p|, where S_p is the set of all possible values of the share s_p corresponding to the participant p. In a secret sharing scheme the length of any share is greater than or equal to the length of the secret, so the information rate cannot be greater than one. Secret sharing schemes with information rate equal to one are called ideal. We say that an access structure Γ ⊂ 2^P is an ideal access structure if there exists an ideal secret sharing scheme for Γ. For example, threshold access structures and the vector space ones are ideal.
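As an illustration of an ideal scheme, Shamir's (t, n)-threshold scheme over a prime field gives every participant a share in the same field K as the secret, so its information rate is log|K| / log|K| = 1. A minimal sketch (details of ours, not a transcription of [15]):

```python
import random

P = 2**31 - 1  # a prime; secrets and shares live in K = GF(P)

def share(secret, t, n):
    """Shamir (t, n)-threshold sharing: evaluate a random degree-(t-1)
    polynomial with constant term `secret` at x = 1, ..., n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]))  # 123456789
```

Each share is a single field element, matching the size of the secret; any t shares reconstruct, fewer reveal nothing.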
It is not possible in general to find an ideal secret sharing scheme for a given access structure Γ . So, we may try to find a secret sharing scheme for Γ with information rate as large as possible. The optimal information rate of an access structure Γ is defined by ρ∗ (Γ ) = sup(ρ(Σ, Γ, K)), where the supremum is taken over all possible sets of secrets K with | K | ≥ 2 and all secret sharing schemes Σ with access structure Γ and set of secrets K. Of course, the optimal information rate of an ideal access structure is equal to one. The above considerations lead to two problems that have received considerable attention: to characterize the ideal access structures, and to find bounds on the optimal information rate. Due to the difficulty of finding a general solution, those problems have been studied in several particular classes of access structures: access structures on sets of four [16] and five [12] participants, access structures defined by graphs [2,3,4,6,7,8,17], bipartite access structures [14] and the access structures with three or four minimal qualified subsets [13]. There exist remarkable coincidences in the results obtained for all these classes of access structures. Namely, the ideal access structures coincide with the vector space ones, and there is no access structure Γ whose optimal information rate is such that 2/3 < ρ∗ (Γ ) < 1. A natural question that arises at this point is to determine to which extent these results can be generalized.


In the present paper, we study those problems in another family of access structures: the access structures with intersection number equal to one. We prove that also in this family the ideal access structures coincide with the vector space ones and there is no access structure with optimal information rate between 2/3 and 1. Besides, we completely characterize the ideal access structures with intersection number equal to one and we provide some bounds on the optimal information rate for the non-ideal case. These results include those previously obtained for access structures defined by graphs. The access structure Γ = Γ G defined by a graph G with vertex set V (G) and edge set E(G) is the access structure on the set of participants P = V (G) having basis Γ0 = E(G). Observe that Γ has rank and corank equal to two and intersection number equal to one. Therefore, the family of access structures we consider in this paper can be seen as a generalization of those defined by graphs, which have received considerable attention in the past. The access structures associated to a finite projective plane are other interesting examples of access structures with intersection number equal to one. The set of participants is in this case the set of points of the plane, while the lines are the minimal qualified subsets. For instance, an access structure with rank and corank equal to three, seven participants and seven minimal qualified subsets is obtained from the Fano plane, the projective plane of order two. Our main result, Theorem 1, states that there are relatively few ideal access structures with intersection number equal to one. Namely, we prove that they are one of the following: the access structures defined by complete multipartite graphs, the one associated to the Fano plane, together with three other access structures related to it, and those defined by a star. The organization of the paper is as follows. 
The above-mentioned access structures are introduced in Sect. 2, where we prove that they are vector space access structures. Section 3 is devoted to characterizing the ideal access structures having intersection number equal to one by proving that they are precisely the access structures considered in Sect. 2. Finally, some bounds on the optimal information rate are presented in Sect. 4.

2 Some Vector Space Access Structures

An access structure Γ on a set of participants P is said to be a vector space access structure over a finite field K if there exist a vector space E over K and a map ψ : P ∪ {D} → E \ {0}, where D ∉ P is called the dealer, such that A ∈ Γ if and only if the vector ψ(D) can be expressed as a linear combination of the vectors in the set ψ(A) = {ψ(p) : p ∈ A}. For instance, the threshold access structures and those defined by a complete multipartite graph K_{n1,...,nr} are vector space access structures [16]. Any vector space access structure can be realized by an ideal scheme (see [5] or [16] for proofs). The purpose of this section is to point out some vector space access structures with intersection number equal to one: the access structures Γ S(p0) defined by a star (Proposition 1); the access structure Γ2 associated to the Fano plane (Proposition 2); and its related access structures Γ2,1, Γ2,2 and Γ2,3 (Propositions 3, 4 and 5). In fact, as we will prove later, these access structures together with those defined by a complete multipartite graph are the unique ideal access structures with intersection number equal to one. It is said that an access structure Γ on a set of participants P is a star access structure if there is a participant belonging to all minimal qualified subsets. That is, if there exists p0 ∈ P such that A ∩ A′ = {p0} for any two different minimal qualified subsets A, A′ ∈ Γ0. In such a case we denote Γ0 = S(p0) and Γ = Γ S(p0). Observe that the intersection number of a star access structure is equal to one.

Proposition 1. Let Γ S(p0) be a star access structure on the set of participants P. Then, Γ S(p0) is a vector space access structure.

Proof. Let Γ = Γ S(p0) be a star with basis Γ0 = S(p0) = {A1, . . . , Ar}. For any i = 1, . . . , r we select a participant pi ∈ Ai \ {p0} and we identify all participants in Ai \ {p0} with pi. We obtain in this way the access structure Γ1 on the set of participants P1 = {p0, p1, . . . , pr} having basis (Γ1)0 = {{p0, p1}, . . . , {p0, pr}}. Since (Γ1)0 = K_{1,r} is a complete multipartite graph, Γ1 is a vector space access structure and, hence, it is not difficult to prove that Γ is so. ⊓⊔

A finite projective plane of order n consists of n^2 + n + 1 points and n^2 + n + 1 lines with n + 1 points on each line, n + 1 lines through each point, each pair of lines meeting at one point, and dually each pair of points lying on one line. The finite projective plane of order n = 2 is called the Fano plane. See [9] for a complete introduction to finite projective planes. For any finite projective plane, we can consider its associated access structure Γn, whose set of participants P is the set of n^2 + n + 1 points of the plane and whose basis (Γn)0 consists of its n^2 + n + 1 lines.
Notice that Γn has intersection number equal to one and rank and corank equal to n + 1.

Proposition 2. Let Γ2 be the access structure associated to the Fano plane. That is, Γ2 is the access structure on the set P = {p1, p2, p3, p4, p5, p6, p7} of seven participants with basis (Γ2)0 = {{p1, p2, p3}, {p1, p4, p7}, {p1, p5, p6}, {p2, p4, p6}, {p2, p5, p7}, {p3, p4, p5}, {p3, p6, p7}}. Then, Γ2 is a vector space access structure.

Proof. Let K be a finite field of characteristic two and ψ : P ∪ {D} → K^4 the map defined by ψ(D) = (1, 0, 0, 0), ψ(p1) = (1, 0, 1, 0), ψ(p2) = (0, 1, 1, 0), ψ(p3) = (0, 1, 0, 0), ψ(p4) = (1, 1, 1, 1), ψ(p5) = (0, 0, 1, 1), ψ(p6) = (0, 0, 0, 1), and ψ(p7) = (1, 1, 0, 1). It is not hard to check that if A ⊂ P, then A ∈ Γ2 if and only if the vector ψ(D) can be expressed as a linear combination of the vectors in the set ψ(A) = {ψ(p) : p ∈ A}. So, Γ2 is a vector space access structure over any finite field of characteristic two. ⊓⊔

Finally, we introduce some access structures related to the Fano plane that are also vector space access structures with intersection number equal to one.
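The "not hard to check" claim of Proposition 2 can be verified by brute force over GF(2). The sketch below (encoding and names are ours) packs each vector of ψ into a 4-bit integer, so that addition over GF(2) is XOR, and confirms the "if and only if" over all 2^7 subsets of P:

```python
from itertools import combinations

D = 0b1000                                     # ψ(D) = (1,0,0,0)
PSI = {1: 0b1010, 2: 0b0110, 3: 0b0100, 4: 0b1111,
       5: 0b0011, 6: 0b0001, 7: 0b1101}        # ψ(p1), ..., ψ(p7)
FANO_LINES = [{1,2,3}, {1,4,7}, {1,5,6}, {2,4,6},
              {2,5,7}, {3,4,5}, {3,6,7}]       # the basis (Γ2)0

def spans_dealer(A):
    """Over GF(2), ψ(D) lies in the span of ψ(A) iff some subset of the
    vectors XORs to ψ(D)."""
    sums = {0}
    for p in A:
        sums |= {s ^ PSI[p] for s in sums}
    return D in sums

def qualified(A):
    """A is qualified iff it contains a line of the Fano plane."""
    return any(line <= set(A) for line in FANO_LINES)

all_match = all(spans_dealer(A) == qualified(A)
                for r in range(8) for A in combinations(range(1, 8), r))
print(all_match)  # True
```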


Proposition 3. Let Γ2,1 be the access structure on the set of six participants P = {p1, p2, p3, p4, p5, p6} with basis (Γ2,1)0 = {{p1, p2, p3}, {p1, p5, p6}, {p2, p4, p6}, {p3, p4, p5}}. Then, Γ2,1 is a vector space access structure.

Proof. Let Γ be an access structure on a set of participants P and let p0 ∈ P. On the set of participants P \ {p0} we define the access structure Γ | p0 as Γ | p0 = {A ⊂ P \ {p0} such that A ∈ Γ}. It is easy to show that if Γ is a K-vector space access structure, then Γ | p0 is so. In our case we have that Γ2,1 = Γ2 | p7. Hence, Γ2,1 is a vector space access structure. ⊓⊔

Proposition 4. Let Γ2,2 be the access structure on the set of six participants P = {p1, p2, p3, p4, p5, p6} with basis (Γ2,2)0 = {{p1, p2, p3}, {p1, p5, p6}, {p2, p4, p6}, {p3, p4, p5}, {p1, p4}, {p2, p5}, {p3, p6}}. Then, Γ2,2 is a vector space access structure.

Proof. It is well known that an access structure Γ is a vector space access structure if and only if its dual access structure Γ∗ is so [11]. Therefore, Γ2,2 = (Γ2,1)∗ is a vector space access structure. ⊓⊔

Proposition 5. Let Γ2,3 be the access structure on the set of five participants P = {p1, p2, p3, p4, p5} with basis (Γ2,3)0 = {{p1, p2, p3}, {p3, p4, p5}, {p1, p4}, {p2, p5}}. Then, Γ2,3 is a vector space access structure.

Proof. This proof works as the one of Proposition 3. Namely, since Γ2,2 is a vector space access structure, it follows that Γ2,3 = Γ2,2 | p6 is so. ⊓⊔
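The restriction Γ | p0 used in these proofs is easy to compute from a basis: a qualified set avoiding p0 contains a minimal qualified subset that also avoids p0, and basis elements form an antichain. The sketch below (function name ours) checks that deleting p7 from the Fano structure leaves exactly the basis (Γ2,1)0 of Proposition 3:

```python
FANO_LINES = [frozenset(s) for s in
              [{1,2,3}, {1,4,7}, {1,5,6}, {2,4,6},
               {2,5,7}, {3,4,5}, {3,6,7}]]

def restrict_basis(basis, p0):
    """Basis of Γ | p0: since the minimal qualified subsets form an
    antichain, those avoiding p0 are exactly the basis of the restriction."""
    return {A for A in basis if p0 not in A}

# The four lines avoiding p7 are precisely (Γ2,1)0.
print(sorted(sorted(A) for A in restrict_basis(FANO_LINES, 7)))
# [[1, 2, 3], [1, 5, 6], [2, 4, 6], [3, 4, 5]]
```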

3 Characterization of Ideal Access Structures

The aim of this section is to prove Theorem 1, which provides a complete characterization of ideal access structures with intersection number equal to one. Besides, this theorem states that ideal access structures coincide with the vector space ones and with those having optimal information rate greater than 2/3. Therefore, there does not exist in this family any access structure with optimal information rate between 2/3 and 1. Let us introduce first some definitions. Let Γ be an access structure on a set of participants P. We say that Γ is connected if for any pair of participants p, q ∈ P there exist A1, . . . , Aℓ ∈ Γ0 such that p ∈ A1, q ∈ Aℓ, and Ai ∩ Ai+1 ≠ ∅ for any 1 ≤ i ≤ ℓ − 1. For any subset Q ⊂ P, the access structure induced by Γ on the subset Q is defined by Γ(Q) = {A ∈ Γ : A ⊂ Q}. It is clear that, for any access structure Γ on the set of participants P, there exists a unique partition P = P1 ∪ · · · ∪ Pr such that the induced access structures Γ(P1), . . . , Γ(Pr) are connected and Γ = Γ(P1) ∪ · · · ∪ Γ(Pr). In this situation we say that Γ(P1), . . . , Γ(Pr) are the connected components of Γ. Notice that ρ∗(Γ) ≤ min{ρ∗(Γ(P1)), . . . , ρ∗(Γ(Pr))}, and it is not difficult to prove that, if Γ(P1), . . . , Γ(Pr) are vector space access structures over a finite field K, then the access structure Γ is so.


Theorem 1. Let Γ be an access structure on a set of participants P with intersection number equal to one. Then, the following conditions are equivalent:

1. Γ is a vector space access structure.
2. Γ is an ideal access structure.
3. ρ∗(Γ) > 2/3.
4. Every connected component of Γ is either an access structure defined by a complete multipartite graph Γ K_{n1,...,nr}, or a star Γ S(p0), or the access structure associated to the Fano plane Γ2, or one of its related access structures Γ2,1, Γ2,2 or Γ2,3.

The rest of this section is devoted to proving this theorem. A vector space access structure is ideal and, hence, its optimal information rate is equal to one. Therefore we have that (1) implies (2) and that (2) implies (3). Furthermore, from the results in Sect. 2 it follows that (4) implies (1). So, to finish the proof of Theorem 1 we only have to prove that (3) implies (4). Namely, we must demonstrate that if Γ is a connected access structure on a set of participants P with intersection number equal to one and optimal information rate ρ∗(Γ) > 2/3, then Γ is one of the following access structures: a complete multipartite graph Γ K_{n1,...,nr}, a star Γ S(p0), the access structure associated to the Fano plane Γ2, or one of its related access structures Γ2,1, Γ2,2 or Γ2,3. In order to prove it we distinguish two cases. The first one, which is solved by Proposition 6, deals with access structures with corank greater than two. The second one considers access structures with corank equal to two and is proved by Proposition 7. Some lemmas are needed to prove those propositions. These lemmas determine some forbidden situations in an ideal access structure with intersection number equal to one. In order to do it we use the independent sequence method, which was introduced by Blundo, De Santis, De Simone and Vaccaro in [2, Theorem 3.8] and was generalized by Padró and Sáez in [14, Theorem 2.1]. The independent sequence method works as follows. Let Γ be an access structure on a set of participants P. Let ∅ ≠ B1 ⊂ · · · ⊂ Bm ∉ Γ be a sequence of subsets of P that is made independent by a subset A ⊂ P, that is to say, there exist X1, . . . , Xm ⊂ A such that Bi ∪ Xi ∈ Γ and Bi−1 ∪ Xi ∉ Γ for any i = 1, . . . , m, where B0 is the empty set. Then, ρ∗(Γ) ≤ |A|/(m + 1) if A ∈ Γ, while ρ∗(Γ) ≤ |A|/m whenever A ∉ Γ. Let us start with the case that the corank of Γ is greater than two. To solve this case we need the next three lemmas.

Lemma 1.
Let Γ be an access structure on a set of participants P having basis Γ0, with intersection number equal to one, corank(Γ) ≥ 3 and optimal information rate ρ∗(Γ) > 2/3. Let us assume that there exist three minimal qualified subsets A1, A2, A3 ∈ Γ0 and three different participants p12, p23, p13 ∈ P such that Ai ∩ Aj = {pij} if 1 ≤ i < j ≤ 3. Then, (A1 ∪ A2 ∪ A3) \ {p12, p23, p13} ∈ Γ.

Proof. We are going to prove first that (A1 ∪ A2 ∪ A3) \ {p12, p23} ∈ Γ. Let us suppose that it is false. In this case, we can consider the subsets B1 = (A2 ∪ {p13}) \ {p12, p23}, B2 = (A1 ∪ A2) \ {p12, p23} and B3 = (A1 ∪ A2 ∪ A3) \ {p12, p23}. It is not difficult to check that the subsets B1 ∪ {p12, p23}, B2 ∪ {p12} and B3 ∪ {p23} are qualified. On the other hand, B1 ∪ {p12} = (A2 ∪ {p13}) \ {p23} and B2 ∪ {p23} = (A1 ∪ A2) \ {p12} are not qualified. In effect, since the intersection number of Γ is equal to one, any minimal qualified subset C ∈ Γ0 such that C ⊂ (A2 ∪ {p13}) \ {p23} or C ⊂ (A1 ∪ A2) \ {p12} has at most two elements, which is a contradiction with corank(Γ) ≥ 3. Therefore, the sequence ∅ ≠ B1 ⊂ B2 ⊂ B3 ∉ Γ is made independent by the set A = {p12, p23} ∉ Γ and, hence, ρ∗(Γ) ≤ 2/3, a contradiction. Let us prove now that (A1 ∪ A2 ∪ A3) \ {p12, p23, p13} ∈ Γ. Let A ∈ Γ0 be a minimal qualified subset such that A ⊂ (A1 ∪ A2 ∪ A3) \ {p12, p23}. We only have to prove that p13 ∉ A. If p13 ∈ A, then A has at most two elements, because A = (A ∩ A1) ∪ (A ∩ A2) ∪ (A ∩ A3) and the intersection number of Γ is equal to one. But this is a contradiction with corank(Γ) ≥ 3. ⊓⊔

Lemma 2. Let Γ be an access structure on a set of participants P having basis Γ0, with intersection number equal to one, corank(Γ) ≥ 3 and optimal information rate ρ∗(Γ) > 2/3. Let us assume that there exist three minimal qualified subsets A1, A2, A3 ∈ Γ0 and three different participants p12, p23, p13 ∈ P such that Ai ∩ Aj = {pij} if 1 ≤ i < j ≤ 3. Then, |Ai| = 3 for every i = 1, 2, 3.

Proof. By Lemma 1, there exists a minimal qualified subset A ∈ Γ0 such that A ⊂ (A1 ∪ A2 ∪ A3) \ {p12, p23, p13}. Since Γ has intersection number equal to one and its corank is at least three, A = {α1, α2, α3} where A ∩ Ai = {αi}. We can apply now Lemma 1 to the minimal qualified subsets A1, A2 and A. Then, there exists a minimal qualified subset B ∈ Γ0 such that B ⊂ (A1 ∪ A2 ∪ A) \ {p12, α1, α2}. As before, we have that B = {β1, β2, α3}, where B ∩ A1 = {β1}, B ∩ A2 = {β2} and B ∩ A = {α3}.
We apply now Lemma 1 to the minimal qualified subsets A1, A and B and we see that there exists a minimal qualified subset C ∈ Γ0 such that C ⊂ (A1 ∪ A ∪ B) \ {α1, α3, β1} and, then, |C| = 3. Observe that C ∩ A = {α2} and C ∩ B = {β2} and, hence, α2, β2 ∈ A2 ∩ C. Since α2 ≠ β2 and the intersection number of Γ is equal to one, we have that A2 = C and, then, A2 has exactly three elements. By symmetry, |A1| = |A3| = 3. ⊓⊔

Lemma 3. Let Γ be an access structure on a set of participants P having basis Γ0, with intersection number equal to one, corank(Γ) ≥ 3 and optimal information rate ρ∗(Γ) > 2/3. Let us assume that there exist three different minimal qualified subsets A1, A2, A3 ∈ Γ0 such that A1 ∩ A2 ≠ ∅, A3 ∩ (A1 ∪ A2) ≠ ∅ and A1 ∩ A3 ≠ A2 ∩ A3. Then, Ai ∩ A3 ≠ ∅ for every i = 1, 2.

Proof. Since A3 ∩ (A1 ∪ A2) ≠ ∅, we can assume that A1 ∩ A3 ≠ ∅. We have to prove that A2 ∩ A3 ≠ ∅. Let us consider {a} = A1 ∩ A2 and {c} = A1 ∩ A3. Since A1 ∩ A3 ≠ A2 ∩ A3 and Γ has intersection number equal to one, we have that a ≠ c. We have to prove that there exists a participant b ≠ a, c such that {b} = A2 ∩ A3.


We are going to prove first that (A1 ∪ A2 ∪ A3) \ {a, c} ∈ Γ. Let us suppose that this is false. In this case, we consider the subsets B1 = A1 \ {a, c}, B2 = (A1 ∪ A2) \ {a, c} and B3 = (A1 ∪ A2 ∪ A3) \ {a, c}. It is not difficult to check that the set {a, c} makes independent the sequence ∅ ≠ B1 ⊂ B2 ⊂ B3 ∉ Γ by taking X1 = {a, c}, X2 = {a} and X3 = {c}. Therefore, ρ∗(Γ) ≤ 2/3, a contradiction. Let A4 ∈ Γ0 be a minimal qualified subset such that A4 ⊂ (A1 ∪ A2 ∪ A3) \ {a, c}. Since corank(Γ) ≥ 3 and the intersection number of Γ is equal to one, we have that A4 = {α1, α2, α3}, where αi ∈ Ai \ (Aj ∪ Ak) if {i, j, k} = {1, 2, 3}. Observe that we can apply Lemma 2 to the subsets A1, A3 and A4 and we get |A1| = |A3| = 3. Therefore A1 = {a, c, α1} and A3 = {c, α3, b} for some participant b. From Lemma 1, {a, α2, b} = (A1 ∪ A3 ∪ A4) \ {c, α1, α3} ∈ Γ. Since a, α2 ∈ A2, we have that A2 = {a, α2, b} and, hence, {b} = A2 ∩ A3. ⊓⊔

Proposition 6. Let Γ be a connected access structure on a set of participants P with intersection number equal to one, corank(Γ) ≥ 3, and optimal information rate ρ∗(Γ) > 2/3. Then, Γ is either a star Γ S(p0), or the access structure associated to the Fano plane Γ2, or the access structure Γ2,1.

Proof. Let us suppose that Γ is not a star. Then, there exist three minimal qualified subsets A1, A2, A3 ∈ Γ0 such that A1 ∩ A2 ≠ ∅, A1 ∩ A3 ≠ ∅ and A1 ∩ A2 ≠ A1 ∩ A3. From Lemma 3, we have that A2 ∩ A3 ≠ ∅. Applying Lemmas 1 and 2, it follows that A1 = {p1, p2, p3}, A2 = {p3, p4, p5} and A3 = {p1, p5, p6} and, besides, A4 = {p2, p4, p6} ∈ Γ0. If Γ0 = {A1, A2, A3, A4}, then Γ = Γ2,1. The proof is concluded by checking that Γ = Γ2, the Fano plane, if Γ0 ≠ {A1, A2, A3, A4}. Let us suppose that there exists a fifth minimal qualified subset A5 ∈ Γ0. Since Γ is connected, we can suppose that p1 ∈ A5. Observe that p2, p3, p5, p6 ∉ A5 because the intersection number of Γ is equal to one. We can apply Lemmas 3 and 2 to A1, A2 and A5 and we get that A5 ∩ A2 ≠ ∅ and |A5| = 3. Therefore, there exists a participant p7 ≠ p1, . . . , p6 such that A5 = {p1, p4, p7}. Besides, from Lemma 1, A6 = {p2, p5, p7} ∈ Γ0. Let us apply now Lemma 1 to the subsets A1, A3 and A6 to obtain that A7 = {p3, p6, p7} ∈ Γ0. It is not difficult to check that {A1, . . . , A7} = (Γ2)0, the basis of the access structure associated to the Fano plane. We finish by proving that Γ0 = {A1, . . . , A7}. In effect, let us suppose that there exists another minimal qualified subset A8 ∈ Γ0. By symmetry, we can suppose that p1 ∈ A8. Then, since the intersection number of Γ is equal to one, pi ∉ A8 for any i = 2, . . . , 7. A contradiction is obtained by applying Lemma 3 to A1, A2 and A8. ⊓⊔

The characterization of the ideal access structures with intersection number equal to one is concluded by the following proposition.

Proposition 7. Let Γ be a connected access structure on a set of participants P with intersection number equal to one, corank(Γ) = 2, and optimal information rate ρ∗(Γ) > 2/3. Then, Γ is either a complete multipartite graph Γ K_{n1,...,nr}, or a star Γ S(p0), or the access structure Γ2,2, or the access structure Γ2,3.


The proof is obtained by a method similar to the one used in the previous case: it is based on some lemmas determining forbidden situations in such access structures. The complete proof will appear in the full version of the paper.
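The independent sequence bounds invoked throughout this section can be checked mechanically. The sketch below (function names and the path example are ours, not the paper's) verifies the conditions of the method and returns the resulting bound; it recovers the classical upper bound ρ∗(Γ) ≤ 2/3 for the access structure defined by the path on four vertices:

```python
def is_qualified(S, basis):
    """A set is qualified iff it contains a minimal qualified subset."""
    return any(A <= S for A in basis)

def independent_sequence_bound(basis, Bs, A, Xs):
    """Verify that A makes the sequence B1 ⊂ ... ⊂ Bm (with Bm ∉ Γ)
    independent via X1,...,Xm, then return the upper bound on ρ*(Γ)."""
    m = len(Bs)
    assert not is_qualified(Bs[-1], basis)             # Bm ∉ Γ
    for i in range(m):
        prev = Bs[i - 1] if i > 0 else frozenset()     # B0 = ∅
        assert is_qualified(Bs[i] | Xs[i], basis)      # Bi ∪ Xi ∈ Γ
        assert not is_qualified(prev | Xs[i], basis)   # Bi−1 ∪ Xi ∉ Γ
    return len(A) / (m + 1) if is_qualified(A, basis) else len(A) / m

# Access structure of the path p1–p2–p3–p4 (basis = the three edges).
path = [frozenset(e) for e in [{1, 2}, {2, 3}, {3, 4}]]
bound = independent_sequence_bound(
    path,
    Bs=[frozenset({1}), frozenset({1, 4})],
    A=frozenset({2, 3}),
    Xs=[frozenset({2}), frozenset({3})])
print(bound)  # 0.666...: ρ*(Γ) ≤ 2/3 for this graph
```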

4 Bounds on the Optimal Information Rate

In this section we present bounds on the optimal information rate for non-ideal access structures with intersection number equal to one. Our result generalizes the one for access structures defined by graphs. We need the following notation. Let Γ be an access structure on a set of participants P. We define the degree of a participant p in the access structure Γ as degΓ(p) = |{A ∈ Γ0 : p ∈ A}|.

Proposition 8. Let Γ be an access structure with intersection number equal to one on a set of participants P. Assume that Γ is not realizable by an ideal secret sharing scheme. Then,

2/3 ≥ ρ∗(Γ) ≥ corank(Γ) / ((rank(Γ) − 1) max{degΓ(p) : p ∈ P} + 1).

Proof. We assume that Γ is not realizable by an ideal secret sharing scheme. Hence, applying Theorem 1, it follows that ρ∗(Γ) ≤ 2/3. Next we prove the lower bound by using a suitable decomposition of Γ. Let us denote P = {p1, . . . , pn}. For every i = 1, . . . , n, let us consider Γ0,i = {A ∈ Γ0 : pi ∈ A} and Pi = ∪_{A∈Γ0,i} A. It is easy to check that Γ0,i is a star S(pi) for any i = 1, . . . , n. Then, from Proposition 1, the access structure Γi on Pi having basis Γ0,i is a vector space access structure. It is clear that the substructures Γ0,i form a decomposition of Γ, that is to say, Γ0 = Γ0,1 ∪ · · · ∪ Γ0,n. Hence, from [17, Theorem 2.1], it follows that ρ∗(Γ) ≥ min{λA : A ∈ Γ0} / max{rp : p ∈ P}, where λA = |{i ∈ {1, . . . , n} : A ∈ Γ0,i}| and rp = |{i ∈ {1, . . . , n} : p ∈ Pi}|. It is not difficult to check that λA = |A| for every A ∈ Γ0, and that rp ≤ (rank(Γ) − 1) degΓ(p) + 1 for every participant p ∈ P. Therefore, the lower bound follows. ⊓⊔
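As a sanity check (the example and names below are ours), the two sides of the bound in Proposition 8 can be evaluated for the path on four vertices, a classical non-ideal graph access structure; both sides then meet at 2/3:

```python
def proposition8_bounds(basis):
    """Evaluate both sides of the Proposition 8 bound from a basis Γ0:
    the lower bound corank/((rank − 1)·maxdeg + 1) and the 2/3 upper bound."""
    participants = set().union(*basis)
    rank = max(len(A) for A in basis)
    corank = min(len(A) for A in basis)
    maxdeg = max(sum(1 for A in basis if p in A) for p in participants)
    return corank / ((rank - 1) * maxdeg + 1), 2 / 3

# Path p1–p2–p3–p4: rank = corank = 2 and max degree = 2, so the lower
# bound is 2 / ((2−1)·2 + 1) = 2/3, matching the upper bound.
path = [frozenset(e) for e in [{1, 2}, {2, 3}, {3, 4}]]
print(proposition8_bounds(path))
```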

References

1. Blakley, G.R.: Safeguarding cryptographic keys. AFIPS Conference Proceedings 48 (1979) 313–317
2. Blundo, C., De Santis, A., De Simone, R., Vaccaro, U.: Tight bounds on the information rate of secret sharing schemes. Des. Codes Cryptogr. 11 (1997) 107–122
3. Blundo, C., De Santis, A., Gargano, L., Vaccaro, U.: On the information rate of secret sharing schemes. Advances in Cryptology — CRYPTO '92. Lecture Notes in Comput. Sci. 740, 148–167
4. Blundo, C., De Santis, A., Stinson, D.R., Vaccaro, U.: Graph decompositions and secret sharing schemes. J. Cryptology 8 (1995) 39–64
5. Brickell, E.F.: Some ideal secret sharing schemes. J. Combin. Math. and Combin. Comput. 9 (1989) 105–113


6. Brickell, E.F., Davenport, D.M.: On the classification of ideal secret sharing schemes. J. Cryptology 4 (1991) 123–134
7. Brickell, E.F., Stinson, D.R.: Some improved bounds on the information rate of perfect secret sharing schemes. J. Cryptology 5 (1992) 153–166
8. Capocelli, R.M., De Santis, A., Gargano, L., Vaccaro, U.: On the size of shares of secret sharing schemes. J. Cryptology 6 (1993) 157–168
9. Dembowski, P.: Finite Geometries. Reprint of the 1968 original. Classics in Mathematics. Springer-Verlag, Berlin (1997)
10. Ito, M., Saito, A., Nishizeki, T.: Secret sharing scheme realizing any access structure. Proc. IEEE Globecom '87 (1987) 99–102
11. Jackson, W.-A., Martin, K.M.: Geometric secret sharing schemes and their duals. Des. Codes Cryptogr. 4 (1994) 83–95
12. Jackson, W.-A., Martin, K.M.: Perfect secret sharing schemes on five participants. Des. Codes Cryptogr. 9 (1996) 267–286
13. Martí-Farré, J., Padró, C.: Secret sharing schemes with three or four minimal qualified subsets. Cryptology ePrint Archive (2002) Report 2002/050, http://eprint.iacr.org/
14. Padró, C., Sáez, G.: Secret sharing schemes with bipartite access structure. IEEE Trans. Inform. Theory 46 (2000) 2596–2604
15. Shamir, A.: How to share a secret. Commun. of the ACM 22 (1979) 612–613
16. Stinson, D.R.: An explication of secret sharing schemes. Des. Codes Cryptogr. 2 (1992) 357–390
17. Stinson, D.R.: Decomposition constructions for secret-sharing schemes. IEEE Trans. Inform. Theory 40 (1994) 118–125
