Public-Key Cryptography – PKC 2020: 23rd IACR International Conference on Practice and Theory of Public-Key Cryptography, Edinburgh, UK, May 4–7, 2020, Proceedings, Part II (Security and Cryptology)

The two-volume set LNCS 12110 and 12111 constitutes the refereed proceedings of the 23rd IACR International Conference on Practice and Theory of Public-Key Cryptography, PKC 2020, held in Edinburgh, UK, in May 2020.


Table of contents:
Preface
Organization
Contents – Part II
Contents – Part I
Lattice-Based Cryptography
The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier
1 Introduction
1.1 Contributions
1.2 Working Heuristics
2 Preliminaries
2.1 Notation
2.2 Spherical Geometry
2.3 Lattices
2.4 Solving CVPP with the Randomized Slicer
3 The Random Walk Model
4 Numerical Approximations
4.1 Discretization
4.2 Convex Optimization
4.3 Numerical Results
5 An Exact Solution for the Randomized Slicer
6 Memoryless Nearest Neighbour Searching
7 Bounded Distance Decoding with Preprocessing
8 Application to Graph-Based NNS
References
Tweaking the Asymmetry of Asymmetric-Key Cryptography on Lattices: KEMs and Signatures of Smaller Sizes
1 Introduction
1.1 Comparison with NIST Round2 Lattice-Based PKEs/KEMs
1.2 Comparison with NIST Round2 Lattice-Based Signatures
1.3 Organizations
2 Preliminaries
2.1 Notation
2.2 Definitions
2.3 High/Low Order Bits and Hints
3 An Improved KEM from AMLWE
3.1 Design Rationale
3.2 The Construction
3.3 Provable Security
3.4 Choices of Parameters
4 An Improved Signature from AMLWE and AMSIS
4.1 Design Rationale
4.2 The Construction
4.3 Provable Security
4.4 Choices of Parameters
5 Known Attacks Against AMLWE and AMSIS
5.1 Concrete Security of KEM
5.2 Concrete Security of SIG
A Definitions of Hard Problems
References
MPSign: A Signature from Small-Secret Middle-Product Learning with Errors
1 Introduction
1.1 Contributions
1.2 Comparison with Prior Works
2 Preliminaries
2.1 Polynomials and Matrices
2.2 Gaussian Distributions
2.3 Polynomial and Middle-Product Learning with Errors
2.4 Cryptographic Definitions
3 Hardness of Middle-Product LWE with Small Secrets
4 An Attack on Inhomogeneous PSIS with Small Secrets
5 A Signature Scheme Based on Small Secrets MPLWE
5.1 The Identification Scheme
5.2 The Signature Scheme
6 Concrete Parameters
7 Implementation
References
Proofs and Arguments II
Witness Indistinguishability for Any Single-Round Argument with Applications to Access Control
1 Introduction
1.1 Our Witness Indistinguishability Transformation
1.2 Application: Succinct Single-Round Access Control
1.3 Technical Overview of Our WI Transformation
2 Witness Indistinguishability for Any Argument System
2.1 Preliminaries
2.2 Private Remote Evaluation
2.3 Making Single-Round Protocols Witness Indistinguishable
3 Succinct Single-Round Access Control Scheme
3.1 Delegation for Batch-NP Families
3.2 Known Batch Delegation Schemes
3.3 Our Scheme
3.4 Proof of Theorem 3.7 for Our Construction
References
Boosting Verifiable Computation on Encrypted Data
1 Introduction
1.1 Ensuring Correctness of Privacy-Preserving Computation
1.2 Our Contributions
1.3 Organization
2 Notation and Definitions
2.1 Commitment Schemes
2.2 SNARKs – Succinct Non-Interactive Arguments of Knowledge
3 Proof Systems for Arithmetic Function Evaluation over Quotient Polynomial Rings
3.1 Formal Description of Our Rq- Scheme
3.2 Security Analysis
4 Applications to Computing on Encrypted Data
4.1 Verifiable Computation
4.2 Our VC Scheme
4.3 Preserving Privacy of the Inputs Against the Verifier
5 Bivariate Polynomial Commitment
5.1 Computational Assumptions
5.2 Knowledge Commitment for Bivariate Polynomials
6 CaP-SNARK for Bivariate Polynomial Evaluation
6.1 Relations for Bivariate Polynomial Partial Evaluation
6.2 Our BivPE- Scheme for Bivariate Polynomial Evaluation
7 CaP-SNARK for Simultaneous Evaluations
7.1 Commitment for Multiple Univariate Polynomials
7.2 Succinct Proof of Multiple Evaluations in a Point k
7.3 Efficiency and Comparison
8 Security Analysis of Our CaP BivPE-
References
Isogeny-Based Cryptography
Lossy CSI-FiSh: Efficient Signature Scheme with Tight Reduction to Decisional CSIDH-512
1 Introduction
1.1 Background
1.2 Our Contribution
2 Preliminaries
2.1 Identification Protocols
2.2 Digital Signature Schemes
2.3 Pseudorandom Functions
2.4 Fiat-Shamir Transformation
2.5 Class Group Actions and Hardness Assumption
3 Base Lossy Identification Protocol from CSIDH-512
3.1 Hardness Assumption: Decisional CSIDH
3.2 Construction of Base Lossy Identification Protocol
3.3 Security of Base Lossy Identification Protocol IDBasels
3.4 Lossy Soundness Amplification of IDBasels
4 Optimized Lossy Identification Protocol from CSIDH-512
4.1 Hardness Assumption: Fixed-Curve Multi-decisional CSIDH
4.2 Enlarging Challenge Space of Base Lossy Identification Protocol
4.3 (Almost) Doubling Challenge Space of Lossy Identification Scheme IDEnChls
4.4 Lossy Soundness Amplification of IDDenChls
5 Lossy CSI-FiSh: Tightly Secure Signature from CSIDH-512
5.1 Construction of Lossy CSI-FiSh
5.2 Instantiations and Comparison to CSI-FiSh
6 Conclusions and Open Problems
References
Threshold Schemes from Isogeny Assumptions
1 Introduction
2 Preliminaries
2.1 Shamir's Secret Sharing and Threshold Cryptosystems
2.2 Hard Homogeneous Spaces
3 Threshold Schemes from HHS
3.1 Threshold Group Action
3.2 Threshold HHS ElGamal Decryption
3.3 Threshold Signatures
4 Instantiations Based on Isogeny Graphs
4.1 Supersingular Complex Multiplication
4.2 CSIDH and CSI-FiSh
4.3 Instantiation of the Threshold Schemes
5 Conclusion
References
Multiparty Protocols
Topology-Hiding Computation for Networks with Unknown Delays
1 Introduction
1.1 Contributions
1.2 Related Work
2 The Probabilistic Unknown Delay Model
2.1 Impossibility of Stronger Models
2.2 Adversary
2.3 Communication Network and Clocks
2.4 Additional Related Work
3 Protocols for Restricted Classes of Graphs
3.1 Synchronous THC from Random Walks
3.2 Protocol for Cycles
3.3 Protocol for Trees
4 Protocol for General Graphs
4.1 Preprocessing
4.2 Computation
4.3 Computing the Eulerian Cycle
A Adversarially-Controlled Delays Leak Topology
A.1 Adversarially-Controlled Delay Indistinguishability-based Security Definition
A.2 Proof that Adversarially-Controlled Delays Leak Topology
B PKCR* Encryption
B.1 Construction of PKCR* Based on DDH
C The Function Executed by the Hardware Boxes
References
Sublinear-Round Byzantine Agreement Under Corrupt Majority
1 Introduction
1.1 Our Results and Contributions
2 Preliminaries
2.1 Protocol Execution Model
2.2 Byzantine Agreement
3 Technical Roadmap: Nearly Round-Optimal BA for Corrupt Majority
3.1 Warmup: Any Constant Fraction of Static Corruption
3.2 Achieving Adaptive Security and Removing the Leader Election Oracle
3.3 Organization of the Subsequent Formal Sections
4 Formal Description of F_mine-Hybrid Protocol
4.1 Ideal Functionality F_mine for Random Eligibility Determination
4.2 Formal Protocol in the F_mine-Hybrid World
4.3 Analysis in the F_mine-Hybrid World
5 Removing the Idealized Functionality F_mine
5.1 Preliminary: Adaptively Secure Non-interactive Zero-Knowledge Proofs
5.2 Adaptively Secure Non-interactive Commitment Scheme
5.3 Removing F_mine with Cryptography
References
Bandwidth-Efficient Threshold EC-DSA
1 Introduction
2 Preliminaries
2.1 The Elliptic Curve Digital Signature Algorithm
2.2 Building Blocks from Class Groups
2.3 Algorithmic Assumptions
3 Threshold EC-DSA Protocol
3.1 ZKAoK Ensuring a CL Ciphertext Is Well Formed
3.2 Interactive Set Up for the CL Encryption Scheme
3.3 Resulting Threshold EC-DSA Protocol
4 Security
4.1 Simulating the Key Generation Protocol
4.2 Simulating the Signature Generation
4.3 The Simulation of a Semi-correct Execution
4.4 Non Semi-correct Executions
4.5 Concluding the Proof
5 Further Improvements
5.1 An Improved ZKPoK Which Kills Low Order Elements
5.2 Assuming a Standardised Group
6 Efficiency Comparisons
References
Secure Computation and Related Primitives
Blazing Fast OT for Three-Round UC OT Extension
1 Introduction
1.1 Our Contributions
1.2 More Discussion on Related Works
2 Preliminaries
3 Technical Overview
3.1 Overview of KOS
3.2 Relaxation in the OT Functionality
3.3 Usage in KOS OT Extension
3.4 Optimized OT Protocol in the Observable RO Model
3.5 Circumventing the Impossibility Result of [GMMM18]
4 Weakening the Oblivious Transfer Functionality
5 Oblivious Transfer Extension Using OT
5.1 Security Proof
5.2 Efficiency
6 Implementing Instances of FSF-rOT
6.1 Security Proof
6.2 Efficiency
7 Implementation and Evaluation
References
Going Beyond Dual Execution: MPC for Functions with Efficient Verification
1 Introduction
1.1 Results in the 1-bit Leakage Model
1.2 Extending Dual Execution to Other Protocols
2 Preliminaries
2.1 Verifiable Secret Sharing (VSS)
2.2 Secure Computation with 1-bit Leakage
2.3 Garbled Circuits
2.4 The [BeaverMR90] Garbling
3 Dual Execution with Efficient Verification
4 Additively Secure Protocols with Program Checkers
4.1 Additive Attacks and AMD Circuits
4.2 Additive Security of BMR Distributed Garbling
4.3 Compiling Additively Secure Protocols
5 Perfect Matching Protocol Secure up to Additive Attacks
References
MonZ2ka: Fast Maliciously Secure Two Party Computation on Z_{2^k}
1 Introduction
2 Preliminaries
2.1 Notation
2.2 Linearly-Homomorphic Encryption for Messages in Z_{2^n}
2.3 Commitments
2.4 Security Model
2.5 Value-Representation in SPDZ2k
3 Offline Phase
3.1 On the Impossibility of Enhanced-CPA Security in Z_{2^n}: Comparing with Overdrive Offline Phase
4 Joye-Libert Cryptosystem and Companion Protocols
4.1 Zero-Knowledge Proof of Correct Multiplication
4.2 Zero-Knowledge Proof of Correct Multiplication of Two Committed (or Encrypted) Values
5 Efficiency Analysis
References
Post-Quantum Primitives
Generic Authenticated Key Exchange in the Quantum Random Oracle Model
1 Introduction
1.1 Our Contributions
2 Preliminaries
2.1 Public-Key Encryption
2.2 Key Encapsulation
2.3 Quantum Computation
3 The FO Transformation: QROM Security with Correctness Errors
3.1 Modularisation of TPunc
3.2 Transformation FOm and Correctness Errors
3.3 CCA Security Without Disjoint Simulatability
4 Two-Message Authenticated Key Exchange
5 Transformation from PKE to AKE
5.1 IND-StAA Security Without Disjoint Simulatability
References
Threshold Ring Signatures: New Definitions and Post-quantum Security
1 Introduction
1.1 Limitations of Previous Work
1.2 Our Contribution
2 Related Work
3 Preliminaries
3.1 Threshold Ring Signatures in Presence of Active Adversaries
4 Post-quantum Secure Threshold Ring Signatures
5 Post-quantum Security of TRS
5.1 Proofs
6 Trapdoor Commitments from OWFs
6.1 On the Notion of Binding in Presence of Quantum Adversaries
References
Tight and Optimal Reductions for Signatures Based on Average Trapdoor Preimage Sampleable Functions and Applications to Code-Based Signatures
1 Introduction
2 Preliminaries
3 Digital Signatures and EUF-CMA Security Model in a Classical/Quantum Setting
4 Family of ATPSF
4.1 Constructing a Signature Scheme from ATPSF
5 One-Wayness, Collision Resistance and the Claw with Random Function Problem
5.1 Definitions
5.2 Relating These Different Advantages
6 Tight Reduction to the Claw Problem, with ATPSF
6.1 Proof of Our Main Theorem
7 Quantum Security Proof in the QROM
7.1 The Quantum Random Oracle Model
7.2 Tight Quantum Security of SF
8 Applying the Result to Code-Based Signatures Based on ATPSF
8.1 Canonical Construction of Code-Based ATPSF
8.2 Relating Hardness of Breaking the CBATPSF with the Hardness of Breaking Standard Code-Based Problems
8.3 Wave Instantiation
9 Conclusion
References
Cryptanalysis and Concrete Security
Faster Cofactorization with ECM Using Mixed Representations
1 Introduction
2 Preliminaries
2.1 The Elliptic Curve Method
2.2 Montgomery Curves
2.3 Twisted Edwards Curves
2.4 The Best of Both Worlds
2.5 Parameterization
3 Scalar Multiplication
3.1 Generation of Double-Base Expansions
3.2 Generation of Double-Base Chains
3.3 Generation of Lucas Chains
4 Combination of Blocks for ECM Stage 1
4.1 Bos–Kleinjung Algorithm
4.2 Our Algorithm
5 Results and Comparison
6 Conclusion
A Counting Double-Base Expansions
References
Improved Classical Cryptanalysis of SIKE in Practice
1 Introduction
2 Preliminaries: van Oorschot-Wiener's Collision Search
2.1 The CSSI Problem
2.2 The Meet-in-the-middle Claw Finding Algorithm
2.3 Solving CSSI with van Oorschot-Wiener
2.4 Complexity Analysis of van Oorschot-Wiener
3 Parallel Collision Search for Supersingular Isogenies
3.1 Solving SIKE Instances
3.2 Applying van Oorschot-Wiener to SIKE
3.3 Partial Isogeny Precomputation
3.4 Fast Collision Checking
4 Implementation
5 Analysis of SIKE Round-2 Parameters
5.1 Concrete Security of SIKE Round-2 Parameters
5.2 Concrete Security of SIKEp434
References
A Short-List of Pairing-Friendly Curves Resistant to Special TNFS at the 128-Bit Security Level
1 Introduction
2 The Special Tower Number Field Sieve
2.1 Estimation of TNFS Cost
2.2 Special Polynomial Selection
3 Complete Families of Pairing-Friendly Curves
3.1 Brezing–Weng Constructions of Pairing-Friendly Curves
3.2 Reducing the Possibilities
3.3 Security Estimate for the Finite Field
4 Optimal Ate Pairing Computation: Miller Loop
4.1 Prime Embedding Degrees 11 and 13
4.2 Even Embedding Degrees 10 and 14
4.3 Comparison
5 Overview of the 192-Bit Security Level
6 Conclusion
References
Privacy-Preserving Schemes
Privacy-Preserving Authenticated Key Exchange and the Case of IKEv2
1 Introduction
1.1 Privacy in AKE Protocols
1.2 A New Security Model
1.3 Comparison with TOR and Practical Motivation
1.4 IPsec IKEv2 Is PPAKE
1.5 On the Challenge of Constructing PPAKE
1.6 Contributions
1.7 Related Works
1.8 Building Blocks
2 PPAKE in Practice: Generic Construction, Comparison and Limitations
3 Security Model for PPAKE
3.1 Computational Model for Key Exchange
3.2 Adversarial Model for Key Exchange
3.3 Original Key Partnering
3.4 Security and Privacy Model
3.5 Additional Considerations
4 Internet Protocol Security (IPsec)
5 IKEv2 Is a Secure PPAKE Protocol
5.1 Proof for Initiator-Adversaries
5.2 Additional Considerations
6 Summary and Future Work
References
Linearly-Homomorphic Signatures and Scalable Mix-Nets
1 Introduction
1.1 State of the Art
1.2 Our Approach
1.3 Organization
2 Computational Assumptions
2.1 Classical Assumptions
2.2 Unlinkability Assumption
3 Linearly-Homomorphic Signatures
3.1 Definition and Security
3.2 Our One-Time Linearly-Homomorphic Signature
3.3 Notations and Constraints
3.4 FSH Linearly-Homomorphic Signature Scheme
4 Mix-Networks
4.1 General Description
4.2 Difficulties
4.3 Our Scheme
4.4 Constant-Size Proof
4.5 Efficiency
5 Security Analysis
5.1 Proof of Soundness
5.2 Proof of Privacy: Unlinkability
6 Applications
6.1 Electronic Voting
6.2 Message Routing
References
Efficient Redactable Signature and Application to Anonymous Credentials
1 Introduction
1.1 Our Contribution
1.2 Organisation
2 Preliminaries
3 Redactable Signatures
3.1 Syntax
3.2 Security Model
4 Short Redactable Signatures
4.1 Our Construction
4.2 Achieving Unlinkability
4.3 Security Analysis
5 Anonymous Credentials
5.1 Syntax
5.2 Security Model
6 Our Anonymous Credentials System
6.1 Our Construction
6.2 Security Analysis
7 Efficiency
8 Conclusion
References
Author Index

LNCS 12111

Aggelos Kiayias Markulf Kohlweiss Petros Wallden Vassilis Zikas (Eds.)

Public-Key Cryptography – PKC 2020 23rd IACR International Conference on Practice and Theory of Public-Key Cryptography Edinburgh, UK, May 4–7, 2020 Proceedings, Part II

Lecture Notes in Computer Science

Founding Editors: Gerhard Goos, Karlsruhe Institute of Technology, Karlsruhe, Germany; Juris Hartmanis, Cornell University, Ithaca, NY, USA

Editorial Board Members: Elisa Bertino, Purdue University, West Lafayette, IN, USA; Wen Gao, Peking University, Beijing, China; Bernhard Steffen, TU Dortmund University, Dortmund, Germany; Gerhard Woeginger, RWTH Aachen, Aachen, Germany; Moti Yung, Columbia University, New York, NY, USA


More information about this series at http://www.springer.com/series/7410


Editors:
Aggelos Kiayias, University of Edinburgh, Edinburgh, UK
Markulf Kohlweiss, University of Edinburgh, Edinburgh, UK
Petros Wallden, University of Edinburgh, Edinburgh, UK
Vassilis Zikas, University of Edinburgh, Edinburgh, UK

ISSN 0302-9743 / ISSN 1611-3349 (electronic)
Lecture Notes in Computer Science
ISBN 978-3-030-45387-9 / ISBN 978-3-030-45388-6 (eBook)
https://doi.org/10.1007/978-3-030-45388-6
LNCS Sublibrary: SL4 – Security and Cryptology

© International Association for Cryptologic Research 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

The 23rd IACR International Conference on Practice and Theory of Public-Key Cryptography (PKC 2020) was held during May 4–7, 2020, in Edinburgh, Scotland, UK. This conference series is organized annually by the International Association for Cryptologic Research (IACR). It is the main annual IACR-sponsored conference with an explicit focus on public-key cryptography. The proceedings comprise two volumes and include the 44 papers that were selected by the Program Committee.

A total of 180 submissions were received for consideration for this year's program. Three submissions were table-rejected due to significant deviations from the instructions of the call for papers. Submissions were assigned to at least three reviewers, while submissions by Program Committee members received at least four reviews. The review period was divided into three stages, the first one reserved for individual reviewing and lasting five weeks. It was followed by the second stage, where the authors were given the opportunity to respond to the reviews. Finally, in the third stage, which lasted about five weeks, the Program Committee members engaged in discussion, taking into account the rebuttal comments submitted by the authors. In addition to the rebuttal, on a number of occasions the authors were engaged with additional questions and clarifications. Seven of the papers were conditionally accepted and received a final additional round of reviewing.

The reviewing and paper selection process was a difficult task and I am deeply grateful to the members of the Program Committee for their hard and thorough work. Additionally, my deep gratitude is extended to the 252 external reviewers who assisted the Program Committee. The submissions included two papers with which the program chair had a soft conflict of interest (they included in their author list researchers based at the University of Edinburgh). For these two papers, the chair abstained from the management of the discussion and delegated this task to a Program Committee member. I am grateful to Helger Lipmaa for his help in managing these two papers. I would like to also thank Shai Halevi for his web submission and review software, which we used for managing the whole process very successfully.

The invited talk at PKC 2020, entitled "How low can we go?", was delivered by Yuval Ishai. I would like to thank Yuval for accepting the invitation and contributing to the program this year, as well as all the authors who submitted their work. I would like to also thank my good colleagues and co-editors of these two volumes, Markulf Kohlweiss, Petros Wallden, and Vassilis Zikas, who served as general co-chairs this year. A special thanks is also due to Dimitris Karakostas, who helped with the website of the conference, Gareth Beedham, who assisted in various administrative tasks, and all

This proceedings volume was prepared before the conference took place and it reflects its original planning, irrespective of the disruption caused by the COVID-19 pandemic.


PhD students at the School of Informatics who helped with the conference organization. Finally, I am deeply grateful to our industry sponsors, listed in the conference’s website, who provided generous financial support. May 2020

Aggelos Kiayias

Organization

The 23rd IACR International Conference on Practice and Theory of Public-Key Cryptography (PKC 2020) was organized by the International Association for Cryptologic Research and sponsored by the Scottish Informatics and Computer Science Alliance.

General Chairs Markulf Kohlweiss Petros Wallden Vassilis Zikas

University of Edinburgh, UK University of Edinburgh, UK University of Edinburgh, UK

Program Chair Aggelos Kiayias

University of Edinburgh, UK

Program Committee Gorjan Alagic Gilad Asharov Nuttapong Attrapadung Joppe Bos Chris Bruszka Liqun Chen Kai-Min Chung Dana Dachman-Soled Sebastian Faust Dario Fiore Marc Fischlin Georg Fuchsbauer Steven Galbraith Junqing Gong Kyoohyung Han Aggelos Kiayias Stephan Krenn Benoît Libert Helger Lipmaa Ryo Nishimaki Miyako Okhubo Emmanuela Orsini Omkant Pandey

UMD, USA Bar-Ilan University, Israel AIST, Japan NXP, Germany TU Hamburg, Germany University of Surrey, UK Academia Sinica, Taiwan UMD, USA TU Darmstadt, Germany IMDEA Software Institute, Spain TU Darmstadt, Germany ENS Paris, France Auckland University, New Zealand CNRS and ENS, France Coinplug, South Korea University of Edinburgh, UK AIT, Austria CNRS and ENS de Lyon, France Simula UiB, Norway NTT Secure Platform Lab, Japan NICT, Japan KUL, Belgium Stony Brook University, USA


Charalampos Papamanthou Christophe Petit Thomas Prest Carla Ràfols Arnab Roy Simona Samardjiska Yongsoo Song Rainer Steinwandt Berk Sunar Atsushi Takayasu Serge Vaudenay Daniele Venturi Frederik Vercauteren Chaoping Xing Thomas Zacharias Hong-Sheng Zhou

UMD, USA University of Birmingham, UK PQ Shield Ltd., USA University of Bristol, UK Universitat Pompeu Fabra, Spain Radboud University, The Netherlands Microsoft Research, USA Florida Atlantic University, USA Worcester Polytechnic Institute, USA University of Tokyo, Japan EPFL, Switzerland Sapienza Università di Roma, Italy KUL, Belgium Nanyang Technological University, Singapore University of Edinburgh, UK VCU, USA

External Reviewers Aydin Abadi Behzad Abdolmaleki Masayuki Abe Kamalesh Acharya Shashank Agrawal Younes Talibi Alaoui Erdem Alkim Miguel Ambrona Myrto Arapinis Thomas Attema Shi Bai Foteini Baldimtsi Fatih Balli Subhadeep Banik Khashayar Barooti Andrea Basso Balthazar Bauer Carsten Baum Ward Beullens Rishiraj Bhattacharyya Nina Bindel Olivier Blazy Carl Bootland Colin Boyd Andrea Caforio Sergiu Carpov

Ignacio Cascudo Wouter Castryck Andrea Cerulli Rohit Chatterjee Hao Chen Long Chen Rongmao Chen Jung Hee Cheon Ilaria Chillotti Gwangbae Choi Heewon Chung Michele Ciampi Aloni Cohen Ran Cohen Alexandru Cojocaru Simone Colombo Anamaria Costache Craig Costello Wei Dai Dipayan Das Poulami Das Thomas Debris-Alazard Thomas Decru Ioannis Demertzis Amit Deo Yarkin Doroz


Yfke Dulek F. Betül Durak Stefan Dziembowski Fabian Eidens Thomas Eisenbarth Naomi Ephraim Andreas Erwig Leo Fan Xiong Fan Antonio Faonio Pooya Farshim Prastudy Fauzi Tamara Finogina Danilo Francati Cody Freitag Eiichiro Fujisaki Jun Furukawa Ameet Gadekar Chaya Ganesh Wei Gao Pierrick Gaudry Romain Gay Huijing Gong Alonso Gonzalez Alonso González Cyprien de Saint Guilhem Mohammad Hajiabadi Shuai Han Abida Haque Patrick Harasser Carmit Hazay Javier Herranz Kristina Hostakova Dongping Hu Loïs Huguenin-Dumittan Shih-Han Hung Ilia Iliashenko Mitsugu Iwamoto Kiera Jade Aayush Jain Christian Janson David Jao Jinhyuck Jeong Dingding Jia Yanxue Jia Charanjit Jutla

Dimitris Karakostas Nada El Kassem Shuichi Katsumata Marcel Keller Thomas Kerber Nguyen Ta Toan Khoa Ryo Kikuchi Allen Kim Dongwoo Kim Duhyeong Kim Jiseung Kim Miran Kim Taechan Kim Mehmet Kiraz Elena Kirshanova Fuyuki Kitagawa Susumu Kiyoshima Karen Klein Dimitris Kolonelos Ilan Komargodski Venkata Koppula Toomas Krips Mukul Kulkarni Péter Kutas Norman Lahr Nikolaos Lamprou Fei Li Jiangtao Li Zengpeng Li Zhe Li Xiao Liang Wei-Kai Lin Yeo Sze Ling Orfeas Thyfronitis Litos Julian Loss Zhenliang Lu Vadim Lyubashevsky Fermi Ma Yi-Hsin Ma Bernardo Magri Christian Majenz Nathan Manohar William J. Martin Chloe Martindale Ramiro Martínez Daniel Masny


Simon Masson Takahiro Matsuda Sogol Mazaheri Simon-Philipp Merz Peihan Miao Takaaki Mizuki Fabrice Mouhartem Yi Mu Pratyay Mukherjee Koksal Mus Michael Naehrig Khoa Nguyen Ariel Nof Luca Notarnicola Adam O’Neill Erdinc Ozturk Tapas Pal Alain Passelègue Alice Pellet–Mary Ray Perlner Thomas Peters Zaira Pindado Rafael del Pino Federico Pintore Antoine Plouviez Yuriy Polyakov Chen Qian Luowen Qian Yuan Quan Sebastian Ramacher Joost Renes Thomas Ricosset Felix Rohrbach Mélissa Rossi Dragos Rotaru Sujoy Sinha Roy Cyprien Delpech de Saint-Guilhem Yusuke Sakai Katerina Samari Kai Samelin Olivier Sanders Benjamin Schlosser Jacob Schuldt Peter Schwabe Jae Hong Seo Ido Shahaf

Yu-Ching Shen Kazumasa Shinagawa Janno Siim Javier Silva Luisa Siniscalchi Daniel Slamanig Azam Soleimanian Yongha Son Claudio Soriente Pierre-Jean Spaenlehauer Florian Speelman Akshayaram Srinivasan Shravan Srinivasan Martijn Stam Igors Stephanovs Noah Stephens-Davidowitz Christoph Striecks Shifeng Sun Koutarou Suzuki Alan Szepieniec Katsuyuki Takashima Rajdeep Talapatra Qiang Tang Titouan Tanguy Phuc Thai Radu Titiu Junichi Tomida Nikos Triandopoulos Yiannis Tselekounis Jorge L. Villar Christine van Vredendaal Sameer Wagh Michael Walter Yuntao Wang Yuyu Wang Yohei Watanabe Gaven Watson Florian Weber Charlotte Weitkämper Weiqiang Wen Benjamin Wesolowski Jeroen van Wier Jan Winkelmann Fredrik Winzer Keita Xagawa Chaoping Xing


Shota Yamada Takashi Yamakawa Avishay Yanai Rupeng Yang Eylon Yogev Kazuki Yoneyama Chen Yuan Alexandros Zacharakis


Michal Zajac Bingsheng Zhang Yupeng Zhang Zhenfei Zhang Yi Zhao Haibin Zheng Arne Tobias Ødegaard Morten Øygarden


Contents – Part II

Lattice-Based Cryptography

The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier . . . 3
Léo Ducas, Thijs Laarhoven, and Wessel P. J. van Woerden

Tweaking the Asymmetry of Asymmetric-Key Cryptography on Lattices: KEMs and Signatures of Smaller Sizes . . . 37
Jiang Zhang, Yu Yu, Shuqin Fan, Zhenfeng Zhang, and Kang Yang

MPSign: A Signature from Small-Secret Middle-Product Learning with Errors . . . 66
Shi Bai, Dipayan Das, Ryo Hiromasa, Miruna Rosca, Amin Sakzad, Damien Stehlé, Ron Steinfeld, and Zhenfei Zhang

Proofs and Arguments II

Witness Indistinguishability for Any Single-Round Argument with Applications to Access Control . . . 97
Zvika Brakerski and Yael Kalai

Boosting Verifiable Computation on Encrypted Data . . . 124
Dario Fiore, Anca Nitulescu, and David Pointcheval

Isogeny-Based Cryptography

Lossy CSI-FiSh: Efficient Signature Scheme with Tight Reduction to Decisional CSIDH-512 . . . 157
Ali El Kaafarani, Shuichi Katsumata, and Federico Pintore

Threshold Schemes from Isogeny Assumptions . . . 187
Luca De Feo and Michael Meyer

Multiparty Protocols

Topology-Hiding Computation for Networks with Unknown Delays . . . 215
Rio LaVigne, Chen-Da Liu-Zhang, Ueli Maurer, Tal Moran, Marta Mularczyk, and Daniel Tschudi

Sublinear-Round Byzantine Agreement Under Corrupt Majority . . . 246
T.-H. Hubert Chan, Rafael Pass, and Elaine Shi

Bandwidth-Efficient Threshold EC-DSA . . . 266
Guilhem Castagnos, Dario Catalano, Fabien Laguillaumie, Federico Savasta, and Ida Tucker

Secure Computation and Related Primitives

Blazing Fast OT for Three-Round UC OT Extension . . . 299
Ran Canetti, Pratik Sarkar, and Xiao Wang

Going Beyond Dual Execution: MPC for Functions with Efficient Verification . . . 328
Carmit Hazay, Abhi Shelat, and Muthuramakrishnan Venkitasubramaniam

MonZ2ka: Fast Maliciously Secure Two Party Computation on Z_{2^k} . . . 357
Dario Catalano, Mario Di Raimondo, Dario Fiore, and Irene Giacomelli

Post-Quantum Primitives

Generic Authenticated Key Exchange in the Quantum Random Oracle Model . . . 389
Kathrin Hövelmanns, Eike Kiltz, Sven Schäge, and Dominique Unruh

Threshold Ring Signatures: New Definitions and Post-quantum Security . . . 423
Abida Haque and Alessandra Scafuro

Tight and Optimal Reductions for Signatures Based on Average Trapdoor Preimage Sampleable Functions and Applications to Code-Based Signatures . . . 453
André Chailloux and Thomas Debris-Alazard

Cryptanalysis and Concrete Security

Faster Cofactorization with ECM Using Mixed Representations . . . 483
Cyril Bouvier and Laurent Imbert

Improved Classical Cryptanalysis of SIKE in Practice . . . 505
Craig Costello, Patrick Longa, Michael Naehrig, Joost Renes, and Fernando Virdia

A Short-List of Pairing-Friendly Curves Resistant to Special TNFS at the 128-Bit Security Level . . . 535
Aurore Guillevic

Privacy-Preserving Schemes

Privacy-Preserving Authenticated Key Exchange and the Case of IKEv2 . . . 567
Sven Schäge, Jörg Schwenk, and Sebastian Lauer

Linearly-Homomorphic Signatures and Scalable Mix-Nets . . . 597
Chloé Hébant, Duong Hieu Phan, and David Pointcheval

Efficient Redactable Signature and Application to Anonymous Credentials . . . 628
Olivier Sanders

Author Index . . . 657

Contents – Part I

Functional Encryption

Fast, Compact, and Expressive Attribute-Based Encryption . . . 3
Junichi Tomida, Yuto Kawahara, and Ryo Nishimaki

Adaptive Simulation Security for Inner Product Functional Encryption . . . 34
Shweta Agrawal, Benoît Libert, Monosij Maitra, and Radu Titiu

Verifiable Inner Product Encryption Scheme . . . 65
Najmeh Soroush, Vincenzo Iovino, Alfredo Rial, Peter B. Roenne, and Peter Y. A. Ryan

A New Paradigm for Public-Key Functional Encryption for Degree-2 Polynomials . . . 95
Romain Gay

Identity-Based Encryption

Master-Key KDM-Secure IBE from Pairings . . . 123
Sanjam Garg, Romain Gay, and Mohammad Hajiabadi

Hierarchical Identity-Based Encryption with Tight Multi-challenge Security . . . 153
Roman Langrehr and Jiaxin Pan

Obfuscation and Applications

The Usefulness of Sparsifiable Inputs: How to Avoid Subexponential iO . . . 187
Thomas Agrikola, Geoffroy Couteau, and Dennis Hofheinz

Witness Maps and Applications . . . 220
Suvradip Chakraborty, Manoj Prabhakaran, and Daniel Wichs

Encryption Schemes

Memory-Tight Reductions for Practical Key Encapsulation Mechanisms . . . 249
Rishiraj Bhattacharyya

Toward RSA-OAEP Without Random Oracles . . . 279
Nairen Cao, Adam O'Neill, and Mohammad Zaheri

Public-Key Puncturable Encryption: Modular and Compact Constructions . . . 309
Shi-Feng Sun, Amin Sakzad, Ron Steinfeld, Joseph K. Liu, and Dawu Gu

Secure Channels

Flexible Authenticated and Confidential Channel Establishment (fACCE): Analyzing the Noise Protocol Framework . . . 341
Benjamin Dowling, Paul Rösler, and Jörg Schwenk

Limits on the Efficiency of (Ring) LWE Based Non-interactive Key Exchange . . . 374
Siyao Guo, Pritish Kamath, Alon Rosen, and Katerina Sotiraki

PAKEs: New Framework, New Techniques and More Efficient Lattice-Based Constructions in the Standard Model . . . 396
Shaoquan Jiang, Guang Gong, Jingnan He, Khoa Nguyen, and Huaxiong Wang

Basic Primitives with Special Properties

Constraining and Watermarking PRFs from Milder Assumptions . . . 431
Chris Peikert and Sina Shiehian

Bringing Order to Chaos: The Case of Collision-Resistant Chameleon-Hashes . . . 462
David Derler, Kai Samelin, and Daniel Slamanig

Proofs and Arguments I

Concretely-Efficient Zero-Knowledge Arguments for Arithmetic Circuits and Their Application to Lattice-Based Cryptography . . . 495
Carsten Baum and Ariel Nof

Updateable Inner Product Argument with Logarithmic Verifier and Applications . . . 527
Vanesa Daza, Carla Ràfols, and Alexandros Zacharakis

On Black-Box Extensions of Non-interactive Zero-Knowledge Arguments, and Signatures Directly from Simulation Soundness . . . 558
Masayuki Abe, Miguel Ambrona, and Miyako Ohkubo

On QA-NIZK in the BPK Model . . . 590
Behzad Abdolmaleki, Helger Lipmaa, Janno Siim, and Michał Zając

Lattice-Based Cryptography

Improved Discrete Gaussian and Subgaussian Analysis for Lattice Cryptography . . . 623
Nicholas Genise, Daniele Micciancio, Chris Peikert, and Michael Walter

Almost Tight Security in Lattices with Polynomial Moduli – PRF, IBE, All-but-many LTF, and More . . . 652
Qiqi Lai, Feng-Hao Liu, and Zhedong Wang

Author Index . . . 683

Lattice-Based Cryptography

The Randomized Slicer for CVPP: Sharper, Faster, Smaller, Batchier

Léo Ducas¹, Thijs Laarhoven², and Wessel P. J. van Woerden¹

¹ CWI, Amsterdam, The Netherlands ([email protected])
² TU/e, Eindhoven, The Netherlands

Abstract. Following the recent line of work on solving the closest vector problem with preprocessing (CVPP) using approximate Voronoi cells, we improve upon previous results in the following ways:
– We derive sharp asymptotic bounds on the success probability of the randomized slicer, by modelling the behaviour of the algorithm as a random walk on the coset of the lattice of the target vector. We thereby solve the open question left by Doulgerakis–Laarhoven–De Weger [PQCrypto 2019] and Laarhoven [MathCrypt 2019].
– We obtain better trade-offs for CVPP and its generalisations (strictly, in certain regimes), both with and without nearest neighbour searching, as a direct result of the above sharp bounds on the success probabilities.
– We show how to reduce the memory requirement of the slicer, and in particular the corresponding nearest neighbour data structures, using ideas similar to those proposed by Becker–Gama–Joux [Cryptology ePrint Archive, 2015]. Using 2^{0.185d+o(d)} memory, we can solve a single CVPP instance in 2^{0.264d+o(d)} time.
– We further improve on the per-instance time complexities in certain memory regimes, when we are given a sufficiently large batch of CVPP problem instances for the same lattice. Using 2^{0.208d+o(d)} memory, we can heuristically solve CVPP instances in 2^{0.234d+o(d)} amortized time, for batches of size at least 2^{0.058d+o(d)}.
Our random walk model for analysing arbitrary-step transition probabilities in complex step-wise algorithms may be of independent interest, both for deriving analytic bounds through convexity arguments, and for computing optimal paths numerically with a shortest path algorithm. As a side result we apply the same random walk model to graph-based nearest neighbour searching, where we improve upon results of Laarhoven [SOCG 2018] by deriving sharp bounds on the success probability of the corresponding greedy search procedure.

Keywords: Lattices · Closest vector problem with preprocessing · Approximate Voronoi cells · Iterative slicer · Graph-based nearest neighbours

© International Association for Cryptologic Research 2020. A. Kiayias et al. (Eds.): PKC 2020, LNCS 12111, pp. 3–36, 2020. https://doi.org/10.1007/978-3-030-45388-6_1

1 Introduction

Lattice Problems. Following Shor's breakthrough work on efficient quantum algorithms for problems previously deemed sufficiently hard to base cryptography on [26], researchers have begun looking for alternatives to "classical" cryptosystems such as RSA [25] and Diffie–Hellman [10]. Out of these candidates for "post-quantum" cryptography [8], lattice-based cryptography has emerged as a leading candidate, due to its efficiency, versatility, and the conjecture that the underlying lattice problems may be hard to solve quantumly as well [23].

The security of most lattice-based cryptographic schemes can be traced back to either the shortest vector problem (SVP) or variants of the closest vector problem (CVP), which ask to either return the shortest non-zero vector in a lattice, or the closest lattice vector to a given target vector. These variants include approx-CVP, where we need to return a somewhat close lattice vector, and bounded distance decoding (BDD), where we are guaranteed that the target lies close to the lattice. As parameters for cryptographic schemes are commonly based on the estimated complexities of state-of-the-art methods for these problems, it is important to obtain a good understanding of the true hardness of these and other lattice problems. The current fastest approaches for solving these problems are based on lattice sieving [1,2,6] and lattice enumeration [3,4,14,15,17], where the former offers a better asymptotic scaling of the time complexity in terms of the lattice dimension, at the cost of an exponentially large memory consumption.

The Closest Vector Problem with Preprocessing (CVPP). The closest vector problem with preprocessing (CVPP) is a variant of CVP, where the solver is allowed to perform some preprocessing on the lattice at no additional cost, before being given the target vector. Closely related to this is batch-CVP, where many CVP instances on the same lattice are to be solved; if an efficient global preprocessing procedure can be performed using only the lattice as input, and that would help reduce the costs of single CVP instances, then this preprocessing cost can be amortized over many problem instances to obtain a faster algorithm for batch-CVP. This problem of batch-CVP most notably appears in the context of lattice enumeration for solving SVP or CVP, as a fast batch-CVP algorithm would potentially imply faster SVP and CVP algorithms based on a hybrid of enumeration and such a CVPP oracle [13,15].

Voronoi Cells and the Iterative Slicer. One method for solving CVPP is the iterative slicer by Sommer–Feder–Shalvi [27]. Preprocessing consists of computing a large list of lattice vectors, and a query is processed by "reducing" the target vector t with this list, i.e. repeatedly translating the target by some lattice vector until the shortest representative t′ in the coset of the target vector is found. The closest lattice vector to t is then given by t − t′, which lies at distance ‖t′‖ from t. For this method to provably succeed, the preprocessed list needs to contain all O(2^d) so-called Voronoi relevant vectors of the lattice, which together define the boundaries of the Voronoi cell of the lattice. This leads to a 4^{d+o(d)} algorithm by bounding the number of reduction steps by 2^{d+o(d)} [21], which was later improved to an expected time of 2^{d+o(d)} by randomizing the algorithm such that the number of expected steps is polynomially bounded [9].

Approximate Voronoi Cells and the Randomized Slicer. The large number of Voronoi relevant vectors of a lattice, needed for the iterative slicer to be provably successful, makes the straightforward application of this method impractical and does not result in an improvement over the best (heuristic) CVP complexities without preprocessing. Therefore we fall back on heuristics to analyse lattice-based algorithms, as they often better represent the practical complexities of the algorithms than the proven worst-case bounds. For solving CVPP more efficiently than CVP, Laarhoven [18] proposed to use a smaller preprocessed list of size 2^{d/2+o(d)} containing all lattice vectors up to some radius, while heuristically retaining a constant success probability of finding the closest vector with the iterative slicer. Doulgerakis–Laarhoven–De Weger [12] formalized this method in terms of approximate Voronoi cells, and proposed an improvement based on rerandomizations; rather than hoping to find the shortest representative in the coset of the target in one run of the iterative slicer, which would require a preprocessed list of size at least 2^{d/2+o(d)}, the algorithm uses a smaller list and runs the same reduction procedure many times starting with randomly sampled members from the coset of the target vector. The success probability of this randomized slicing procedure, which depends on the size of the list, determines how often it has to be restarted, and thus plays an important role in the eventual time complexity of the algorithm. Doulgerakis–Laarhoven–De Weger (DLW) only obtained a heuristic lower bound on the success probability of this randomized slicer, and although Laarhoven [20] later improved upon this lower bound in the low-memory regime, the question remained open what is the actual asymptotic success probability of this randomized slicing procedure, and therefore what is the actual asymptotic time complexity of the current state-of-the-art heuristic method for solving CVPP.

1.1 Contributions

Success Probability Asymptotics via Random Walks. Our main contribution is solving the central open problem resulting from the approximate Voronoi cells line of work – finding sharp asymptotics on the success probability of the randomized slicer. To find these sharp bounds, in Sect. 3 we show how to model the flow of the algorithm as a random walk on the coset of the lattice corresponding to the target vector, and we heuristically characterise transition probabilities between different states in this infinite graph when using a list of the α^{d+o(d)} shortest lattice vectors. The aforementioned problem of finding the success probability of the slicer then translates to: what is the probability in this graph of starting from a given initial state and ending at any target state of norm at most γ? From DLW [12] we know that we almost always reach a state of norm at most some β = f(α) ≥ γ – reaching this state occurs with probability at least 1/poly(d). However, reaching a state β′ < β occurs only with exponentially small probability 2^{−Θ(d)}. Now, whereas the analysis of DLW can be interpreted as lower-bounding the success probability by attempting to reach the target norm in a single step after reaching radius β, we are interested in the arbitrary-step transition probabilities from β to at most γ, so as to obtain sharp bounds (Fig. 1).

[Fig. 1. The iterative slicer as a random walk over the coset t + L, using the list of lattice vectors L = L ∩ B(0, α).]

As every path in our graph from β to γ has an exponentially small probability in d, the total success probability is dominated by that of the most probable path for large d, which after an appropriate log-transform boils down to a shortest path in a graph. Obtaining the success probability of the randomized slicer is therefore reduced to determining a shortest path in this infinite graph. We show in Sect. 4 how we can approximately compute this shortest path numerically, using a suitably dense discretization of the search space or using convex optimization. In Sect. 5 we go a step further by proving an exact analytic expression for the shortest path, which results in sharp asymptotics on the success probability of the randomized slicer for the general case of approx-CVP.

Heuristic claim 1 (Success probability of the randomized slicer). Given a list L of the α^{d+o(d)} shortest lattice vectors as input, the success probability of one iteration of the randomized slicer for γ-CVPP equals

    P_{α²,γ²} = ∏_{i=1}^{n} ( α² − (α² + x_{i−1} − x_i)² / (4 x_{i−1}) )^{d/2+o(d)},    (1)

with n defined by Eq. (39) and x_i as in Definition 7, both depending only on α and γ. Running the randomized slicer for O(P_{α²,γ²}^{−1}) iterations, we expect to solve γ-CVPP with constant probability.
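Since every term in Eq. (1) is determined by α, γ and a candidate path x₀ > x₁ > · · · > x_n of squared norms, the optimal path can also be found numerically, as done in Sect. 4. The following Python sketch is our own illustration, not code from the paper: it approximates the exponent −log₂(P_{α²,γ²})/d by a Dijkstra shortest-path search over a discretized grid of squared norms, with edge weights taken from the per-step factor of Eq. (1). The starting norm x₀ = α⁴/(4(α² − 1)), below which a reduction step is no longer expected to succeed for free, is our own modelling choice.

import heapq
import math

def step_exponent(a, x, y):
    # -log2(success probability)/d of one reduction step from squared norm x
    # to squared norm y < x, using the ~a^(d/2) list vectors of squared norm
    # at most a = alpha^2 (the per-step factor of Eq. (1)).
    base = a - (a + x - y) ** 2 / (4 * x)
    if base <= 0:
        return None  # transition has negligible probability
    return max(0.0, -0.5 * math.log2(base))

def success_exponent(a, c, nodes=200):
    # Approximate -log2(P_{alpha^2, gamma^2})/d, with a = alpha^2 and
    # c = gamma^2, via a shortest path over discretized squared norms.
    x0 = a * a / (4 * (a - 1))  # greedy reduction stalls below this norm
    if x0 <= c:
        return 0.0  # greedy reduction already reaches the target norm
    grid = [c + i * (x0 - c) / nodes for i in range(nodes + 1)]
    dist = [math.inf] * (nodes + 1)
    dist[nodes] = 0.0
    queue = [(0.0, nodes)]
    while queue:
        d, i = heapq.heappop(queue)
        if d > dist[i]:
            continue
        if i == 0:
            return d  # reached squared norm c
        for j in range(i):  # all downward transitions on the grid
            w = step_exponent(a, grid[i], grid[j])
            if w is not None and d + w < dist[j]:
                dist[j] = d + w
                heapq.heappush(queue, (dist[j], j))
    return math.inf

# Example: exact CVPP (gamma = 1) with a list of (4/3)^(d/2) = 2^(0.2075d)
# vectors; in this high-memory regime a single step is optimal.
print(success_exponent(4 / 3, 1.0))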

[Fig. 2. Query complexities for solving CVPP without nearest neighbour techniques. The blue curve refers to [20], the red curve to [12], the green curve to [18], and the black curve is the result of our refined analysis. The red point indicates the point where red and black curves merge into one. (Color figure online)]

Together with a (naive) linear search over the preprocessed list, this directly leads to explicit time and space complexities for a plain version of the randomized slicer for solving CVPP, described in Fig. 2. When using a large list of size at least 2^{0.1437d+o(d)} from the preprocessing phase of CVPP, we derive that one step is optimal, thus obtaining the same asymptotic complexity as DLW. When using less than 2^{0.1437d+o(d)} memory we gradually see an increase in the optimal number of steps in the shortest path, resulting in ever-increasing improvements in the resulting asymptotic complexities for CVPP as compared to DLW. Using a similar methodology, the asymptotic scaling of our exact analysis when using poly(d) memory matches the 2^{(1/2)d log₂ d + o(d log d)} time complexity lower bound of Laarhoven [20]. We do stress that to make this rigorous one should do a more extensive analysis of the lower order terms.

In Sect. 7 we further show how to adapt the graph slightly to analyse the success probability of the iterative slicer for the BDD-variant of CVP, where the target lies unusually close to the lattice.

Improved Complexities with Nearest Neighbour Searching. The main subroutine of the iterative slicer is to find lattice vectors close to a target in a large list, also known as the nearest-neighbour search problem (NNS).

[Fig. 3. Query complexities for solving CVPP with nearest neighbour techniques, but without the improved memory management described in Sect. 6. Similar to Fig. 2, the curves meet at a memory complexity of approximately 2^{0.1436d}.]

By preprocessing the list and storing more data we could find a close vector much faster than the naive way of trying them all. Here we obtain a trade-off between the size of the NNS data structure and the eventual query complexity.

Heuristic claim 2 (Improved complexities for γ-CVPP). Given a list L of the α^{d+o(d)} shortest lattice vectors as input and a nearest neighbour parameter u ∈ (√((α² − 1)/α²), √(α²/(α² − 1))), we can solve CVPP in space and time S and T, where:

    S = ( α / ( α − (α² − 1)(αu² − 2u√(α² − 1) + α) ) )^{d/2+o(d)},    (2)

    T = (1/P_{α²,γ²}) · ( (α + u√(α² − 1)) / (−α³ + α²u√(α² − 1) + 2α) )^{d/2+o(d)}.    (3)
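To make the trade-off concrete, the following small Python helper (ours; the function name and the example parameters are illustrative) evaluates the base-2 exponents per dimension of S and of the NNS query factor of T from Eqs. (2) and (3); the 1/P_{α²,γ²} factor of T is left out, since it is covered by Heuristic claim 1.

import math

def nns_tradeoff_exponents(alpha, u):
    # Exponents (base 2, per dimension d) of S and of the NNS factor of T
    # from Eqs. (2) and (3); the 1/P_{alpha^2, gamma^2} term is omitted.
    s = math.sqrt(alpha * alpha - 1)
    assert s / alpha < u < alpha / s, "u outside the admissible range"
    space = alpha / (alpha - (alpha**2 - 1) * (alpha * u**2 - 2 * u * s + alpha))
    query = (alpha + u * s) / (-alpha**3 + alpha**2 * u * s + 2 * alpha)
    return 0.5 * math.log2(space), 0.5 * math.log2(query)

# Example: alpha = sqrt(2); u near its lower limit recovers the minimal
# memory alpha^d = 2^(d/2), at the cost of a slower query.
print(nns_tradeoff_exponents(math.sqrt(2), 0.72))

At the lower endpoint of the admissible range for u, the space exponent collapses to that of the list itself, while at the upper endpoint the query factor becomes subexponential at the cost of unbounded memory; intermediate u values interpolate between these two regimes.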

Figure 3 shows the resulting exact trade-offs for exact CVPP, as well as the previous lower bounds of [12,20].

Improved Memory Usage for the NNS Data Structure. When the number of NNS queries matches the list size there is a way to do the NNS preprocessing on the fly, obtaining significantly lower query times while using negligible extra memory [6,7]. Normally this observation is only helpful for batch-CVPP and not for a single CVPP instance; however, the randomized slicer naturally reduces to batch-CVPP by considering all target rerandomizations as a batch of targets. In Sect. 6 we exploit this to obtain better CVPP complexities when using NNS, improving significantly on the state-of-the-art as shown in Fig. 4.

Heuristic claim 3 (Improved memory usage for CVPP with NNS). Given a list L of the α^{d+o(d)} ≤ 2^{0.185d} shortest lattice vectors as input, we can solve a single CVPP instance with the following complexities:

    S = α^{d+o(d)},   T = (1/P_{α²,1}) · ( α · √( 1 − 2(1 − 1/α²)/(1 + √(1 − 1/α²)) ) )^{−d+o(d)}.    (4)

Heuristic claim 4 (Improved memory usage for batch-CVPP). Given a list L of the α^{d+o(d)} shortest lattice vectors and a batch of at least B CVPP instances, with

    B = max(1, α^d · P_{α²,1}),    (5)

we can solve this entire batch of CVPP instances with the following amortized complexities per CVPP instance:

    S = α^{d+o(d)},   T = (1/P_{α²,1}) · ( α · √( 1 − 2(1 − 1/α²)/(1 + √(1 − 1/α²)) ) )^{−d+o(d)}.    (6)

In particular, one can heuristically solve a batch of 2^{0.058d+o(d)} CVP instances in time 2^{0.292d+o(d)} and space 2^{0.208d+o(d)}. Note that this is a stronger result than DLW, which claimed it is possible to solve 2^{Θ(d)} CVP instances in time and space 2^{0.292d+o(d)}. In contrast, the best complexities for a single instance of CVP are time 2^{0.292d+o(d)} and space 2^{0.208d+o(d)}; thus the algorithm proposed by DLW significantly increases the memory requirement for the batch of CVP instances. We show that we can also solve an exponential-sized batch of CVP instances without significantly increasing either the time or the memory.

Application to Graph-Based Nearest Neighbour Searching. Besides deriving sharp asymptotics for the randomized slicer, the random walk model may well be of independent interest in the context of analysing asymptotics of other complex step-wise algorithms, and we illustrate this by applying the same model to solve a problem appearing in the analysis of graph-based nearest neighbour searching in [19]: what is the success probability of performing a greedy walk on the k-nearest neighbour graph, attempting to converge to the actual nearest neighbour of a random query point? We formalize the transition probabilities in this context, and show how this leads to improved complexities for lattice sieving with graph-based nearest neighbour searching for solving SVP.
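Under the stated assumptions, the quoted figures can be reproduced from the formulas of Heuristic claims 3 and 4 with a few lines of Python (a sketch of ours; p_exp denotes the success probability exponent −log₂(P_{α²,1})/d, which by Heuristic claim 1 is roughly 0.1497 at α² = 4/3):

import math

def memoryless_cvpp_exponents(alpha, p_exp):
    # Base-2 exponents per dimension d from Heuristic claims 3/4.
    s = math.sqrt(1 - 1 / alpha**2)
    inner = 1 - 2 * (1 - 1 / alpha**2) / (1 + s)
    space = math.log2(alpha)                             # S = alpha^{d+o(d)}
    time = p_exp - math.log2(alpha * math.sqrt(inner))   # T = (1/P)*(...)^{-d}
    batch = max(0.0, math.log2(alpha) - p_exp)           # B = max(1, alpha^d * P)
    return space, time, batch

# Example: alpha^2 = 4/3 gives space ~2^(0.2075d), amortized time ~2^(0.234d),
# and a minimal batch size ~2^(0.058d), matching the figures quoted above.
print(memoryless_cvpp_exponents(math.sqrt(4 / 3), 0.1497))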

[Fig. 4. Query complexities for solving CVPP and batch-CVPP with nearest neighbour techniques, and with the improved memory management outlined in Sect. 6, making the memory-wise overhead of the nearest neighbour data structure negligible, either for a single target (below space 2^{0.185d}) or for batch-CVPP for sufficiently large batches (between space 2^{0.185d} and 2^{0.5d}). The black curve equals the black curve from Fig. 3, the orange curve shows optimized complexities for CVPP using memoryless NNS whenever possible, and the red curve shows the optimized per-instance complexities for batch-CVPP for sufficiently large batch sizes; if the batch size exceeds the quantity indicated by the dashed red curve, then the amortized complexity is given by the solid red curve. (Color figure online)]

1.2 Working Heuristics

While some of our intermediate results are entirely formal, the eventual conclusion on the behaviour of the iterative slicer also relies on heuristics. We restrict the use of "Theorem", "Lemma", and "Corollary" to the formal claims, and refer to "Heuristic claims" for the rest.

The first heuristic which we use is the commonly used Gaussian heuristic, which predicts the number of lattice vectors and their density within certain regions based on the lattice volume. Its use for analysing sieve-type algorithms is well established [6,7,18,22] and seems consistent with various experiments conducted in the past.

The second heuristic assumption we use is also central in previous work on the randomized iterative slicer [12,20], and consists of assuming that the input target can be randomized, yielding essentially independent experiments each time we randomize the input over the coset of the target vector. Practical experiments from DLW [12] seem to support this assumption.


The third heuristic is specific to this work, and consists of assuming that in our graph, the density over all successful paths taken by the slicing procedure is asymptotically equal to the density given by the most probable successful path. We suspect that this heuristic assumption can be formalized and justified following an analysis similar to the concentration bound result of Herold and Kirshanova [16]. We leave this question open for future work. Note that this heuristic is only needed to justify the sharpness of our analysis; even without it our results give lower bounds on the success probability of the iterative slicer.

2 Preliminaries

2.1 Notation

Let us first describe some basic notation. Throughout we will write ‖·‖ for Euclidean norms, and ⟨·, ·⟩ for the standard dot product. Dimensions of vector spaces are commonly denoted by d. Vectors are written in boldface notation (e.g. x). We denote d-dimensional volumes by Vol(·).

2.2 Spherical Geometry

We write B = B^d ⊂ R^d for the unit ball, consisting of all vectors with Euclidean norm at most 1, and we write S = S^{d−1} ⊂ R^d for the unit sphere, i.e. the boundary of B^d. More generally we denote by B(x, α) the ball of radius α around x. Within the unit ball, we denote spherical caps by C_{x,α} = {v ∈ B : ⟨x, v⟩ ≥ α} for x ∈ S and α ∈ (0, 1), and we denote spherical wedges by W_{x,α,y,β} = C_{x,α} ∩ C_{y,β} where x, y ∈ S and α, β ∈ (0, 1). Note that due to spherical symmetry, the volume of C_{x,α} is independent of the choice of x, and the volume of W_{x,α,y,β} only depends on the angle between x and y. To obtain the relevant probability distributions for the treated algorithms we need the following asymptotic volumes.

Lemma 1 (Volume spherical cap). Let α ∈ (0, 1) and let x ∈ S. Then the volume of a spherical cap C_{x,α} relative to the unit ball B is

    C(α) := (1 − α²)^{d/2+o(d)}.    (7)

Lemma 2 (Volume spherical wedge). Let α, β ∈ (0, 1), let x, y ∈ S, and let γ = ⟨x, y⟩. Then the volume of the spherical wedge W_{x,α,y,β} relative to B is

    W(α, β, γ) :=
      ((1 − α² − β² − γ² + 2αβγ)/(1 − γ²))^{d/2+o(d)},  if 0 < γ < min(α/β, β/α);
      (1 − α²)^{d/2+o(d)},                               if β/α ≤ γ < 1;    (8)
      (1 − β²)^{d/2+o(d)},                               if α/β ≤ γ < 1.
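As a quick sanity check of these asymptotics, the following sketch (function names are ours) evaluates the bases of the exponentials in Lemmas 1 and 2, so that the relative volumes behave as base^{d/2+o(d)}.

```python
def cap_base(alpha):
    # C(alpha) = (1 - alpha^2)^(d/2 + o(d)), Lemma 1
    assert 0 < alpha < 1
    return 1 - alpha ** 2

def wedge_base(alpha, beta, gamma):
    # W(alpha, beta, gamma) of Lemma 2, with gamma = <x, y>
    assert 0 < alpha < 1 and 0 < beta < 1 and 0 < gamma < 1
    if gamma < min(alpha / beta, beta / alpha):
        return (1 - alpha ** 2 - beta ** 2 - gamma ** 2
                + 2 * alpha * beta * gamma) / (1 - gamma ** 2)
    if gamma >= beta / alpha:
        return 1 - alpha ** 2   # the smaller cap dominates the intersection
    return 1 - beta ** 2
```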

2.3 Lattices

Given a set of linearly independent vectors B = {b₁, …, b_d} ⊂ R^d, we define the lattice generated by the basis B as L = L(B) := {∑_{i=1}^d λ_i b_i : λ_i ∈ Z}. We denote the volume det(B) of the parallelepiped B·[0, 1]^d by det(L); this volume is independent of the choice of basis for a lattice. Given a basis of a lattice, the shortest vector problem (SVP) asks to find a lattice vector of minimum (nonzero) Euclidean norm in this lattice: if we let λ₁(L) = min_{x ∈ L\{0}} ∥x∥, then solving SVP corresponds to finding a vector x ∈ L of norm λ₁(L).

The analysis of lattice algorithms heavily depends on the Gaussian heuristic, as it better represents the practical complexity of the algorithms than their provable counterparts.

Heuristic 1 (The Gaussian heuristic (GH)). Let K ⊂ R^d be a measurable body. Then the number |K ∩ L| of lattice points in K is approximately equal to Vol(K)/det(L).

Assuming this heuristic with K a Euclidean d-ball we obtain that λ₁(L) has expected value √(d/(2πe)) · det(L)^{1/d}. For random lattices, which are the main target in the context of cryptanalysis, the Gaussian heuristic is widely verified and the following statement can be observed in practice.

Heuristic 2 (Lattice points in a ball, consequence of GH). Let t ∈ R^d be random. Under the Gaussian heuristic the ball of radius α·λ₁(L) contains α^{d+o(d)} lattice points that we treat as being uniformly distributed over the ball.

As a direct result, a random target t ∈ R^d is expected to lie at distance ≈ λ₁(L) from the lattice. This gives the following alternative statements for the common variants of the closest vector problem (CVP).

Definition 1 (Closest Vector Problem (CVP)). Given a basis B of a lattice L and a target vector t ∈ R^d, find a vector v ∈ L such that ∥t − v∥ ≤ λ₁(L).

The hardness of most lattice-based cryptographic schemes actually depends on one of the following two easier variants.

Definition 2 (Approximate Closest Vector Problem (γ-CVP)). Given a basis B of a lattice L, a target vector t ∈ R^d and an approximation factor γ ≥ 1, find a vector v ∈ L such that ∥t − v∥ ≤ γ·λ₁(L).

Definition 3 (Bounded Distance Decoding (δ-BDD)). Given a basis B of a lattice L, a target vector t ∈ R^d and a distance guarantee δ ∈ (0, 1) such that min_{v ∈ L} ∥t − v∥ ≤ δ·λ₁(L), find a vector v ∈ L such that ∥t − v∥ ≤ δ·λ₁(L).

The preprocessing variants CVPP, γ-CVPP and δ-BDDP additionally allow arbitrary preprocessing given only a description of the lattice L (and not the target t). The size of the final preprocessing advice is counted in the eventual space complexity of the CVPP algorithm or variants thereof. In the remainder we assume without loss of generality that λ₁(L) = 1.
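The following minimal sketch (names ours) turns Heuristics 1 and 2 into concrete estimates: the expected first minimum under the Gaussian heuristic, and the (log₂ of the) number of lattice points in a ball of radius α·λ₁(L).

```python
import math

def gh_lambda1(d, det=1.0):
    # Expected lambda_1(L) ~ sqrt(d / (2*pi*e)) * det(L)^(1/d)
    return math.sqrt(d / (2 * math.pi * math.e)) * det ** (1.0 / d)

def log2_points_in_ball(d, alpha):
    # |L intersect alpha*lambda_1*B^d| = alpha^(d + o(d)), leading term only
    return d * math.log2(alpha)
```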


Algorithm 1. The iterative slicer of [27]
  Input: A target vector t ∈ R^d, a list L ⊂ L.
  Output: A close vector v ∈ L to t.
  Function IterativeSlicer(L, t):
  1: t₀ ← t
  2: for i ← 0, 1, 2, … do
  3:   t_{i+1} ← min_{v ∈ L ∪ {0}} {t_i − v}   (minimum taken with respect to the Euclidean norm)
  4:   if t_{i+1} = t_i then
  5:     return t₀ − t_i

2.4 Solving CVPP with the Randomized Slicer

The (Randomized) Iterative Slicer. The iterative slicer (Algorithm 1) is a simple but effective algorithm that aims to solve the closest vector problem or variants thereof. The preprocessing consists of finding and storing a list L ⊂ L of lattice vectors. Given a target point t ∈ R^d, the iterative slicer then tries to reduce the target t by the list L to a smaller representative t′ ∈ t + L in the same coset of the lattice. This is repeated until the reduction fails or until the algorithm succeeds, i.e. when ∥t′∥ ≤ γ. We then obtain the lattice point t − t′, which lies at distance at most γ from t. Observe that t′ is the shortest vector in t + L if and only if v = t − t′ ∈ L is the closest lattice vector to t.

To provably guarantee that the closest vector is found, we need the preprocessed list L to contain all the Voronoi-relevant vectors, i.e. the vectors that define the Voronoi cell of the lattice. However, most lattices have O(2^d) relevant vectors, which is too much to be practically viable. Under the Gaussian heuristic, Laarhoven [18] showed that 2^{d/2+o(d)} short vectors commonly suffice for the iterative slicer to succeed with high probability, but this number of vectors is still too large for any practical algorithm. The randomized slicer (Algorithm 2) of Doulgerakis–Laarhoven–De Weger [12] attempts to overcome this large list requirement by using a smaller preprocessed list together with rerandomizations: the success probability of one run of the iterative slicer might be small, but by repeating the algorithm many times with randomized inputs from t + L, the algorithm succeeds with high probability, without requiring a larger preprocessed list.

Because we can only use a list of limited size, a natural question is which lattice vectors to include in this list L. Later in the analysis it will become clear that short vectors are more useful for reducing a random target, so it is natural to let L consist of all short vectors up to some radius. Let α > 1 be this radius and denote its square by a := α². The preprocessed list then becomes

    L_a := {x ∈ L : ∥x∥² ≤ a}.    (9)

Recall that we normalized to λ₁(L) = 1, and thus under the Gaussian heuristic this list consists of |L_a| = α^{d+o(d)} lattice points, which (ignoring nearest neighbour data structures) determines both the space complexity of the algorithm and the time complexity of each iteration.

Algorithm 2. The randomized iterative slicer of [12]
  Input: A target vector t ∈ R^d, a list L ⊂ L, a target distance γ ∈ R.
  Output: A close vector v ∈ L such that ∥t − v∥ ≤ γ.
  Function RandomizedSlicer(L, t, γ):
  1: repeat
  2:   t′ ← Sample(t + L)
  3:   v ← IterativeSlicer(L, t′)
  4: until ∥t′ − v∥ ≤ γ
  5: return v + (t − t′)

Until Sect. 7 we restrict our attention to the approximate case γ-CVPP where γ ≥ 1, with γ = 1 corresponding to (average-case) exact CVPP. Throughout we will write c := γ².

Success Probability. The iterative slicer is not guaranteed to succeed, as the list does not contain all relevant vectors. Suppose that the iterative slicer has success probability P_{a,c} given a random target. It is clear that a larger preprocessed list increases the success probability, but in general it is hard to concretely analyse the success probability for a given list. Under the Gaussian heuristic we can derive bounds on P_{a,c}, as was first done by DLW [12]. They obtained the following two regimes for the success probability as d → ∞:

– For a ≥ 2c − 2√(c² − c) we have P_{a,c} → 1.
– For a < 2c − 2√(c² − c) we have P_{a,c} = exp(−C·d + o(d)) for some constant C > 0.

The second case illustrates that for a small list size the algorithm needs to be repeated a large number of times with fresh targets to guarantee a high success probability. This gives us the randomized slicer algorithm. To obtain a fresh target the idea is to randomly sample a not too large element from the coset t + L, and to assume that the reduction of this new target is independent of the initial one. Experiments from DLW suggest that this is a valid assumption, and given a success probability P_{a,c} ≪ 1 it is enough to repeat the algorithm O(1/P_{a,c}) times to find the closest lattice point. However, this success probability in the case a < 2c − 2√(c² − c) is not yet fully understood. Two heuristic lower bounds [12,20] are known and are shown in Fig. 5. Neither of these lower bounds fully dominates the other, which implies that neither of the bounds is sharp. In the remainder of this work we consider this case where we have a small success probability.

3 The Random Walk Model

To interpret the iterative slicer algorithm as a random walk we first look at the probability that a target t is reduced by a random lattice point from the


preprocessed list L_a. By the Gaussian heuristic this lattice point is distributed uniformly over the ball of radius α. To reduce ∥t∥² from x to some y ∈ [(√x − α)², x] by a vector v with ∥v∥² = a, their inner product must satisfy ⟨t, v⟩ < −(a + x − y)/2. Using the formula for the volume of a spherical cap we then deduce the following probability:

    P_{v ∈ α·B^d}[∥t + v∥² ≤ y | ∥t∥² = x] = (1 − (a + x − y)²/(4ax))^{d/2+o(d)}.    (10)

Clearly any reduction to y < (√x − α)² is unreachable by a vector in α·B^d. The probability that the target norm is successfully reduced to some y ≤ ∥t∥² decreases in α, and thus we prefer to have short vectors in our list. As the list L_a does not contain just one, but a^{d/2} lattice vectors, we obtain the following reduction probability for a single iteration of the iterative slicer:

    P[∃v ∈ L_a : ∥t + v∥² ≤ y | ∥t∥² = x]^{2/d} → min{1, a·(1 − (a + x − y)²/(4ax))}

as d → ∞. Note that the reduction probability takes the form exp(−Cd + o(d)) for some constant C ≥ 0 that only depends on a, x and y. As we are interested in the limit behaviour as d → ∞, we focus our attention on this base exp(−C), which we call the base-probability of this reduction and denote by p_a(x, y). Although these transition probabilities represent a reduction to any square norm ≤ y, they should asymptotically be interpreted as a reduction to ≈ y, as for any fixed ε > 0 we have p_a(x, y − ε)^d / p_a(x, y)^d = 2^{−Θ(d)} → 0 as d → ∞. If ∥t∥² = x is large enough we can almost certainly find a lattice point in L_a that reduces this norm successfully. In fact a simple computation shows that this is the case for any x > b := a²/(4a − 4) as d → ∞. So in our analysis we can assume that our target is already reduced to square norm b, and the interesting part is how probable the remaining reduction from b to c is.

[Diagram: a transition p_a(x, y) of the squared target norm ∥t∥², with the marked points 0, 1 ≤ c, y, x, b on the axis.]

Definition 4 (Transition probability). The transition base-probability p_a(x, y) to reduce ∥t∥² from x ∈ [c, b] to y ∈ [c, x] is given by

    p_a(x, y) : S_a → (0, 1],    (11)
    (x, y) ↦ (a − (a + x − y)²/(4x))^{1/2},    (12)

with S_a = {(x, y) ∈ [c, b]² : b ≥ x ≥ y and √x − √y < α} the set of allowed transitions.

Using the above reduction probabilities we model the iterative slicer as a random walk over an infinite graph, where each node x_i ∈ [c, b] is associated with the squared norm ∥t_i∥² of the partly reduced target. Each possible successful random walk b = x₀ → x₁ → ⋯ → x_n = c has a certain success probability. Assuming the different steps are independent, this success probability is just the product of the individual reduction probabilities. For an n-step path we could split our list L_a into n parts, one for each step, to obtain this independence without changing the asymptotic size of these lists. Again this success probability is of the form exp(−Cd + o(d)) for some constant C ≥ 0 that only depends on x₀, …, x_n and a.

Definition 5 (Path). All decreasing n-step paths x₀ → x₁ → ⋯ → x_n with positive probability from b to c are given by the set

    S_a[b →ⁿ c] := {(b = x₀, x₁, …, x_n = c) ∈ R^{n+1} : ∀i, (x_{i−1}, x_i) ∈ S_a}.    (13)

The transition base-probability of such a path is given by

    P_a[b →ⁿ c] : S_a[b →ⁿ c] → (0, 1],    (14)
    x ↦ ∏_{i=1}^n p_a(x_{i−1}, x_i).    (15)

The success probability of reaching c from b is determined by the total probability of all successful paths. Note that all these paths have a probability of the form exp(−Cd + o(d)), and thus the probability of the path with the smallest C ≥ 0 will dominate all other paths for large d. As a result, almost all successful walks will go via the most probable path, i.e. the one with the highest base-probability. After applying a log-transform this becomes equivalent to finding the shortest path in a weighted graph.

Definition 6 (Transition graph). Let V = [c, b] and E = [c, b]², and let G = (V, E) be the infinite graph with weight function w : E → R_{≥0} ∪ {∞} given by:

    w(x, y) = −log p_a(x, y), if (x, y) ∈ S_a;    (16)
    w(x, y) = ∞, otherwise.

One can associate n-step paths in this graph from b to c with the space S_a[b →ⁿ c]. The length of a path x ∈ S_a[b →ⁿ c] is denoted by ℓ_a[b →ⁿ c](x), and the shortest path length by

    ℓ_{a,opt}[b → c] = inf_{n ∈ Z_{≥1}} inf_{x ∈ S_a[b →ⁿ c]} ℓ_a[b →ⁿ c](x).    (17)

Obtaining the success probability in this model therefore becomes equivalent to obtaining the length of the shortest path ℓ_{a,opt}[b → c], as we have P_a[b →ⁿ c](x) = exp(−ℓ_a[b →ⁿ c](x)).


Algorithm 3. A discretized shortest path algorithm [11]
  Input: Parameters a, b, c describing the graph, a discretization value k.
  Output: A shortest path on the discretized graph from b to c.
  Function DiscretizedDijkstra(a, b, c, k):
  1: Compute V_d = {c + i·(b − c)/k : i = 0, …, k}
  2: Compute E_d = {(x, y) ∈ V_d² ∩ S_a} and the weights w_a(x, y)
  3: Compute the shortest path on G_d = (V_d, E_d) from b to c

4 Numerical Approximations

We have reduced the problem of obtaining the success probability of the iterative slicer to the search for a shortest path in a specially constructed weighted infinite graph. We might not always be able to find an exact solution in the input variables for the length of the shortest path. However, for fixed parameters we can always try to numerically approximate the success probability by approximating the shortest path in our infinite graph. We present two fairly standard methods for doing so. The first method discretizes the infinite graph and then determines the shortest path using standard algorithms such as Dijkstra's algorithm [11]. The second method uses the fact that the weight function w_a : S_a → R_{≥0} is convex.

4.1 Discretization

A natural way to approximate the shortest path in an infinite graph is to first discretize it to a finite subgraph. One can then determine the shortest path in this subgraph using standard methods to obtain a short path in the infinite graph. The details of this approach are shown in Algorithm 3. Using any optimized Dijkstra implementation, the time and space complexity of Algorithm 3 is O(|E_d| + |V_d| log |V_d|) = O(k²). In general this method gives a lower bound on the success probability for any fixed a and c. Because the weight function w_a : S_a → R_{≥0} is continuous, Algorithm 3 converges to the optimal path length as k → ∞. The C++ implementation of this method used for the experiments is attached in the complementary material of this work. For this method to converge to the shortest path in the full graph we only need a continuous weight function; furthermore, the number of steps does not have to be specified a priori. The high memory usage of O(k²) could limit the fineness of our discretization. To circumvent this we can generate the edges (and their weights) on the fly when needed, which reduces the memory consumption to O(k).
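A compact Python sketch of Algorithm 3, generating edges on the fly as just described (names and the default k are ours):

```python
import heapq
import math

def shortest_path_exponent(a, c=1.0, k=2000):
    b = a * a / (4 * a - 4)
    nodes = [c + i * (b - c) / k for i in range(k + 1)]
    alpha = math.sqrt(a)
    dist = {k: 0.0}                          # start at x = b (index k)
    heap = [(0.0, k)]
    while heap:
        d_x, i = heapq.heappop(heap)
        if i == 0:
            return d_x                       # reached x = c: shortest length
        if d_x > dist.get(i, float("inf")):
            continue
        x = nodes[i]
        for j in range(i):                   # decreasing transitions only
            y = nodes[j]
            if math.sqrt(x) - math.sqrt(y) >= alpha:
                continue                     # outside S_a: weight infinity
            val = a - (a + x - y) ** 2 / (4 * x)
            if val <= 0:
                continue
            w = -0.5 * math.log(val)         # w_a(x, y) = -log p_a(x, y)
            if d_x + w < dist.get(j, float("inf")):
                dist[j] = d_x + w
                heapq.heappush(heap, (d_x + w, j))
    return float("inf")
```

The heuristic success probability is then roughly exp(−shortest_path_exponent(a, c, k)·d + o(d)), improving from below as k grows.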

4.2 Convex Optimization

Whereas the first method only needed w_a : S_a → R_{≥0} to be continuous, the second method makes use of the convexity of this function.


Lemma 3 (Convexity of S_a and w_a). The set of allowed transitions S_a is convex and the weight function w_a is strictly convex on S_a.

Proof. The convexity of S_a = {(x, y) ∈ [c, b]² : b ≥ x ≥ y and √x − √y < α} follows immediately from the fact that x ↦ √x is concave on [0, ∞). Remember that for (x, y) ∈ S_a

    w_a(x, y) = −log p_a(x, y) = −(1/2)·log(a − (a + x − y)²/(4x)),    (18)

and thus we have

    ∂²w_a/∂x²(x, y) = (8x·p_a(x, y)² + (4a − 2(a + x − y))² − 16·p_a(x, y)⁴) / (32x²·p_a(x, y)⁴),    (19)
    ∂²w_a/∂y∂x(x, y) = (−8x·p_a(x, y)² + (4a − 2(a + x − y))·2(a + x − y)) / (32x²·p_a(x, y)⁴),    (20)
    ∂²w_a/∂y²(x, y) = (8x·p_a(x, y)² + 4(a + x − y)²) / (32x²·p_a(x, y)⁴).    (21)

As p_a(x, y) > 0 and a + x − y ≥ a > 0 for (x, y) ∈ S_a, we have ∂²w_a/∂y²(x, y) > 0. We consider the Hessian H of w_a. Computing the determinant gives

    det(H) = 2(a + x − y)⁴·(4ax − (a + x − y)²) / (1024·x⁶·p_a(x, y)⁸),    (22)

and we can conclude that det(H) > 0 from the fact that 4ax − (a + x − y)² > 0 and (a + x − y)⁴ > 0 for (x, y) ∈ S_a. So H is positive definite, which makes w_a strictly convex on S_a. □

Corollary 1 (Convexity of S_a[b →ⁿ c] and ℓ_a[b →ⁿ c]). The space of n-step paths S_a[b →ⁿ c] is convex, and the length function ℓ_a[b →ⁿ c] is strictly convex on S_a[b →ⁿ c] for any n ≥ 1.

Proof. The convexity of S_a[b →ⁿ c] follows immediately from that of S_a. Note that ℓ_a[b →ⁿ c](x) = ∑_{i=1}^n w_a(x_{i−1}, x_i), and thus it is convex as a sum of convex functions. Furthermore, for each variable at least one of these functions is strictly convex, and thus the sum is strictly convex. □

So for any fixed n ≥ 1 we can use convex optimization to numerically determine the optimal path of n steps. In fact, because of the strict convexity, we know that this optimal path of n steps (if it exists) is unique. However, the question remains what the optimal number of steps is, i.e. for which n we should run the convex optimization algorithm. We might miss the optimal path if we do not guess the optimal number of steps correctly. Luckily, because w_a(b, b) = 0 by definition, we can increase n without the fear of skipping an optimal path.

Lemma 4 (Longer paths are not worse). If ℓ_a[b →ⁿ c] and ℓ_a[b →^{n+k} c] for n, k ≥ 0 both attain a minimum, then

    min_{x ∈ S_a[b →ⁿ c]} ℓ_a[b →ⁿ c](x) ≥ min_{x ∈ S_a[b →^{n+k} c]} ℓ_a[b →^{n+k} c](x).    (23)

Proof. Suppose ℓ_a[b →ⁿ c] attains its minimum at y = (b = y₀, y₁, …, y_n = c) ∈ S_a[b →ⁿ c]. Using that w_a(b, b) = 0 we get

    min_{x ∈ S_a[b →^{n+k} c]} ℓ_a[b →^{n+k} c](x) ≤ ℓ_a[b →^{n+k} c](b, …, b = y₀, …, y_n = c)    (24)
    = k·w_a(b, b) + ℓ_a[b →ⁿ c](y)    (25)
    = ℓ_a[b →ⁿ c](y).    (26)

This completes the proof. □

So increasing n can only improve the optimal result. When running a numerical convex optimization algorithm one can therefore start with a somewhat small n and increase it (e.g. double it) until the result no longer improves.

4.3 Numerical Results

We ran both numerical algorithms and obtained similar results. The convex optimization algorithm gave better results for small a = 1 + ε, as the fineness of the discretization is not enough to represent the almost-shortest paths in this regime. This is easily explained: b ≈ 1/(4ε), and thus for fixed c the distance between b and c, i.e. the interval to be covered by the discretization, quickly grows as ε → 0. The new lower bound that we obtained numerically for exact CVPP (c = 1) is shown in Fig. 5. For α ≤ 1.1047 we observe that the new lower bound is strictly better than the two previous lower bounds. For α > 1.1047 the new lower bound is identical to the lower bound from [12]. Taking a closer look at the short paths we obtained numerically, we see that α ≈ 1.1047 is exactly the moment where this path switches from a single step to at least 2 steps. This makes sense, as in our model the lower bound from [12] can be interpreted as a ‘single step’ analysis. This also explains the asymptote of that lower bound, as for α ≤ 1.0340 it is not possible to walk from b to c = 1 in a single step. When inspecting these short paths b = x₀ → x₁ → ⋯ → x_n = c further, we observed an almost perfect fit with a quadratic formula x_i = u·i² + v·i + b for some constants u, v. In the next section we show how we use this to obtain an exact analytic solution for the shortest path.

5 An Exact Solution for the Randomized Slicer

In order to determine an exact solution for the shortest path, and thus an exact solution for the success probability of the iterative slicer, we use some observations from the numerical results.

Fig. 5. Lower bounds on success probability of the iterative slicer for CVPP (c = 1) computed with a discretization parameter of k = 5000. (Axes: list size 2^{0d}–2^{0.6d} versus success probability 2^{0d}–2^{−1.2d}; curves: Laa'16, Laa'19, DLW'19, Prop. 1, and the optimal bound coinciding with DLW'19 for large lists.)

Due to Corollary 1 we know that for any fixed n ≥ 1 our minimization problem is strictly convex. As a result there can be at most one local minimum which, if it exists, is immediately also the unique global minimum. In order to find an exact solution we explicitly construct the shortest n-step path using observations from the numerical section. Then showing that this path is a local minimum is enough to prove that it is optimal. We recall from Sect. 4.3 that the optimal path x₀ → ⋯ → x_n seems to take the shape x_i = u·i² + v·i + b with x_n = c. So for our construction we assume this shape, which reduces the problem to determining the constants u, v. Furthermore, as we are trying to construct a local minimum, we assume that all partial derivatives in the non-constant variables are equal to 0. This gives enough restrictions to obtain an explicit solution.

Definition 7 (Explicit construction). Let n ≥ 1 and let

    x_i = u_a[b →ⁿ c]·i² + v_a[b →ⁿ c]·i + b,    (27)

with u_a[b →¹ c] := 0, v_a[b →¹ c] := c − b, and for n ≥ 2:

    u_a[b →ⁿ c] := ((b + c − a)·n − √((an² − (b + c))² + 4bc(n² − 1))) / (n³ − n),    (28)
    v_a[b →ⁿ c] := ((a − 2b)·n² + (b − c) + √((an² − (b + c))² + 4bc(n² − 1))·n) / (n³ − n).    (29)


Fig. 6. Some examples of the constructed paths in Definition 7 for a = 1.02, c = 1. (Axes: steps 0–30 versus squared norm from b down to c; curves for 10 steps, 30 steps, and the optimal 24 steps.)

Lemma 5. By construction we have x_n = c and

    ∂/∂x_i ∑_{j=1}^n −log p_a(x_{j−1}, x_j) = 0    (30)

for all i ∈ {1, …, n − 1}.

Proof. Note that the partial derivative constraints can be reduced to the single constraint ∂/∂x_i (−log p_a(x_{i−1}, x_i) − log p_a(x_i, x_{i+1})) = 0 for a symbolic i. Together with the constraint x_n = c one can solve for u, v in x_i = u·i² + v·i + b. For a symbolic verification see the Sage script in Appendix A. □

What remains is to show that the explicit construction indeed gives a valid path, i.e. one that lies in the domain S_a[b →ⁿ c]. An example of how these constructed paths look is given in Fig. 6. We observe that if n becomes too large these constructed paths are invalid, as they walk outside the interval [c, b]. This is an artefact of our simplification that w_a(x, y) = −log p_a(x, y), which does not hold for (x, y) ∉ S_a. We can still ask for which n this construction is actually valid.

Lemma 6 (Valid constructions). Let (b − c)/a ≤ n < 1/2 + √((4b − a)² − 8(2b − a)c)/(2a) and

    x_i = u_a[b →ⁿ c]·i² + v_a[b →ⁿ c]·i + b.    (31)

Then x = (x₀, …, x_n) ∈ S_a[b →ⁿ c] and x is the unique minimum of ℓ_a[b →ⁿ c].


Proof. We have to check that x satisfies the two conditions

    x_{i−1} ≥ x_i and √x_{i−1} − √x_i < α,    (32)

for all i ∈ {1, …, n}. Note that for n = 0 we must have b = c and the statement becomes trivial. For n = 1 we have x = (b, c), and the conditions follow from 0 ≤ b − c ≤ na ≤ a. So we can assume that n ≥ 2. First we rewrite u_a[b →ⁿ c] as

    u_a[b →ⁿ c] = ((b + c − a)·n − √(((b + c − a)·n)² + (a²n² − (b − c)²)(n² − 1))) / (n³ − n),    (33)

which makes it clear that u_a[b →ⁿ c] ≤ 0 when an ≥ b − c. As a result the differences

    x_{i−1} − x_i = (1 − 2i)·u_a[b →ⁿ c] − v_a[b →ⁿ c]    (34)

are increasing in i ∈ {1, …, n}. Therefore for the first condition it is enough to check that

    x₀ − x₁ = ((b − c) + (2b − a)·n − √((an² − (b + c))² + 4bc(n² − 1))) / (n² + n) ≥ 0.    (35)

In fact a solution with x₀ = x₁ = b is not so interesting, so solving for x₀ − x₁ > 0 gives for n ≥ 2 the sufficient condition

    n < 1/2 + √((4b − a)² − 8(2b − a)c) / (2a).    (36)

For the second condition we first show the stronger property that x_{i−1} − x_i ≤ a; again by the increasing differences it is enough to show that x_{n−1} − x_n ≤ a, and rewriting gives the following sufficient statement for n ≥ 2:

    −an + b − c ≤ 0.    (37)

Now we prove that √x_{i−1} − √x_i < α. If x_{i−1} = x_i the condition holds trivially; else x_{i−1} > x_i and we get

    (√x_{i−1} − √x_i)² < (√x_{i−1} − √x_i)(√x_{i−1} + √x_i) = x_{i−1} − x_i ≤ a.    (38)

We conclude that x ∈ S_a[b →ⁿ c]. As ℓ_a[b →ⁿ c](x) = ∑_{i=1}^n −log p_a(x_{i−1}, x_i) on S_a[b →ⁿ c], the claim that this is a global minimum follows from Lemma 5 and Corollary 1. □

So by Lemma 6 there exists some s ∈ N such that for all (b − c)/a ≤ n ≤ s we have an explicit construction for the optimal n-step path. By Lemma 4 we know that among these paths the one with n = s steps must be the shortest. However for n > s our construction does not work, and thus we do not know whether any shorter path exists. Inspired by Lemma 4 and the numerical results we obtain the following alternative exact solution for n > s.

Theorem 1 (Optimal arbitrary-step paths). Let n satisfy

    n = ⌈−1/2 + √((4b − a)² − 8(2b − a)c)/(2a)⌉.    (39)

For k ≥ n the unique global minimum of ℓ_a[b →ᵏ c] is given by

    x = (b, …, b, b = y₀, …, y_n = c) ∈ S_a[b →ᵏ c]    (40)

with y_i = u_a[b →ⁿ c]·i² + v_a[b →ⁿ c]·i + b, and the length is equal to ℓ_a[b →ⁿ c](y).

Proof. By Corollary 1 it is enough to show that x is a local minimum, therefore we check the partial derivatives. For i > k − n we have ∂/∂x_i ℓ_a[b →ᵏ c](x) = ∂/∂x_i ℓ_a[b →ⁿ c](y) = 0 by construction. For i < k − n we have x_{i−1} = x_i = x_{i+1} = b, which results in ∂/∂x_i ℓ_a[b →ᵏ c](x) = −(a − 1)/(2b) < 0. For the most interesting case i = k − n we need that n ≥ −1/2 + √((4b − a)² − 8(2b − a)c)/(2a): because of this, y₀ − y₁ ≤ a²/(2b − a), which together with y₀ − y₁ ≤ b − c ≤ b − 1 is precisely enough to show that ∂/∂x_{k−n} ℓ_a[b →ᵏ c](x) ≤ 0.

To conclude, let z ≠ x ∈ S_a[b →ᵏ c]; then by Corollary 1 and using that z_i − x_i = z_i − b ≤ 0 for all 0 ≤ i ≤ k − n we have

    ℓ_a[b →ᵏ c](z) > ℓ_a[b →ᵏ c](x) + ⟨z − x, ∇ℓ_a[b →ᵏ c](x)⟩    (41)
    = ℓ_a[b →ᵏ c](x) + ∑_{i≤k−n} (z_i − x_i)·(∂/∂x_i)ℓ_a[b →ᵏ c](x) ≥ ℓ_a[b →ᵏ c](x),    (42)

and thus x is the unique global minimum of ℓ_a[b →ᵏ c]. □

Corollary 2 (Optimal minimum-step paths). The optimal path from b to c consists of n steps, with n defined by Eq. (39). The optimal path is of the form b = x₀ → x₁ → ⋯ → x_n = c with x_i = u_a[b →ⁿ c]·i² + v_a[b →ⁿ c]·i + b.

Heuristic claim 5. Given the optimal path b = x₀ → ⋯ → x_n = c from Corollary 2, the success probability of the iterative slicer algorithm for γ-CVPP is given by

    exp(−(∑_{i=1}^n w_a(x_{i−1}, x_i))·d + o(d)).    (43)
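Under the stated assumptions, the closed-form path and the exponent of Heuristic claim 5 can be evaluated directly; the sketch below (names ours) implements Eqs. (27)–(29), (39) and (43), and assumes a valid construction as in Lemma 6. For a = 1.02 and c = 1 it reproduces the 24-step optimal path of Fig. 6.

```python
import math

def optimal_path(a, c=1.0):
    b = a * a / (4 * a - 4)
    n = math.ceil(-0.5 + math.sqrt((4 * b - a) ** 2 - 8 * (2 * b - a) * c) / (2 * a))
    if n <= 1:
        return [b, c]                   # single-step regime (u = 0, v = c - b)
    root = math.sqrt((a * n * n - (b + c)) ** 2 + 4 * b * c * (n * n - 1))
    u = ((b + c - a) * n - root) / (n ** 3 - n)
    v = ((a - 2 * b) * n * n + (b - c) + root * n) / (n ** 3 - n)
    return [u * i * i + v * i + b for i in range(n + 1)]

def success_exponent(a, c=1.0):
    # ell_a[b -> c]; the success probability is exp(-exponent * d + o(d))
    path = optimal_path(a, c)
    return sum(-0.5 * math.log(a - (a + x - y) ** 2 / (4 * x))
               for x, y in zip(path, path[1:]))

# Example: len(optimal_path(1.02)) - 1 == 24, matching Fig. 6.
```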

As we have an exact formula for the optimal number of steps, and the lower bound from DLW [12] uses a ‘single-step’ analysis, we know exactly in which regime Corollary 2 improves on theirs: namely for those a > 1 and c ≥ 1 such that n defined by Eq. (39) satisfies n > 1. For exact CVPP we obtain improvements for a < 1.22033.

Fig. 7. Optimal number of steps n against the list size |L| = α^{d+o(d)} = a^{d/2+o(d)}, for approximate CVPP with several approximation factors (exact CVPP switches to a single step at list size 2^{0.1436d}). We improve upon DLW whenever n > 1. For large list sizes the optimal number of steps of cost exp(−Cd + o(d)) drops to 0, as then the success probability of the iterative slicer equals 2^{−o(d)}.

This improvement can also be visualized through Fig. 7, which plots the optimal number of steps against the size of the preprocessed list. Whenever the optimal strategy involves taking more than one step, we improve upon DLW. For the crossover points where the optimal number of steps changes we have a more succinct formula for the shortest path and the success probability.

Lemma 7 (Success probability for integral n). If n defined similarly to Eq. (39), but without rounding up, is integral, then the optimal path from b to c has probability

    ((a/(2 − a))ⁿ · (1 − 2n(a − 1)/(2 − a)))^{d/2+o(d)}.    (44)

Proof. For such n we obtain the expression x_i = b − (i + 1)·i·(a² − a)/(2 − a). The result follows from simplifying the remaining expression. □

Using this special case we can easily analyse the success probability in the low-memory regime.

Corollary 3 (Low-memory asymptotics). For a fixed c ≥ 1 and a = 1 + ε, the success probability of the optimal path from b to c equals (2eε + o(ε))^{d/2+o(d)}.


The above improves upon the lower bound of (4ε + o(ε))^{d/2+o(d)} of Laarhoven [20]. Using a similar methodology to [20], to obtain a polynomial space complexity a^{d/2+o(d)} = d^{Θ(1)} we set ε = Θ((log d)/d), resulting in a success probability of e^{−(1/2)·d·ln d + o(d·ln d)}. We nevertheless stress that drawing conclusions on the efficiency of the iterative slicer for ε = o(1) is far from rigorous: first, the analysis assumes a space complexity of a^{d/2+o(d)} for a constant a > 1; second, the optimal path now requires a non-constant number of steps, and the o(d) terms in the exponent may accumulate to linear or super-linear terms. To make this rigorous one would require a more extensive analysis of the lower-order terms.

6 Memoryless Nearest Neighbour Searching

Nearest Neighbour Searching Techniques. The main subroutine of the iterative slicer is to find lattice vectors close to a target t in a large list L, also known as the nearest neighbour search (NNS) problem. By preprocessing the list and storing it in certain query-friendly data structures, we can find a close vector much faster than by naively going through all vectors in the list. Generally we obtain a trade-off between the size of the NNS data structure (and the time to generate and populate this data structure) and the eventual query complexity of finding a nearest neighbour given a target vector.

A well-known technique for finding near neighbours is locality-sensitive hashing (LSH). The idea is that a hash function partitions the space into buckets, such that two vectors that are near neighbours are more likely to fall into the same bucket than a general pair of vectors. Preprocessing then consists of indexing the list L in these buckets, for each of several hash functions. Using a hash table we can then quickly look up all list vectors that lie in the same bucket as our query vector, to find candidate near neighbours. A query t is then answered by searching for a close vector in the buckets, one for each hash function, that correspond to t. Given the correct parameters this leads to a query time of |L|^{ρ+o(1)} for some ρ < 1. More hash functions giving finer partitions can reduce the query time at the cost of extra storage for the required number of hash tables. Locality-sensitive filters (LSF) were later proposed as a generalization of LSH, where the space is not necessarily partitioned into buckets, but where regions can overlap – some vectors may end up in multiple buckets for one hash function, and some may end up in none of them. Currently the best nearest neighbour complexities for large lists are achieved by using spherical locality-sensitive filters [6].

Nearest Neighbour Search in Batches. The drawback of NNS data structures is that they can increase the memory usage significantly. As for the iterative slicer this memory could also be used for a larger list L, thus giving a higher success probability, the current optimal time-memory trade-offs only spend a small amount of memory on the NNS data structure.


However, as already introduced in [7] and later applied in [12,18] and [6], we can reduce the query time significantly without any extra memory in case we process multiple queries at the same time. Suppose we have |L| targets; then to process all these queries we need as many hash computations as one would need for the precomputation of the list. As a result we can just process each hash function one by one on our list L and our list of targets, immediately processing the list and target vectors that fall in the same bucket. In the end this is equivalent to first preprocessing the list L and then running all queries one by one, however without using more than Õ(|L|) memory. So we can achieve low amortized query times for large batches, without using any extra memory.

Lemma 8 (Batch NNS [6]). Given a list of size |L| = α^{d+o(d)} uniformly distributed over S^{d−1} and a batch of targets of size |B| ≥ |L|, we can solve the nearest-neighbour problem with an amortized cost per target of

    T = (a − 2(a − 1)/(1 + √(1 − 1/a)))^{−d/2}    (45)

using only α^{d+o(d)} space.

Batches from Rerandomization. Note that for the randomized slicer we naturally obtain a batch of rerandomized targets of size |B| = O(1/P_{a,c}). In case the number of rerandomized targets is larger than the list size |L|, we can generate and process these targets in batches of |L| at a time, thereby making use of optimal NNS parameters without any extra memory. This idea significantly improves the time-memory trade-off compared to the current state of the art, as shown in Fig. 4. Also note that in the higher memory regimes, where we do not have enough rerandomized targets to do this, we still lower the necessary batch sizes for this technique to work by a factor of one over the success probability.

Heuristic claim 6 (Improved memory usage for batch-CVPP with NNS). Suppose we have a list of size |L| = α^{d+o(d)}, and suppose we are given a batch of at least B γ-CVPP instances, with

    B = max(1, α^{d+o(d)}·P_{a,c}).    (46)

Then we can heuristically solve this entire batch of γ-CVPP instances with the following amortized complexities per CVPP instance:

    S = α^{d+o(d)},    T = (1/P_{a,c})·(a − 2(a − 1)/(1 + √(1 − 1/a)))^{−d/2+o(d)}.    (47)
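The trade-off of Heuristic claim 6 is easy to tabulate; the sketch below (names ours) returns the per-dimension log₂ exponents of space, batch size and amortized time from Eqs. (45)–(47), given the success-probability exponent p_exp with P_{a,c} = 2^{−p_exp·d}.

```python
import math

def batch_cvpp_exponents(a, p_exp):
    space = 0.5 * math.log2(a)                   # |L| = alpha^d = a^(d/2)
    nns = -0.5 * math.log2(a - 2 * (a - 1) / (1 + math.sqrt(1 - 1 / a)))
    batch = max(0.0, space - p_exp)              # B = max(1, alpha^d * P_{a,c})
    time = p_exp + nns                           # T = (1/P_{a,c}) * Eq. (45)
    return space, batch, time
```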

7 Bounded Distance Decoding with Preprocessing

We consider the success probability of the iterative slicer for bounded distance decoding. Instead of assuming that our target lies at distance λ₁(L) from the lattice,


we now get the guarantee that our target lies at distance δ·λ₁(L) from the lattice. To incorporate this into our model we start with the same graph G = (V, E) with V = [1, b] and weight function w_a from Definition 6. However, we add a single extra node, V′ = V ∪ {δ²}, to the graph that represents our goal, i.e. the reduced target t′ with norm δ. We have to determine the base-probability of transitioning from a target t of squared norm x to our goal t′ of norm at most δ using a lattice vector v ∈ L_a. Because the reduction vector v = t − t′ can be assumed to be uniformly distributed over B(t, δ), we obtain the following base-probability of the reduction as d → ∞:

    P_{v ∈ B(t,δ)}(v ∈ L_a)^{2/d} →
      1,                                                   if x ≤ a − δ²;
      (−x² + 2x(δ² + a) − (a − δ²)²)/(4xδ²),               if a − δ² < x < (α + δ)²;
      0,                                                   otherwise.

Given the base-probability of transitioning from a target t to our goal t′, we extend the weight function to the edges (x, δ²) in the natural way. As before, we can now run the numerical approximation algorithm from Sect. 4.1 to obtain a lower bound on the success probability. The results are shown in Fig. 8 and improve on those from [12] in the low-memory regime. We do not see any obstruction to an exact analysis for BDDP similar to that of Sect. 5, but it is out of the scope of this paper. We also expect these numerical results to be sharp, just as in the approximate CVPP case.

In Fig. 9 we show the resulting δ-BDDP time-memory trade-off with memory-intensive NNS, similar to Fig. 3. The memoryless NNS technique from Sect. 6 could also be applied directly to (batch-)BDDP, to obtain even better amortized complexities. We also note from Fig. 9 that our bound for the time complexity of δ-BDDP is always smaller than that of δ′-BDDP for δ < δ′, as one would naturally expect. This resolves another mystery left by the analysis of [12], for which this was not the case.

We observe that the BDD guarantee does not improve the success probabilities that much, certainly not in the low-memory regime; the iterative slicer does not seem to fully exploit the BDD guarantee. An explanation in the low-memory regime is that only the ‘last’ step can benefit from the BDD guarantee. For all other steps, of which there are many in the low-memory regime, the BDD guarantee does not improve the transition probabilities. Therefore we cannot expect the algorithm to perform significantly better in the low-memory regime with the BDD guarantee than without. An open problem would be to adapt the iterative slicer to make better use of this guarantee.
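In code, the extra edge weights towards the BDD goal node δ² follow directly from the case distinction above (a sketch; names ours):

```python
import math

def bdd_goal_base(a, delta, x):
    alpha = math.sqrt(a)
    if x <= a - delta ** 2:
        return 1.0                               # reduction succeeds for free
    if x < (alpha + delta) ** 2:
        return (-x * x + 2 * x * (delta ** 2 + a)
                - (a - delta ** 2) ** 2) / (4 * x * delta ** 2)
    return 0.0                                   # goal unreachable in one step

def bdd_goal_weight(a, delta, x):
    # per-dimension weight: probability = base^(d/2 + o(d))
    base = bdd_goal_base(a, delta, x)
    return float("inf") if base == 0 else -0.5 * math.log(base)
```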

8 Application to Graph-Based NNS

Besides nearest-neighbour search data structures based on locality-sensitive hashing or filters, as seen in Sect. 6, there also exists a graph-based variant. Although graph-based nearest-neighbour data structures have proven to be very efficient in practice [5], their theoretical analysis has only been considered very recently [19,24].

Fig. 8. Success probability of the iterative slicer for δ-BDDP with δ ∈ {0, 0.2, 0.4, 0.6, 0.8, 1}, computed with a discretization parameter of k = 5000. (Axes: list size 2^{0d}–2^{0.5d} versus success probability 2^{0d}–2^{−0.5d}; the 0-BDDP curve is highest and the 1-BDDP curve coincides with CVPP.)

Fig. 9. Time complexities for δ-BDDP with memory-intensive nearest neighbour searching. (Axes: space complexity (≥ list size) 2^{0d}–2^{0.6d} versus time complexity 2^{0d}–2^{1.2d}; curves for 0-BDDP (new), 0-BDDP (old), 1-BDDP and CVPP.)


Preprocessing consists of constructing a nearest-neighbour graph of the list L, and the query phase consists of a greedy walk on this graph that hopefully ends at the closest vector to the given target.

Definition 8 (α-near neighbour graph). Let L ⊂ S^{d−1} and α ∈ (0, 1). We define the α-near neighbour graph G = (V, E) with V = L and (x, y) ∈ E if and only if ⟨x, y⟩ ≥ α.

Given a target t, the query phase starts at some random node x ∈ L of the α-near neighbour graph. Then it tries to find a neighbour y of x in the graph that lies closer to t. This is repeated until such a closer neighbour no longer exists, or until a close enough neighbour is found. Note that for α ≈ 0 this is equivalent to a brute-force algorithm with time O(N) for N = |L|; for larger α, however, the number of neighbours can be much lower than N, possibly resulting in lower query times. Just as for the iterative slicer there is no guarantee that the nearest neighbour of t is found. This success probability decreases as the graph becomes sparser, and just as for the iterative slicer we achieve a good probability of answering the query successfully by repeating the algorithm. The rerandomization in this case is achieved by starting the greedy walk at a different node of the graph.

In the context of lattice problems we are mainly interested in NNS in the setting that |L| = (4/3)^{d/2}, and thus we will focus on that, but our model is certainly not limited to it. In this setting the points in our list are uniformly distributed over the sphere. Laarhoven [19] was the first to formalize the success probability, and this resulted in a lower bound using techniques similar to those used by DLW [12]. We show that this lower bound on the success probability is not sharp for all parameters α, and our analysis gives the true asymptotic success probability, again using the random walk model. In this case the distance measure is taken to be the cosine ⟨v, t⟩ of the angle between the vector and the target. Note that in this setting the goal is to find a v ∈ L such that ⟨v, t⟩ ≥ 1/2 by greedily walking over the graph, decreasing this angle in each step if possible. Again, given α there is some β ≤ 1/2 such that with high probability we end up at some v ∈ L with ⟨v, t⟩ ≈ β. So just as in Sect. 3 the success probability is determined by the most probable path from β to 1/2. The transition probability from x to y is equal to (4/3)^{d/2}·W(α, y, x) [19].

Heuristic claim 7 (Success probability of graph-NNS). Let L ⊂ S^{d−1} be a uniformly distributed list of size (4/3)^{d/2+o(d)}. Let α ∈ (0, 1/2) and β = max{1/2, √((1 − 4α²)/(5 − 8α))}. Let G = (V, E) be the infinite graph with V = [β, 1/2] and weight function

    w_{α,nns}(x, y) = max{0, −(1/2)·log((4/3)·(1 − (α² + y² − 2αxy)/(1 − x²)))}.    (48)

Fig. 10. Asymptotic exponents for heuristic lattice sieving methods for solving SVP in dimension d, using near neighbour techniques. (Axes: space complexity 2^{0.20d}–2^{0.40d} versus time complexity 2^{0.25d}–2^{0.45d}; curves include GraphSieve (old), GraphSieve (new), HashSieve, SphereSieve and LDSieve.)

Let x₀ → ⋯ → x_n be the shortest path in G from β to 1/2. Then the success probability of a single greedy walk in the α-near neighbour graph of L is given by

    exp(−(∑_{i=1}^n w_{α,nns}(x_{i−1}, x_i))·d + o(d)).    (49)
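A small sketch of the weight function of Heuristic claim 7 (names ours; we use the reconstruction of Eq. (48) in which weights are clipped at 0, i.e. transitions of probability ≈ 1 are free):

```python
import math

def w_nns(alpha, x, y):
    # Per-step weight for the greedy walk on the alpha-near neighbour graph,
    # with list size (4/3)^(d/2); infinite weight for unreachable transitions.
    wedge = 1 - (alpha ** 2 + y ** 2 - 2 * alpha * x * y) / (1 - x ** 2)
    if wedge <= 0:
        return float("inf")
    return max(0.0, -0.5 * math.log((4.0 / 3.0) * wedge))
```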

We do not see major problems in finding an exact solution for the shortest path, but this is out of the scope of this paper. The results of a numerical approximation using the techniques from Sect. 4 are shown in Fig. 10.

Acknowledgements. Leo Ducas was supported by the European Union H2020 Research and Innovation Program Grant 780701 (PROMETHEUS) and the Veni Innovational Research Grant from NWO under project number 639.021.645. Thijs Laarhoven was supported by a Veni Innovational Research Grant from NWO under project number 016.Veni.192.005. Wessel van Woerden was supported by the ERC Advanced Grant 740972 (ALGSTRONGCRYPTO).

Appendix A: Sage Code for Symbolic Verification

Sage code for the symbolic verification of the statements in this paper.


A.1 Lemma (Strict Convexity)

We check that the given partial derivatives in the Lemma are correct.

A.2 Definition (Explicit Constructions)

We check that the explicit construction indeed satisfies the mentioned properties.

A.3 Lemma (Valid Construction)

We check that the explicit construction is valid for

    (b − c)/a ≤ n < 1/2 + √((4b − a)² − 8(2b − a)c)/(2a).

We first need to verify that x₀ − x₁ > 0. We do this by rewriting the problem to that of showing that a degree-3 polynomial in n with positive leading coefficient is negative. Our n is between the second and third root and thus we can conclude.


Next we check that x_{n−1} − x_n ≤ a for n ≥ max(2, (b − c)/a), again by rewriting the equations.


A.4 Theorem (Optimal Arbitrary-Length Paths)

The case i < k − n is easily verified.

For the case i = k − n we first show that y₀ − y₁ ≤ a²/(2b − a).

Tweaking the Asymmetry of Asymmetric-Key Cryptography on Lattices (J. Zhang et al.)

A larger ν (i.e., ν > 2^{−128}) may sacrifice the security, and a smaller ν (i.e., ν < 2^{−128}) may compromise the performance. To this end, we introduce special variants of SIS and LWE, referred to as asymmetric SIS (ASIS) and asymmetric LWE (ALWE). Informally, the ASIS problem ASIS^∞_{n,m₁,m₂,q,β₁,β₂} refers to the problem that, given a random A ←$ Z_q^{n×(m₁+m₂)}, find a non-zero x = (x₁ᵀ, x₂ᵀ)ᵀ ∈ Z^{m₁+m₂} satisfying Ax = 0 mod q, ∥x₁∥_∞ ≤ β₁ and ∥x₂∥_∞ ≤ β₂. It is easy to see that ASIS^∞_{n,m₁,m₂,q,β₁,β₂} is at least as hard as SIS^∞_{n,m₁+m₂,q,max(β₁,β₂)}. Thus, we have

    SIS^∞_{n,m₁+m₂,q,max(β₁,β₂)} ⪯ ASIS^∞_{n,m₁,m₂,q,β₁,β₂} ⪯ SIS^∞_{n,m₁+m₂,q,min(β₁,β₂)}.

This lays the theoretical foundation for constructing secure signatures based on the ASIS problem. In addition, we investigate a class of algorithms for solving the ASIS problem, and provide a method for selecting appropriate parameters for different security levels with reasonable security margin.

Correspondingly, the ALWE problem ALWE_{n,m,q,α₁,α₂} asks to find s ∈ Z_q^n from samples (A, b = As + e) ∈ Z_q^{m×n} × Z_q^m, where A ←$ Z_q^{m×n}, s ←$ χ^n_{α₁}, e ←$ χ^m_{α₂}. The hardness of ALWE may depend on the actual distribution from which s (or e) is sampled, and thus we cannot simply compare the hardness of LWE and ALWE as we did for SIS and ASIS. However, the relation below remains valid for our parameter choices with respect to all known solving algorithms, despite the lack of a proof in general:¹

    LWE_{n,m,q,min(α₁,α₂)} ⪯ ALWE_{n,m,q,α₁,α₂} ⪯ LWE_{n,m,q,max(α₁,α₂)}.

More importantly, the literature [9,16,26] suggests that ALWE can reach hardness comparable to standard LWE as long as the secret is sampled from a distribution (i.e., χ^n_{α₁}) with sufficiently large entropy (e.g., the uniform distribution over {0, 1}^n) and appropriate values are chosen for the other parameters. This shows the possibility of constructing secure cryptographic schemes based on the ALWE problem. We also note that Cheon et al. [11] introduced a variant of LWE that is quite related to ALWE, where s and e are sampled from different distributions (notice that s and e in the ALWE problem are sampled from the same distribution χ, albeit with different parameters α₁ and α₂). By comprehensively comparing, analyzing and optimizing the state-of-the-art LWE solving algorithms, we establish approximate relations between parameters of ALWE and LWE, and suggest practical parameter choices for several levels of security strength intended for ALWE.

¹ In the full version, we show that the relations actually hold for discrete Gaussian distributions and binomial distributions under certain choices of parameters.
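A toy ALWE instance generator makes the asymmetry concrete (a sketch with illustrative parameters; χ is instantiated here with centered binomial distributions, and all names are ours):

```python
import random

def binomial(eta):
    return sum(random.randint(0, 1) - random.randint(0, 1) for _ in range(eta))

def alwe_sample(n=8, m=16, q=3329, eta1=1, eta2=2):
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    s = [binomial(eta1) for _ in range(n)]    # secret, narrower distribution
    e = [binomial(eta2) for _ in range(m)]    # noise, wider distribution
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return A, b, s
```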


The definitions of the aforementioned variants can be naturally generalized to the corresponding ring and module versions, i.e., ring-LWE/SIS and module-LWE/SIS. As exhibited in [6,12], module-LWE/SIS allows for a better trade-off between security and performance. We will use the asymmetric module-LWE problem (AMLWE) and the asymmetric module-SIS problem (AMSIS) to build a key encapsulation mechanism (KEM) and a signature scheme of smaller sizes.

Technically, our KEM scheme is mainly based on the PKE schemes in [6,22], except that we make several modifications to utilize the inherent asymmetry of the (M)LWE secret and noise in contributing to the decryption failure probability, which allows us to obtain smaller public keys and ciphertexts. In Sect. 3.1, we further discuss this asymmetry in the design of existing schemes and illustrate our design rationale in more detail. For a targeted 128-bit security, the public key (resp., ciphertext) of our KEM only has 896 bytes (resp., 992 bytes).

Our signature scheme bears most resemblance to Dilithium [12]. The main difference is that we make several modifications to utilize the asymmetric parameterization of the (M)LWE and (M)SIS problems to reach better trade-offs among computational costs, storage overhead and security, which yields smaller public keys and signatures without sacrificing the security or computational efficiency. In Sect. 4.1, we further discuss the asymmetries in existing constructions and illustrate our design rationale in more detail. For a targeted 128-bit quantum security, the public key (resp., signature) of our signature scheme only has 1312 bytes (resp., 2445 bytes).

We make a comprehensive and in-depth study of the concrete hardness of AMLWE and AMSIS by adapting the best known attacks (that were originally intended for MLWE and MSIS respectively) and their variants (that were modified to solve AMLWE and AMSIS respectively), and provide several choices of parameters for our KEM and signature schemes aiming at different security strengths. The implementation of our schemes (and its comparison with the counterparts) confirms that our schemes are practical and competitive. We compare our KEM with NIST round2 lattice-based PKEs/KEMs in Sect. 1.1, and compare our signature with NIST round2 lattice-based signatures in Sect. 1.2.

1.1 Comparison with NIST Round2 Lattice-Based PKEs/KEMs

As our KEM is built upon Kyber [6], we first give a slightly more detailed comparison between our KEM and Kyber-round2 [6] in Table 1. Our software is implemented in C with optimized number theoretic transform (NTT) and vector multiplication using AVX2 instructions. The running times of the KeyGen, Encap and Decap algorithms are measured in averaged CPU cycles over 10000 runs on a 64-bit Ubuntu 14.4 LTS ThinkCenter desktop (equipped with an Intel Core-i7 4790 3.6 GHz CPU and 4 GB memory). The sizes of the public key |pk|, secret key |sk| and ciphertext |C| are measured in bytes. The column |ss| gives the size of the session key that is encapsulated by each ciphertext. The column “Dec. failure” lists the probabilities of decryption failure. The last column “Quant. Sec.” gives the estimated quantum security level expressed in bits.


Table 1. Comparison between our KEM ΠKEM and Kyber-round2

| Schemes    | |pk| (Bytes) | |sk| (Bytes) | |C| (Bytes) | |ss| (Bytes) | KeyGen (AVX2) | Encap (AVX2) | Decap (AVX2) | Dec. failure | Quant. Sec. |
|------------|--------------|--------------|-------------|--------------|---------------|--------------|--------------|--------------|-------------|
| Kyber-512  | 800          | 1632         | 736         | 32           | 37 792        | 54 465       | 41 614       | 2^{−178}     | 100         |
| ΠKEM-512   | 672          | 1568         | 672         | 32           | 66 089        | 70 546       | 56 385       | 2^{−82}      | 102         |
| ΠKEM-512†  | 800          | 1632         | 640         | 32           | -             | -            | -            | 2^{−82}      | 99          |
| Kyber-768  | 1184         | 2400         | 1088        | 32           | 66 760        | 86 608       | 69 449       | 2^{−164}     | 164         |
| ΠKEM-768   | 896          | 2208         | 992         | 32           | 84 504        | 93 069       | 76 568       | 2^{−128}     | 147         |
| ΠKEM-768†  | 1184         | 2400         | 960         | 32           | -             | -            | -            | 2^{−130}     | 157         |
| Kyber-1024 | 1568         | 3168         | 1568        | 32           | 88 503        | 116 610      | 96 100       | 2^{−174}     | 230         |
| ΠKEM-1024  | 1472         | 3392         | 1536        | 64           | 115 268       | 106 740      | 92 447       | 2^{−211}     | 213         |
| ΠKEM-1024† | 1728         | 3648         | 1472        | 64           | -             | -            | -            | 2^{−198}     | 206         |

Note that for X ∈ {512, 768, 1024}, aiming at NIST Categories I, III and V, the estimated quantum security of our KEM ΠKEM-X is slightly lower than that of Kyber-X, but we emphasize that our parameter choices have left sufficient security margin reserved for further developments of attacks. For example, our ΠKEM-768 reaches an estimated quantum security of 147 bits and a 2^{−128} decryption failure probability, which we believe is sufficient to claim the same targeted 128-bit quantum security (i.e., NIST Category III) as Kyber-768. We also note that the parameter choice of ΠKEM-1024 is set to encapsulate a 64-byte session key, which is twice the size of that achieved by Kyber-1024. This decision is based on the fact that a 32-byte session key may not be able to provide a matching security strength, say, more than 210-bit quantum security (even if the Grover algorithm [17] cannot provide a real quadratic speedup over classical algorithms in practice).

We note that the Kyber team [6] removed the public-key compression to base their Kyber-round2 scheme purely on the standard MLWE problem, and to obtain (slightly) better computational efficiency (saving several operations such as NTTs). On the one hand, as commented by the Kyber team that “we strongly believe that this didn't lower actual security”, we prefer to use the public-key compression to obtain smaller public key sizes (at the cost of slightly worse computational performance). On the other hand, one can remove the public-key compression and still obtain a scheme with shorter ciphertext size (see ΠKEM-X† in Table 1), e.g., a reduction of 128 bytes in the ciphertext size over Kyber-768 at the targeted 128-bit quantum security by using a new parameter set (n, k, q, η₁, η₂, d_u, d_v) = (256, 3, 3329, 1, 2, 9, 3) (see ΠKEM-X† in Table 5).

We also give a comparison between our KEM and the NIST round2 lattice-based PKEs/KEMs in Table 2. For simplicity, we only compare those schemes under the parameter choices targeting IND-CCA security and 128-bit quantum security, in terms of space and time (measured in averaged CPU cycles over 10000 runs) on the same computer. We failed to run the software of the schemes marked with ‘∗’ on our experiment computer (but a public evaluation of the round1 submissions suggests that Three-Bears may have better computational efficiency than Kyber and ours). As shown in Table 2, our ΠKEM has a very competitive performance in terms of both sizes and computational efficiency.


Table 2. Comparison between ΠKEM and NIST Round2 lattice-based PKEs/KEMs

| Schemes      | |pk| (Bytes) | |sk| (Bytes) | |C| (Bytes) | KeyGen (AVX2) | Encap (AVX2) | Decap (AVX2) | Problems     |
|--------------|--------------|--------------|-------------|---------------|--------------|--------------|--------------|
| Frodo∗       | 15 632       | 31 296       | 15 744      | -             | -            | -            | LWE          |
| Kyber        | 1184         | 2400         | 1088        | 66 760        | 86 608       | 69 449       | MLWE         |
| LAC          | 1056         | 2080         | 1188        | 108 724       | 166 458      | 208 814      | RLWE         |
| Newhope      | 1824         | 3680         | 2208        | 146 909       | 233 308      | 237 619      | RLWE         |
| NTRU-Prime∗  | 1158         | 1763         | 1039        | -             | -            | -            | NTRU variant |
| NTRU         | 1138         | 1450         | 1138        | 378 728       | 109 929      | 75 905       | NTRU         |
| Round5∗      | 983          | 1031         | 1119        | -             | -            | -            | GLWR         |
| Saber        | 992          | 2304         | 1088        | 117 504       | 139 044      | 133 875      | MLWER        |
| Three-Bears∗ | 1194         | 40           | 1307        | -             | -            | -            | MLWE variant |
| ΠKEM-768     | 896          | 2208         | 992         | 84 504        | 93 069       | 76 568       | AMLWE        |

1.2 Comparison with NIST Round2 Lattice-Based Signatures

We first give a slightly more detailed comparison of our signature ΠSIG with Dilithium-round2 [12] in Table 3. Similarly, the running times of the KeyGen, Sign and Verify algorithms are measured in the average number of CPU cycles (over 10000 runs) on the same machine configuration as before. The sizes of the public key |pk|, secret key |sk| and signature |σ| are counted in bytes. As shown in Table 3, the estimated quantum security of ΠSIG-1024 is slightly lower than that of Dilithium-1024, but those of ΠSIG-1280 and ΠSIG-1536 are slightly higher. In all, our scheme has smaller public keys and signatures while still providing comparable efficiency to (or even slightly faster than) Dilithium-round2.

Table 3. Comparison between our signature ΠSIG and Dilithium-round2

| Schemes        | |pk| (Bytes) | |sk| (Bytes) | |σ| (Bytes) | KeyGen (AVX2) | Sign (AVX2) | Verify (AVX2) | Quantum Sec. |
|----------------|--------------|--------------|-------------|---------------|-------------|---------------|--------------|
| Dilithium-1024 | 1184         | 2800         | 2044        | 140 181       | 476 598     | 129 256       | 91           |
| ΠSIG-1024      | 1056         | 2448         | 1852        | 126 719       | 407 981     | 113 885       | 90           |
| Dilithium-1280 | 1472         | 3504         | 2701        | 198 333       | 657 838     | 187 222       | 125          |
| ΠSIG-1280      | 1312         | 3376         | 2445        | 198 876       | 634 128     | 170 283       | 128          |
| Dilithium-1536 | 1760         | 3856         | 3366        | 269 430       | 639 966     | 260 503       | 158          |
| ΠSIG-1536      | 1568         | 3888         | 3046        | 296 000       | 800 831     | 259 855       | 163          |

We also compare our signature with the NIST round2 lattice-based signatures Falcon, qTESLA and Dilithium, where the first is an instantiation of full-domain hash and trapdoor sampling [15] on NTRU lattices (briefly denoted as the FDH-like methodology), and the last two follow the more efficient Fiat-Shamir heuristic with rejection sampling (briefly denoted as the FS-like methodology) [24].


As we failed to run the software of Falcon and qTESLA on our experiment computer (but a public evaluation of the round1 submissions suggests that Falcon and qTESLA are probably much slower than Dilithium), we only compare the sizes of those schemes at all parameter choices in Table 4. Note that since the qTESLA team has dropped all the parameter sets of qTESLA-round2, the figures in Table 4 correspond to their new choices of parameter sets.

Table 4. Comparison between ΠSIG and NIST Round2 lattice-based signatures

| NIST category | Schemes        | |pk| (Bytes) | |sk| (Bytes) | |σ| (Bytes) | Problems     | Methodology |
|---------------|----------------|--------------|--------------|-------------|--------------|-------------|
| I             | Falcon-512     | 897          | 4097         | 690         | NTRU         | FDH-like    |
| I             | qTESLA-1024    | 14 880       | 5 184        | 2 592       | RLWE         | FS-like     |
| I             | Dilithium-1024 | 1184         | 2800         | 2044        | MLWE, MSIS   | FS-like     |
| I             | ΠSIG-1024      | 1056         | 2448         | 1852        | AMLWE, AMSIS | FS-like     |
| II            | Dilithium-1280 | 1472         | 3504         | 2701        | MLWE, MSIS   | FS-like     |
| II            | ΠSIG-1280      | 1312         | 3376         | 2445        | AMLWE, AMSIS | FS-like     |
| III           | Falcon-1024    | 1793         | 8193         | 1330        | NTRU         | FDH-like    |
| III           | qTESLA-2048    | 38 432       | 12 352       | 5 664       | RLWE         | FS-like     |
| III           | Dilithium-1536 | 1760         | 3856         | 3366        | MLWE, MSIS   | FS-like     |
| III           | ΠSIG-1536      | 1568         | 3888         | 3046        | AMLWE, AMSIS | FS-like     |

1.3 Organizations

Section 2 gives the preliminaries and background information. Section 3 describes the KEM scheme from AMLWE. Section 4 presents the digital signature scheme from AMLWE and AMSIS. Section 5 analyzes the concrete hardness of AMLWE and AMSIS by adapting the best known attacks.

2 Preliminaries

2.1 Notation

We use κ to denote the security parameter. For a real number x ∈ R, ⌈x⌋ denotes the closest integer to x (with ties rounded down, i.e., ⌈0.5⌋ = 0). We denote by R the ring R = Z[X]/(X^n + 1) and by R_q the ring R_q = Z_q[X]/(X^n + 1), where n is a power of 2 so that X^n + 1 is a cyclotomic polynomial. For any positive integer η, S_η denotes the set of ring elements of R whose coefficients all lie in {−η, −η + 1, …, η}. Regular font letters (e.g., a, b) represent elements in R or R_q (including elements in Z or Z_q), and bold lower-case letters (e.g., a, b) denote vectors with coefficients in R or R_q. By default, all vectors


will be column vectors. Bold upper-case letters (e.g., A, B) represent matrices. We denote by aᵀ and Aᵀ the transposes of a vector a and a matrix A respectively. We denote by x ←$ D sampling x according to a distribution D, and by x ←$ S sampling x from a set S uniformly at random. For two bit-strings s and t, s∥t denotes the concatenation of s and t. We use log_b to denote the logarithm in base b (e.g., 2 or the natural constant e) and log to represent log_e. We say that a function f : N → [0, 1] is negligible if for every positive c and all sufficiently large κ it holds that f(κ) < 1/κ^c. We denote by negl : N → [0, 1] an (unspecified) negligible function. We say that f is overwhelming if 1 − f is negligible.

2.2 Definitions

Modular Reductions. For an even positive integer α, we define r′ = r mod± α as the unique element in the range (−α/2, α/2] such that r′ = r mod α. For an odd positive integer α, we define r′ = r mod± α as the unique element in the range [−(α−1)/2, (α−1)/2] such that r′ = r mod α. For any positive integer α, we define r′ = r mod+ α as the unique element in the range [0, α) such that r′ = r mod α. When the exact representation is not important, we simply write r mod α.

Sizes of Elements. For an element w ∈ Z_q, we write ‖w‖_∞ to mean |w mod± q|. The ℓ_∞ and ℓ_2 norms of a ring element w = w_0 + w_1·X + ... + w_{n−1}·X^{n−1} ∈ R are defined as

  ‖w‖_∞ = max_i ‖w_i‖_∞,   ‖w‖ = √(‖w_0‖_∞² + ... + ‖w_{n−1}‖_∞²).

Similarly, for w = (w_1, ..., w_k) ∈ R^k, we define

  ‖w‖_∞ = max_i ‖w_i‖_∞,   ‖w‖ = √(‖w_1‖² + ... + ‖w_k‖²).

Modulus Switching. For any positive integers p, q, we define the modulus switching function ⌈·⌋_{q→p} as ⌈x⌋_{q→p} = ⌈(p/q)·x⌋ mod+ p. It is easy to show that for any x ∈ Z_q and p < q ∈ N, x′ = ⌈⌈x⌋_{q→p}⌋_{p→q} is an element close to x, i.e.,

  |(x − x′) mod± q| ≤ ⌈q/(2p)⌋.

When ⌈·⌋_{q→p} is applied to a ring element x ∈ R_q or a vector x ∈ R_q^k, the procedure is applied to each coefficient individually.


Binomial Distribution. The centered binomial distribution B_η, for a positive integer η, is defined as follows:

  B_η = { Σ_{i=1}^{η} (a_i − b_i) : (a_1, ..., a_η, b_1, ..., b_η) ←$ {0,1}^{2η} }.

When we write that we sample a polynomial g ←$ B_η or a vector of such polynomials g ←$ B_η^k, we mean that each coefficient is sampled from B_η individually.
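A minimal sampler for B_η, for illustration; the empirical mean and variance should be close to the theoretical 0 and η/2.

```python
# Sampling from the centered binomial distribution B_eta defined above.
import random

def sample_B(eta):
    return sum(random.getrandbits(1) - random.getrandbits(1) for _ in range(eta))

draws = [sample_B(2) for _ in range(200000)]
mean = sum(draws) / len(draws)
var = sum(d * d for d in draws) / len(draws) - mean ** 2
print(round(mean, 3), round(var, 3))   # ~ 0.0 and ~ 1.0 (variance eta/2)
```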

2.3 High/Low Order Bits and Hints

Our signature scheme will adopt several simple algorithms proposed in [12] to extract the "higher-order" bits and "lower-order" bits of elements in Z_q. The goal is that, given an arbitrary element r ∈ Z_q and another small element z ∈ Z_q, we would like to recover the higher order bits of r + z without needing to store z. Ducas et al. [12] define algorithms that take r, z and generate a 1-bit hint h that allows one to compute the higher order bits of r + z using only r and h. They consider two different ways to break up elements in Z_q into their "higher-order" and "lower-order" bits. The related algorithms are described in Algorithms 1–6; we refer the reader to [12] for an illustration of the algorithms.

The following lemmas state some crucial properties of the above supporting algorithms, which are necessary for the correctness and security of our signature scheme. We refer to [12] for their proofs.

Lemma 1. Let q and α be positive integers such that q > 2α, q mod α = 1 and α is even. Suppose that r, z are vectors of elements in R_q with ‖z‖_∞ ≤ α/2, and let h, h′ be vectors of bits. Then, the algorithms HighBits_q, MakeHint_q and UseHint_q satisfy the following properties:
– UseHint_q(MakeHint_q(z, r, α), r, α) = HighBits_q(r + z, α).
– Let v_1 = UseHint_q(h, r, α). Then ‖r − v_1·α‖_∞ ≤ α + 1. Furthermore, if the number of 1's in h is at most ω, then all except at most ω coefficients of r − v_1·α will have magnitude at most α/2 after centered reduction modulo q.
– For any h, h′, if UseHint_q(h, r, α) = UseHint_q(h′, r, α), then h = h′.

Lemma 2. If ‖s‖_∞ ≤ β and ‖LowBits_q(r, α)‖_∞ < α/2 − β, then we have HighBits_q(r, α) = HighBits_q(r + s, α).

3 An Improved KEM from AMLWE

Our scheme is based on the key encapsulation mechanism in [6,22]. The main difference is that our scheme uses a (slightly) different hardness problem, which gives us a flexible way to set the parameters for both performance and security.


Algorithm 1: Power2Round_q(r, d)
1: r := r mod+ q
2: r_0 := r mod± 2^d
3: r_1 := (r − r_0)/2^d
4: return (r_1, r_0)

Algorithm 2: Decompose_q(r, α)
1: r := r mod+ q
2: r_0 := r mod± α
3: if r − r_0 = q − 1 then
4:   r_1 := 0; r_0 := r_0 − 1
5: else
6:   r_1 := (r − r_0)/α
7: end
8: return (r_1, r_0)

Algorithm 3: HighBits_q(r, α)
1: (r_1, r_0) := Decompose_q(r, α)
2: return r_1

Algorithm 4: LowBits_q(r, α)
1: (r_1, r_0) := Decompose_q(r, α)
2: return r_0

Algorithm 5: MakeHint_q(z, r, α)
1: r_1 := HighBits_q(r, α)
2: v_1 := HighBits_q(r + z, α)
3: if r_1 ≠ v_1 then h := 1 else h := 0
4: return h

Algorithm 6: UseHint_q(h, r, α)
1: k := (q − 1)/α
2: (r_1, r_0) := Decompose_q(r, α)
3: if h = 1 and r_0 > 0 then r_1 := (r_1 + 1) mod+ k
4: if h = 1 and r_0 ≤ 0 then r_1 := (r_1 − 1) mod+ k
5: return r_1
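For reference, here is an (unofficial) direct Python port of Algorithms 1–6, together with randomized checks of Algorithm 1's reconstruction property and of the first claim of Lemma 1; the concrete q and α = 2γ_2 are borrowed from the ΠSIG-1024 parameter set of Table 6 and satisfy q mod α = 1 and q > 2α, as Lemma 1 requires.

```python
# An (unofficial) port of Algorithms 1-6 with randomized sanity checks.
import random

q, alpha, d = 2021377, 2 * 168448, 13    # Pi_SIG-1024 values (Table 6)

def mod_pm(r, a):
    r0 = r % a
    return r0 - a if r0 > a // 2 else r0

def power2round(r):                      # Algorithm 1
    r = r % q
    r0 = mod_pm(r, 2 ** d)
    return (r - r0) // 2 ** d, r0

def decompose(r, a):                     # Algorithm 2
    r = r % q
    r0 = mod_pm(r, a)
    if r - r0 == q - 1:
        return 0, r0 - 1
    return (r - r0) // a, r0

def highbits(r, a):  return decompose(r, a)[0]    # Algorithm 3
def lowbits(r, a):   return decompose(r, a)[1]    # Algorithm 4

def makehint(z, r, a):                   # Algorithm 5: h = 1 iff high bits change
    return 1 if highbits(r + z, a) != highbits(r, a) else 0

def usehint(h, r, a):                    # Algorithm 6
    k = (q - 1) // a
    r1, r0 = decompose(r, a)
    if h == 1 and r0 > 0:
        return (r1 + 1) % k
    if h == 1 and r0 <= 0:
        return (r1 - 1) % k
    return r1

for _ in range(100000):
    r = random.randrange(q)
    r1, r0 = power2round(r)
    assert (r1 * 2 ** d + r0) % q == r
    z = random.randint(-alpha // 2, alpha // 2)
    # First property of Lemma 1:
    assert usehint(makehint(z, r, alpha), r, alpha) == highbits(r + z, alpha)
```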

3.1 Design Rationale

For simplicity and clarity, we explain the core idea using the (A)LWE-based public-key encryption (PKE) scheme as an example. Note that most LWE-based



PKE schemes mainly follow the framework in [22], up to the choices of parameters and noise distributions. Let n, q ∈ Z be positive integers, and let χ_α ⊂ Z be a discrete Gaussian distribution with standard deviation α ∈ R. The LWE-based PKE works as follows:

– Key generation: randomly choose A ←$ Z_q^{n×n}, s, e ←$ χ_α^n and compute b = As + e. Return the public key pk = (A, b) and secret key sk = s.
– Encryption: given the public key pk = (A, b) and a plaintext μ ∈ {0,1}, randomly choose r, x_1 ←$ χ_α^n, x_2 ←$ χ_α and compute c_1 = A^T r + x_1, c_2 = b^T r + x_2 + μ·⌈q/2⌋. Finally, return the ciphertext C = (c_1, c_2).
– Decryption: given the secret key sk = s and a ciphertext C = (c_1, c_2), compute z = c_2 − s^T c_1 and output ⌈z⌋_{q→2} = ⌈z·(2/q)⌋ mod 2 as the decryption result.

For an honestly generated ciphertext C = (c_1, c_2) that encrypts a plaintext μ ∈ {0,1}, we have

  z = c_2 − s^T c_1 = μ·⌈q/2⌋ + e′, where e′ = e^T r − s^T x_1 + x_2.   (1)

Thus, the decryption algorithm is correct as long as |e′| < q/4. Since |x_2| ≪ |e^T r − s^T x_1|, the magnitude of e′ mainly depends on |e^T r − s^T x_1|. That is, the LWE secrets (s, r) and the noises (e, x_1) contribute almost equally to the magnitude of e′. Moreover, for a fixed n, the expected magnitude of |e^T r − s^T x_1| is a monotonically increasing function of α: larger α ⇒ larger |e^T r − s^T x_1| ⇒ larger |e′|.

Let ν be the probability that the decryption algorithm fails, and let λ be the complexity of solving the underlying LWE problem. Ideally, for a targeted security strength κ, we hope that ν = 2^{−κ} and λ = 2^κ, since a large ν (i.e., ν > 2^{−κ}) will sacrifice the overall security, and a large λ (i.e., λ > 2^κ) may compromise the overall performance. Since both ν and λ are strongly related to the ratio α/q of the Gaussian parameter α and the modulus q, it is hard to come up with a choice of (α, q) that simultaneously achieves the best of both worlds.
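For concreteness, the following toy Python sketch instantiates the framework above with small, insecure parameters; the discrete Gaussian χ_α is replaced by a centered binomial sampler (a substitution the paper itself makes for ΠKEM later), and all names and values are illustrative only.

```python
# A toy (insecure) instance of the LWE-based PKE framework of [22] above.
import random

n, q, eta = 16, 7681, 2

def chi():                                   # stand-in for chi_alpha
    return sum(random.randint(0, 1) - random.randint(0, 1) for _ in range(eta))

def vec():      return [chi() for _ in range(n)]
def dot(u, v):  return sum(a * b for a, b in zip(u, v)) % q

def keygen():
    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    s, e = vec(), vec()
    b = [(dot(row, s) + ei) % q for row, ei in zip(A, e)]
    return (A, b), s

def enc(pk, mu):
    A, b = pk
    r, x1, x2 = vec(), vec(), chi()
    c1 = [(dot([A[i][j] for i in range(n)], r) + x1[j]) % q for j in range(n)]
    c2 = (dot(b, r) + x2 + mu * ((q + 1) // 2)) % q   # mu * ceil(q/2)
    return c1, c2

def dec(sk, c):
    c1, c2 = c
    z = (c2 - dot(sk, c1)) % q
    return round(2 * z / q) % 2                       # <z>_{q->2}

pk, sk = keygen()
assert all(dec(sk, enc(pk, m)) == m for m in (0, 1) for _ in range(50))
```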


To obtain smaller public keys and ciphertexts (and thus improve communication efficiency), many schemes use the modulus switching technique [8,10] to compress public keys and ciphertexts. Consider the following scheme obtained by applying modulus switching, where p_1, p_2, p_3 ∈ Z are compression parameters (p_1 for the public key, and p_2, p_3 for ciphertexts).

– Key generation: pick A ←$ Z_q^{n×n} and s, e ←$ χ_α^n, and compute b = As + e. Then, return the public key pk = (A, b̄ = ⌈b⌋_{q→p_1}) and the secret key sk = s.
– Encryption: given the public key pk = (A, b̄) and a plaintext μ ∈ {0,1}, randomly choose r, x_1 ←$ χ_α^n, x_2 ←$ χ_α, and compute c_1 = A^T r + x_1 and c_2 = ⌈b̄⌋_{p_1→q}^T r + x_2 + μ·⌈q/2⌋. Return the ciphertext C = (c̄_1 = ⌈c_1⌋_{q→p_2}, c̄_2 = ⌈c_2⌋_{q→p_3}).
– Decryption: given the secret key sk = s and a ciphertext C = (c̄_1, c̄_2), compute z = ⌈c̄_2⌋_{p_3→q} − s^T ⌈c̄_1⌋_{p_2→q} and output ⌈z⌋_{q→2} = ⌈z·(2/q)⌋ mod 2 as the decryption result.

Let ē = ⌈⌈b⌋_{q→p_1}⌋_{p_1→q} − b, x̄_1 = ⌈⌈c_1⌋_{q→p_2}⌋_{p_2→q} − c_1 and x̄_2 = ⌈⌈c_2⌋_{q→p_3}⌋_{p_3→q} − c_2. It is easy to verify that ‖ē‖_∞ ≤ q/(2p_1), ‖x̄_1‖_∞ ≤ q/(2p_2) and |x̄_2| ≤ q/(2p_3). For any valid ciphertext C = (c̄_1, c̄_2) that encrypts μ ∈ {0,1}, we have

  z = ⌈c̄_2⌋_{p_3→q} − s^T ⌈c̄_1⌋_{p_2→q} = μ·⌈q/2⌋ + e′,
  where e′ = (e + ē)^T r − s^T (x_1 + x̄_1) + (x_2 + x̄_2).   (2)

Apparently, the smaller the values of p_1, p_2, p_3, the better the compression rate achieved for public keys and ciphertexts. At the same time, however, by the definitions of ē, x̄_1, x̄_2, we know that smaller p_1, p_2, p_3 also result in a larger noise e′. Notice that when p_1, p_2, p_3 are much smaller than q, we will have ‖ē‖_∞ ≫ ‖e‖_∞, ‖x̄_1‖_∞ ≫ ‖x_1‖_∞ and |x̄_2| ≫ |x_2|, which leads to asymmetric roles of (e, x_1, x_2) and (s, r) in contributing to the resulting size of |e′|: for fixed (p_1, p_2, p_3), decreasing (resp., increasing) ‖s‖_∞ or ‖r‖_∞ would significantly reduce (resp., enlarge) the noise |e′|, whereas changing the sizes of ‖e‖_∞, ‖x_1‖_∞ and |x_2| would not result in a substantial change to |e′|.

The asymmetry observed above motivates the design of our ALWE-based PKE, which uses different noise distributions χ_{α_1} and χ_{α_2} (i.e., the same distribution with different parameters α_1 and α_2) for the secrets (i.e., s and r) and the errors (i.e., e, x_1, x_2), respectively.

– Key generation: pick A ←$ Z_q^{n×n}, s ←$ χ_{α_1}^n and e ←$ χ_{α_2}^n, and compute b = As + e. Then, return the public key pk = (A, b̄ = ⌈b⌋_{q→p_1}) and the secret key sk = s.


– Encryption: given the public key pk = (A, b̄) and a plaintext μ ∈ {0,1}, randomly choose r ←$ χ_{α_1}^n, x_1 ←$ χ_{α_2}^n, x_2 ←$ χ_{α_2}, compute c_1 = A^T r + x_1 and c_2 = ⌈b̄⌋_{p_1→q}^T r + x_2 + μ·⌈q/2⌋, and return the ciphertext C = (c̄_1 = ⌈c_1⌋_{q→p_2}, c̄_2 = ⌈c_2⌋_{q→p_3}).
– Decryption: given the secret key sk = s and the ciphertext C = (c̄_1, c̄_2), compute z = ⌈c̄_2⌋_{p_3→q} − s^T ⌈c̄_1⌋_{p_2→q} and output ⌈z⌋_{q→2} = ⌈z·(2/q)⌋ mod 2 as the decryption result.

Similarly, for a ciphertext C = (c̄_1, c̄_2), we have the same z and e′ as defined in (2); the difference is that now ‖s‖_∞ and ‖r‖_∞ are determined by α_1, while ‖e‖_∞, ‖x_1‖_∞ and |x_2| are determined by α_2. Intuitively, we wish to use a small α_1 in order to keep |e′| small, and at the same time choose a relatively large α_2 to remedy the potential security loss due to the choice of a small α_1. While the intuition seems reasonable, it does not shed light on the choices of parameters, in particular on how α_1 and α_2 (jointly) affect security. To this end, we consider the best known attacks and their variants against (A)LWE problems, and obtain the following approximate relation between the hardness of ALWE and LWE: letting χ_{α_1} and χ_{α_2} be subgaussian with standard deviations α_1, α_2 ∈ R respectively, the hardness of ALWE with subgaussian standard deviations α_1, α_2 is polynomially equivalent to the hardness of LWE with subgaussian standard deviation √(α_1·α_2). Clearly, the equivalence is trivial for α_1 = α_2. This confirms the feasibility of our idea: use a small α_1 to keep the probability ν of decryption failures small, while picking a relatively larger α_2 to retain the security of the resulting PKE scheme.

The above idea can be naturally generalized to schemes based on the ring and module versions of LWE. Actually, we will use AMLWE to achieve a better trade-off between computational and communication costs.

3.2 The Construction

We now formally describe a CCA-secure KEM from AMLWE (and AMLWE-R). For ease of implementation, we will use centered binomial distributions instead of Gaussian distributions, as in [3,6]. We first give an intermediate IND-CPA secure PKE, which is then transformed into an IND-CCA secure KEM by applying a tweaked Fujisaki-Okamoto (FO) transformation [14,18].

An IND-CPA Secure PKE. Let n, q, k, η_1, η_2, d_t, d_u, d_v be positive integers. Let H : {0,1}^n → R_q^{k×k} be a hash function, which is modeled as a random oracle. The PKE scheme ΠPKE consists of three algorithms (KeyGen, Enc, Dec):

– ΠPKE.KeyGen(κ): randomly choose ρ ←$ {0,1}^n, s ←$ B_{η_1}^k, e ←$ B_{η_2}^k, and compute A = H(ρ) ∈ R_q^{k×k}, t = As + e ∈ R_q^k and t̄ = ⌈t⌋_{q→2^{d_t}}. Then, return the public key pk = (ρ, t̄) and the secret key sk = s.


– ΠPKE.Enc(pk, μ): given the public key pk = (ρ, t̄) and a plaintext μ ∈ R_2, randomly choose r ←$ B_{η_1}^k, e_1 ←$ B_{η_2}^k, e_2 ←$ B_{η_2}, compute A = H(ρ), u = A^T r + e_1 and v = ⌈t̄⌋_{2^{d_t}→q}^T r + e_2, and return the ciphertext C = (ū = ⌈u⌋_{q→2^{d_u}}, v̄ = ⌈v + μ·⌈q/2⌋⌋_{q→2^{d_v}}).
– ΠPKE.Dec(sk, C): given the secret key sk = s and a ciphertext C = (ū, v̄), compute z = ⌈v̄⌋_{2^{d_v}→q} − s^T ⌈ū⌋_{2^{d_u}→q} and output ⌈z⌋_{q→2} = ⌈z·(2/q)⌋ mod 2.

Let c_t ∈ R^k satisfy ⌈t̄⌋_{2^{d_t}→q} = ⌈⌈As + e⌋_{q→2^{d_t}}⌋_{2^{d_t}→q} = As + e − c_t. Let c_u ∈ R^k satisfy ⌈ū⌋_{2^{d_u}→q} = ⌈⌈A^T r + e_1⌋_{q→2^{d_u}}⌋_{2^{d_u}→q} = A^T r + e_1 − c_u. Let c_v ∈ R satisfy

  ⌈v̄⌋_{2^{d_v}→q} = ⌈⌈⌈t̄⌋_{2^{d_t}→q}^T r + e_2 + ⌈q/2⌋·μ⌋_{q→2^{d_v}}⌋_{2^{d_v}→q}
                  = ⌈t̄⌋_{2^{d_t}→q}^T r + e_2 + ⌈q/2⌋·μ − c_v
                  = (As + e − c_t)^T r + e_2 + ⌈q/2⌋·μ − c_v
                  = (As + e)^T r + e_2 + ⌈q/2⌋·μ − c_v − c_t^T r.

Using the above equations, we have

  z = ⌈v̄⌋_{2^{d_v}→q} − s^T ⌈ū⌋_{2^{d_u}→q}
    = e^T r + e_2 − c_v − c_t^T r − s^T e_1 + s^T c_u + ⌈q/2⌋·μ
    = w + ⌈q/2⌋·μ,

where w = e^T r + e_2 − c_v − c_t^T r − s^T e_1 + s^T c_u. It is easy to check that, for any odd number q, μ = ⌈z⌋_{q→2} holds as long as ‖w‖_∞ < q/4. In Sect. 3.4, we will choose the parameters such that the decryption algorithm succeeds with overwhelming probability.

IND-CCA Secure KEM. Let G : {0,1}* → {0,1}^n × {0,1}^n and H : {0,1}* → {0,1}^n be two hash functions, which are modeled as random oracles. By applying a slightly tweaked Fujisaki-Okamoto (FO) transformation [14,18], we can transform the above IND-CPA secure PKE ΠPKE into an IND-CCA secure KEM (with implicit rejection) ΠKEM = (KeyGen, Encap, Decap) as follows.

– ΠKEM.KeyGen(κ): choose z ←$ {0,1}^n and compute (pk′, sk′) = ΠPKE.KeyGen(κ). Then, return the public key pk = pk′ and the secret key sk = (pk′, sk′, z).


– ΠKEM.Encap(pk): given the public key pk, randomly choose μ ←$ {0,1}^n, compute μ′ = H(μ), (K̄, r) = G(μ′‖H(pk)), C = ΠPKE.Enc(pk, μ′; r) and K = H(K̄‖H(C)), where the notation ΠPKE.Enc(pk, μ′; r) denotes running the algorithm ΠPKE.Enc(pk, μ′) with fixed randomness r. Finally, return the ciphertext C and the encapsulated key K.
– ΠKEM.Decap(sk, C): given the secret key sk = (pk′, sk′, z) and a ciphertext C, compute μ′ = ΠPKE.Dec(sk′, C) and (K̄′, r′) = G(μ′‖H(pk′)), C′ = ΠPKE.Enc(pk′, μ′; r′). If C = C′, return K = H(K̄′‖H(C)); else return H(z‖H(C)).
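For illustration, here is a hedged sketch of the tweaked FO transformation with implicit rejection around a generic IND-CPA PKE; the interface pke.keygen/enc/dec and the SHA3-256/SHAKE-256 instantiations of H and G are our assumptions for the sketch, not the paper's concrete choices, and pk, C and messages are assumed to be byte strings.

```python
# Sketch of the tweaked FO transformation (implicit rejection) above.
import os, hashlib

N = 32                                   # n = 256 bits

def H(*parts):                           # H : {0,1}* -> {0,1}^n
    return hashlib.sha3_256(b"".join(parts)).digest()

def G(*parts):                           # G : {0,1}* -> {0,1}^n x {0,1}^n
    out = hashlib.shake_256(b"".join(parts)).digest(2 * N)
    return out[:N], out[N:]              # (K_bar, r)

def keygen(pke):
    pk, sk = pke.keygen()
    z = os.urandom(N)                    # secret seed for implicit rejection
    return pk, (pk, sk, z)

def encap(pke, pk):
    mu = H(os.urandom(N))                # mu' = H(mu) for a random mu
    K_bar, r = G(mu, H(pk))
    C = pke.enc(pk, mu, r)               # deterministic Enc with coins r
    return C, H(K_bar, H(C))

def decap(pke, sk_full, C):
    pk, sk, z = sk_full
    mu = pke.dec(sk, C)
    K_bar, r = G(mu, H(pk))
    if pke.enc(pk, mu, r) == C:          # re-encryption check
        return H(K_bar, H(C))
    return H(z, H(C))                    # implicit rejection: random-looking key
```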

3.3 Provable Security

In the full version [32], we will show that, under the hardness of the AMLWE problem and its rounding variant AMLWE-R (which is needed for compressing the public key, see Appendix A), our scheme ΠPKE is provably IND-CPA secure. Formally, we have the following theorem.

Theorem 1. Let H : {0,1}^n → R_q^{k×k} be a random oracle. If both problems AMLWE_{n,q,k,k,η_1,η_2} and AMLWE-R_{n,q,2^{d_t},k,k,η_1,η_2} are hard, then the scheme ΠPKE is IND-CPA secure.

Since ΠKEM is obtained by applying a slightly tweaked Fujisaki-Okamoto (FO) transformation [14,18] to the PKE scheme ΠPKE, given the results in [6,18] and Theorem 1, we have the following theorem.

Theorem 2. Under the AMLWE assumption and the AMLWE-R assumption, ΠKEM is IND-CCA secure in the random oracle model.

Notice that the algorithm Decap will always return a random "session key" even if the check fails (i.e., implicit rejection). Furthermore, the paper [20] showed that if the underlying PKE is IND-CPA secure, then the resulting KEM with implicit rejection obtained by using the FO transformation is also IND-CCA secure in the quantum random oracle model (QROM). Given the results in [20] and Theorem 1, we have the following theorem.

Theorem 3. Under the AMLWE assumption and the AMLWE-R assumption, ΠKEM is IND-CCA secure in the QROM.

3.4 Choices of Parameters

In Table 5, we give three sets of parameters (namely, ΠKEM-512, ΠKEM-768 and ΠKEM-1024) for ΠKEM, aiming at providing quantum security of at least 80, 128 and 192 bits, respectively. These parameters are carefully chosen such that the decryption failure probabilities (i.e., 2^{−82}, 2^{−128} and 2^{−211}, respectively) are commensurate with the respective targeted security strengths. A concrete estimation of the security strength provided by the parameter sets will be given in Sect. 5.

Table 5. Parameter sets for ΠKEM

Parameters | (n, k, q) | (η1, η2) | (dt, du, dv) | |pk| | |sk| | |C| | |ss| | Dec. Fail. | Quant. Sec.
ΠKEM-512 | (256, 2, 7681) | (2, 12) | (10, 9, 3) | 672 | 1568 | 672 | 32 | 2^-82 | 100
ΠKEM-512† | (256, 2, 3329) | (1, 4) | (−, 8, 4) | 800 | 1632 | 640 | 32 | 2^-82 | 99
ΠKEM-768 | (256, 3, 7681) | (1, 4) | (9, 9, 4) | 896 | 2208 | 992 | 32 | 2^-128 | 147
ΠKEM-768† | (256, 3, 3329) | (1, 2) | (−, 9, 3) | 1184 | 2400 | 960 | 32 | 2^-130 | 157
ΠKEM-1024 | (512, 2, 12289) | (2, 8) | (11, 10, 4) | 1472 | 3392 | 1536 | 64 | 2^-211 | 213
ΠKEM-1024† | (512, 2, 7681) | (1, 4) | (−, 9, 5) | 1728 | 3648 | 1472 | 64 | 2^-198 | 206

Among them, ΠKEM-768 is the recommended parameter set. By the quantum searching algorithm [17], a 2κ-bit randomness/session key can provide at most κ-bit quantum security. Even though the Grover algorithm may not provide a full quadratic speedup over classical algorithms in practice, we still set ΠKEM-1024 to support the encryption of a 64-byte (512-bit) randomness/session key, aiming at providing a matching security strength, say, more than 210-bit estimated quantum security. Note that ΠKEM-512 and ΠKEM-768 only support the encryption of a 32-byte (256-bit) session key.

We implemented ΠKEM on a 64-bit Ubuntu 14.04 LTS ThinkCentre desktop (equipped with an Intel Core-i7 4790 3.6 GHz CPU and 4 GB memory). The code is mainly written in C, with partially optimized code using AVX2 instructions to speed up some basic operations such as the NTT and vector multiplications. The average number of CPU cycles (averaged over 10000 runs) for running each algorithm is given in Table 1.

4 An Improved Signature from AMLWE and AMSIS

Our signature scheme is based on the "Fiat-Shamir with Aborts" technique [23], and bears most resemblance to Dilithium [12]. The main difference is that our scheme uses the asymmetric MLWE and MSIS problems, which provides a flexible way to make a better trade-off between performance and security.

4.1 Design Rationale

Several lattice-based signature schemes were obtained by applying the Fiat-Shamir heuristic [13] to three-move identification schemes. For positive integers n and q, let R = Z[x]/(x^n + 1) and R_q = Z_q[x]/(x^n + 1). Let H : {0,1}* → R_2 be a hash function. Let k, ℓ, η be positive integers, and let γ, β > 0 be reals. We first consider an identification protocol between two users A and B based on the MSIS^∞_{n,q,k,ℓ,β} problem. Formally, user A owns a pair of public key pk = (A, t = Ax) ∈ R_q^{k×ℓ} × R_q^k and secret key sk = x ∈ R_q^ℓ. In order to convince another user B (who knows the public key pk) of his ownership of sk, A and B can execute the following protocol: (1) A first chooses a vector y ∈ R^ℓ from some distribution, and sends w = Ay to user B; (2) B randomly chooses


a binary challenge c ∈ R_2, and sends it to A; (3) A computes z := y + cx and sends it back to B, who accepts the response z by checking that Az = w + ct. For soundness (i.e., so that user A cannot cheat user B), B also has to make sure that β_2 = ‖z‖_∞ is sufficiently small (to ensure that the MSIS^∞_{n,q,k,ℓ,β_2} problem is hard), since otherwise anyone could easily complete the proof by solving a linear equation. Moreover, we require that β_1 = ‖x‖_∞ be sufficiently small and that ‖y‖_∞ ≫ ‖x‖_∞ (and thus β_2 ≫ β_1) hold, to prevent user B from recovering the secret x from the public key pk or the response z. Typically, we should require β_2/β_1 = 2^{ω(log κ)}, where κ is the security parameter. This means that the identification protocol, as well as the signature derived from it via the Fiat-Shamir heuristic, will have a very large parameter size. To solve this problem, Lyubashevsky [23,24] introduced rejection sampling, which allows A to abort and restart the protocol (by choosing another y) if he thinks z might leak information about x. This technique greatly reduces the size of z (since it allows setting β_2/β_1 = poly(κ)), but its cost is painful for an interactive identification protocol. Fortunately, it only increases the computation time of the signer when we transform the identification protocol into a signature scheme.

For any positive integer η, S_η denotes the set of elements of R whose coefficients all lie in {−η, −η+1, ..., η}. By the Fiat-Shamir heuristic, one can construct a signature scheme from the MSIS problem as follows:

– Key generation: randomly choose A ←$ R_q^{k×ℓ}, x ←$ S_η^ℓ, and compute t = Ax. Return the public key pk = (A, t) and secret key sk = (x, pk).
– Signing: given the secret key sk = (x, pk) and a message μ ∈ {0,1}*,
  1. randomly choose y ←$ S_{γ−1}^ℓ;
  2. compute w = Ay and c = H(w‖μ);
  3. compute z = y + cx;
  4. if ‖z‖_∞ ≥ γ − β, restart the computation from step (1), where β is a bound such that ‖cx‖_∞ ≤ β for all possible c and x; otherwise, return the signature σ = (z, c).
– Verification: given the public key pk = (A, t), a message μ ∈ {0,1}* and a signature σ = (z, c), return 1 if ‖z‖_∞ < γ − β and c = H(Az − ct‖μ), and 0 otherwise.

Informally, we require the MSIS^∞_{n,q,k,ℓ,η} problem to be hard for the security of the secret key (i.e., it should be computationally infeasible to compute sk from pk). Moreover, we also require the MSIS^∞_{n,q,k,ℓ,2γ} problem to be hard for the unforgeability of signatures (i.e., it should be computationally infeasible to forge a valid signature). Since ‖cx‖_∞ ≤ β, for any (c, x) and any z output by the signing algorithm, there always exists a y ∈ S_{γ−1}^ℓ such that z = y + cx, which guarantees that the signature does not leak information about the secret key. In terms of efficiency, the signing algorithm repeats about ((2(γ−β)−1)/(2γ−1))^{−n·ℓ} times to output a signature, and the signature size is about n·ℓ·log_2(2(γ−β)−1) + n bits. Clearly, we wish to use a small ℓ for better efficiency, but the hardness of the underlying MSIS problems requires a relatively large ℓ.
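A one-dimensional toy illustration of the rejection-sampling step (our sketch, with β and γ echoing β_1 and γ_1 of Table 6): the response is released only when it falls in a range reachable from every admissible secret, so accepted responses are uniform and leak nothing about x.

```python
# A 1-D toy of "Fiat-Shamir with aborts": z = y + c*x is released only if
# |z| < gamma - beta, so for any secret with |x| <= beta (and challenge c = 1)
# the accepted z is uniform on {-(gamma-beta)+1, ..., gamma-beta-1}.
import random

gamma, beta = 131072, 120

def respond(x, c):
    while True:                        # rejection-sampling loop
        y = random.randint(-(gamma - 1), gamma - 1)
        z = y + c * x
        if abs(z) < gamma - beta:
            return z

# Per-coefficient acceptance probability (2(gamma-beta)-1)/(2*gamma-1):
print((2 * (gamma - beta) - 1) / (2 * gamma - 1))   # ~ 0.99908

zs = [respond(beta, 1) for _ in range(100000)]      # worst-case secret
assert max(abs(z) for z in zs) <= gamma - beta - 1
```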


To mediate the above conflict, one can use the MLWE problem, which can be seen as a special MSIS problem, to reduce the size of the key and the signature. Formally, we obtain the following improved signature scheme:

– Key generation: randomly choose A ←$ R_q^{k×ℓ}, s_1 ←$ S_η^ℓ and s_2 ←$ S_η^k, and compute t = As_1 + s_2. Return the public key pk = (A, t) and secret key sk = (s_1, s_2, pk).
– Signing: given the secret key sk = (s_1, s_2, pk) and a message μ ∈ {0,1}*,
  1. randomly choose y ←$ S_{γ−1}^{ℓ+k};
  2. compute w = (A‖I_k)y and c = H(w‖μ);
  3. compute z = y + c·(s_1; s_2);
  4. if ‖z‖_∞ ≥ γ − β, restart the computation from step (1), where β is a bound such that ‖c·(s_1; s_2)‖_∞ ≤ β holds for all possible c, s_1, s_2; otherwise, output the signature σ = (z, c).
– Verification: given the public key pk = (A, t), a message μ ∈ {0,1}* and a signature σ = (z, c), return 1 if ‖z‖_∞ < γ − β and c = H((A‖I_k)z − ct‖μ), otherwise return 0.

Furthermore, since w = (A‖I_k)y = Ay_1 + y_2, where y = (y_1; y_2) and γ ≪ q, the higher bits of (each coefficient of) w are almost determined by the higher-order bits of (the corresponding coefficient of) Ay_1. This fact has been utilized by [5,12] to compress the signature size. Formally, let HighBits(z, 2γ_2) and LowBits(z, 2γ_2) denote the polynomial vectors defined by the higher-order bits and lower-order bits of a polynomial vector z ∈ R_q^k with respect to a parameter γ_2. We can obtain the following signature scheme:

– Key generation: randomly choose A ←$ R_q^{k×ℓ}, s_1 ←$ S_η^ℓ and s_2 ←$ S_η^k, and compute t = As_1 + s_2. Return the public key pk = (A, t) and secret key sk = (s_1, s_2, pk).
– Signing: given the secret key sk = (s_1, s_2, pk) and a message μ ∈ {0,1}*,
  1. randomly choose y ←$ S_{γ_1−1}^ℓ;
  2. compute w = Ay and c = H(HighBits(w, 2γ_2)‖μ);
  3. compute z = y + cs_1;
  4. if ‖z‖_∞ ≥ γ_1 − β or ‖LowBits(Ay − cs_2, 2γ_2)‖_∞ ≥ γ_2 − β, restart the computation from step (1), where β is a bound such that ‖cs_1‖_∞, ‖cs_2‖_∞ ≤ β hold for all possible c, s_1, s_2; otherwise, output the signature σ = (z, c).
– Verification: given the public key pk = (A, t), a message μ ∈ {0,1}* and a signature σ = (z, c), return 1 if ‖z‖_∞ < γ_1 − β and c = H(HighBits(Az − ct, 2γ_2)‖μ), otherwise return 0.

Essentially, the checks in step (4) are used to ensure that (1) the signature (z, c) does not leak information about s_1 and s_2, and (2) HighBits(Az − ct, 2γ_2) = HighBits(Ay − cs_2, 2γ_2) = HighBits(w, 2γ_2) (note that w = Ay = Ay − cs_2 + cs_2, ‖LowBits(Ay − cs_2, 2γ_2)‖_∞ < γ_2 − β and ‖cs_2‖_∞ ≤ β). By setting γ_1 = 2γ_2, we require the MLWE_{n,k,ℓ,q,η} problem and a variant of the MSIS^∞_{n,k,(ℓ+k+1),q,2γ_1+2} problem to be hard, to ensure the security of the secret key and the unforgeability of signatures, respectively. By a careful examination of the above scheme, one can find that the computational efficiency of the signing algorithm is determined by the expected number of repetitions in step (4):

  ((2(γ_1 − β) − 1)/(2γ_1 − 1))^{−n·ℓ} · ((2(γ_2 − β) − 1)/(2γ_2 − 1))^{−n·k}   (= N_1 · N_2),

where N_1 and N_2 are determined by the first and second checks in step (4), respectively. Clearly, it is possible to modify N_1 and N_2 while keeping the total number of repetitions N = N_1·N_2 unchanged. Note that the size of the signature is related to γ_1 and is irrelevant to γ_2, which means that a shorter signature can be obtained by using a smaller γ_1. However, simply using a smaller γ_1 will also give a bigger N_1, and thus worse computational efficiency. In order to obtain a short signature without (significantly) affecting the computational efficiency:

– We use the AMLWE problem for the security of the secret key, which allows us to use a smaller γ_1 by reducing ‖s_1‖_∞ (and thus β = ‖cs_1‖_∞ in the expression of N_1);
– We use the AMSIS problem for the unforgeability of signatures, which further allows us to use a smaller γ_1 by increasing γ_2 so as to keep N = N_1·N_2 unchanged.

Note that reducing ‖s_1‖_∞ (by choosing a smaller η_1) may weaken the hardness of the underlying AMLWE problem (if we do not change other parameters); we choose to increase η_2 (and thus ‖s_2‖_∞) to retain the hardness. Similarly, increasing γ_2 will weaken the hardness of the underlying AMSIS problem, and we choose to reduce γ_1 to retain the hardness. Both strategies crucially rely on the asymmetries of the underlying problems.

4.2 The Construction

Let n, k, ℓ, d, q, η_1, η_2, β_1, β_2, γ_1, γ_2, ω ∈ Z be positive integers. Let R = Z[x]/(x^n + 1) and R_q = Z_q[x]/(x^n + 1). Denote by B_60 the set of elements of R that have 60 coefficients equal to −1 or 1 and the rest equal to 0; thus |B_60| = 2^60 · (n choose 60). When n = 256, |B_60| > 2^256. Let H_1 : {0,1}^256 → R_q^{k×ℓ}, H_2 : {0,1}* → {0,1}^384, H_3 : {0,1}* → S_{γ_1−1}^ℓ and H_4 : {0,1}* → B_60 be four hash functions. We now present the description of our scheme ΠSIG = (KeyGen, Sign, Verify):

– ΠSIG.KeyGen(κ): first randomly choose ρ, K ←$ {0,1}^256, s_1 ←$ S_{η_1}^ℓ, s_2 ←$ S_{η_2}^k. Then, compute A = H_1(ρ) ∈ R_q^{k×ℓ}, t = As_1 + s_2 ∈ R_q^k, (t_1, t_0) = Power2Round_q(t, d) and tr = H_2(ρ‖t_1) ∈ {0,1}^384. Finally, return the public key pk = (ρ, t_1) and secret key sk = (ρ, K, tr, s_1, s_2, t_0).


– ΠSIG.Sign(sk, M): given sk = (ρ, K, tr, s_1, s_2, t_0) and a message M ∈ {0,1}*, first compute A = H_1(ρ) ∈ R_q^{k×ℓ}, μ = H_2(tr‖M) ∈ {0,1}^384, and set ctr = 0. Then, perform the following computations:
  1. y = H_3(K‖μ‖ctr) ∈ S_{γ_1−1}^ℓ and w = Ay;
  2. w_1 = HighBits_q(w, 2γ_2) and c = H_4(μ‖w_1) ∈ B_60;
  3. z = y + cs_1 and u = w − cs_2;
  4. (r_1, r_0) = Decompose_q(u, 2γ_2);
  5. if ‖z‖_∞ ≥ γ_1 − β_1 or ‖r_0‖_∞ ≥ γ_2 − β_2 or r_1 ≠ w_1, then set ctr = ctr + 1 and restart the computation from step (1);
  6. compute v = ct_0 and h = MakeHint_q(−v, u + v, 2γ_2);
  7. if ‖v‖_∞ ≥ γ_2 or the number of 1's in h is greater than ω, then set ctr = ctr + 1 and restart the computation from step (1);
  8. return the signature σ = (z, h, c).
– ΠSIG.Verify(pk, M, σ): given the public key pk = (ρ, t_1), a message M ∈ {0,1}* and a signature σ = (z, h, c), first compute A = H_1(ρ) ∈ R_q^{k×ℓ} and μ = H_2(H_2(pk)‖M) ∈ {0,1}^384. Let u = Az − ct_1·2^d, w_1 = UseHint_q(h, u, 2γ_2) and c′ = H_4(μ‖w_1). Finally, return 1 if ‖z‖_∞ < γ_1 − β_1, c = c′ and the number of 1's in h is at most ω; otherwise return 0.

We note that the hash function H_3 is basically used to make the signing algorithm Sign deterministic, which is needed for a (slightly) tighter security proof in the quantum random oracle model. One can remove H_3 by directly choosing y ←$ S_{γ_1−1}^ℓ at random, and obtain a probabilistic signing algorithm. We also note that the hash function H_4 can be constructed by using an extendable output function such as SHAKE-256 [29] and a so-called "inside-out" version of the Fisher-Yates shuffle algorithm [21]. The detailed constructions of the hash functions H_3 and H_4 can be found in [12].

Correctness. Note that if ‖ct_0‖_∞ < γ_2, by Lemma 1 we have UseHint_q(h, w − cs_2 + ct_0, 2γ_2) = HighBits_q(w − cs_2, 2γ_2). Since w = Ay and t = As_1 + s_2, we have

  w − cs_2 = Ay − cs_2 = A(z − cs_1) − cs_2 = Az − ct,
  w − cs_2 + ct_0 = Az − ct_1·2^d,

where t = t_1·2^d + t_0. Therefore, the verification algorithm computes UseHint_q(h, Az − ct_1·2^d, 2γ_2) = HighBits_q(w − cs_2, 2γ_2). As the signing algorithm checks that r_1 = w_1, this is equivalent to HighBits_q(w − cs_2, 2γ_2) = HighBits_q(w, 2γ_2). Hence, the w_1 computed by the verification algorithm is the same as that of the signing algorithm, and thus the verification algorithm always returns 1.
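The remark above about H_4 can be made concrete: the sketch below samples a challenge in B_60 with an inside-out Fisher-Yates shuffle in the spirit of Dilithium's sampler [12], using Python's generic RNG as a stand-in for a SHAKE-256-based expander, and checks the cardinality claim |B_60| > 2^256.

```python
# Sampling a challenge c in B_60 (n = 256 coefficients, exactly 60 of them +-1)
# via an inside-out Fisher-Yates shuffle; random is a stand-in for the
# deterministic SHAKE-256(mu || w_1) expander that a real H_4 would use.
import math, random

n, tau = 256, 60

def sample_challenge():
    c = [0] * n
    for i in range(n - tau, n):
        j = random.randint(0, i)       # inside-out Fisher-Yates step
        c[i] = c[j]
        c[j] = random.choice((-1, 1))
    return c

assert sum(map(abs, sample_challenge())) == tau

# |B_60| = 2^60 * binom(n, 60) > 2^256 for n = 256, as claimed:
print(60 + math.log2(math.comb(n, tau)))   # ~ 257.0 bits
```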


Number of Repetitions. Since our signature scheme uses rejection sampling [23,24] to generate (z, h), the efficiency of the signing algorithm is determined by the number of repetitions caused by steps (5) and (7) of the signing algorithm. We first estimate the probability that ‖z‖_∞ < γ_1 − β_1 holds in step (5). Assume that ‖cs_1‖_∞ ≤ β_1 holds. Each coefficient of y is chosen randomly from 2γ_1 − 1 possible values, and for each fixed cs_1, a coefficient of z = y + cs_1 is acceptable (i.e., of magnitude at most γ_1 − β_1 − 1) for exactly 2(γ_1 − β_1) − 1 of these values, since the corresponding y = z − cs_1 then has magnitude at most γ_1 − 1. Therefore, the probability that ‖z‖_∞ ≤ γ_1 − β_1 − 1 is

  ((2(γ_1 − β_1) − 1)/(2γ_1 − 1))^{n·ℓ} = (1 − β_1/(γ_1 − 1/2))^{n·ℓ} ≈ e^{−nℓβ_1/γ_1}.

Now, we estimate the probability that ‖r_0‖_∞ = ‖LowBits_q(w − cs_2, 2γ_2)‖_∞ < γ_2 − β_2 holds in step (5). If we (heuristically) assume that each coefficient of r_0 is uniformly distributed modulo 2γ_2, the probability that ‖r_0‖_∞ < γ_2 − β_2 is

  ((2(γ_2 − β_2) − 1)/(2γ_2))^{n·k} ≈ e^{−nkβ_2/γ_2}.

By Lemma 2, if ‖cs_2‖_∞ ≤ β_2, then ‖r_0‖_∞ < γ_2 − β_2 implies that r_1 = w_1. This means that the overall probability that step (5) does not cause a repetition is ≈ e^{−n(ℓβ_1/γ_1 + kβ_2/γ_2)}. Finally, under our choice of parameters, the probability that step (7) of the signing algorithm causes a repetition is less than 1%. Thus, the expected number of repetitions is roughly e^{n(ℓβ_1/γ_1 + kβ_2/γ_2)}.
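As a sanity check on the estimate just derived, the expected repetition counts for the three parameter sets of Table 6 can be recomputed directly (n = 256; k, ℓ, β_i, γ_i from Table 6):

```python
# Recomputing the "Reps." column of Table 6 from
# exp(n * (l*beta_1/gamma_1 + k*beta_2/gamma_2)), with n = 256.
from math import exp

params = {                      # name: (k, l, beta1, beta2, gamma1, gamma2)
    "Pi_SIG-1024": (4, 3, 120, 175, 131072, 168448),
    "Pi_SIG-1280": (5, 4, 120, 275, 131072, 322560),
    "Pi_SIG-1536": (6, 5,  60, 275, 131072, 322560),
}
for name, (k, l, b1, b2, g1, g2) in params.items():
    print(name, f"{exp(256 * (l * b1 / g1 + k * b2 / g2)):.2f}")
# -> about 5.85, 7.60, 6.66: matching Table 6's 5.86, 7.61, 6.67 up to rounding
```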

4.3 Provable Security

In the full version [32], we show that, under the hardness of the AMLWE problem and a rounding variant AMSIS-R of AMSIS (which is needed for compressing the public key, see Appendix A), our scheme ΠSIG is provably SUF-CMA secure in the ROM. Formally, we have the following theorem.

Theorem 4. If H_1 : {0,1}^256 → R_q^{k×ℓ} and H_4 : {0,1}* → B_60 are random oracles, the outputs of H_3 : {0,1}* → S_{γ_1−1}^ℓ are pseudo-random, and H_2 : {0,1}* → {0,1}^384 is a collision-resistant hash function, then ΠSIG is SUF-CMA secure under the AMLWE_{n,q,k,ℓ,η_1,η_2} and AMSIS-R^∞_{n,q,d,k,ℓ,4γ_2+2,2γ_1} assumptions.

Furthermore, under an interactive variant SelfTargetAMSIS of the AMSIS problem (which is an asymmetric analogue of the SelfTargetMSIS problem introduced by Ducas et al. [12]), we can also prove that our scheme ΠSIG is SUF-CMA secure. Formally, we have the following theorem.


Theorem 5. In the quantum random oracle model (QROM), the signature scheme ΠSIG is SUF-CMA secure under the following assumptions: AMLWE_{n,q,k,ℓ,η_1,η_2}, AMSIS^∞_{n,q,d,k,ℓ,4γ_2+2,2(γ_1−β_1)} and SelfTargetAMSIS^∞_{H_4,n,q,k,ℓ_1,ℓ_2,4γ_2,γ_1−β_1}.

4.4 Choices of Parameters

In Table 6, we provide three sets of parameters (namely, ΠSIG-1024, ΠSIG-1280 and ΠSIG-1536) for our signature scheme ΠSIG, which provide 80-bit, 128-bit and 160-bit quantum security, respectively (corresponding to 98-bit, 141-bit and 178-bit classical security, respectively). A concrete estimation of the security provided by the parameter sets will be given in Sect. 5. Among them, ΠSIG-1280 is the recommended parameter set.

Table 6. Parameters for ΠSIG (the column "Reps." indicates the expected number of repetitions that the signing algorithm takes to output a valid signature)

Parameters | (k, ℓ, q, d, ω) | (η1, η2) | (β1, β2) | (γ1, γ2) | Reps. | Quant. Sec.
ΠSIG-1024 | (4, 3, 2021377, 13, 80) | (2, 3) | (120, 175) | (131072, 168448) | 5.86 | 90
ΠSIG-1280 | (5, 4, 3870721, 14, 96) | (2, 5) | (120, 275) | (131072, 322560) | 7.61 | 128
ΠSIG-1536 | (6, 5, 3870721, 14, 120) | (1, 5) | (60, 275) | (131072, 322560) | 6.67 | 163

Our scheme ΠSIG is implemented under the same machine configuration as in Sect. 3.4, using standard C, with some partial optimizations (e.g., AVX2 instructions) adopted to speed up basic operations such as the NTT. The average CPU cycles (averaged over 10000 runs) needed for running the algorithms are given in Table 3.

5 Known Attacks Against AMLWE and AMSIS

Solvers for LWE mainly include primal attacks and dual attacks (against the underlying lattice problems), as well as direct solving algorithms such as BKW and Arora-Ge [2]. The BKW and Arora-Ge attacks need sub-exponentially (or even exponentially) many samples, and thus they are not relevant to the public-key cryptography setting, where only a restricted number of samples is available. Therefore, for analyzing and evaluating practical lattice-based cryptosystems, we typically consider only primal and dual attacks. Further, these two attacks, which are currently the most relevant and effective, do not seem to have additional advantages in solving RLWE/MLWE over standard LWE. Thus, when analyzing RLWE- or MLWE-based cryptosystems, one often translates RLWE/MLWE instances to the corresponding LWE counterparts [6,12] and then applies the attacks. In particular, one first transforms AMLWE_{n,q,k,ℓ,α_1,α_2} into the corresponding ALWE problem, and then applies, generalizes and optimizes the LWE solving algorithms for ALWE. Since any bounded centrally symmetric distribution can be regarded as subgaussian for a certain parameter, for simplicity and without loss of generality,


we consider the case where the secret vector and error vector in ALWE_{n,q,m,α_1,α_2} are sampled from subgaussians with parameters α_1 and α_2, respectively. Formally, the problem is to recover s from samples (A, b = As + e) ∈ Z_q^{m×n} × Z_q^m, where A ←$ Z_q^{m×n}, s ← χ_{α_1}^n and e ← χ_{α_2}^m. In the full version [32], we consider not only the traditional primal attack and dual attack against ALWE, but also two variants of the primal attack and three variants of the dual attack, which solve the ALWE problem more efficiently by taking into account the asymmetry of ALWE.

As for the best known attacks against (A)SIS, the BKZ lattice basis reduction algorithm and its variants are more useful for solving the ℓ_2-norm (A)SIS problem than its ℓ_∞-norm counterpart. Note that a solution x = (x_1^T, x_2^T)^T ∈ Z^{m_1+m_2} to an infinity-norm ASIS instance A ∈ Z_q^{n×(m_1+m_2−n)}, where (I_n‖A)x = 0 mod q and ‖x‖_∞ ≤ max(β_1, β_2) < q, may have ‖x‖ > q, i.e., an ℓ_2-norm even larger than that of the trivial solution u = (q, 0, ..., 0)^T. We will follow [12] to solve the ℓ_∞-norm SIS problem. Further, we can always apply an ℓ_2-norm SIS solver to the ℓ_∞-norm SIS problem due to the relation ‖x‖_∞ ≤ ‖x‖. Hereafter we refer to the above two algorithms as the ℓ_∞-norm and ℓ_2-norm attacks respectively, and use them to estimate the concrete complexity of solving ASIS^∞_{n,q,m_1,m_2,β_1,β_2}. As before, when analyzing RSIS- or MSIS-based cryptosystems, one often translates RSIS/MSIS instances to the corresponding SIS counterparts [12] and then applies the attacks. In the full version [32], we consider not only the traditional ℓ_2-norm attack and ℓ_∞-norm attack against ASIS, but also one variant of the ℓ_2-norm attack and two variants of the ℓ_∞-norm attack, which solve the ASIS problem more efficiently by taking into consideration the asymmetry of ASIS. In the following two subsections, we summarize those attacks against our ΠKEM and ΠSIG schemes.

5.1 Concrete Security of ΠKEM

The complexity varies with the type of attack, the number m of samples used and the choice of the block size b ∈ Z for the BKZ-b algorithm. Therefore, in order to obtain an overall security estimation sec of ΠKEM under the three proposed parameter settings, we enumerate all possible values of m (the number of ALWE samples) and b to reach a conservative estimation of the computational complexity of primal attacks and dual attacks, using a python script (which is planned to be uploaded to a public repository together with the implementation of our schemes). Tables 7 and 8 estimate the complexities of the three parameter sets against primal attacks and dual attacks by taking the minimum of sec over all possible values of (m, b). Taking into account the above, Table 9 shows the overall security of ΠKEM.
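Although the scripts themselves are not included here, the (b, sec) pairs in Tables 7 and 8 appear consistent with the usual conservative core-SVP methodology; the sketch below states that model explicitly, with the sieving-cost exponents 0.292 (classical) and 0.265 (quantum) being our assumption, as the text does not state the constants.

```python
# The core-SVP cost model that appears consistent with Tables 7-8: an attack
# needing BKZ block size b is costed about 2^(0.292*b) classically and
# 2^(0.265*b) quantumly (sieving exponents; our assumption).
def core_svp(b):
    return 0.292 * b, 0.265 * b

print(core_svp(390))   # ~ (113.9, 103.4), cf. the (761, 390, 114) and
                       # (761, 390, 103) entries for Pi_KEM-512 in Table 7
```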

Table 7. The security of ΠKEM against primal attacks

Parameters | Attack model | Traditional (m, b, sec) | Variant 1 (m, b, sec) | Variant 2 (m, b, sec)
ΠKEM-512 | Classical | (761, 390, 114) | (531, 405, 118) | (476, 385, 112)
ΠKEM-512 | Quantum | (761, 390, 103) | (531, 405, 107) | (476, 385, 102)
ΠKEM-768 | Classical | (1021, 640, 187) | (646, 575, 168) | (556, 560, 163)
ΠKEM-768 | Quantum | (1021, 640, 169) | (646, 575, 152) | (556, 560, 148)
ΠKEM-1024 | Classical | (1526, 825, 241) | (886, 835, 244) | (786, 815, 238)
ΠKEM-1024 | Quantum | (1531, 825, 218) | (886, 835, 221) | (786, 815, 216)

Table 8. The security of ΠKEM against dual attacks

Parameters | Attack model | Traditional (m, b, sec) | Variant 1 (m, b, sec) | Variant 2 (m, b, sec) | Variant 3 (m, b, sec)
ΠKEM-512 | Classical | (766, 385, 112) | (736, 395, 115) | (595, 380, 111) | (711, 380, 111)
ΠKEM-512 | Quantum | (766, 385, 102) | (736, 395, 104) | (596, 380, 100) | (711, 380, 100)
ΠKEM-768 | Classical | (1021, 620, 181) | (881, 570, 166) | (586, 555, 162) | (776, 555, 162)
ΠKEM-768 | Quantum | (1021, 620, 164) | (881, 570, 151) | (586, 555, 147) | (776, 555, 147)
ΠKEM-1024 | Classical | (1531, 810, 237) | (981, 810, 239) | (906, 805, 236) | (1171, 805, 235)
ΠKEM-1024 | Quantum | (1531, 810, 215) | (981, 810, 217) | (906, 805, 214) | (1171, 805, 213)

Table 9. The overall security of ΠKEM

Parameters | Classical security | Quantum security
ΠKEM-512 | 111 | 100
ΠKEM-768 | 162 | 147
ΠKEM-1024 | 235 | 213

Table 10. Comparison between AMLWE and MLWE under "comparable" parameters

Parameters | (n, k, q, η1, η2) | Classical security | Quantum security | η1·η2
ΠKEM-512 | (256, 2, 7681, 2, 12) | 111 | 100 | 24
MLWE I | (256, 2, 7681, 5, 5) | 112 | 102 | 25
ΠKEM-768 | (256, 3, 7681, 1, 4) | 162 | 147 | 4
MLWE II | (256, 3, 7681, 2, 2) | 163 | 148 | 4
ΠKEM-1024 | (512, 2, 12289, 2, 8) | 235 | 213 | 16
MLWE III | (512, 2, 12289, 4, 4) | 236 | 214 | 16

Further, in order to study the relation between the complexities of asymmetric (M)LWE and standard (M)LWE, we give a comparison in Table 10 between AMLWE and the corresponding MLWE, in terms of the parameter choices used by ΠKEM, which shows that the hardness of AMLWE with Gaussian standard deviations


α_1, α_2 is "comparable" to that of MLWE with Gaussian standard deviation √(α_1·α_2). We note that the comparison only focuses on security: the corresponding MLWE instances, for the parameters given in Table 10, if ever used to build a KEM, cannot achieve the same efficiency and correctness as our ΠKEM does.

5.2 Concrete Security of ΠSIG

As before, in order to obtain an overall security estimation of ΠSIG under the three proposed parameter settings against key recovery attacks, we enumerate all possible values of m and b to reach a conservative estimation sec of the computational complexities of primal attacks and dual attacks, using a python script. Tables 11 and 12 estimate the complexities of the three parameter sets of the underlying ALWE problem against primal attacks and dual attacks by taking the minimum of sec over all possible values of (m, b). Likewise, we enumerate all possible values of m and b to reach a conservative estimation sec of the computational complexities of the ℓ_2-norm and ℓ_∞-norm attacks. Tables 13 and 14 estimate the complexities of the three parameter sets of the underlying ASIS problem against ℓ_2-norm and ℓ_∞-norm attacks by taking the minimum of sec over all possible values of (m, b). In Table 15, we give the overall security of ΠSIG under the three parameter settings against key recovery and forgery attacks, which takes account of both the AMLWE and AMSIS attacks.

Table 11. The security of ΠSIG against AMLWE primal attacks (the last row of the "Traditional" column has no figures, because the complexity (i.e., sec) of the traditional attack for ΠSIG-1536 is too large, and our python script fails to compute it)

Parameters | Attack model | Traditional (m, b, sec) | Variant 1 (m, b, sec) | Variant 2 (m, b, sec)
ΠSIG-1024 | Classical | (1021, 555, 162) | (671, 345, 100) | (741, 340, 99)
ΠSIG-1024 | Quantum | (1021, 555, 147) | (671, 345, 91) | (741, 340, 90)
ΠSIG-1280 | Classical | (1276, 1060, 310) | (996, 500, 146) | (896, 490, 143)
ΠSIG-1280 | Quantum | (1276, 1060, 281) | (996, 500, 132) | (896, 490, 129)
ΠSIG-1536 | Classical | - | (1101, 660, 193) | (1106, 615, 179)
ΠSIG-1536 | Quantum | - | (1101, 660, 175) | (1106, 615, 163)

Table 12. The security of ΠSIG against AMLWE dual attacks

Parameters | Attack model | Traditional (m, b, sec) | Variant 1 (m, b, sec) | Variant 2 (m, b, sec) | Variant 3 (m, b, sec)
ΠSIG-1024 | Classical | (1021, 550, 160) | (786, 340, 99) | (706, 340, 99) | (706, 340, 99)
ΠSIG-1024 | Quantum | (1021, 550, 145) | (786, 340, 90) | (706, 340, 90) | (706, 340, 90)
ΠSIG-1280 | Classical | (1276, 1050, 307) | (1121, 495, 144) | (966, 485, 141) | (966, 485, 141)
ΠSIG-1280 | Quantum | (1276, 1050, 278) | (1121, 495, 131) | (966, 485, 128) | (966, 485, 128)
ΠSIG-1536 | Classical | (1535, 1535, 464) | (1381, 650, 190) | (1031, 615, 179) | (1036, 615, 179)
ΠSIG-1536 | Quantum | (1235, 1535, 422) | (1381, 650, 172) | (1031, 615, 163) | (1036, 615, 163)

Table 13. The security of ΠSIG against the ℓ2-norm attack (for the ASIS problem)

Parameters | Attack model | Traditional (m, b, sec) | Variant 1 (m, b, sec)
ΠSIG-1024 | Classical | (2031, 750, 219) | (2031, 665, 194)
ΠSIG-1024 | Quantum | (2031, 750, 198) | (2031, 665, 176)
ΠSIG-1280 | Classical | (2537, 1100, 321) | (2537, 900, 263)
ΠSIG-1280 | Quantum | (2537, 1100, 291) | (2537, 900, 238)
ΠSIG-1536 | Classical | (3043, 1395, 408) | (3043, 1140, 333)
ΠSIG-1536 | Quantum | (3043, 1395, 370) | (3043, 1140, 302)

Table 14. The security of ΠSIG against the ℓ∞-norm attack (for the ASIS problem)

Parameters | Attack model | Traditional (m, b, sec) | Variant 1 (m, b, sec) | Variant 2 (m, b, sec)
ΠSIG-1024 | Classical | (1831, 385, 112) | (1781, 385, 112) | (1731, 360, 105)
ΠSIG-1024 | Quantum | (1831, 385, 102) | (1781, 385, 102) | (1731, 360, 95)
ΠSIG-1280 | Classical | (2387, 495, 144) | (2387, 545, 159) | (2187, 485, 141)
ΠSIG-1280 | Quantum | (2387, 495, 131) | (2387, 545, 144) | (2187, 485, 128)
ΠSIG-1536 | Classical | (2743, 630, 184) | (2793, 690, 201) | (2543, 615, 179)
ΠSIG-1536 | Quantum | (2743, 630, 167) | (2793, 690, 183) | (2543, 615, 163)

Table 15. The overall security of ΠSIG

Parameters | Classical security | Quantum security
ΠSIG-1024 | 99 | 90
ΠSIG-1280 | 141 | 128
ΠSIG-1536 | 179 | 163

Acknowledgments. We thank the anonymous reviewers for their helpful suggestions. Jiang Zhang is supported by the National Natural Science Foundation of China (Grant Nos. 61602046, 61932019), the National Key Research and Development Program of China (Grant Nos. 2017YFB0802005, 2018YFB0804105), the Young Elite Scientists Sponsorship Program by CAST (2016QNRC001), and the Opening Project of Guangdong Provincial Key Laboratory of Data Security and Privacy Protection (2017B030301004). Yu Yu is supported by the National Natural Science Foundation of China (Grant No. 61872236), the National Cryptography Development Fund (Grant No. MMJJ20170209). Shuqin Fan and Zhenfeng Zhang are supported by the National Key Research and Development Program of China (Grant No. 2017YFB0802005).

A Definitions of Hard Problems

The AMLWE Problem (with Binomial Distributions). The decisional AMLWE problem AMLWE_{n,q,k,ℓ,η1,η2} asks to distinguish (A, b = As + e) from uniform over R_q^{k×ℓ} × R_q^k, where A ←$ R_q^{k×ℓ}, s ←$ B_{η1}^ℓ and e ←$ B_{η2}^k. Obviously, when η1 = η2, the AMLWE problem is the standard MLWE problem.

The AMLWE-R Problem. The AMLWE-R problem AMLWE-R_{n,q,p,k,ℓ,η1,η2} asks to distinguish

  (A, t̄ = ⌈t⌋_{q→p}, A^T s + e, ⌈t̄⌋_{p→q}^T s + e′)

from (A′, ⌈t′⌋_{q→p}, u, v) ∈ R_q^{ℓ×k} × R_p^ℓ × R_q^k × R_q, where A, A′ ←$ R_q^{ℓ×k}, s ←$ B_{η1}^ℓ, e ←$ B_{η2}^k, e′ ←$ B_{η2}, t, t′ ←$ R_q^ℓ, u ←$ R_q^k and v ←$ R_q.

The AMSIS Problem. Given a uniform matrix A ∈ R_q^{k×(ℓ1+ℓ2−k)}, the (Hermite Normal Form) AMSIS problem AMSIS^∞_{n,q,k,ℓ1,ℓ2,β1,β2} over the ring R_q asks to find a non-zero vector x ∈ R_q^{ℓ1+ℓ2}\{0} such that (I_k‖A)x = 0 mod q, ‖x_1‖_∞ ≤ β1 and ‖x_2‖_∞ ≤ β2, where x = (x_1; x_2) ∈ R_q^{ℓ1+ℓ2}, x_1 ∈ R_q^{ℓ1} and x_2 ∈ R_q^{ℓ2}.

The AMSIS-R Problem. Given a uniformly random matrix A ∈ R_q^{k×(ℓ1+ℓ2−k)} and a uniformly random vector t ∈ R_q^k, the (Hermite Normal Form) AMSIS-R problem AMSIS-R^∞_{n,q,d,k,ℓ1,ℓ2,β1,β2} over the ring R_q asks to find a non-zero vector x ∈ R_q^{ℓ1+ℓ2+1}\{0} such that (I_k‖A‖t_1·2^d)x = 0 mod q, ‖x_1‖_∞ ≤ β1, ‖x_2‖_∞ ≤ β2 and ‖x_3‖_∞ ≤ 2, where x = (x_1; x_2; x_3) ∈ R_q^{ℓ1+ℓ2+1}, x_1 ∈ R_q^{ℓ1}, x_2 ∈ R_q^{ℓ2}, x_3 ∈ R_q and (t_1, t_0) = Power2Round_q(t, d).

The SelfTargetAMSIS Problem. Let H : {0,1}* → B_60 be a (quantum) random oracle. Given a uniformly random matrix A ∈ R_q^{k×(ℓ1+ℓ2−k)} and a uniform vector t ∈ R_q^k, the SelfTargetAMSIS problem SelfTargetAMSIS^∞_{n,q,k,ℓ1,ℓ2,β1,β2} over the ring R_q asks to find a vector y = (y_1; y_2; c) and a message μ ∈ {0,1}* such that ‖y_1‖_∞ ≤ β1, ‖y_2‖_∞ ≤ β2, ‖c‖_∞ ≤ 1 and H(μ, (I_k‖A‖t)y) = c.

References

1. Ajtai, M.: Generating hard instances of lattice problems (extended abstract). In: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, STOC 1996, pp. 99–108. ACM, New York, NY, USA (1996)
2. Albrecht, M.R., Player, R., Scott, S.: On the concrete hardness of learning with errors. J. Math. Cryptol. 9, 169–203 (2015)


3. Alkim, E., Ducas, L., Pöppelmann, T., Schwabe, P.: Post-quantum key exchange - a new hope. In: USENIX Security Symposium 2016 (2016)
4. Applebaum, B., Cash, D., Peikert, C., Sahai, A.: Fast cryptographic primitives and circular-secure encryption based on hard learning problems. In: Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 595–618. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-03356-8_35
5. Bai, S., Galbraith, S.D.: An improved compression technique for signatures based on learning with errors. In: Benaloh, J. (ed.) CT-RSA 2014. LNCS, vol. 8366, pp. 28–47. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-04852-9_2
6. Bos, J., et al.: CRYSTALS - Kyber: a CCA-secure module-lattice-based KEM. In: 2018 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 353–367, April 2018
7. Brakerski, Z., Gentry, C., Vaikuntanathan, V.: Fully homomorphic encryption without bootstrapping. In: Innovations in Theoretical Computer Science, ITCS, pp. 309–325 (2012)
8. Brakerski, Z., Vaikuntanathan, V.: Efficient fully homomorphic encryption from (standard) LWE. In: 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS), pp. 97–106, October 2011
9. Brakerski, Z., Langlois, A., Peikert, C., Regev, O., Stehlé, D.: Classical hardness of learning with errors. In: Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, STOC 2013, pp. 575–584. ACM, New York, NY, USA (2013)
10. Brakerski, Z., Vaikuntanathan, V.: Fully homomorphic encryption from ring-LWE and security for key dependent messages. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 505–524. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22792-9_29
11. Cheon, J.H., Kim, D., Lee, J., Song, Y.: Lizard: cut off the tail! A practical post-quantum public-key encryption from LWE and LWR. In: Catalano, D., De Prisco, R. (eds.) SCN 2018. LNCS, vol. 11035, pp. 160–177. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-98113-0_9
12. Ducas, L., et al.: CRYSTALS-Dilithium: a lattice-based digital signature scheme. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2018(1), 238–268 (2018)
13. Fiat, A., Shamir, A.: How to prove yourself: practical solutions to identification and signature problems. In: Odlyzko, A.M. (ed.) CRYPTO 1986. LNCS, vol. 263, pp. 186–194. Springer, Heidelberg (1987). https://doi.org/10.1007/3-540-47721-7_12
14. Fujisaki, E., Okamoto, T.: Secure integration of asymmetric and symmetric encryption schemes. J. Cryptol. 26(1), 80–101 (2013)
15. Gentry, C., Peikert, C., Vaikuntanathan, V.: Trapdoors for hard lattices and new cryptographic constructions. In: Proceedings of the 40th Annual ACM Symposium on Theory of Computing, STOC 2008, pp. 197–206. ACM, New York, NY, USA (2008)
16. Goldwasser, S., Kalai, Y., Peikert, C., Vaikuntanathan, V.: Robustness of the learning with errors assumption. In: Proceedings of Innovations in Computer Science 2010. Tsinghua University Press (2010)
17. Grover, L.K.: A fast quantum mechanical algorithm for database search. In: STOC 1996, pp. 212–219. ACM (1996)
18. Hofheinz, D., Hövelmanns, K., Kiltz, E.: A modular analysis of the Fujisaki-Okamoto transformation. In: Kalai, Y., Reyzin, L. (eds.) TCC 2017. LNCS, vol. 10677, pp. 341–371. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70500-2_12


19. IBM: IBM unveils world's first integrated quantum computing system for commercial use (2019). https://newsroom.ibm.com/2019-01-08-IBM-Unveils-Worlds-First-Integrated-Quantum-Computing-System-for-Commercial-Use
20. Jiang, H., Zhang, Z., Chen, L., Wang, H., Ma, Z.: IND-CCA-secure key encapsulation mechanism in the quantum random oracle model, revisited. In: Shacham, H., Boldyreva, A. (eds.) CRYPTO 2018. LNCS, vol. 10993, pp. 96–125. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96878-0_4
21. Knuth, D.: The Art of Computer Programming, vol. 2, 3rd edn. Addison-Wesley, Boston (1997)
22. Lindner, R., Peikert, C.: Better key sizes (and attacks) for LWE-based encryption. In: Kiayias, A. (ed.) CT-RSA 2011. LNCS, vol. 6558, pp. 319–339. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-19074-2_21
23. Lyubashevsky, V.: Fiat-Shamir with aborts: applications to lattice and factoring-based signatures. In: Matsui, M. (ed.) ASIACRYPT 2009. LNCS, vol. 5912, pp. 598–616. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-10366-7_35
24. Lyubashevsky, V.: Lattice signatures without trapdoors. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 738–755. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-29011-4_43
25. Lyubashevsky, V., Peikert, C., Regev, O.: On ideal lattices and learning with errors over rings. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 1–23. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13190-5_1
26. Micciancio, D.: On the hardness of learning with errors with binary secrets. Theory Comput. 14(13), 1–17 (2018)
27. Micciancio, D., Regev, O.: Worst-case to average-case reductions based on Gaussian measures. In: Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science 2004, pp. 372–381 (2004)
28. NSA National Security Agency: Cryptography today, August 2015. https://www.nsa.gov/ia/programs/suiteb_cryptography/
29. National Institute of Standards and Technology: SHA-3 standard: permutation-based hash and extendable-output functions. FIPS PUB 202 (2015). http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf
30. Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. In: Proceedings of the Thirty-Seventh Annual ACM Symposium on Theory of Computing, STOC 2005, pp. 84–93. ACM, New York, NY, USA (2005)
31. Shor, P.: Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 26(5), 1484–1509 (1997)
32. Zhang, J., Yu, Y., Fan, S., Zhang, Z., Yang, K.: Tweaking the asymmetry of asymmetric-key cryptography on lattices: KEMs and signatures of smaller sizes. Cryptology ePrint Archive, Report 2019/510 (2019)

MPSign: A Signature from Small-Secret Middle-Product Learning with Errors

Shi Bai¹, Dipayan Das², Ryo Hiromasa³, Miruna Rosca⁴,⁵, Amin Sakzad⁶, Damien Stehlé⁴,⁷(B), Ron Steinfeld⁶, and Zhenfei Zhang⁸

¹ Department of Mathematical Sciences, Florida Atlantic University, Boca Raton, USA
² Department of Mathematics, National Institute of Technology Durgapur, Durgapur, India
³ Mitsubishi Electric, Kamakura, Japan
⁴ Univ. Lyon, EnsL, UCBL, CNRS, Inria, LIP, 69342 Lyon Cedex 07, France
⁵ Bitdefender, Bucharest, Romania
⁶ Faculty of Information Technology, Monash University, Melbourne, Australia
⁷ Institut Universitaire de France, Paris, France
  [email protected]
⁸ Algorand, Boston, USA

Abstract. We describe a digital signature scheme MPSign, whose security relies on the conjectured hardness of the Polynomial Learning With Errors problem (PLWE) for at least one defining polynomial within an exponential-size family (as a function of the security parameter). The proposed signature scheme follows the Fiat-Shamir framework and can be viewed as the Learning With Errors counterpart of the signature scheme described by Lyubashevsky at Asiacrypt 2016, whose security relies on the conjectured hardness of the Polynomial Short Integer Solution (PSIS) problem for at least one defining polynomial within an exponential-size family. As opposed to the latter, MPSign enjoys a security proof from PLWE that is tight in the quantum-access random oracle model. The main ingredient is a reduction from PLWE for an arbitrary defining polynomial among exponentially many, to a variant of the Middle-Product Learning with Errors problem (MPLWE) that allows for secrets that are small compared to the working modulus. We present concrete parameters for MPSign using such small secrets, and show that they lead to significant savings in signature length over Lyubashevsky's Asiacrypt 2016 scheme (which uses larger secrets) at typical security levels. As an additional small contribution, and in contrast to MPSign (or MPLWE), we present an efficient key-recovery attack against Lyubashevsky's scheme (or the inhomogeneous PSIS problem), when it is used with sufficiently small secrets, showing the necessity of a lower bound on secret size for the security of that scheme.

1 Introduction

The Polynomial Short Integer Solution (PSIS) and Polynomial Learning With Errors (PLWE) problems were introduced as variants of the SIS and LWE problems


leading to more efficient cryptographic constructions [LM06,PR06,SSTX09]. Let n, m, q ≥ 2 and let f ∈ Z[x] be monic of degree n. A PSIS^{(f)}_{q,m} instance consists of m uniformly chosen elements a_1, ..., a_m ∈ Z_q[x]/f, and the goal is to find z_1, ..., z_m ∈ Z[x]/f, not all zero and with coefficients of small magnitudes, such that z_1·a_1 + ... + z_m·a_m = 0 mod q. A PLWE^{(f)}_q instance consists of oracle access either to the uniform distribution over Z_q[x]/f × Z_q[x]/f, or to the distribution of (a_i, a_i·s + e_i), where a_i is uniform in Z_q[x]/f, e_i ∈ Z[x]/f has random coefficients of small magnitudes, and the so-called secret s ∈ Z_q[x]/f is uniformly sampled but identical across all oracle calls. The goal is to distinguish between the two types of oracles.

For any fixed f, the hardness of PSIS^{(f)} and PLWE^{(f)} has been less investigated than that of SIS and LWE. In particular, it could be that PSIS^{(f)} and PLWE^{(f)} are easy, or easier, to solve for some defining polynomials f than for others. To mitigate such a risk, Lyubashevsky [Lyu16] introduced a variant of PSIS that is not parametrized by a specific polynomial f but only by a degree n, and is at least as hard as PSIS^{(f)} for exponentially many polynomials f of degree n. We will let it be denoted by PSIS^∅. Further, Lyubashevsky designed a signature scheme whose security relies on the hardness of this new problem, and hence on the hardness of PSIS^{(f)} for at least one f among exponentially many. This signature scheme enjoys asymptotic efficiency similar (up to a constant factor) to those based on PSIS^{(f)} for a fixed f.

Later on, Rosca et al. [RSSS17] introduced an LWE counterpart of PSIS^∅: the Middle-Product Learning with Errors problem (MPLWE). Similarly to PSIS^∅, MPLWE is not parametrized by a specific polynomial f but only by a degree n, and is at least as hard as PLWE^{(f)} for exponentially many polynomials f of degree n. To illustrate the cryptographic usefulness of MPLWE, Rosca et al. built a public-key encryption scheme whose IND-CPA security relies on the MPLWE hardness assumption. A more efficient encryption scheme and a key encapsulation mechanism [SSZ17,SSZ19] were later proposed as a submission to the NIST standardization process for post-quantum cryptography [NIS].

In [RSSS17], it was observed that several LWE/PLWE^{(f)} techniques leading to more cryptographic functionalities do not easily extend to MPLWE, possibly limiting its cryptographic expressiveness. These include a polynomial leftover hash lemma, the construction of trapdoors for MPLWE that allow one to recover the secret s, and the "HNF-ization" technique of [ACPS09], which would allow one to prove hardness of MPLWE with small-magnitude secrets. The leftover hash lemma and trapdoor sampling questions were recently studied in [LVV19], with an application to identity-based encryption, though only for security against an adversary whose distinguishing advantage is non-negligible (as opposed to exponentially small). On the HNF-ization front, the main result of [RSSS17] was mis-interpreted in [Hir18] (see Theorem 1 within this reference), in that the latter work assumed that the hardness result of [RSSS17] was for secrets whose coefficients were distributed as those of noise terms (and hence of small magnitudes). The main result from [Hir18] was a signature scheme with security relying on MPLWE.
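To make the middle-product operation underlying MPLWE concrete (our sketch, with toy parameters; the role of the associated Hankel matrix is discussed in Sect. 1.2 below): for a fixed y, the map a ↦ (middle n coefficients of a·y) is linear in a, and its matrix is a Hankel matrix built from the coefficients of y, whose rank modulo q governs the image of the map.

```python
# Sketch of the middle-product map: H(y)[i][j] = y[n+i-j], so that
# H(y) * a = (coefficients n .. 2n-1 of a*y). Rank is computed by naive
# Gaussian elimination over GF(q) for the prime q.
import random

n, q = 8, 7681

def middle(a, y):                      # middle n coefficients of a*y
    prod = [0] * (len(a) + len(y) - 1)
    for i, ai in enumerate(a):
        for j, yj in enumerate(y):
            prod[i + j] = (prod[i + j] + ai * yj) % q
    return prod[n:2 * n]

def hankel(y):
    y = y + [0] * (3 * n - len(y))     # pad coefficient list
    return [[y[n + i - j] for j in range(n)] for i in range(n)]

def rank_mod_q(M):
    M, r = [row[:] for row in M], 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], -1, q)
        M[r] = [x * inv % q for x in M[r]]
        for i in range(n):
            if i != r and M[i][c]:
                M[i] = [(a - M[i][c] * b) % q for a, b in zip(M[i], M[r])]
        r += 1
    return r

y = [random.randrange(q) for _ in range(2 * n + 1)]   # degree <= 2n
a = [random.randrange(q) for _ in range(n)]           # degree < n
H = hankel(y)
assert middle(a, y) == [sum(H[i][j] * a[j] for j in range(n)) % q for i in range(n)]
print(rank_mod_q(H))            # n with overwhelming probability for uniform y
print(rank_mod_q(hankel([1])))  # 0: for y = 1 the middle coefficients all vanish
```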

1.1 Contributions

In this work, we give a reduction from PLWE^(f) to a variant of MPLWE in which the secret has small-magnitude coefficients. The reduction works for a family of defining polynomials f that grows with the security parameter. We then build an identification scheme that follows Schnorr's general framework [Sch89] and that can be upgraded to a signature scheme, MPSign, which is tightly secure in the quantum-access random oracle model (QROM), using [KLS18].

We show that MPSign is unforgeable against chosen message attacks (UF-CMA): no adversary may forge a signature on a message for which it has not seen a signature before. We did not manage to prove that no adversary may forge a new signature on a previously signed message, i.e., that the scheme is strongly unforgeable against chosen message attacks (UF-sCMA). Nevertheless, any UF-CMA secure signature can be upgraded to a UF-sCMA secure one using a one-time UF-sCMA secure signature [Kat10]. Such a one-time signature can be obtained easily from a universal one-way hash function (via Lamport's one-time signature) [Kat10] or from a key-collision-resistant pseudo-random function (via the Winternitz one-time signature) [BDE+11].

We provide concrete parameters for MPSign corresponding to level 1 security of the NIST post-quantum standardization process (via the core-SVP hardness methodology from [ADPS16]). These take into account our tight QROM security proof with respect to small-secret MPLWE, rather than only a classical ROM security proof as in, e.g., the Dilithium parameter selection [DKL+18]. We also provide parameters that achieve security similar to those from [Lyu16], to allow for a reasonably fair comparison: the MPSign verification key is larger, but its signatures are half the size. Our signature length savings over the scheme of [Lyu16] arise mainly from our use of much smaller secret key coordinates. One could therefore wonder whether reducing the size of the secret key coordinates in the scheme of [Lyu16] would also give a secure signature scheme. As an additional small contribution, we show that the answer is negative, by presenting a simple and efficient key recovery attack on Lyubashevsky's scheme with sufficiently small secret coordinates. Our attack works (heuristically) when the underlying inhomogeneous variant of PSIS∅ has a unique solution, and shows that a lower bound similar to the one shown sufficient in the security proof of [Lyu16] is also necessary for the security of Lyubashevsky's scheme (and of the underlying inhomogeneous PSIS∅ problem) with small secret coordinates.

Finally, we provide a proof-of-concept implementation in Sage, publicly available at https://github.com/pqc-ntrust/middle-product-LWE-signature.
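For intuition, the following Python sketch shows the generic Schnorr-style commit-challenge-response flow that such lattice-based signatures instantiate. It is a schematic toy, not MPSign's actual scheme: it uses a fixed negacyclic ring in place of the paper's middle-product arithmetic, omits rejection sampling and hashing, and all parameter values are made up.

```python
import random

Q, N = 12289, 64          # toy parameters, illustrative only
ETA, GAMMA = 1, 1 << 12   # secret / mask magnitude bounds (made up)

def mul(a, b, q=None):
    """Schoolbook multiplication in Z[x]/(x^N + 1), optionally reducing
    coefficients mod q. This fixed-ring product stands in for the
    paper's middle-product arithmetic."""
    res = [0] * N
    for i in range(N):
        for j in range(N):
            v = a[i] * b[j]
            if i + j < N:
                res[i + j] += v
            else:
                res[i + j - N] -= v   # since x^N = -1
    return [c % q for c in res] if q else res

def small(bound):
    return [random.randint(-bound, bound) for _ in range(N)]

# Key generation: public (a, t = a*s + e), with small secret s and noise e.
a = [random.randrange(Q) for _ in range(N)]
s, e = small(ETA), small(ETA)
t = [(u + v) % Q for u, v in zip(mul(a, s, Q), e)]

# Commitment: sample a masking vector y and send w = a*y.
y = small(GAMMA)
w = mul(a, y, Q)

# Challenge: a sparse ternary c (a hash of (w, message) under Fiat-Shamir).
c = [random.choice([-1, 0, 1]) for _ in range(N)]

# Response: z = y + c*s, kept small; rejection sampling (omitted here)
# restarts whenever z would leak information about s.
z = [yi + ci_s for yi, ci_s in zip(y, mul(c, s))]

# Verification recomputes a*z - c*t and checks it is close to w:
# each entry of diff equals (-c*e) mod Q, i.e., it is small.
lhs = [(u - v) % Q for u, v in zip(mul(a, z, Q), mul(c, t, Q))]
diff = [(l - wi) % Q for l, wi in zip(lhs, w)]
```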

1.2 Comparison with Prior Works

Our signature construction is similar to the one in [Hir18]. However, the security proof of the latter is incorrect: in its proof of high min-entropy of commitments (see [Hir18, Lemma 7]), it is assumed that the middle n coefficients of the product between a uniform a ∈ Z_q[x] of degree < n and a fixed polynomial y of degree ≤ 2n are uniform. In fact, this distribution depends on the rank of a Hankel matrix associated to y, which encodes the linear map from a to the considered coefficients of the product. This Hankel matrix can have low rank, and when it does, the resulting distribution is uniform over a very small subset of the range. Interestingly, the distribution of these Hankel matrices (for uniform y) was recently studied in [BBD+19], in the context of proving the hardness of an MPLWE variant with deterministic noise. We do not know how to fix the error in [Hir18]; as a result, we use a different identification scheme so that our proofs go through. Concretely, the identification scheme from [Hir18] used the Bai-Galbraith compression technique [BG14] to decrease the signature size; we circumvent the difficulty by forgoing this compression.
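To make the failure mode concrete, here is a small NumPy sketch (our own illustration with toy parameters) of the middle product and its associated Hankel matrix. Reduction mod q is omitted, and NumPy computes ranks over the rationals rather than over Z_q, which suffices for this toy check.

```python
import numpy as np

def middle_product(a, y, n):
    """Middle n coefficients of a*y, for deg(a) < n and deg(y) <= 2n:
    the full product has 3n coefficients; keep positions n .. 2n-1."""
    return np.convolve(a, y)[n:2 * n]

def hankel_of(y, n):
    """Hankel matrix H with H[i][j] = y_{i+j+1}; applied to the reversed
    coefficient vector of a, it computes middle_product(a, y, n)."""
    return np.array([[y[i + j + 1] for j in range(n)] for i in range(n)])

n = 4
rng = np.random.default_rng(0)
a = rng.integers(0, 97, n)          # uniform a of degree < n
y = rng.integers(0, 97, 2 * n + 1)  # fixed y of degree <= 2n

H = hankel_of(y, n)
assert np.array_equal(middle_product(a, y, n), H @ a[::-1])

# [Hir18, Lemma 7] implicitly needs H to have full rank. A structured y
# breaks this: for y = x, the Hankel matrix has rank 1, so the middle
# product of a uniform a is confined to a 1-dimensional subset.
y_bad = np.zeros(2 * n + 1, dtype=int)
y_bad[1] = 1
print(np.linalg.matrix_rank(hankel_of(y_bad, n)))  # prints 1
```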

Lyubashevsky's signature from [Lyu16] can, like ours, be viewed as secure under the assumption that PLWE^(f) is hard for at least one f among exponentially many defining polynomials. Indeed, it was proved secure under the assumption that PSIS∅ is hard, PSIS^(f) was proved to reduce to PSIS∅ for exponentially many defining polynomials f, and PLWE^(f) (directly) reduces to PSIS^(f). Furthermore, MPLWE (both with small-magnitude secrets and with uniform secrets) reduces to PSIS∅, whereas the converse is unknown. In terms of assumptions, then, Lyubashevsky's signature seems to outperform ours. However, the security proof from [Lyu16] holds only in the random oracle model, whereas ours is tight in the QROM. Recent techniques for Fiat-Shamir in the QROM [LZ19,DFMS19] might be applicable to [Lyu16], but they are not tight.

We now compare MPSign with LWE-based signature schemes and with efficient lattice-based signature schemes such as those at Round 2 of the NIST post-quantum standardization process [NIS]: Dilithium [DKL+18], Falcon [PFH+19] and Tesla [BAA+19]. Compared to LWE-based signatures, our proposal yields much smaller values for the sum of the sizes of a signature and a public key, with much stronger security guarantees than the efficient schemes based on polynomial rings. For example, scaling Dilithium with NIST security level 1 parameters to LWE requires multiplying the public key size by the challenge dimension n = 256, since for an LWE adaptation of Dilithium the public key would be a matrix with n columns instead of 1. For NIST security level 1, the sum of the public key and signature sizes would be above 300 KB for an LWE adaptation of Dilithium, whereas the same quantity is 47 KB for MPSign (see Table 2).

Compared to the Dilithium, Falcon and Tesla NIST candidates, the security guarantees are different. The security of Dilithium and Tesla relies on the module variants of PLWE and PSIS for a fixed polynomial [LS15]. In the case of Dilithium, the known security proof in the QROM is quite loose [LZ19], unless one relies on an ad hoc assumption such as SelfTargetMSIS [KLS18]. Moreover, in the case of Dilithium, the SIS instance is in an extreme regime: the maximum infinity norm of the vectors to be found is below q/2, but their Euclidean norms may be above q. Currently, no reduction backs the assumption that SIS is intractable in that parameter regime.

In Falcon, the public key is assumed pseudo-random, which is an ad hoc version of the NTRU hardness assumption [HPS98]. By contrast, the security of MPSign relies on the assumed hardness of PLWE for at least one polynomial among exponentially many. Overall, MPSign offers an intermediate risk-performance tradeoff between fixed-ring and LWE-based schemes.

2 Preliminaries

The notation in this paper follows [RSSS17] almost verbatim, to maintain consistency and facilitate comparison. Let q > 1 be an integer. We let Z_q denote the ring of integers modulo q, and Z_{≤q} the set {−q, . . . , q} of integers of absolute value at most q. We write R_q for the group R/qZ. Let n > 0. For a ring R, we will use the notation R