
Studies in Universal Logic

João Rasga Cristina Sernadas

Decidability of Logical Theories and Their Combination

Studies in Universal Logic Series Editor Jean-Yves Béziau (Federal University of Rio de Janeiro, Rio de Janeiro, Brazil) Editorial Board Hajnal Andréka (Hungarian Academy of Sciences, Budapest, Hungary) Mark Burgin (University of California, Los Angeles, CA, USA) Răzvan Diaconescu (Romanian Academy, Bucharest, Romania) Andreas Herzig (University Paul Sabatier, Toulouse, France) Arnold Koslow (City University of New York, New York, USA) Jui-Lin Lee (National Formosa University, Huwei Township, Taiwan) Larissa Maksimova (Russian Academy of Sciences, Novosibirsk, Russia) Grzegorz Malinowski (University of Lódz, Lódz, Poland) Francesco Paoli (University of Cagliari, Cagliari, Italy) Darko Sarenac (Colorado State University, Fort Collins, USA) Peter Schröder-Heister (University of Tübingen, Tübingen, Germany) Vladimir Vasyukov (Russian Academy of Sciences, Moscow, Russia)

This series is devoted to the universal approach to logic and the development of a general theory of logics. It covers topics such as global set-ups for fundamental theorems of logic and frameworks for the study of logics, in particular logical matrices, Kripke structures, combination of logics, categorical logic, abstract proof theory, consequence operators, and algebraic logic. It includes also books with historical and philosophical discussions about the nature and scope of logic. Three types of books will appear in the series: graduate textbooks, research monographs, and volumes with contributed papers.

More information about this series at http://www.springer.com/series/7391

João Rasga • Cristina Sernadas

Decidability of Logical Theories and Their Combination

João Rasga Department of Mathematics Instituto Superior Técnico, Universidade de Lisboa and Instituto de Telecomunicações Lisboa, Portugal

Cristina Sernadas Department of Mathematics Instituto Superior Técnico, Universidade de Lisboa and Instituto de Telecomunicações Lisboa, Portugal

ISSN 2297-0282 ISSN 2297-0290 (electronic) Studies in Universal Logic ISBN 978-3-030-56553-4 ISBN 978-3-030-56554-1 (eBook) https://doi.org/10.1007/978-3-030-56554-1 Mathematics Subject Classification: 03B10, 03B25, 03B62 © Springer Nature Switzerland AG 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This book is published under the imprint Birkhäuser, www.birkhauser-science.com by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The main objective of the book is to provide a self-contained introduction to decidability of first-order theories to graduate students of Mathematics, and it is equally suitable for Computer Science and Philosophy students who are interested in gaining a deeper understanding of the subject. The book is also directed to researchers who intend to get acquainted with first-order theories and their combinations. The technical material is presented in a systematic and universal way and illustrated with plenty of examples and a range of proposed exercises.

The book is organized as follows. In Chap. 1, we start by providing an overview of basic first-order logic concepts and results (the reader can find more information on these preliminaries in [1–5]). Then, we introduce some important model-theoretic notions like embeddings, diagrams and elementary substructures, among others, and state some relevant properties (see [6–9]). Moreover, we refer to theories and give some examples.

Chapter 2 concentrates on Gentzen calculus (see [10–12]) as a way of reasoning about theories. The chapter begins with the presentation of a sequent calculus for first-order logic along with some technical results about the calculus, namely, the Inversion Lemma and the Admissibility of Weakening and Contraction Lemmas. The calculus is shown to be sound as well as complete using Hintikka sets (see [13–15]). Then, Gentzen's Hauptsatz (Cut Elimination Theorem) and Craig's Interpolation Theorem are proved.

Chapter 3 presents sufficient conditions for a theory to be decidable, namely, when the theory is axiomatizable and either has computable quantifier elimination or is complete. Moreover, we analyze sufficient conditions for a theory to be complete: quantifier elimination (a concept that was first used in [16]) and categoricity (see [17, 18, 9]). These results are illustrated by showing the decidability of the theories of real closed ordered fields (see [19]) and dense linear orders without endpoints (see [20, 21]). The chapter provides an explicit connection to computability theory and the works of Kurt Gödel, Alan Turing and Alfred Tarski on reduction techniques for proving decidability or undecidability of theories. The reduction technique is illustrated by showing that the theory of Euclidean geometry is decidable via reduction to the theory of real closed ordered fields.

Chapter 4 is dedicated to quantifier elimination. After discussing the symbolic approach illustrated with the theory of algebraically closed fields, some model-theoretic conditions are presented (see [6–9]) and exemplified with Presburger arithmetic (see [22]), divisible torsion-free Abelian groups (see [23]) and the successor theory.

Chapter 5 addresses the contemporary topic of combination of theories with the aim of obtaining preservation results in a universal way. Namely, we discuss preservation of satisfiability and decidability when combining theories by the Nelson-Oppen technique (see [24–27]). The theories only share equality and are stably infinite.

The book ends with an Appendix presenting a modicum of computability theory (see [28–30]). The Appendix follows closely [30], namely, adopting as the computational model an abstract high-level programming language. The concepts of computable function, decidable set and listable set are defined and explored. The problem reduction technique is also discussed.

Lisbon, Portugal
October 2019

João Rasga Cristina Sernadas

References

1. E. Mendelson, Introduction to Mathematical Logic, 6th edn. (Chapman and Hall, 2015)
2. R. Cori, D. Lascar, Mathematical Logic, Part I: Propositional Calculus, Boolean Algebras, Predicate Calculus (Oxford University Press, 2000)
3. R. Cori, D. Lascar, Mathematical Logic, Part II: Recursion Theory, Gödel Theorems, Set Theory, Model Theory (Oxford University Press, 2001)
4. H.B. Enderton, A Mathematical Introduction to Logic, 2nd edn. (Academic Press, 2001)
5. A. Sernadas, C. Sernadas, Foundations of Logic and Theory of Computation, 2nd edn. (College Publications, 2012)
6. C.C. Chang, H.J. Keisler, Model Theory (Dover, 2012)
7. W. Hodges, Model Theory, in Encyclopedia of Mathematics and its Applications, vol. 42 (Cambridge University Press, 1993)
8. W. Hodges, A Shorter Model Theory (Cambridge University Press, 1997)
9. D. Marker, Model Theory: An Introduction, in Graduate Texts in Mathematics, vol. 217 (Springer, 2002)
10. G. Gentzen, The Collected Papers of Gerhard Gentzen, in Studies in Logic and the Foundations of Mathematics, ed. by M.E. Szabo (North-Holland, 1969)
11. A.S. Troelstra, H. Schwichtenberg, Basic Proof Theory, vol. 43, 2nd edn. (Cambridge University Press, 2000)
12. J. Gallier, Logic for Computer Science: Foundations of Automatic Theorem Proving (Dover, 2015)
13. R.M. Smullyan, First-Order Logic (Springer, 1968)
14. W. Hodges, A Shorter Model Theory (Cambridge University Press, 1997)
15. J. Gallier, Logic for Computer Science: Foundations of Automatic Theorem Proving (Dover, 2015)
16. T. Skolem, Selected Works in Logic by Th. Skolem, ed. by J.E. Fenstad, Scandinavian University Books (Universitetsforlaget, 1970)
17. C. Ryll-Nardzewski, On the categoricity in power ℵ0. Bulletin de l'Académie Polonaise des Sciences, Série des Sciences Mathématiques, Astronomiques et Physiques 7, 545–548 (1959)
18. R.L. Vaught, Denumerable models of complete theories, in Infinitistic Methods (Proceedings of the Symposium on Foundations of Mathematics, Warsaw, 1959) (Pergamon, 1961), pp. 303–321
19. A. Tarski, A Decision Method for Elementary Algebra and Geometry, 2nd edn. (University of California Press, 1951)
20. C.H. Langford, Some theorems on deducibility. Ann. Math. 28(1–4), 16–40 (1927)
21. C.H. Langford, Theorems on deducibility. Ann. Math. 28(1–4), 459–471 (1927)
22. M. Presburger, On the completeness of a certain system of arithmetic of whole numbers in which addition occurs as the only operation. Hist. Philos. Logic 12(2), 225–233 (1991)
23. S. Lang, Algebra, in Graduate Texts in Mathematics, vol. 211, 3rd edn. (Springer, 2002)
24. G. Nelson, D.C. Oppen, Simplification by cooperating decision procedures. ACM Trans. Program. Lang. Syst. 1(2), 245–257 (1979)
25. D.C. Oppen, Complexity, convexity and combinations of theories. Theor. Comput. Sci. 12, 291–302 (1980)
26. A.R. Bradley, Z. Manna, The Calculus of Computation: Decision Procedures with Applications to Verification (Springer, 2007)
27. D. Kroening, O. Strichman, Decision Procedures: An Algorithmic Point of View, 2nd edn. (Springer, 2016)
28. N.J. Cutland, Computability: An Introduction to Recursive Function Theory (Cambridge University Press, 1980)
29. D.S. Bridges, Computability, in Graduate Texts in Mathematics, vol. 146 (Springer, 1994)
30. A. Sernadas, C. Sernadas, J. Rasga, J. Ramos, A Mathematical Primer on Computability (College Publications, 2018)

Acknowledgements

We would like to express our deepest gratitude to the many students of Mathematics of Instituto Superior Técnico, who attended the Foundations of Logic and Theory of Computation MSc course. We are also grateful to our colleagues Walter A. Carnielli and Amílcar Sernadas for many discussions on logic and computability theory. This work was supported by the Instituto de Telecomunicações, namely its Security and Quantum Information Group, by the Fundação para a Ciência e a Tecnologia (FCT) through national funds; by FEDER, COMPETE 2020; and by Regional Operational Program of Lisbon, under UIDB/50008/2020. Last but not the least, we greatly acknowledge the excellent work environment provided by the Department of Mathematics of Instituto Superior Técnico, Universidade de Lisboa.


Contents

1 First-Order Logic
  1.1 Language
  1.2 Semantics
  1.3 Embeddings
  1.4 Elementary Substructures
  1.5 Diagrams
  1.6 Theories
  References

2 Reasoning with Theories
  2.1 Gentzen Calculus
  2.2 Soundness of the Gentzen Calculus
  2.3 Completeness of the Gentzen Calculus
  2.4 Cut Elimination
  2.5 Craig Interpolation
  References

3 Decidability Results on Theories
  3.1 Preliminaries
  3.2 Decidability via Computable Quantifier Elimination
  3.3 Decidability via Reduction
  3.4 Decidability via Completeness
  3.5 Completeness via Quantifier Elimination
  3.6 Completeness via Categoricity
  References

4 Quantifier Elimination
  4.1 Constructive Approach
  4.2 ∃-Embeddings and Algebraically Prime Models
  4.3 Algebraically Prime Models via Proto Adjunction
  4.4 Algebraically Prime Models via Iteration
  References

5 Combination of Theories
  5.1 Preliminaries
  5.2 Preservation of Satisfiability
  5.3 Preservation of Decidability
  References

Appendix A: Basics of Computability
References
List of Symbols
Subject Index

Acronyms

ACF   Algebraically Closed Field
DAG   Divisible Torsion-Free Abelian Group
DLO   Dense Linear Order Without Left and Right Endpoints
EG    Euclidean Geometry
FOL   First-Order Logic
PRA   Presburger Arithmetic
RCOF  Real Closed Ordered Field

Chapter 1

First-Order Logic

In this chapter, we start by providing an overview of basic first-order logic concepts and results, namely, signature, language, relevant classes of formulas and several technical maps and relations. Then, we review semantic concepts like interpretation structure, satisfaction and entailment, as well as useful results like the Lemma of the Closed Formula, the Lemma of Substitution and the Cardinality Theorem (the reader can find more information on these preliminaries in [1–6]). After that we introduce some relevant model-theoretic notions. More precisely, we introduce the notions of embedding, isomorphism and elementary map and analyze their impact on preservation and reflection of satisfaction. We go on to define several relationships between interpretation structures, like substructure, elementary substructure and structure generated by a set, as well as elementary equivalence. Then, we present the concepts of diagram and reduct and relate satisfaction of a particular diagram with the existence of an embedding (the interested reader can also consult [7–10] for more advanced issues on model theory). Finally, we introduce theories and give some examples. Throughout the chapter, we provide computability insights.

1.1 Language

We start by presenting the notion of signature, which introduces all the relevant non-logical symbols (dependent on the purpose one has in mind).

Definition 1.1 A signature is a triple Σ = (F, P, τ) such that F and P are disjoint sets and τ : F ∪ P → N is a map. We assume that τ(p) ≠ 0 for every p ∈ P.


The elements of F are said to be function symbols. The elements of P are said to be predicate symbols. The map τ returns the arity of its argument.

Note 1.2 Given a signature Σ and n ∈ N, let Fn denote the set {f ∈ F : τ(f) = n} of function symbols of arity n and Pn denote the set {p ∈ P : τ(p) = n} of predicate symbols of arity n, for n > 0. The elements of F0 are also called constant symbols.

Remark 1.3 For simplifying the presentation, given a generic signature Σ, we may assume that the set of function symbols of Σ is F and the set of predicate symbols of Σ is P; similarly for Σ′, F′ and P′ and for Σ″, F″ and P″.

Example 1.4 The signature ΣN for natural numbers is as follows: F0 = {0}, F1 = {S}, F2 = {+, ×}, Fn = ∅ for every n > 2, P2 = {≅, <} and Pn = ∅ for every n ≠ 2.

Remark 1.5 In the sequel, when defining a signature, we only present the non-empty sets of function and predicate symbols with a given arity.

Example 1.6 A simpler signature ΣS for natural numbers is as follows: F0 = {0}, F1 = {S} and P2 = {≅}.

Example 1.7 The signature Σf for fields is as follows: F0 = {0, 1}, F1 = {−}, F2 = {+, ×} and P2 = {≅}.

Example 1.8 The signature ΣRCOF for real closed ordered fields is as follows: F0 = {0, 1}, F1 = {−}, F2 = {+, ×} and P2 = {≅, <}.

Remark 1.12 We assume once and for all a fixed decidable set X of variables.

Before introducing the set of formulas of the logic, we need to define terms.

Definition 1.13 The set TΣ of terms over Σ is inductively defined as follows:
• X ∪ F0 ⊆ TΣ;
• f(t1, ..., tn) ∈ TΣ whenever f ∈ Fn, t1, ..., tn ∈ TΣ and n ∈ N⁺.

Example 1.14 Consider the signatures ΣN and Σf introduced in Example 1.4 and in Example 1.7, respectively. Then, +(S(0), x) ∈ TΣN and ×(0, +(x, 1)) ∈ TΣf, where x ∈ X.

It is usual to use infix notation instead of prefix notation when presenting terms.

Example 1.15 Recall Example 1.14. We may write S(0) + x instead of +(S(0), x) and

0 × (x + 1)

instead of ×(0, +(x, 1)).

Again, for dealing with computability issues, we need to consider a working universe (see Definition A.2) for terms over a signature.

Definition 1.16 The working universe WTΣ for the set of terms over a signature Σ is inductively defined as follows:
• ⟨"x", x⟩ ∈ WTΣ whenever x ∈ X;
• ⟨"f", f, t1, ..., tn⟩ ∈ WTΣ whenever f ∈ Fn and t1, ..., tn ∈ WTΣ.

Example 1.17 Recall signature ΣS introduced in Example 1.6. Then

⟨"f", S, ⟨"f", S, ⟨"f", 0⟩⟩⟩

is the representation of the term S(S(0)) in the working universe WTΣS.

Proposition 1.18 Let Σ be a decidable signature. Then TΣ is decidable.


Proof Recall the high-level programming language described in Sect. A.2. Consider the program

    PχTΣ = function (e) (
      if ¬ issyn(e) ∨ ¬ islist(e) then return 0;
      if e[1] ≠ "x" ∧ e[1] ≠ "f" then return 0;
      if e[1] == "x" then return PχX(e[2]);
      if ¬ PχF(e[2]) ∨ Pτ(e[2]) + 2 ≠ length(e) then return 0;
      i = 1;
      while i ≤ Pτ(e[2]) do (
        if PχTΣ(e[i + 2]) == 0 then return 0;
        i = i + 1
      );
      return 1
    )

where PχF, PχP, Pτ and PχX are programs that compute χF, χP, τ and χX, respectively. It is immediate to see that PχTΣ computes χTΣ. □

We are ready to define the language of the logic over a given signature.

Definition 1.19 The set LΣ of formulas over a signature Σ, also called the language over Σ, is inductively defined as follows:
• ⊥ ∈ LΣ;
• p(t1, ..., tn) ∈ LΣ whenever p ∈ Pn, t1, ..., tn ∈ TΣ and n ∈ N⁺;
• (ϕ ⊃ ψ) ∈ LΣ whenever ϕ, ψ ∈ LΣ;
• (∀x ϕ) ∈ LΣ whenever x ∈ X and ϕ ∈ LΣ.
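To make the computability insights concrete, here is a minimal Python sketch (ours, not the book's; the encoding of signatures and all helper names are assumptions of this illustration) of the working-universe representation of terms and the membership test sketched in the proof of Proposition 1.18, using the signature ΣS of Example 1.6.

    # A signature is modelled as two arity maps: functions and predicates.
    # Terms follow the working-universe encoding of Definition 1.16:
    #   variables  -> ("x", name)
    #   f(t1..tn)  -> ("f", f, t1, ..., tn)
    SIGMA_S = {"functions": {"0": 0, "S": 1}, "predicates": {"eq": 2}}

    def is_variable(name):
        # The book fixes a decidable set X of variables; here: x1, x2, ...
        return isinstance(name, str) and name.startswith("x")

    def is_term(e, sig):
        """Characteristic function of the set of terms (cf. Proposition 1.18)."""
        if not isinstance(e, tuple) or not e:
            return False
        if e[0] == "x":
            return len(e) == 2 and is_variable(e[1])
        if e[0] == "f":
            arity = sig["functions"].get(e[1])
            if arity is None or len(e) != arity + 2:
                return False
            return all(is_term(sub, sig) for sub in e[2:])
        return False

    # S(S(0)) in the encoding of Example 1.17:
    ssz = ("f", "S", ("f", "S", ("f", "0")))
    assert is_term(ssz, SIGMA_S)
    assert not is_term(("f", "S"), SIGMA_S)   # wrong arity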

Example 1.20 Consider the signature N introduced in Example 1.4. For instance, (∀x ∼ =(S(x) + S(S(0)), S(0))) is in L N . Note 1.21 We use the usual abbreviations ¬, ∧, ∨, ≡, ∃ for denoting negation, conjunction, disjunction, equivalence and existential quantification, respectively.
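The abbreviations of Note 1.21 can be spelled out over the primitive connectives ⊃, ⊥ and ∀. A small Python sketch (ours, over the tuple encoding used above; the particular definitions of ∧ and ∨ in terms of ⊃ and ⊥ are the standard classical ones and an assumption of this illustration, not taken from the book):

    # Derived constructors over the primitives ("ff",), ("implies", ...),
    # ("forall", x, ...): the usual classical abbreviations.
    FF = ("ff",)

    def neg(phi):            # not phi      :=  phi implies falsum
        return ("implies", phi, FF)

    def conj(phi, psi):      # phi and psi  :=  not (phi implies not psi)
        return neg(("implies", phi, neg(psi)))

    def disj(phi, psi):      # phi or psi   :=  (not phi) implies psi
        return ("implies", neg(phi), psi)

    def iff(phi, psi):       # phi iff psi  :=  (phi implies psi) and (psi implies phi)
        return conj(("implies", phi, psi), ("implies", psi, phi))

    def exists(x, phi):      # exists x phi :=  not forall x not phi
        return neg(("forall", x, neg(phi)))

    # (cf. Example 1.23)  exists x. x eq S(x)  abbreviates  ((forall x (x eq S(x) implies ff)) implies ff)
    print(exists("x1", ("p", "eq", ("x", "x1"), ("f", "S", ("x", "x1")))))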


Note 1.22 In the sequel, we may use ∃ instead of ∀ as a primitive constructor in certain proofs by induction on the structure of formulas, when this does not affect the correctness of the proof. For instance, when the proof involves the complexity of a formula like in Sect. 2.4 for cut elimination we stick with ∀. Example 1.23 Consider the signature N introduced in Example 1.4 and x ∈ X . Then, (∃x ∼ =(x, S(x))) ∈ L N and is an abbreviation of ((∀x (∼ =(x, S(x)) ⊃ ⊥)) ⊃ ⊥). Remark 1.24 From now on, when no confusion arises, we may write formulas without some of the parentheses. Moreover, we use the infix notation for presenting formulas involving ∼ =. Example 1.25 For instance, in the context of the signature N introduced in Example 1.4, we may write ∃x x ∼ = S(x) for (∃x ∼ =(x, S(x))) and

∀x∀y (x ∼ = y⊃x ∼ = 0)

for (∀x (∀y (≅(x, y) ⊃ ≅(x, 0)))).

Similar to what happened with terms, in order to deal with computability issues in first-order logic theories, we need to consider a working universe (see Definition A.2) for formulas over a signature.

Definition 1.26 The working universe WLΣ for the language of first-order logic over Σ is inductively defined as follows:
• ⟨"ff"⟩ ∈ WLΣ;
• ⟨"p", p, t1, ..., tn⟩ ∈ WLΣ whenever p ∈ Pn and t1, ..., tn ∈ WTΣ;
• ⟨"implies", ϕ1, ϕ2⟩ ∈ WLΣ whenever ϕ1, ϕ2 ∈ WLΣ;
• ⟨"forall", x, ϕ⟩ ∈ WLΣ whenever ϕ ∈ WLΣ and x ∈ X.

Example 1.27 Let Σ be a signature such that q ∈ P2. For instance,

⟨"forall", x, ⟨"forall", y, ⟨"p", q, ⟨"x", x⟩, ⟨"x", y⟩⟩⟩⟩

is the representation of the formula ∀x∀y q(x, y) in the working universe WLΣ.

Exercise 1.28 Given a decidable signature Σ, show that LΣ is decidable.

In the sequel we need to consider the following classes of particular formulas.


Note 1.29 We denote by AΣ the set of atomic formulas, that is, the set of formulas of the form ⊥ and p(t1, ..., tn) in LΣ, and by BΣ the set of literals, that is, the set of atomic formulas and their negations. Furthermore, we denote by QΣ the set of quantifier-free formulas, that is, the formulas in LΣ in which ∀ does not occur, and by cQΣ the set of all quantifier-free formulas without variables. Moreover, we denote by ∃1 the smallest class of formulas obtained by adding existential quantifiers at the front of each quantifier-free formula. Finally, we denote by ∀2 the smallest class of formulas obtained by (1) closing the set ∃1 under ∧ and ∨; (2) closing the set in (1) by adding universal quantifiers at the front of each formula.

Exercise 1.30 Given a decidable signature Σ, show that QΣ is decidable.

In the sequel we may work with formulas in disjunctive normal form.

Definition 1.31 We say that a formula ψ is in disjunctive normal form if it is a disjunction

  ψ1 ∨ · · · ∨ ψn

where each ψi is a conjunction

  ψi1 ∧ · · · ∧ ψin_i

where each ψij is a literal.

Example 1.32 For instance, in the context of the signature ΣN introduced in Example 1.4, the formula

  (¬ x ≅ y ∧ S(x) ≅ y) ∨ S(y) ≅ S(y)


is in disjunctive normal form with the same atomic formulas.

Exercise 1.33 Show that any quantifier-free formula has an equivalent formula in disjunctive normal form.

The following sentences express properties of equality.

Definition 1.34 Let Σ be a signature with equality (see Definition 1.9). Then, we denote by
• (E1) the sentence ∀x x ≅ x;
• (E2f) the sentence ∀x1 ... ∀xn+n ((x1 ≅ xn+1 ∧ ... ∧ xn ≅ xn+n) ⊃ f(x1, ..., xn) ≅ f(xn+1, ..., xn+n)) for every f ∈ Fn;
• (E3p) the sentence ∀x1 ... ∀xn+n ((x1 ≅ xn+1 ∧ ... ∧ xn ≅ xn+n) ⊃ (p(x1, ..., xn) ⊃ p(xn+1, ..., xn+n))) for every p ∈ Pn.

Sentence (E1) expresses the reflexivity of ≅. Sentences (E2f) and (E3p) state that ≅ is a congruence for the function symbol f and the predicate symbol p, respectively.

For dealing with variables in terms and formulas, we need some useful technical maps.

Definition 1.35 Let Σ be a signature. The map varΣ : TΣ → ℘X, assigning to each term t the set of variables that appear in t, is inductively defined as follows:
• varΣ(x) = {x}, when x ∈ X;
• varΣ(c) = ∅, when c ∈ F0;
• varΣ(f(t1, ..., tn)) = varΣ(t1) ∪ ... ∪ varΣ(tn), when f ∈ Fn and t1, ..., tn ∈ TΣ.

The map fvΣ : LΣ → ℘X, assigning to each formula ϕ the set of variables that occur free in ϕ, is inductively defined as follows:
• fvΣ(⊥) = ∅;
• fvΣ(p(t1, ..., tn)) = varΣ(t1) ∪ ... ∪ varΣ(tn), when p ∈ Pn and t1, ..., tn ∈ TΣ;
• fvΣ(ϕ ⊃ ψ) = fvΣ(ϕ) ∪ fvΣ(ψ), when ϕ, ψ ∈ LΣ;
• fvΣ(∀x ϕ) = fvΣ(ϕ) \ {x}, when ϕ ∈ LΣ.

Definition 1.36 Let Σ be a signature and ϕ ∈ LΣ. Then ϕ is said to be closed, or a sentence, if fvΣ(ϕ) = ∅.

Example 1.37 Consider Example 1.15 and Example 1.25. Then

varΣN(S(0) + x) = {x}

and

fvΣN(∃x x ≅ S(x)) = ∅

and so ∃x x ≅ S(x) is a sentence.

Note 1.38 We denote by cLΣ the set of all sentences in LΣ.

Remark 1.39 Given a formula ϕ over a signature Σ such that fvΣ(ϕ) = {x1, ..., xn}, we denote by ∀ϕ the universal closure of ϕ, that is, the sentence ∀x1 ... ∀xn ϕ. Moreover, a sentence ψ over Σ is said to be universal if ψ is of the form ∀δ for some quantifier-free formula δ. Similarly, we denote by ∃ϕ the existential closure of ϕ, that is, the sentence ∃x1 ... ∃xn ϕ.

Proposition 1.40 Let Σ be a decidable signature. Then varΣ : TΣ → ℘X is a computable map.

Proof Recall the high-level programming language described in Sect. A.2 and the programs Pcount, Pset and Pconc introduced in Example A.16. It is immediate that the program

    Pvar = function (t) (
      if t[1] == "x" then return ⟨t[2]⟩;
      r = ⟨⟩;
      i = 1;
      while i ≤ Pτ(t[2]) do (
        r = Pconc(r, Pvar(t[i + 2]));
        i = i + 1
      );
      return Pset(r)
    )

computes the map varΣ. □
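The maps varΣ and fvΣ unwind into structural recursions over the same tuple encoding. A minimal Python sketch (ours; the tags and the example variable names are assumptions of this illustration):

    # var and fv over the encoding: variables ("x", name), applications
    # ("f", f, ...), falsum ("ff",), predicates ("p", p, t1, ...),
    # implication ("implies", phi, psi), universal ("forall", x, phi).
    def var(term):
        if term[0] == "x":
            return {term[1]}
        # function application: union of the variables of the arguments
        return set().union(*[var(t) for t in term[2:]]) if len(term) > 2 else set()

    def fv(phi):
        tag = phi[0]
        if tag == "ff":
            return set()
        if tag == "p":
            return set().union(*[var(t) for t in phi[2:]]) if len(phi) > 2 else set()
        if tag == "implies":
            return fv(phi[1]) | fv(phi[2])
        if tag == "forall":
            return fv(phi[2]) - {phi[1]}
        raise ValueError("not a formula")

    # fv(forall y. S(x) eq S(y)) = {"x1"}   (writing x as x1 and y as x2)
    phi = ("forall", "x2", ("p", "eq",
            ("f", "S", ("x", "x1")), ("f", "S", ("x", "x2"))))
    assert fv(phi) == {"x1"}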

Exercise 1.41 Let Σ be a decidable signature. Show that fvΣ is a computable map.

Substitutions play an important role when dealing with variables in first-order logic.

Definition 1.42 Let Σ be a signature, x ∈ X and t ∈ TΣ. The map [·]^x_t : TΣ → TΣ, which, when applied to a term, gives the term where x is replaced by t, is inductively defined as follows:
• [x]^x_t = t;
• [y]^x_t = y, when y ∈ X and y is not x;
• [c]^x_t = c, when c ∈ F0;
• [f(t1, ..., tn)]^x_t = f([t1]^x_t, ..., [tn]^x_t), when f ∈ Fn and t1, ..., tn ∈ TΣ.

The map [·]^x_t : LΣ → LΣ, which, when applied to a formula, gives the formula where the free occurrences of x are replaced by t, is inductively defined as follows:
• [⊥]^x_t = ⊥;
• [p(t1, ..., tn)]^x_t = p([t1]^x_t, ..., [tn]^x_t), when p ∈ Pn and t1, ..., tn ∈ TΣ;
• [ϕ ⊃ ψ]^x_t = [ϕ]^x_t ⊃ [ψ]^x_t, when ϕ, ψ ∈ LΣ;
• [∀y ϕ]^x_t = ∀y ϕ if y is x, and ∀y [ϕ]^x_t otherwise, when ϕ ∈ LΣ.

Substitution can be extended to sets of formulas as follows: given Γ ⊆ LΣ, we denote by [Γ]^x_t the set {[ψ]^x_t : ψ ∈ Γ}.

Example 1.43 Recall Example 1.4. Then

[∀y S(x) ≅ S(y)]^x_{S(0)}

is the formula ∀y S(S(0)) ≅ S(y).

Exercise 1.44 Let Σ be a decidable signature. Show that ϕ ↦ [ϕ]^{y1,...,ym}_{u1,...,um} is a computable map.

Note 1.45 Given n ∈ N and a formula ϕ over a signature Σ containing a binary predicate symbol ≅ such that x ∈ fvΣ(ϕ), we denote by
• ∃≥n x ϕ the formula
  ∃x1 ... ∃xn ((⋀_{1≤i<j≤n} ¬ xi ≅ xj) ∧ (⋀_{1≤i≤n} [ϕ]^x_{xi}));
• ∃≤n x ϕ the formula
  ¬ ∃≥n+1 x ϕ;
• ∃=n x ϕ the formula
  (∃≥n x ϕ) ∧ ∃≤n x ϕ.
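Substitution (Definition 1.42, and the computability claimed in Exercise 1.44) is likewise a structural recursion. The Python sketch below (ours) is deliberately "blind": it does not check the side condition that t is free for x in ϕ, which is the subject of Definition 1.46 below.

    # Substitution [phi]^x_t over the same encoding; bound occurrences
    # of x are left untouched, as in Definition 1.42.
    def subst_term(term, x, t):
        if term[0] == "x":
            return t if term[1] == x else term
        return term[:2] + tuple(subst_term(a, x, t) for a in term[2:])

    def subst(phi, x, t):
        tag = phi[0]
        if tag == "ff":
            return phi
        if tag == "p":
            return phi[:2] + tuple(subst_term(a, x, t) for a in phi[2:])
        if tag == "implies":
            return ("implies", subst(phi[1], x, t), subst(phi[2], x, t))
        if tag == "forall":
            # the quantified variable shadows x
            return phi if phi[1] == x else ("forall", phi[1], subst(phi[2], x, t))
        raise ValueError("not a formula")

    # [forall y. S(x) eq S(y)]^{x}_{S(0)}  =  forall y. S(S(0)) eq S(y)
    phi = ("forall", "x2", ("p", "eq",
            ("f", "S", ("x", "x1")), ("f", "S", ("x", "x2"))))
    print(subst(phi, "x1", ("f", "S", ("f", "0"))))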

Finally, we end the section with a useful relation stating when a term is free for a variable in a formula.

Definition 1.46 Let Σ be a signature. The set ⊲Σ ⊆ TΣ × X × LΣ is inductively defined as follows:
• (t, x, ⊥) ∈ ⊲Σ;
• (t, x, p(t1, ..., tn)) ∈ ⊲Σ;
• (t, x, ϕ ⊃ ψ) ∈ ⊲Σ whenever (t, x, ϕ) ∈ ⊲Σ and (t, x, ψ) ∈ ⊲Σ;
• (t, x, ∀y ϕ) ∈ ⊲Σ whenever (1) either y is x, or (2) the two following conditions are fulfilled: if x ∈ fvΣ(ϕ) then y ∉ varΣ(t); and (t, x, ϕ) ∈ ⊲Σ.

When (t, x, ϕ) ∈ ⊲Σ it is said that term t is free for variable x in formula ϕ.

Note 1.47 We write t ⊲Σ x : ϕ whenever (t, x, ϕ) ∈ ⊲Σ. Intuitively, t ⊲Σ x : ϕ means that substituting t for x in ϕ does not lead to the capture of variables in t by quantifiers in ϕ.

Example 1.48 Recall signature ΣN introduced in Example 1.4. Then S(0) ⊲ΣN x : ∀y S(x) ≅ S(y), that is, S(0) is free for x in ∀y S(x) ≅ S(y). On the other hand, S(y) is not free for x in ∀y S(x) ≅ S(y), since the occurrence of y in S(y) would be captured by the quantifier ∀y.

Exercise 1.49 Let Σ be a decidable signature. Show that ⊲Σ is decidable.
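The free-for relation of Definition 1.46 (and the decidability claimed in Exercise 1.49) can be rendered as one more recursive test. A Python sketch (ours), reusing var and fv from the earlier sketch:

    # free_for(t, x, phi): may t replace x in phi without capture?
    def free_for(t, x, phi):
        tag = phi[0]
        if tag in ("ff", "p"):
            return True
        if tag == "implies":
            return free_for(t, x, phi[1]) and free_for(t, x, phi[2])
        if tag == "forall":
            y = phi[1]
            if y == x:
                return True
            return (x not in fv(phi[2]) or y not in var(t)) and free_for(t, x, phi[2])
        raise ValueError("not a formula")

    # Example 1.48: S(0) is free for x in forall y. S(x) eq S(y); S(y) is not.
    phi = ("forall", "x2", ("p", "eq",
            ("f", "S", ("x", "x1")), ("f", "S", ("x", "x2"))))
    assert free_for(("f", "S", ("f", "0")), "x1", phi)
    assert not free_for(("f", "S", ("x", "x2")), "x1", phi)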


1.2 Semantics

The goal of an interpretation structure for a signature is to provide the meaning for the non-logical symbols in that signature.

Definition 1.50 Let Σ = (F, P, τ) be a signature. An interpretation structure for Σ is a triple I = (D, {f^I}f∈F, {p^I}p∈P) where
• D is a non-empty set;
• f^I : D^n → D is a map, for each f ∈ Fn;
• p^I : D^n → {0, 1} is a map, for each p ∈ Pn.
The set D is the domain or the universe of I. The map f^I is called the denotation of the function symbol f in I and p^I is called the denotation of the predicate symbol p in I. When D is finite we say that I is a finite interpretation structure.

Remark 1.51 For simplifying the presentation, given a generic interpretation structure I, we assume that the domain of I is D; similarly for I′ and D′, I″ and D″ and for Ij and Dj with j ∈ N.

This interpretation structure is usually called N. Consider now another interpretation structure I for N defined as follows: • • • • • • •

D = {d1 , d2 }; 0 I = d1 ; S I (d1 ) = d1 ; S I (d2 ) = d2 ; + I is the map (k1 , k2 ) → d1 ; × I is the map (k1 , k2 ) → d1 ; ∼ = I (d1 , d2 ) = 1 iff d1 = d2 .

Example 1.53 Recall signature f introduced in Example 1.7. Let IQ be the interpretation structure for f defined as follows: • D is Q;

12

• • • • • •

1 First-Order Logic

0 IQ is 0; 1 IQ is 1; − IQ is the map d → −d; + IQ is the map (d1 , d2 ) → d1 + d2 ; × IQ is the map (d1 , d2 ) → d1 × d2 ; ∼ = IQ is =.

Consider now another interpretation structure IZ for f defined as follows: • • • • • • •

D is Z; 0 IZ is 0; 1 IZ is 1; − IZ is the map d → −d; + IZ is the map (d1 , d2 ) → d1 + d2 ; × IZ is the map (d1 , d2 ) → d1 × d2 ; ∼ = IZ is =.

Finally, consider the interpretation structure I for f defined as follows: • • • • • • •

D is {d}; 0 I is d; 1 I is d; − I is the map d1 → d; + I is the map (d1 , d2 ) → d; × I is the map (d1 , d2 ) → d; ∼ = I is =.

For providing the denotation of a term and the satisfaction of a formula with variables in an interpretation structure, we need the notion of assignment as well as equivalence of assignments up to a variable. Definition 1.54 An assignment over I is a map ρ : X → D. Let x ∈ X . Two assignments ρ, σ over I are said to be x-equivalent, denoted by ρ ≡x σ, if ρ(y) = σ (y) for each y = x. Example 1.55 Recall interpretation structure IQ introduced in Example 1.53. Consider the assignments ρ1 and ρ2 such that • ρ1 (y) = 0 for every y ∈ X ; • ρ2 (y) = 0 for every y ∈ X \ {x}; • ρ2 (x) = 1. Then ρ1 ≡x ρ2 .


Exercise 1.56 Show that ≡x has the following properties:
• reflexivity, that is, ρ ≡x ρ;
• symmetry, that is, ρ2 ≡x ρ1 whenever ρ1 ≡x ρ2;
• transitivity, that is, ρ1 ≡x ρ3 whenever ρ1 ≡x ρ2 and ρ2 ≡x ρ3.

We are ready to define the denotation of a term.

Definition 1.57 Given an interpretation structure I over signature Σ and an assignment ρ over I, the map [[·]]Iρ : TΣ → D, providing the denotation of a term, is inductively defined as follows:
• [[x]]Iρ = ρ(x);
• [[c]]Iρ = c^I, when c ∈ F0;
• [[f(t1, ..., tn)]]Iρ = f^I([[t1]]Iρ, ..., [[tn]]Iρ), when f ∈ Fn and t1, ..., tn ∈ TΣ.

Example 1.58 Recall interpretation structure IQ and the assignment ρ2 introduced in Example 1.53 and in Example 1.55, respectively. Then [[x − 1]]IQρ2 = 0.

We now say when an interpretation structure and an assignment satisfy a formula.

Definition 1.59 The contextual satisfaction of a formula ϕ by an interpretation structure I and an assignment ρ, denoted by Iρ ⊨ ϕ, is inductively defined as follows:
• Iρ ⊭ ⊥;
• Iρ ⊨ p(t1, ..., tn) whenever p^I([[t1]]Iρ, ..., [[tn]]Iρ) = 1;
• Iρ ⊨ ϕ ⊃ ψ whenever Iρ ⊭ ϕ or Iρ ⊨ ψ;
• Iρ ⊨ ∀x ϕ whenever Iσ ⊨ ϕ for every assignment σ ≡x ρ.

A formula ϕ is contextually satisfiable whenever there are an interpretation structure I and an assignment ρ over I such that Iρ ⊨ ϕ.

Example 1.60 Recall interpretation structures IN and I over signature ΣN introduced in Example 1.52. Let ρ and σ be assignments over IN and I, respectively. Then

[[S(0) + x]]INρ

is 1 + ρ(x) and

[[S(0) + x]]Iσ

is d1. Moreover,

INρ ⊭ΣN x ≅ S(0) + x.

On the other hand,

Iσ ⊨ΣN 0 + x ≅ S(0) + x.
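For a finite interpretation structure, Definitions 1.57 and 1.59 can be read directly as an evaluation procedure. The Python sketch below (ours) hard-codes the two-element structure I of Example 1.52; the ∀-clause simply enumerates the finite domain, so this only covers finite structures.

    # Structure I of Example 1.52: domain {d1, d2}, S the identity,
    # + and x constant with value d1, eq the identity relation.
    D = ["d1", "d2"]
    FUNCS = {"0": lambda: "d1", "S": lambda d: d,
             "+": lambda a, b: "d1", "*": lambda a, b: "d1"}
    PREDS = {"eq": lambda a, b: a == b}

    def denote(term, rho):
        if term[0] == "x":
            return rho[term[1]]
        return FUNCS[term[1]](*[denote(a, rho) for a in term[2:]])

    def sat(phi, rho):
        tag = phi[0]
        if tag == "ff":
            return False
        if tag == "p":
            return PREDS[phi[1]](*[denote(a, rho) for a in phi[2:]])
        if tag == "implies":
            return (not sat(phi[1], rho)) or sat(phi[2], rho)
        if tag == "forall":
            # every x-equivalent assignment differs from rho at most at x
            return all(sat(phi[2], {**rho, phi[1]: d}) for d in D)
        raise ValueError("not a formula")

    # I sigma |= 0 + x eq S(0) + x, for every sigma (cf. Example 1.60)
    phi = ("p", "eq", ("f", "+", ("f", "0"), ("x", "x1")),
                      ("f", "+", ("f", "S", ("f", "0")), ("x", "x1")))
    print(all(sat(phi, {"x1": d}) for d in D))   # True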

Exercise 1.61 Show that contextual satisfaction for the formulas defined by abbreviation is as follows:
• Iρ ⊨ ¬ϕ whenever Iρ ⊭ ϕ;
• Iρ ⊨ ϕ ∧ ψ whenever Iρ ⊨ ϕ and Iρ ⊨ ψ;
• Iρ ⊨ ϕ ∨ ψ whenever Iρ ⊨ ϕ or Iρ ⊨ ψ;
• Iρ ⊨ ϕ ≡ ψ whenever Iρ ⊨ ϕ iff Iρ ⊨ ψ;
• Iρ ⊨ ∃x ϕ whenever there is an assignment σ ≡x ρ such that Iσ ⊨ ϕ.

These notions extend to sets of formulas in the expected way.

Definition 1.62 Let I be an interpretation structure for a signature Σ, ρ an assignment over I and Γ ⊆ LΣ. We say that I and ρ contextually satisfy Γ, written as Iρ ⊨ Γ, whenever Iρ ⊨ ψ for every ψ ∈ Γ. We then say that Γ is contextually satisfiable.

We now concentrate on satisfaction in every assignment of an interpretation structure.

Definition 1.63 We say that a formula ϕ is satisfied by an interpretation structure I, denoted by I ⊨ ϕ, whenever Iρ ⊨ ϕ for every assignment ρ over I. A formula ϕ is satisfiable whenever there is an interpretation structure I such that I ⊨ ϕ.

Example 1.64 Recall Example 1.60. Then

IN ⊭ΣN x ≅ S(0) + x  and  I ⊭ΣN x ≅ S(0) + x.

On the other hand,

I ⊨ΣN 0 ≅ S(0) + x.

Exercise 1.65 Given a signature Σ, an interpretation structure I for Σ and ϕ ∈ LΣ, show that I ⊨ ∀ϕ if and only if I ⊨ ϕ.

Exercise 1.66 Provide an example of a contextually satisfiable formula which is not satisfiable.

The notion of satisfaction extends to sets of formulas in the expected way.


Definition 1.67 Let I be an interpretation structure for a signature Σ and Γ ⊆ LΣ. We say that I satisfies Γ, written as I ⊨ Γ, whenever I ⊨ ψ for every ψ ∈ Γ. We then say that I is a model of Γ and that Γ is satisfiable. Moreover, when I is a finite interpretation structure, we say that I is a finite model of Γ.

Note 1.68 Given a set Γ of formulas, we denote by Mod(Γ) the class of all models of Γ.

Definition 1.69 Let Σ be a signature and ϕ a formula over Σ. Then ϕ is valid, written ⊨Σ ϕ, if I ⊨ ϕ for every interpretation structure I for Σ.

Example 1.70 Recall Example 1.64. Then

⊭ΣN x ≅ S(0) + x.

On the other hand,

⊨ΣN ¬⊥.

Exercise 1.71 Show that for every quantifier-free formula ϕ there is a formula ψ in disjunctive normal form such that ⊨ ϕ ≡ ψ.

We are ready to say when a formula is a semantic consequence of, or is entailed by, a set of formulas.

Definition 1.72 Let Σ be a signature and Γ ∪ {ϕ} ⊆ LΣ. We say that ϕ is a semantic consequence of Γ, written as Γ ⊨ ϕ, if for every interpretation structure I, I ⊨ ϕ whenever I ⊨ Γ.

Note 1.73 Let Σ be a signature and {γ1, ..., γk} ∪ {ϕ} ⊆ LΣ. In the sequel, we may write γ1, ..., γk ⊨ ϕ instead of {γ1, ..., γk} ⊨ ϕ.


Exercise 1.74 Given a signature Σ and a formula ϕ ∈ LΣ, show that ⊨Σ ϕ if and only if ∅ ⊨ ϕ.

Exercise 1.75 Given a signature Σ and a set Γ ∪ {ϕ} ⊆ LΣ, show that Mod(Γ) ⊆ Mod(ϕ) providing that Γ ⊨ ϕ.

Exercise 1.76 Given a signature Σ and a formula ϕ ∈ LΣ, show that ϕ ⊨ ∀x ϕ and ∀x ϕ ⊨ ϕ.

Definition 1.77 Let Σ be a signature. The map (·)⊨ : ℘LΣ → ℘LΣ such that Γ⊨ = {ϕ ∈ LΣ : Γ ⊨ ϕ} is said to be the semantic consequence operator.

Exercise 1.78 Show that (·)⊨ has the following properties:
• extensivity, that is, Γ ⊆ Γ⊨;
• monotonicity, that is, if Γ1 ⊆ Γ2 then Γ1⊨ ⊆ Γ2⊨;
• idempotence, that is, (Γ⊨)⊨ = Γ⊨;
• compactness, that is, if γ ∈ Γ⊨ then γ ∈ Γ0⊨ for some finite Γ0 ⊆ Γ.

Definition 1.79 Let Σ be a signature with equality. Recall Definition 1.34. A set Γ of formulas over Σ is with equality whenever
• Γ ⊨ (E1);
• Γ ⊨ (E2f) for every f ∈ Fn;
• Γ ⊨ (E3p) for every p ∈ Pn.

Remark 1.80 Let Γ be a set of formulas with equality over a signature Σ. From now on, without loss of generality, we assume that ≅^I is = for every interpretation structure I for Σ such that I ⊨ Γ.

We now introduce the concept of consistent set that will be used later on.

Definition 1.81 Let Σ be a signature. A set Γ contained in LΣ is said to be consistent if Γ⊨ ≠ LΣ.


Example 1.82 Recall signature ΣN introduced in Example 1.4. Then the set {x ≅ x} is consistent.

Exercise 1.83 Given an interpretation structure I for a signature Σ, show that the set {ϕ ∈ LΣ : Iρ ⊨ ϕ for some assignment ρ over I} is consistent.

Exercise 1.84 Show that a set of formulas is consistent if and only if it is satisfiable.

We recall some useful propositions that are needed in the sequel (for the proofs see [6]). The following result is the Lemma of Absent Variables.

Proposition 1.85 Let Σ be a signature, I an interpretation structure, ϕ a formula over Σ and ρ and ρ′ assignments over I such that ρ(x) = ρ′(x) for every x ∈ fvΣ(ϕ). Then

Iρ′ ⊨ ϕ if and only if Iρ ⊨ ϕ.

Hence, we can say that for the semantic evaluation of a formula only the free variables are relevant. This result has consequences for sentences, as stated in the following proposition, known as the Lemma of the Closed Formula.

Proposition 1.86 Let Σ be a signature, I an interpretation structure and ϕ, ψ sentences over Σ. Then
1. Iρ ⊨ ϕ if and only if Iρ′ ⊨ ϕ, for every pair of assignments ρ and ρ′ into I;
2. I ⊨ ϕ if and only if Iρ ⊨ ϕ for some assignment ρ into I;
3. I ⊨ ¬ϕ if and only if I ⊭ ϕ;
4. either I ⊨ ϕ or I ⊨ ¬ϕ;
5. I ⊨ ϕ ⊃ ψ if and only if I ⊭ ϕ or I ⊨ ψ.

Exercise 1.87 Let Σ be a signature, Γ ⊆ LΣ and ϕ ∈ cLΣ. Assume that Γ ⊭ ϕ. Then Γ ∪ {¬ϕ} is satisfiable.

The following proposition relates contextual satisfaction with substitution and is known as the Lemma of Substitution.

Proposition 1.88 Let Σ be a signature, I an interpretation structure for Σ, ϕ ∈ LΣ, y1, ..., ym ∈ X distinct and u1, ..., um ∈ TΣ such that ui ⊲Σ yi : ϕ for each i = 1, ..., m. Then

Iρ′ ⊨ ϕ if and only if Iρ ⊨ [ϕ]^{y1,...,ym}_{u1,...,um}

for every pair of assignments ρ, ρ′ such that ρ′(yi) = [[ui]]Iρ for each 1 ≤ i ≤ m and ρ′(x) = ρ(x) for every variable x ∉ {y1, ..., ym}.

The following result concerns a sufficient condition for the legitimate substitution of bound variables and is known as the Lemma of Legitimate Substitution of Bound Variables.

Proposition 1.89 Let Σ be a signature, x, y ∈ X and ϕ ∈ LΣ be such that
• y ∉ fvΣ(ϕ);
• y ⊲Σ x : ϕ.
Then, the formula (∀x ϕ) ≡ (∀y [ϕ]^x_y) is valid.

Remark 1.90 Taking into account Proposition 1.89, we identify ∀x ϕ with ∀y [ϕ]^x_y.

Finally, we state the Cardinality Theorem.

Proposition 1.91 Let I be an interpretation structure for a signature Σ, D′ ⊇ D and a ∈ D. Consider the map (·)a : D′ → D such that (d′)a = d′ whenever d′ ∈ D, and (d′)a = a otherwise. Consider the interpretation structure I′ such that
• the domain is D′;
• f^I′(d1, ..., dn) = f^I((d1)a, ..., (dn)a);
• p^I′(d1, ..., dn) = p^I((d1)a, ..., (dn)a);
and an assignment ρ′ : X → D′. Let ρa : X → D be an assignment over I such that ρa(x) = (ρ′(x))a. Then
1. [[t]]Iρa = ([[t]]I′ρ′)a;
2. Iρa ⊨ ϕ if and only if I′ρ′ ⊨ ϕ.
Moreover,

I′ ⊨ ϕ if and only if I ⊨ ϕ

whenever ϕ ∈ cLΣ.


1.3 Embeddings

Interpretation structures for the same signature can be related by embeddings, which are special cases of homomorphisms.

Definition 1.92 Given interpretation structures I and I′ for the same signature Σ, we say that a map h : D → D′ is a homomorphism from I to I′, written as h : I → I′, whenever
• h(f^I(d1, ..., dn)) = f^I′(h(d1), ..., h(dn));
• if p^I(d1, ..., dn) = 1 then p^I′(h(d1), ..., h(dn)) = 1;
for every d1, ..., dn ∈ D. The homomorphism condition for function symbols can be seen as the commutativity of the diagram in Fig. 1.1 and the homomorphism condition for predicate symbols can be seen as the commutativity of the diagram in Fig. 1.2.

Definition 1.93 A homomorphism h is said to be an embedding if h is injective and if p^I′(h(d1), ..., h(dn)) = 1 then p^I(d1, ..., dn) = 1.

Example 1.94 Recall interpretation structures I, IZ and IQ for Σf introduced in Example 1.53. Let h1 : {d} → Q be such that h1(d) = 0. For instance,

h1(−^I d) = h1(d) = 0 = −^IQ 0 = −^IQ h1(d).

Fig. 1.1 Homomorphism condition for function symbol f ∈ Fn

Fig. 1.2 Homomorphism condition for predicate symbol p ∈ Pn


Furthermore, h 1 is an embedding from I to IQ . Moreover, let h2 : Z → Q be the inclusion map, that is, h 2 (d) = d for every d ∈ Z. It is immediate that h 2 is an embedding from IZ to IQ . Exercise 1.95 Let I , I  and I  be interpretation structures for a signature . Show that • id D : D → D is an embedding from I to I ; • h  ◦ h : I → I  is an embedding whenever h : I → I  and h  : I  → I  are embeddings. Definition 1.96 We say that an embedding h : I → I  is an isomorphism if there is an embedding h  : I  → I such that h  ◦ h = id I and h ◦ h  = id I  . Exercise 1.97 Show that if h : I → I  is an isomorphism then there is a unique h  : I  → I such that h  ◦ h = id I and h ◦ h  = id I  . Note 1.98 Given an isomorphism h : I → I  , we denote by h −1 the unique h  : I  → I such that h  ◦ h = id I and h ◦ h  = id I  . Exercise 1.99 Show that an embedding is an isomorphism if and only if it is a surjective map on the domains. The image of an injective map is also an interpretation structure for the same signature. Definition 1.100 Let I and I  be interpretation structures for a signature  and h : D → D  an injective map. The interpretation structure h(I ) for  is the tuple (h(D), { f h(I ) } f ∈F , { p h(I ) } p∈P ) defined as follows: • f h(I ) (h(d1 ), . . . , h(dn )) = h( f I (d1 , . . . , dn )); • p h(I ) (h(d1 ), . . . , h(dn )) = p I (d1 , . . . , dn ). Exercise 1.101 Let I and I  be interpretation structures for signature  and h : D → D  an injective map. Show that h is an isomorphism from I to h(I ). Definition 1.102 Let I and I  be interpretation structures for signature . We say that I and I  are isomorphic, denoted by I ≡ I , whenever there is an isomorphism from I to I  .


Exercise 1.103 Show that ≡ has the following properties: • reflexivity, that is, I ≡ I ; • symmetry, that is, I2 ≡ I1 whenever I1 ≡ I2 ; • transitivity, that is, I1 ≡ I3 whenever I1 ≡ I2 and I2 ≡ I3 . The notion of embedding is an algebraic concept that basically imposes structural conditions over the denotations of both function and predicate symbols. However, logically speaking, the most important aspect of interpretation structures is related to contextual satisfaction. Hence, we now discuss the effect of embeddings on contextual satisfaction starting with the relationship between denotation of terms. Proposition 1.104 Let I and I  be interpretation structures for signature , h : I → I  an embedding and ρ an assignment over I . Then, for every term t over , 

h([[t]] Iρ ) = [[t]] I h◦ρ . Proof The proof follows by induction on the structure of t. (Base) t is x. Then  h([[x]] Iρ ) = h(ρ(x)) = [[x]] I h◦ρ . (Step) t is f (t1 , . . . , tn ). Then h([[ f (t1 , . . . , tn )]] Iρ ) = h( f I ([[t1 ]] Iρ , . . . , [[tn ]] Iρ )) 

= f I (h([[t1 ]] Iρ ), . . . , h([[tn ]] Iρ )) 





= f I ([[t1 ]] I h◦ρ , . . . , [[tn ]] I h◦ρ ) 

= [[ f (t1 , . . . , tn )]] I h◦ρ . It is natural to ask whether or not an embedding preserves and reflects contextual satisfaction. That is, whether or not • Iρ  ϕ implies I  h ◦ ρ  ϕ; • I  h ◦ ρ  ϕ implies Iρ  ϕ. The next result shows that embeddings are well behaved with respect to quantifierfree formulas. Proposition 1.105 Let I and I  be interpretation structures for signature , h : I → I  an embedding and ϕ ∈ Q  . Then, 1. for every assignment ρ over I , a. Iρ  ϕ if and only if I  h ◦ ρ  ϕ; b. Iρ  ∀x ϕ whenever I  h ◦ ρ  ∀x ϕ; 2. I  ∀ϕ whenever I   ∀ϕ.


1 First-Order Logic

Proof (1a) Iρ  ϕ if and only if I  h ◦ ρ  ϕ. The proof follows by induction on the structure of ϕ ∈ Q  : (Base) We have two cases to consider: (i) ϕ is ⊥. Then Iρ  ⊥ and I  h ◦ ρ  ⊥. (ii) ϕ is p(t1 , . . . , tn ). Then Iρ  p(t1 , . . . , tn ) iff p I ([[t1 ]] Iρ , . . . , [[tn ]] Iρ ) = 1 

iff p I (h([[t1 ]] Iρ ), . . . , h([[tn ]] Iρ )) = 1 (∗) 





iff p I ([[t1 ]] I h◦ρ , . . . , h([[tn ]] I h◦ρ )) = 1 (∗∗) iff I  h ◦ ρ  p(t1 , . . . , tn ) where (∗) follows since h is an embedding and (∗∗) follows by Proposition 1.104. (Step) Assume that ϕ is ϕ1 ⊃ ϕ2 . Then Iρ  ϕ1 ⊃ ϕ2 iff either Iρ  ϕ1 or Iρ  ϕ2 iff either I  h ◦ ρ  ϕ1 or I  h ◦ ρ  ϕ2 (∗) iff I  h ◦ ρ  ϕ1 ⊃ ϕ2 where (∗) follows by the induction hypothesis. (1b) Iρ  ∀x ϕ whenever I  h ◦ ρ  ∀x ϕ. Assume that I  h ◦ ρ  ∀x ϕ. Let σ ≡x ρ. Then h ◦ σ ≡x h ◦ ρ and so I  h ◦ σ  ϕ. Thus, by (1a), I σ  ϕ. (2) I  ∀ϕ whenever I   ∀ϕ. We start by proving that (†) Iρ  ∀xk . . . ∀x1 ϕ whenever I  h ◦ ρ  ∀xk . . . ∀x1 ϕ by induction on k. (Base) The result follows by (1b). (Step) Assume that I  h ◦ ρ  ∀xk . . . ∀x1 ϕ. Let σ ≡xk ρ. Then h ◦ σ ≡xk h ◦ ρ and so I  h ◦ σ  ∀xk−1 . . . ∀x1 ϕ. Thus, by the induction hypothesis, I σ  ∀xk−1 . . . ∀x1 ϕ. Therefore, Iρ  ∀xk . . . ∀x1 ϕ. Suppose with no loss of generality that fv (ϕ) = {x1 , . . . , xk }. Assume that I   ∀xk . . . ∀x1 ϕ. Let ρ be an assignment over I . Observe that I  h ◦ ρ  ∀xk . . . ∀x1 ϕ.  Then, by (†), Iρ  ∀xk . . . ∀x1 ϕ. Hence, I  ∀ϕ. Exercise 1.106 Let I and I  be interpretation structures for a signature  with  equality such that ∼ = I are the identity relation and h : D → D  a map. Show = I and ∼ that h is an embedding if and only if for every α ∈ A and assignment ρ over I Iρ  α iff I  h ◦ ρ  α. Exercise 1.107 Let I and I  be interpretation structures for a signature , h : I → I  an embedding and ϕ ∈ Q  . Show that for every assignment ρ over I

1.3 Embeddings


I  h ◦ ρ  ∃x ϕ whenever Iρ  ∃x ϕ. Exercise 1.108 Show that I  ϕ whenever there is an embedding h : I → I  , I   ϕ and ϕ is a quantifier-free formula. Proposition 1.109 Not always embeddings preserve and reflect contextual satisfaction. Proof (1) Not always embedding preserves contextual satisfaction. That is, there are , I and I  interpretation structures for , embedding h : I → I  , assignment ρ over I and ϕ ∈ L  such that I ρ  ϕ and I  h ◦ ρ  ϕ. Recall embedding h 1 in Example 1.94. Let ρ be the unique assignment over I such that ρ(x) = d. Then I ρ f ∀x x ∼ =0 but

IQ h ◦ ρ f ∀x x ∼ = 0.

(2) Not always embeddings reflect contextual satisfaction. That is, there are , I and I  interpretation structures for , embedding h : I → I  , assignment ρ over I and ϕ ∈ L  such that I  h ◦ ρ  ϕ and I ρ  ϕ. Recall embedding h 2 in Example 1.94. Let ρ be an assignment over Z such that ρ(x) = 3. Then, IQ h ◦ ρ f ∃y x × y ∼ = 1. Indeed, let σ be an assignment over Q such that σ ≡ y h ◦ ρ and σ (y) =

1 . Then 3

IQ σ f x × y ∼ = 1. On the other hand,

IZ ρ f ∃y x × y ∼ = 1.

Indeed, there is no assignment σ ≡ y ρ such that 3 × σ (y) = 1.



Exercise 1.110 Let I and I  be interpretation structures for signature , h : I → I  an isomorphism and ϕ ∈ L  . Show that for every assignment ρ over I Iρ  ϕ iff I  h ◦ ρ  ϕ. It is also possible to relate interpretation structures over the same signature directly in terms of contextual satisfaction of formulas instead of imposing conditions on


1 First-Order Logic

the denotation of function and predicate symbols. Nevertheless, these two ways of connecting structures are intimately related. Definition 1.111 Let I and I  be interpretation structures for the same signature. A map h : D → D  is elementary if Iρ  ϕ if and only if I  h ◦ ρ  ϕ for every ϕ ∈ L  and assignment ρ over I . Exercise 1.112 Let I and I  be interpretation structures over a signature  with  equality such that ∼ = I are the identity relation and h : D → D  an elementary = I and ∼ map. Show that h is an embedding. Exercise 1.113 Show that every isomorphism is an elementary map. Exercise 1.114 Investigate whether or not an elementary map is necessarily surjective. Exercise 1.115 Show that the composition of elementary maps is still elementary.

1.4 Elementary Substructures In this section we introduce some results about substructures. Definition 1.116 Given interpretation structures I and I  for the same signature, we say that I is a substructure of I  , denoted by I ⊆ I whenever D ⊆ D  and the corresponding inclusion map is an embedding from I to I . Example 1.117 Recall interpretation structures IZ and IQ introduced in Example 1.53 and the embedding h 2 : IZ → IQ introduced in Example 1.94. It is immediate to see that IZ ⊆ IQ since h 2 is an inclusion. Exercise 1.118 Show that the substructure relation ⊆ has the following properties: • reflexivity, that is, I ⊆ I ; • transitivity, that is, I1 ⊆ I3 whenever I1 ⊆ I2 and I2 ⊆ I3 .

1.4 Elementary Substructures


In Chap. 4, we need to refer to the interpretation structure generated by a subset of the domain of a given interpretation structure. The former is a substructure of the latter. Definition 1.119 Let  be a signature, I an interpretation structure for  and G ⊆ D. The interpretation structure IG for  generated by G on I is defined as follows: • the domain D G is inductively defined as follows: – G ⊆ DG ; – f I (d1 , . . . , dn ) ∈ D G whenever d1 , . . . , dn ∈ D G for n ∈ N; G

• f I (d1 , . . . , dn ) = f I (d1 , . . . , dn ); G • p I (d1 , . . . , dn ) = p I (d1 , . . . , dn ). Exercise 1.120 Show that I G ⊆ I . The substructures of an interpretation structure I  that contextually satisfy the same formulas as I  are called elementary. Definition 1.121 Let I and I  be interpretation structures for the same signature  and I ⊆ I  . We say that I is an elementary substructure of I  , denoted by I  I whenever

Iρ  ϕ if and only if I  ρ  ϕ

for every formula ϕ ∈ L  and assignment ρ over I . Exercise 1.122 Show that I and I  satisfy the same sentences whenever I  I  . Exercise 1.123 Investigate whether or not the elementary substructure relation  is transitive. Example 1.124 Recall Example 1.117 where we prove that IZ ⊆ IQ . However IZ is not an elementary substructure of IQ (see the proof of Proposition 1.109). The following result provides a sufficient condition for a substructure to be elementary. Proposition 1.125 Let I and I  be interpretation structures for the same signature  with I ⊆ I  . Assume that, for every formula ψ ∈ L  such that x ∈ fv (ψ), if I  ρ  ∃x ψ where ρ is an assignment over I then there is an assignment σ over I that is x-equivalent to ρ and such that I  σ  ψ. Then I  I  .


1 First-Order Logic

Proof We prove that Iρ  ϕ if and only if I  ρ  ϕ for every formula ϕ and assignment ρ over I by induction on ϕ. The base cases and the step cases for negation and implication are straightforward. Assume that ϕ is ∃x δ. Consider two cases: (1) x ∈ / fv (δ). (→) Assume that Iρ  ∃x δ. Let σ ≡x ρ be an assignment over I such that I σ  δ. Then, by the induction hypothesis, I  σ  δ and so I  ρ  ∃x δ. (←) Assume that I  ρ  ∃x δ. Let σ  ≡x ρ be an assignment over I  such that I  σ   δ. Observe that ρ(y) = σ  (y) for every y ∈ fv (δ). So, by, the Lemma of Absent Variables (Proposition 1.85), we can conclude that I  ρ  δ. Thus, by the induction hypothesis, Iρ  δ. Therefore, Iρ  ∃x δ. (2) x ∈ fv (δ). (→) Assume that Iρ  ∃x δ. Then, there is σ assignment x-equivalent to ρ over I such that I σ  δ. Hence, by the induction hypothesis, I  σ  δ and so I  ρ  ∃x δ. (←) Assume that I  ρ  ∃x δ. Then, by hypothesis, there is an assignment σ over I such that σ is x-equivalent to ρ and I  σ  δ. Using the induction hypothesis, we  conclude that I σ  δ and, consequently, Iρ  ∃x δ. We now fully characterize elementary maps in terms of particular elementary substructures. Proposition 1.126 Let I and I  be interpretation structures over a signature  and h : D → D  an injective map. Then, h is elementary if and only if h(I )  I  . Proof Observe that, by Exercise 1.101, h : I → h(I ) is an isomorphism. So, by Exercise 1.110, (†) Iρ  ϕ iff h(I )h ◦ ρ  ϕ for every assignment ρ over I and formula ϕ in L  . (→) Assume that h is elementary. Let ρ  be an assignment over h(I ) and ϕ ∈ L  . Then there is a unique assignment ρ over I such that ρ  = h ◦ ρ. Since h is elementary, Iρ  ϕ iff I  h ◦ ρ  ϕ. So, using (†), h(I )h ◦ ρ  ϕ iff I  h ◦ ρ  ϕ. That is, h(I )ρ   ϕ iff I  ρ   ϕ.

1.4 Elementary Substructures


(←) Assume that h(I )  I  . Let ρ be an assignment over I and ϕ ∈ L  . Then h(I )h ◦ ρ  ϕ iff I  h ◦ ρ  ϕ. Therefore, Iρ  ϕ iff I  h ◦ ρ  ϕ using (†).



Finally, we define elementary equivalence between interpretation structures. Definition 1.127 Interpretation structures I and I  over the same signature  are said to be elementary equivalent, written as I ≡e I  if I and I  satisfy the same sentences. Exercise 1.128 Show that ≡e is an equivalence relation. Exercise 1.129 Show that I ≡e I  whenever I  I  . Exercise 1.130 Let I and I  be interpretation structures for the same signature and h : D → D  an elementary map. Then, I ≡e I  .

1.5 Diagrams Diagrams play an important role toward proving Ło´s-Vaught Theorem in Chap. 3 and when dealing with quantifier elimination in Chap. 4. Note 1.131 Let I be an interpretation structure for signature . We denote by I the signature obtained by enriching  with a new constant symbol d¯ for each d ∈ D. Example 1.132 Recall the interpretation structure IN introduced in Example 1.52. Then  IN ¯ 1, ¯ . . . }, F1 = {S}, F2 = {+, ×}, Fn = ∅ for every n > 2, is as follows: F0 = {0, 0, P2 = {∼ =, 2. Observe that 3¯ ∼ = S(S(S(0))) ∈ L  IN but

3¯ ∼ / L N . = S(S(S(0))) ∈


1 First-Order Logic

Definition 1.133 Let I be an interpretation structure for a signature . The diagram of I , denoted by D(I ), is the set

 x ...x [ϕ] 1 ¯ n ¯ : ϕ ∈ L  , fv (ϕ) = {x1 , . . . , xn }, ρ an assignment over I, Iρ  ϕ . ρ(x1 )...ρ(xn )

Observe that D(I ) is contained in cL I . In a certain sense, a diagram is a representation by sentences of an interpretation structure. Example 1.134 Recall the interpretation structure IN introduced in Example 1.52. Let ρ be an assignment over IN such that ρ(x) = 3. Then 3¯ ∼ = S(S(S(0))) ∈ D(IN ) since IN ρ N x ∼ = S(S(S(0))). It makes sense that we can get an interpretation structure for a signature from an interpretation structure for a bigger signature. Definition 1.135 Let  and   be signatures such that F ⊆ F  and P ⊆ P  and I  an interpretation structure for   . Then, the reduct of I  over , denoted by I  | , is the interpretation structure for  such that • I  | has the same domain as I  ;   • f I | = f I for every f ∈ F;   • p I | = p I for every p ∈ P. Exercise 1.136 Let  and   be signatures such that F ⊆ F  and P ⊆ P  and I  an interpretation structure for   . Show that I  |  ϕ iff I    ϕ for every ϕ ∈ L  . Definition 1.137 Given an interpretation structure I for a signature , we denote by I¯ the interpretation structure for  I such that ¯ I¯| = I and d¯ I = d


for each d ∈ D.

Example 1.138 Recall the interpretation structure I_N introduced in Example 1.52. Then Ī_N is the interpretation structure for Σ_{I_N} defined as follows:
• D is N;
• 0^{Ī_N} is 0;
• k̄^{Ī_N} is k;
• S^{Ī_N} is the map k ↦ k + 1;
• +^{Ī_N} is the map (k₁, k₂) ↦ k₁ + k₂;
• ×^{Ī_N} is the map (k₁, k₂) ↦ k₁ × k₂;
• ≅^{Ī_N} is =.

So

Ī_N|_{Σ_N} = I_N.
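To make the constants d̄ and the diagram D(I) more concrete, here is a small Python sketch of ours (not from the book): a finite structure with domain {0, 1, 2} and successor modulo 3, enriched with one fresh constant per element, together with the atomic and negated atomic sentences that it satisfies. All of these sentences belong to the diagram D(I).

```python
from itertools import product

# Toy finite structure (ours, for illustration): domain {0, 1, 2}, one unary
# function symbol S interpreted as successor modulo 3, and equality.
D = [0, 1, 2]

def S(k):
    return (k + 1) % 3

def bar(d):
    # name of the fresh constant added for element d (written "d-bar" in the text)
    return f"c{d}"

# Atomic and negated atomic sentences of the enriched language that hold in the
# structure; every one of them belongs to the diagram D(I).
diagram_literals = []
for d in D:
    diagram_literals.append(f"S({bar(d)}) ≅ {bar(S(d))}")
for d1, d2 in product(D, D):
    if d1 == d2:
        diagram_literals.append(f"{bar(d1)} ≅ {bar(d2)}")
    else:
        diagram_literals.append(f"¬ {bar(d1)} ≅ {bar(d2)}")

for sentence in diagram_literals:
    print(sentence)
```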

In some cases, it is enough to work with the quantifier-free part of a diagram of an interpretation structure. Definition 1.139 Given an interpretation structure I , the quantifier-free diagram of I , denoted by Dq (I ), is the set D(I ) ∩ Q  I . Example 1.140 Recall Example 1.134. Then 3¯ ∼ = S(S(S(0))) ∈ Dq (IN ). Exercise 1.141 Show that the quantifier-free diagram is closed for conjunction. The following result provides a sufficient condition for establishing an embedding from I to I  : it is enough to prove that I  is a model of the quantifier-free diagram of I . This fact is used in the proof of Proposition 4.15. Proposition 1.142 Let  be a signature with equality, and I and I  interpretation  structures for  and  I , respectively, such that ∼ = I and ∼ = I are the identity relation and I   I Dq (I ). Then there is an embedding from I to I  | . 

Proof Let h : D → D′ be the map such that h(d) = [[d̄]]^{I′}.
(1) h is injective. Let d₁, d₂ ∈ D be such that d₁ ≠ d₂. Consider the formula ¬ x₁ ≅ x₂. Then Iρ ⊨ ¬ x₁ ≅ x₂ where ρ(xᵢ) = dᵢ for i = 1, 2. Then, ¬ d̄₁ ≅ d̄₂ ∈ Dq(I) and so I′ ⊨ ¬ d̄₁ ≅ d̄₂. Therefore, [[d̄₁]]^{I′} ≠ [[d̄₂]]^{I′} and h(d₁) ≠ h(d₂).
(2) h(f^I(d₁, . . . , dₙ)) = f^{I′|Σ}(h(d₁), . . . , h(dₙ)). Consider the formula f(x₁, . . . , xₙ) ≅ x. Observe that Iρ ⊨ f(x₁, . . . , xₙ) ≅ x for ρ such that ρ(xᵢ) = dᵢ for i = 1, . . . , n and ρ(x) = f^I(d₁, . . . , dₙ). Then f(d̄₁, . . . , d̄ₙ) ≅ d̄ ∈ Dq(I), where d is the element f^I(d₁, . . . , dₙ). So I′ ⊨ f(d̄₁, . . . , d̄ₙ) ≅ d̄. Then f^{I′}([[d̄₁]]^{I′}, . . . , [[d̄ₙ]]^{I′}) = [[d̄]]^{I′}. Hence f^{I′}(h(d₁), . . . , h(dₙ)) = h(f^I(d₁, . . . , dₙ)).
(3) p^I(d₁, . . . , dₙ) = 1 iff p^{I′|Σ}(h(d₁), . . . , h(dₙ)) = 1. (a) Assume that p^I(d₁, . . . , dₙ) = 1. Let ρ be an assignment over I such that ρ(xᵢ) = dᵢ for i = 1, . . . , n. Then Iρ ⊨ p(x₁, . . . , xₙ) and so p(d̄₁, . . . , d̄ₙ) ∈ Dq(I). Hence, by hypothesis, I′ ⊨ p(d̄₁, . . . , d̄ₙ) and so p^{I′}([[d̄₁]]^{I′}, . . . , [[d̄ₙ]]^{I′}) = 1, that is, p^{I′}(h(d₁), . . . , h(dₙ)) = 1. (b) Assume that p^I(d₁, . . . , dₙ) = 0. Let ρ be an assignment over I such that ρ(xᵢ) = dᵢ for i = 1, . . . , n. Then Iρ ⊭ p(x₁, . . . , xₙ) and so Iρ ⊨ ¬ p(x₁, . . . , xₙ). Then ¬ p(d̄₁, . . . , d̄ₙ) ∈ Dq(I). Hence, by hypothesis, I′ ⊨ ¬ p(d̄₁, . . . , d̄ₙ), and so p^{I′}([[d̄₁]]^{I′}, . . . , [[d̄ₙ]]^{I′}) = 0. Therefore, p^{I′}(h(d₁), . . . , h(dₙ)) = 0.



Proposition 1.142 together with Exercise 1.99 is useful for the following exercise.


Exercise 1.143 Let  be a signature with equality, and I and I  interpretation struc tures for  such that D is {d1 , . . . , dm }, ∼ = I and ∼ = I are the identity relation. Show that 1. if I ≡e I  then I   (∃=m x x ∼ = x) ∧ ∃x1 . . . ∃xm

 ¯ d¯m q []dx11 ... ...xm , for each finite  ⊆ D (I ) ∩ B I ;

2. I and I  are isomorphic if and only if I   (∃=m x x ∼ = x) ∧ ∃x1 . . . ∃xm

 ¯ d¯m q []dx11 ... ...xm , for each finite  ⊆ D (I ) ∩ B I .
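The sentence used in Exercise 1.143 can be written down mechanically for a concrete finite structure. The sketch below is ours; it reuses the toy modulo-3 structure from the earlier snippet and omits the cardinality conjunct ∃=m x x ≅ x. It replaces each constant d̄ᵢ by a fresh variable and existentially quantifies, which is how a finite portion of the quantifier-free diagram becomes a single describing sentence.

```python
# Build the existential sentence describing the toy structure: one variable per
# element, inequations between distinct elements, and the function facts.
# The ∃=m cardinality conjunct of Exercise 1.143 is deliberately omitted.
D = [0, 1, 2]

def S(k):
    return (k + 1) % 3

var = {d: f"x{d + 1}" for d in D}

literals = [f"¬ {var[a]} ≅ {var[b]}" for a in D for b in D if a < b]
literals += [f"S({var[d]}) ≅ {var[S(d)]}" for d in D]

sentence = "".join(f"∃{var[d]} " for d in D) + "(" + " ∧ ".join(literals) + ")"
print(sentence)
```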

1.6 Theories Finally, we are ready to define the important concept of first-order theory. Definition 1.144 A theory over a signature  is a set  ⊆ L  such that  = . Example 1.145 As a first example, observe that, for every signature , the set L  is a theory (the improper one) over . On the other hand, the set ∅ is not a theory. Exercise  1.146 Let {k }k∈K be a chain of theories over the same signature. Show that k∈K k is a theory. Note 1.147 Given an interpretation structure I over a signature , we denote by Th(I ) the set {ϕ ∈ L  : I  ϕ}. Exercise 1.148 Show that Th(I ) is a theory. Exercise 1.149 Recall Definition 1.137. Investigate whether or not Th( I¯) ∩ cL I = D(I ). When working with theories, it is important and useful to assume that equality is present. Remark 1.150 From now on we assume that any theory is a theory with equality. Exercise 1.151 Show that a theory contains formulas for symmetry and transitivity of ∼ =. In most of the cases, we can describe a theory in a simpler way by presenting an axiomatization. For that recall the notion of decidable set in Definition A.20.
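Definition 1.152 below makes the notion of axiomatization precise. As a quick computational aside (ours, and deliberately propositional: two atoms, entailment checked by truth tables), the following sketch shows the slogan "a theory is a set closed under consequence" in a setting where the closure can be computed outright, by listing which of a handful of candidate formulas follow from the single axiom a ⊃ b.

```python
from itertools import product

ATOMS = ["a", "b"]
valuations = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=len(ATOMS))]

axioms = [lambda v: (not v["a"]) or v["b"]]          # the single axiom a ⊃ b

def entailed(phi):
    # phi holds in every valuation that satisfies all the axioms
    return all(phi(v) for v in valuations if all(ax(v) for ax in axioms))

candidates = {
    "a": lambda v: v["a"],
    "b": lambda v: v["b"],
    "a ⊃ b": lambda v: (not v["a"]) or v["b"],
    "b ⊃ a": lambda v: (not v["b"]) or v["a"],
}
print({name for name, phi in candidates.items() if entailed(phi)})   # {'a ⊃ b'}
```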


Definition 1.152 A theory  over  is said to be axiomatizable if there is a decidable set  of sentences over  such that  = . In this case, we say that  is an axiomatization of  or  is generated by . Exercise 1.153 Let  be an axiomatizable theory over  and I an interpretation structure for . Show that I   if and only if I   for every axiomatization  of . Remark 1.154 From now on we omit for simplification the explicit presentation of (E1), (E2) and (E3) when axiomatizing theories. Example 1.155 Recall signature S introduced in Example 1.6. We denote by S the theory over S generated by the set AxS with the following axioms: (S1) ∀x ¬ S(x) ∼ = 0; ∼ S(y) ⊃ x ∼ (S2) ∀x∀y (S(x) = = y); (S3) ∀x ((¬ x ∼ = 0) ⊃ ∃y S(y) ∼ = x); (S4k ) ∀x ¬ Sk (x) ∼ = x for all k ≥ 1; where Sk (x) is inductively defined as follows: S0 (x) is x and Sk+1 (x) = S(Sk (x)). It is immediate to see that AxS is a decidable set. Moreover, observe that IN |S S S (recall Definition 1.135). Exercise 1.156 Show that AxS S ∀x∀y (S2 (x) ∼ = S2 (y) ⊃ x ∼ = y). Example 1.157 Recall signature RCOF introduced in Example 1.8. We denote by RCOF the theory over RCOF for real closed ordered fields, generated by the following set


AxRCOF of axioms: (RCOF1) (RCOF2) (RCOF3) (RCOF4) (RCOF5) (RCOF6) (RCOF7) (RCOF8) (RCOF9) (RCOF10) (RCOF11) (RCOF12) (RCOF13) (RCOF14) (RCOF15) (RCOF16) (RCOF17) (RCOF18)

∀x∀y∀z x + (y + z) ∼ = (x + y) + z; ∀x x + 0 ∼ = x; ∀x x + (−x) ∼ = 0; ∀x∀y x + y ∼ = y + x; ∀x∀y∀z x × (y × z) ∼ = (x × y) × z; ∀x x × 1 ∼ = x; ∀x∀y∀z x × (y + z) ∼ = (x × y) + (x × z); ∀x∀y x × y ∼ = y × x; = 1; ¬0∼ ∀x ((¬ x ∼ = 0) ⊃ ∃y x × y ∼ = 1); ∀x ¬(x < x); ∀x∀y∀z (x < y ⊃ (y < z ⊃ x < z)); ∀x∀y (x < y ∨ y < z ∨ x ∼ = y); ∀x∀y∀z (x < y ⊃ x + z < y + z); ∀x∀y ((x < y ∧ 0 < z) ⊃ x × z < y × z); ∀x∃y (y 2 ∼ = 0); = x ∨ y2 + x ∼ ∀x1 . . . ∀xn ¬ x12 + · · · + xn2 ∼ = −1 for each n ∈ N+ ; n n−1 + · · · + xn−1 y + xn ∼ ∀x1 . . . ∀xn ∃y y + x1 y = 0 for every odd natural number n.

The axioms from (RCOF1) to (RCOF10) are the field axioms. The axioms from (RCOF11) to (RCOF13) assert that the order < is irreflexive, transitive and total, respectively. The axioms (RCOF14) to (RCOF15) state the relation between < and the function symbols + and ×, respectively. Finally, the axioms from (RCOF16) to (RCOF18) are specific for real closed fields. Axiom (RCOF16) states that every element or its symmetric is the square of another one. Axiom (RCOF17) asserts that every sum of squares is never the symmetric of 1. Axiom (RCOF18) states that every polynomial equation of odd degree in one variable has a solution. For more details on this theory, see [8, 10, 11]. Remark 1.158 The interpretation structure for RCOF with the set of reals numbers as domain, the real number operations as denotations of the function symbols and the expected order of the real numbers as the denotation of < is a model of RCOF . Therefore, the theory RCOF is consistent, using Exercise 1.84. It is immediate to see that the ordered field of real numbers satisfies RCOF . Exercise 1.159 Show that AxRCOF RCOF ∀x∀y∀z ((x + y ∼ =0∧x +z ∼ = 0) ⊃ y ∼ = z). There are nevertheless theories that are not axiomatizable (see [6]). We conclude this section with a result showing the importance of a theory having an axiomatization.


Proposition 1.160 Every axiomatizable theory over a decidable signature is listable. Proof If a theory  is axiomatizable, then there exists a decidable set  such that  = . Then,  is a listable set (see [6]).  Example 1.161 Recall theory S introduced in Example 1.155. The theory S is axiomatizable and so is listable (Proposition 1.160).
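The remark in Example 1.155 that AxS is a decidable set, which is exactly what Proposition 1.160 needs, can be made concrete: even though the schema (S4k) contributes infinitely many axioms, membership is easy to test. Below is a sketch of ours in which the axioms are rendered as plain strings; the rendering itself is an assumption made for the example, not the book's official syntax.

```python
S1 = "∀x ¬ S(x) ≅ 0"
S2 = "∀x∀y (S(x) ≅ S(y) ⊃ x ≅ y)"
S3 = "∀x ((¬ x ≅ 0) ⊃ ∃y S(y) ≅ x)"

def is_S4k(sentence: str) -> bool:
    # recognises ∀x ¬ S^k(x) ≅ x for some k ≥ 1, with S^k written as nested S(...)
    prefix, suffix = "∀x ¬ ", " ≅ x"
    if not (sentence.startswith(prefix) and sentence.endswith(suffix)):
        return False
    body = sentence[len(prefix):len(sentence) - len(suffix)]
    k = 0
    while body.startswith("S(") and body.endswith(")"):
        body, k = body[2:-1], k + 1
    return k >= 1 and body == "x"

def in_AxS(sentence: str) -> bool:
    """Decide membership in the (infinite) axiom set of Example 1.155."""
    return sentence in {S1, S2, S3} or is_S4k(sentence)

print(in_AxS("∀x ¬ S(S(S(x))) ≅ x"), in_AxS("∀x ¬ S(x) ≅ S(x)"))   # True False
```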

References

1. R. Cori, D. Lascar, Mathematical Logic, Part I, Propositional Calculus, Boolean Algebras, Predicate Calculus (Oxford University Press, 2000)
2. R. Cori, D. Lascar, Mathematical Logic, Part II, Recursion Theory, Gödel Theorems, Set Theory, Model Theory (Oxford University Press, 2001)
3. H.B. Enderton, A Mathematical Introduction to Logic, 2nd edn. (Academic Press, 2001)
4. H. Hermes, Introduction to Mathematical Logic (Springer, 1973)
5. E. Mendelson, Introduction to Mathematical Logic, 6th edn. (Chapman and Hall, 2015)
6. A. Sernadas, C. Sernadas, Foundations of Logic and Theory of Computation, 2nd edn. (College Publications, 2012)
7. C.C. Chang, H.J. Keisler, Model Theory (Dover, 2012)
8. W. Hodges, Model Theory. Encyclopedia of Mathematics and Its Applications, vol. 42 (Cambridge University Press, 1993)
9. W. Hodges, A Shorter Model Theory (Cambridge University Press, 1997)
10. D. Marker, Model Theory: An Introduction. Graduate Texts in Mathematics, vol. 217 (Springer, 2002)
11. A. Tarski, A Decision Method for Elementary Algebra and Geometry, 2nd edn. (University of California Press, 1951)

Chapter 2

Reasoning with Theories

The objective of the chapter is to provide a usable way to deduce logical consequences from a given theory. We adopt Gentzen calculus (other alternatives for reasoning with theories are, namely, natural deduction and tableaux, see [1, 2] and [3], respectively) inspired by the presentation in [4] and in [5] (see also [6]) and analyze how to use the calculus for reasoning with theories. After proving some technical lemmas and providing several examples, we show that the calculus is sound and, based on Hintikka sets (see [5, 7]), we also prove completeness. Then we establish in detail Gentzen’s Hauptsatz, i.e., the Cut Elimination Theorem (see [4, 8] and for more advanced topics on cut elimination see also [9–11]). Finally, we conclude the chapter with a constructive proof of Craig’s Interpolation Theorem (see [5, 12, 13]).

2.1 Gentzen Calculus Sequent calculi were introduced by Gerhard Gentzen (see [8]). In these calculi axioms, rules and derivations are defined over sequents. In order to define sequent, we need the concept of multiset (see [14]). Definition 2.1 A multiset is a collection that may contain repeated occurrences of elements and where order is not relevant. In the sequel, we represent multisets using { and } with the occurrences of elements inside. Moreover, we need the following operations on multisets as well as comparison.


Definition 2.2 Let A1 and A2 be multisets. The union of A1 and A2, denoted by A1 ∪ A2, is the multiset such that if a occurs n1 times in A1 and n2 times in A2 then a occurs n1 + n2 times in A1 ∪ A2. Moreover, the relative complement of A2 with respect to A1, denoted by A1 \ A2, is the multiset such that if a occurs n1 times in A1 and n2 times in A2 then a occurs n1 − n2 times in A1 \ A2 whenever n1 > n2, and a does not occur in A1 \ A2 otherwise. Furthermore, A1 is a submultiset of A2, denoted by A1 ⊆ A2, whenever the number of occurrences of each element a in A1 is less than or equal to the number of occurrences of a in A2. Finally, A1 = A2 whenever A1 ⊆ A2 and A2 ⊆ A1.

Definition 2.3 A sequent over a signature Σ is a pair (Γ, Δ) where Γ and Δ are finite multisets of formulas in L_Σ.

Note 2.4 A sequent (Γ, Δ) is represented by Γ → Δ. When Γ is the empty multiset, the sequent can be written as → Δ, and when Δ is an empty multiset then the sequent can be presented as Γ →. Moreover, we denote by → the sequent where Γ and Δ are empty multisets. In the context of a sequent we may write ψ1, . . . , ψk, Γ instead of Γ ∪ {ψ1, . . . , ψk}. Similarly, we may write Γ1, Γ2 instead of Γ1 ∪ Γ2. The sequent calculus adopted in this book was influenced by [4] and by [5].

Definition 2.5 The Gentzen calculus G (over a signature Σ) is composed of the following axioms and rules:


• Axiom (Ax): α, Γ → Δ, α

• Left bottom (L⊥): ⊥, Γ → Δ

• Left implication (L⊃):

  Γ → Δ, ϕ    ψ, Γ → Δ
  ----------------------
      ϕ ⊃ ψ, Γ → Δ

• Right implication (R⊃):

  ϕ, Γ → Δ, ψ
  ---------------
  Γ → Δ, ϕ ⊃ ψ

• Left universal quantification (L∀):

  [ϕ]^x_t, ∀x ϕ, Γ → Δ
  ----------------------   where t is free for x in ϕ
      ∀x ϕ, Γ → Δ

• Right universal quantification (R∀):

  Γ → Δ, [ϕ]^x_y
  -----------------   where y ∉ fv(Γ ∪ Δ ∪ {∀x ϕ})
  Γ → Δ, ∀x ϕ

for every α ∈ A_Σ, Γ, Δ ⊆ L_Σ, ϕ, ψ ∈ L_Σ, x, y ∈ X and t ∈ T_Σ. Moreover, we denote by G + Cut the calculus G enriched with the following cut rule (Cut):

  Γ → Δ, ϕ    ϕ, Γ → Δ
  ----------------------
         Γ → Δ

where the formula ϕ is said to be the cut formula. We remark that the condition on the rule R∀ is usually stated in the literature by saying that y is fresh for Γ ∪ Δ ∪ {∀x ϕ}.

Definition 2.6 Let Γ → Δ be a sequent over Σ. Then Γ → Δ is a theorem of G, written as ⊢G Γ → Δ, if there is a finite sequence of sequents

  Γ1 → Δ1 · · · Γn → Δn


such that
• Γ1 → Δ1 is Γ → Δ;
• for each i = 1, . . . , n,
  – either Γi → Δi is Ax;
  – or Γi → Δi is L⊥;
  – or Γi → Δi is the conclusion of a rule where each premise is a sequent Γj → Δj in the sequence with j > i.
The sequence is said to be a derivation for Γ → Δ. The notion of theorem is extendable to G + Cut. We write ⊢G+Cut Γ → Δ whenever Γ → Δ is a theorem in G + Cut.

Example 2.7 Recall the axiomatization AxS introduced in Example 1.155 and in particular axiom (S2). Then

1   (S2) → ∀x∀y (S²(x) ≅ S²(y) ⊃ x ≅ y)                                        R∀ : 2
2   (S2) → ∀y (S²(x) ≅ S²(y) ⊃ x ≅ y)                                          R∀ : 3
3   (S2) → S²(x) ≅ S²(y) ⊃ x ≅ y                                               L∀ : 4
4   ∀y (S²(x) ≅ S(y) ⊃ S(x) ≅ y), (S2) → S²(x) ≅ S²(y) ⊃ x ≅ y                 L∀ : 5
5   S²(x) ≅ S²(y) ⊃ S(x) ≅ S(y), ϕ, (S2) → S²(x) ≅ S²(y) ⊃ x ≅ y               R⊃ : 6
6   S²(x) ≅ S²(y), S²(x) ≅ S²(y) ⊃ S(x) ≅ S(y), ϕ, (S2) → x ≅ y                L⊃ : 7, 8
7   S²(x) ≅ S²(y), S(x) ≅ S(y), ϕ, (S2) → x ≅ y                                L∀ : 9
8   S²(x) ≅ S²(y), ϕ, (S2) → x ≅ y, S²(x) ≅ S²(y)                              Ax
9   ∀y (S(x) ≅ S(y) ⊃ x ≅ y), S²(x) ≅ S²(y), S(x) ≅ S(y), ϕ, (S2) → x ≅ y      L∀ : 10
10  S(x) ≅ S(y) ⊃ x ≅ y, ψ, S²(x) ≅ S²(y), S(x) ≅ S(y), ϕ, (S2) → x ≅ y        L⊃ : 11, 12
11  x ≅ y, ψ, S²(x) ≅ S²(y), S(x) ≅ S(y), ϕ, (S2) → x ≅ y                      Ax
12  ψ, S²(x) ≅ S²(y), S(x) ≅ S(y), ϕ, (S2) → x ≅ y, S(x) ≅ S(y)                Ax

where ϕ is ∀y (S²(x) ≅ S(y) ⊃ S(x) ≅ y) and ψ is ∀y (S(x) ≅ S(y) ⊃ x ≅ y), is a derivation for (S2) → ∀x∀y (S²(x) ≅ S²(y) ⊃ x ≅ y). Thus

⊢G_{ΣS} (S2) → ∀x∀y (S²(x) ≅ S²(y) ⊃ x ≅ y).
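Definitions 2.1 to 2.6 translate almost verbatim into a machine representation. The sketch below is ours: multisets are modelled by collections.Counter, the operations of Definition 2.2 are implemented directly, and a sequent is checked for being an instance of Ax or L⊥. Formulas are plain strings and the atomicity test is a naive stand-in, not the book's grammar.

```python
from collections import Counter

def union(a: Counter, b: Counter) -> Counter:        # A1 ∪ A2 of Definition 2.2
    return a + b

def complement(a: Counter, b: Counter) -> Counter:   # A1 \ A2
    return a - b

def submultiset(a: Counter, b: Counter) -> bool:     # A1 ⊆ A2
    return all(b[x] >= n for x, n in a.items())

def is_atomic(phi: str) -> bool:
    # naive stand-in: no connective or quantifier occurs in the string
    return not any(sym in phi for sym in ("⊃", "¬", "∀", "∃", "⊥"))

def is_Ax(gamma: Counter, delta: Counter) -> bool:
    """α, Γ → Δ, α for some atomic α."""
    return any(is_atomic(phi) and delta[phi] > 0 for phi in gamma)

def is_Lbot(gamma: Counter, delta: Counter) -> bool:
    """⊥, Γ → Δ."""
    return gamma["⊥"] > 0

gamma = Counter(["p(x)", "p(x) ⊃ q(x)"])
delta = Counter(["p(x)"])
print(is_Ax(gamma, delta), is_Lbot(gamma, delta))    # True False
```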


Remark 2.8 In the sequel, for simplification, we may condense in one step several applications of R∀ and L∀ in a derivation. Moreover, when applying rules L∀ and R∃ we may omit in the premise the quantified formula. Exercise 2.9 Let be a theory over signature . Recall Definition 1.79. Show that ∼ ∼ • G = ), (E1),  → , ∀x 1 ∀x 2 (x 1 = x 2 ⊃ x 2 = x 1 )  (E3∼ G ∼ ∼ ∼ •  (E3∼ = ), (E1),  → , ∀x 1 ∀x 2 ∀x 3 ((x 1 = x 2 ∧ x 2 = x 3 ) ⊃ x 1 = x 3 ) ∼ respectively. that is, symmetry and transitivity of =, In the sequel, we may denote by ∼



ϕ S= and ϕT= the formulas ∀x1 ∀x2 (x1 ∼ = x2 ⊃ x2 ∼ = x1 ) and ∀x1 ∀x2 ∀x3 ((x1 ∼ = x2 ∧ x2 ∼ = x3 ) ⊃ x1 ∼ = x3 ), respectively. Note 2.10 Given a derivation D, we denote by |D| the length of the sequence D. There are left and right introduction rules also for the constructors introduced by abbreviations. We start by giving an auxiliary result. Proposition 2.11 The sequence 1 → 1 · · · n → n is a derivation if and only if 1 → 1 , ⊥ · · · n → n , ⊥ is a derivation. Proof (⇒) Let 1 → 1 · · · n → n be a derivation for  → . We now show by induction on n that 1 → 1 , ⊥ · · · n → n , ⊥ is a derivation (Base) n = 1. We have two cases: (1) 1 → 1 is Ax. Thus, there is an atomic formula α ∈ 1 ∩ 1 . So α ∈ 1 ∩ (1 ∪ {⊥}). Therefore 1 → 1 , ⊥ is a derivation. (2) 1 → 1 is L⊥. Then ⊥ ∈ 1 . Hence, 1 → 1 , ⊥ is a derivation. (Step) We have four cases. We just present the proof of one of the cases because all are proved similarly. So assume that 1 → 1 was obtained by R⊃. Thus 1


is of the form 1 , ϕ ⊃ ψ and 2 → 2 is of the form ϕ, 1 → 1 , ψ. Then by the induction hypothesis ϕ, 1 → 1 , ψ, ⊥ · · · n → n , ⊥ is a derivation. Hence, 1 → 1 , ⊥ · · · n → n , ⊥ is a derivation. (⇐) Let 1 → 1 , ⊥ · · · n → n , ⊥ be a derivation. We now show by induction on n that 1 → 1 · · · n → n is a derivation. (Base) n = 1. We have two cases: (1) 1 → 1 , ⊥ is Ax. Then, either there is α ∈ A such that α ∈ 1 ∩ 1 and so 1 → 1 is a derivation or ⊥ ∈ 1 and so 1 → 1 is a derivation (justified by L⊥). (2) 1 → 1 , ⊥ is L⊥. Then 1 → 1 is a derivation. (Step) We only consider the case where the first rule used is R∀ applied to the formula ∀x ϕ. Then 1 = 1 , ∀x ϕ for some 1 . Moreover, 2 = 1 , [ϕ]xy for some fresh variable y. Observe that 1 → 1 , [ϕ]xy , ⊥ · · · n → n , ⊥ is a derivation. By induction hypothesis, 1 → 1 , [ϕ]xy · · · n → n is a derivation. Therefore, the thesis follows.  Example 2.12 The rule Right negation (R ¬): ϕ,  →   → , ¬ ϕ can be derived in G, that is, G  ϕ,  →  implies

G   → , ¬ ϕ.

We start by observing that ¬ ϕ can be seen as an abbreviation of ϕ ⊃ ⊥. So assume that G  ϕ,  → . Then, by Proposition 2.11, there is a derivation for ϕ,  → , ⊥. Hence, 1  → , ϕ ⊃ ⊥ R ⊃ 2 2 ϕ,  → , ⊥

is a derivation for  → , ϕ ⊃ ⊥. It is straightforward to deduce the rule Left negation (L¬):  → , ϕ ¬ ϕ,  →  in G. Exercise 2.13 Deduce the rules • Left conjunction (L∧):

• Right conjunction (R∧):

ϕ, ψ,  →  ϕ ∧ ψ,  → 


 → , ϕ → , ψ  → , ϕ ∧ ψ • Left disjunction (L∨):

ϕ,  → ψ,  →  ϕ ∨ ψ,  → 

• Right disjunction (R∨):

 → , ϕ, ψ  → , ϕ ∨ ψ

• Left existential quantification (L∃): [ϕ]xy ,  →  ∃x ϕ,  → 

where y ∈ / fv ( ∪  ∪ {∃x ϕ})

• Right existential quantification (R∃):  → , [ϕ]tx , ∃x ϕ where t  x : ϕ  → , ∃x ϕ for the constructors ∧, ∨ and ∃ using the usual abbreviations. From now on, we use these rules in derivations when needed as we illustrate in the next two examples. Example 2.14 Recall theory S in Example 1.155 and axiom (E1) in Definition 1.79. Let = {(E1)}. The derivation 1 → ∃y S(y) ∼ = S2 (x)

R∃ : 2

2 → ∃y S(y) ∼ = S2 (x), S2 (x) ∼ = S2 (x)

L∀ : 3

3 S (x) ∼ = S2 (x), → ∃y S(y) ∼ = S2 (x), S2 (x) ∼ = S2 (x) 2

shows that

Ax

∼ 2 ∼ 2 G  ¬ 0 = S (x), → ∃y S(y) = S (x).

Example 2.15 Recall theory S in Example 1.155, the axioms (E1), (E2S ), (E3∼ =) ∼ in Definition 1.79 and ϕT= in Exercise 2.9. Let = {(E1), (E2S ), (E3∼ = ), (S41 )}.


The derivation (where we use simplifications referred to in Remark 2.8) 1 → S(0) ∼ = x ⊃¬x ∼ =0 2 S(0) ∼ = x, → ¬ x ∼ =0

R⊃ : 2 R¬ : 3

3 x∼ = 0, S(0) ∼ = x, → ∼ 0 ⊃ S(x) ∼ 4 x= = S(0), x ∼ = 0, S(0) ∼ = x, → 5 x∼ = 0, S(0) ∼ = x, → x ∼ =0 6 S(x) ∼ = S(0), x ∼ = 0, S(0) ∼ = x, → ∼ 7 S(x) ∼ = S(0), x ∼ = 0, S(0) ∼ = x, → ϕT= ∼ 8 ϕ = , S(x) ∼ = S(0), x ∼ = 0, S(0) ∼ = x, →

L∀ : 4 L⊃ : 5, 6 Ax Cut : 7, 8 E x [2.9] L∀ : 9

T

9 (S(x) ∼ = S(0) ∧ S(0) ∼ = x) ⊃ S(x) ∼ = x, S(x) ∼ = S(0), x∼ = 0, S(0) ∼ = x, → ∼ S(0), x ∼ 10 S(x) ∼ = 0, S(0) ∼ = x, → = x, S(x) = 11 S(x) ∼ = S(0), x ∼ = 0, S(0) ∼ = x, → S(x) ∼ = S(0) ∧ S(0) ∼ =x

L⊃ : 10, 11 L∀ : 12 R∧ : 14, 15

12 ¬ S(x) ∼ = x, S(x) ∼ = x, S(x) ∼ = S(0), x ∼ = 0, S(0) ∼ = x, → ∼ ∼ ∼ ∼ 13 S(x) = x, S(x) = S(0), x = 0, S(0) = x, → S(x) ∼ =x 14 S(x) ∼ = S(0), x ∼ = 0, S(0) ∼ = x, → S(x) ∼ = S(0) ∼ ∼ ∼ ∼ 15 S(x) = S(0), x = 0, S(0) = x, → S(0) = x shows that

R ¬ : 13 Ax Ax Ax

G + Cut  → S(0) ∼ = x ⊃¬x ∼ = 0. S

We now establish some relevant technical lemmas used in the sequel, starting with the important Inversion Lemma. Proposition 2.16 Let  ∪  ∪ {ϕ, ψ} ⊆ L  for some signature . Then G + Cut 1. If there is a derivation for   → , ϕ ⊃ ψ then there is a derivation for G + Cut ϕ,  → , ψ 

with at most the same length and with the same cut formulas; G + Cut ϕ ⊃ ψ,  →  then there are derivations for 2. If there is a derivation for  G + Cut ψ,  →  and 

G + Cut   → , ϕ

both with at most the same length as the original one and with the same cut formulas;


G + Cut 3. If there is a derivation for   → , ∀x ϕ then there is a derivation for G + Cut  → , [ϕ]xy 

where y is a fresh variable in  → , ∀x ϕ, with at most the same length and with the same cut formulas. Proof (1) Let D be a derivation for  → , ϕ ⊃ ψ. We prove the result by induction on the length of D. (Base) There are two cases: (a) L⊥ is the justification. Then ⊥ ∈  and so there is a derivation for ϕ,  → , ψ with length 1 justified by the same axiom. (b) Ax is the justification. Observe that ϕ ⊃ ψ is not needed since it is not atomic. Then it is immediate to see that ϕ,  → , ψ is also an axiom. (Step) We have two cases: (a) ϕ ⊃ ψ is not needed in the last step. Thus, ϕ ⊃ ψ is present in every premise of the rule. Hence, using the induction hypothesis on each premise of the rule and then applying the rule, we obtain a derivation with the same length and the same cut formulas for ϕ,  → , ψ. (b) ϕ ⊃ ψ is needed in the last step. Then the subderivation for ϕ,  → , ψ has the desired property. The statements (2) and (3) follow in a similar way.  The next result, the Admissibility of Contraction Lemma, shows that contraction holds in the Gentzen calculus. Proposition 2.17 Let {ϕ} ∪  ∪  ⊆ L  for some signature . Then G + Cut ϕ, ϕ,  →  then there is a derivation for 1. If there is a derivation for  G + Cut ϕ,  →  

with at most the same length and with the same cut formulas; G + Cut  → , ϕ, ϕ then there is a derivation for 2. If there is a derivation for  G + Cut  → , ϕ 

with at most the same length and with the same cut formulas. G + Cut Proof (1) The proof follows by induction on the length of a derivation D for  ϕ, ϕ,  → . (Base) Immediate. (Step) There are two cases: (a) ϕ is not needed in the last step. Thus, ϕ, ϕ is present in every premise of the last rule. Hence, the result follows, by using the induction hypothesis on the premises of the rule and then applying the rule.


(b) ϕ is needed in the last step. We only consider the case where the last rule applied was L∀. Let ϕ be ∀x ψ. Then the premise of the rule is [ψ]tx , ∀x ψ, ∀x ψ,  →  for some t ∈ T and so, by the induction hypothesis, there is a derivation for [ψ]tx , ∀x ψ,  →  with length at most |D| − 1 and with the same cut formulas. Hence, the thesis follows by applying L∀. G + Cut (2) The proof follows by induction on the length of a derivation D for   → , ϕ, ϕ. (Base) Immediate. (Step) There are two cases: (a) ϕ is not needed in the last step. Thus, ϕ, ϕ is present in every premise of the last rule. Hence, the result follows, by using the induction hypothesis on the premises of the rule and then applying the rule. (b) ϕ is needed in the last step. We only consider the case where the last rule applied was R∀. Let ϕ be ∀x ψ. Then the premise of the rule is  → , ∀x ψ, [ψ]xy for some y ∈ X fresh in  → , ∀x ψ, ∀x ψ. Thus, by Proposition 2.16, there is a derivation for  → , [ψ]xy , [ψ]xy  for some y  ∈ X fresh in  → , ∀x ψ, [ψ]xy with length at most |D| − 1 and with the same cut formulas. Therefore, there is a derivation for  → , [ψ]zx , [ψ]zx , where z ∈ X fresh in  → , [ψ]xy , [ψ]xy  , with length at most |D| − 1 and with the same cut formulas (see [15]). Hence, by the induction hypothesis, there is a derivation for  → , [ψ]zx with length at most |D| − 1 and with the same cut formulas. Thus, the thesis follows by applying R∀.  Finally, we state the Admissibility of Weakening Lemma, establishing that weakening if

G + Cut   →  then 

G + Cut  ,   →  , 


holds in the Gentzen calculus (for more details, see [15]). Proposition 2.18 Let ,  be finite multisets over a signature . Then, if 1 → 1 2 → 2 · · · n → n is a derivation then there is a derivation , 1 → 1 ,  , 2 → 2 ,  · · · , n → n ,  using the same rules by the same order over the same formulas modulo a renaming of free variables. Proof We start by renaming the fresh variables used in the given derivation to variables not appearing free in  ∪ . Then, the result follows by a straightforward induction taking into account Remark 1.90.  Note 2.19 Taking into account Proposition 2.18, given a derivation D, we denote by D[ → ] the derivation obtained from D by adding  on the left-hand side and  to the righthand side of each sequent in D modulo an appropriate renaming of free variables in D. The next result states that we can use axioms over arbitrary formulas. Proposition 2.20 Let {ϕ} ∪  ∪  ⊆ L  for some signature . Then G  ϕ,  → , ϕ. Proof The proof follows by induction on ϕ. (Base) ϕ ∈ A . Immediate by Ax. (Step) We only consider the case where ϕ is ∀x ψ. By the induction hypothesis, let D be a derivation for [ψ]xy ,  → , [ψ]xy where y ∈ X is fresh in ∀x ψ,  → , ∀x ψ. Then, 1 ∀x ψ,  → , ∀x ψ

R∀ : 2

, [ψ]xy

L∀ : 3

2 ∀x ψ,  →

3 [ψ]xy , ∀x ψ,  → , [ψ]xy D[∀x ψ →] is a derivation for ∀x ψ,  → , ∀x ψ.




From now on, when the length of derivation is not at stake, we use axioms over arbitrary formulas and justify this step also by Ax. Example 2.21 Recall theory S introduced in Example 1.155 and Definition 1.79. Let = {(S1), (S3)} and ϕ be ∃y (S(x) ∼ = x ⊃ S(y) ∼ = S2 (x)). The derivation D: 1 → ϕ Cut : 2, 6 2 2 ∃y S(y) ∼ S (x), → ϕ = D1 6 → ϕ, ∃y S(y) ∼ = S2 (x) D2 where D1 is the derivation: 1 ∃y S(y) ∼ = S2 (x), → ϕ

L∃ : 2

2 S(y) ∼ R∃ : 3 = S2 (x), → ϕ 3 S(y) ∼ = S2 (x), → ϕ, S(x) ∼ = x ⊃ S(y) ∼ = S2 (x) R⊃ : 4 4 S(x) ∼ = x, S(y) ∼ = S2 (x), → ϕ, S(y) ∼ = S2 (x)

Ax

and D2 is the derivation: 1 → ϕ, ∃y S(y) ∼ = S2 (x)

L∀ : 2

2 ¬ S2 (x) ∼ L∀ : 3 = 0, → ϕ, ∃y S(y) ∼ = S2 (x) 2 2 2 ∼ ∼ ∼ 3 (¬ S (x) = 0) ⊃ ∃y S(y) = S (x), ¬ S (x) = 0, → ϕ, ∃y S(y) ∼ = S2 (x) L⊃ : 4, 5 4 ∃y S(y) ∼ = S2 (x), ¬ S2 (x) ∼ = 0, → ϕ, ∃y S(y) ∼ = S2 (x) 5 ¬ S2 (x) ∼ = 0, → ϕ, ∃y S(y) ∼ = S2 (x), ¬ S2 (x) ∼ =0 shows that

Ax Ax

+ Cut G → ∃y (S(x) ∼ = x ⊃ S(y) ∼ = S2 (x)). S

The following exercise is a particular case of the Metatheorem of Substitution of Bound Variable (see [15]) for sequents. Exercise 2.22 Let s and s  be sequents over a signature  such that s  is obtained from s by replacing all occurrence of some bound variables by distinct variables not occurring either free or bound in s. Show that


G  s if and only if

 G  s .

Finally, we say how we can use the Gentzen calculus for reasoning with formulas and sets of formulas, namely, for reasoning about theories. Definition 2.23 Let  ∪ {ϕ} ⊆ L  . Then, ϕ is derivable from , denoted by  G  ϕ whenever there is a finite set  ⊆ cL ∩  such that G   → ϕ. The notion of derivation of formulas can be extended to G + Cut as expected. Example 2.24 Recall Example 2.7. Then 2 ∼ 2 ∼ AxS G S ∀x∀y (S (x) = S (y) ⊃ x = y).

Moreover, taking into account Example 2.15, we have ∼ ∼ AxS G S S(0) = x ⊃ ¬ x = 0. The following exercise states that the Metatheorem of Deduction holds as expected for sets of sentences as hypotheses. Exercise 2.25 Let  ⊆ cL be a finite set,  ⊆ L  and ϕ ∈ L  . Show that G + Cut G + Cut ϕ if and only if   ,  



 ⊃ ϕ.

Exercise 2.26 Recall the theory RCOF over the signature RCOF . Show that G + Cut • AxRCOF  ∀x x < 0 ⊃ 0 < −x; RCOF

G + Cut ∀x∀y ((0 < x ∧ 0 < y) ⊃ 0 < x × y); • AxRCOF  RCOF G + Cut ∀x ((¬ x ∼ • AxRCOF = 0) ⊃ 0 < x × x); RCOF

G + Cut 0 < 1. • AxRCOF  RCOF

2.2 Soundness of the Gentzen Calculus In this section, we prove that the Gentzen calculus is sound. For this purpose, we first say when a sequent is contextually satisfiable and valid. Definition 2.27 Let I be an interpretation structure for a signature  and ρ an assignment over I . We say that I and ρ contextually satisfy a sequent  → , written as Iρ   → ,


if there is δ ∈  such that Iρ  δ whenever Iρ  . Example 2.28 Recall interpretation structure IN introduced in Example 1.52. Then, ∼ ∼ IN |S , ρ S (E1), (E2S ), (E3∼ = ), (S41 ) → S(0) = x ⊃ ¬ x = 0 where ρ is an assignment over IN such that ρ(x) = 1. Definition 2.29 Let I be an interpretation structure for a signature . We say that I satisfies a sequent  → , written as I   → , whenever Iρ   →  for every assignment ρ over I . Example 2.30 Recall Example 2.28. Then ∼ ∼ IN |S S (E1), (E2S ), (E3∼ = ), (S41 ) → S(0) = x ⊃ ¬ x = 0. Definition 2.31 The sequent  →  is said to be valid, denoted by   →  if I   →  for every interpretation structure I . Example 2.32 Recall Example 2.30. Then ∼ ∼ S (E1), (E2S ), (E3∼ = ), (S41 ) → S(0) = x ⊃ ¬ x = 0. Exercise 2.33 Refute the assertion S S(x) ∼ = y → ∀x∀y S(x) ∼ = y if and only if S(x) ∼ = y S ∀x∀y S(x) ∼ = y. Exercise 2.34 Let s and s  be sequents over a signature  such that s  is obtained from s by replacing all occurrences of some bound variables by distinct variables not occurring either free or bound in s. Show that  s if and only if

 s  .

Proposition 2.35 Let  be a finite set of sentences and ϕ a formula over . Then   → ϕ if and only if   ϕ. Proof (⇒) Assume that   → ϕ. Let I be an interpretation structure such that I  . Let ρ be an assignment over I . Observe that Iρ   → ϕ.


So Iρ  ϕ. (⇐) Assume that   ϕ. Let I be an interpretation structure and ρ an assignment over I . Assume that Iρ  . Since the formulas in  are sentences then I  .  Therefore, I  ϕ. Hence Iρ  ϕ. Toward proving soundness of the calculus, we start by saying when an inference rule is sound. Definition 2.36 The rule 1 → 1 · · · n → n → is said to be sound if for every interpretation structure I for , I   →  whenever I  i → i for i = 1, . . . , n. Example 2.37 Recall Definition 2.5. We show that rule (L⊃) is sound. Let I be an interpretation structure. Assume that (†) I   → , ϕ and (‡) I  ψ,  → . Let ρ an assignment over I . Suppose that Iρ  {ϕ ⊃ ψ} ∪ . We have two cases: (1) Assume that Iρ  ψ. Then Iρ  δ for some δ ∈  by (‡). (2) Assume that Iρ  ϕ. Then Iρ  δ for some δ ∈  by (†). Hence, Iρ  ϕ ⊃ ψ,  → . We also prove soundness of (R∀). Let I be an interpretation structure and ρ an assignment over I . Suppose that Iρ   and I   → , [ϕ]xy . We have three cases: (a) Iρ  δ for some δ ∈ . Then I   → , [ϕ]xy . (b) Iρ  δ for every δ ∈  and y = x. Let σ ≡x ρ and ρ  ≡ y ρ such that ρ  (y) = σ (x). Since, y ∈ / fv ( ∪ ) then by Proposition 1.85, Iρ  ψ iff Iρ   ψ for every ψ ∈  ∪ . Therefore, Iρ    and Iρ   δ for every δ ∈ . On the other hand, Iρ    → , [ϕ]xy and so Iρ   [ϕ]xy . By Proposition 1.88 Iρ   [ϕ]xy iff I σ  ϕ. Thus, I σ  ϕ. (c) Iρ  δ for every δ ∈  and y is x. Similar to (b).
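The semantic argument of Example 2.37 can be replayed mechanically in the propositional case. The sketch below is ours: it encodes satisfaction of a sequent by a valuation exactly as in Definitions 2.27 to 2.31 and checks one concrete instance of L⊃ against all valuations. It is a sanity check of a single instance, not a proof of the schematic rule.

```python
from itertools import product

ATOMS = ["p", "q", "r"]

def satisfies(v, gamma, delta):
    # v satisfies Γ → Δ: some formula of Δ is true whenever every formula of Γ is
    return any(d(v) for d in delta) if all(g(v) for g in gamma) else True

def rule_is_sound(premises, conclusion):
    """premises: list of sequents, conclusion: a sequent; a sequent is a pair (Γ, Δ)."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(satisfies(v, *s) for s in premises) and not satisfies(v, *conclusion):
            return False
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
r = lambda v: v["r"]
imp = lambda a, b: (lambda v: (not a(v)) or b(v))

# an instance of L⊃: from Γ → Δ, ϕ and ψ, Γ → Δ infer ϕ ⊃ ψ, Γ → Δ
gamma, delta, phi, psi = [r], [q], p, q
print(rule_is_sound([(gamma, delta + [phi]), ([psi] + gamma, delta)],
                    ([imp(phi, psi)] + gamma, delta)))   # True
```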


Exercise 2.38 Show the soundness of the other rules in G. Proposition 2.39 The calculus G is sound, that is, if G   →  then   → . Proof The proof follows by a straightforward induction on a derivation for G   → .  A similar result can be obtained for formulas. Proposition 2.40 Let  ∪ {ϕ} ⊆ L  where  is a signature. Then, if  G  ϕ then   ϕ. G Proof Assume that  G  ϕ. Then, there is a finite set  ⊆ cL ∩  such that   → ϕ. Thus, by Proposition 2.39,   → ϕ. Hence, by Proposition 2.35,   ϕ  and so it is straightforward to conclude that   ϕ.

Example 2.41 Recall Example 2.24 and Example 1.155. Then, by Proposition 2.40, S S ∀x∀y (S2 (x) ∼ = S2 (y) ⊃ x ∼ = y) and

S S S(0) ∼ = x ⊃¬x ∼ = 0.

2.3 Completeness of the Gentzen Calculus In this section, we show that every valid sequent is also derivable. That is, we show the weak completeness of the Gentzen calculus. For that we use Hintikka sets (see [5, 7]). Note 2.42 Given a set of formulas  ⊆ L  , we denote by  the sub-signature of  composed of the function and predicate symbols occurring in . Moreover, we denote by T (Y ) the set of terms in T that only involve variables in Y , where Y ⊆ X . Furthermore, we denote by fv () the set {fv (ψ) : ψ ∈ }. Definition 2.43 A Hintikka set  is a set of formulas over a signature  such that


⊥∈ / ; / ; if ϕ ∈  ∩ A then ¬ ϕ ∈ if ¬ ¬ ϕ ∈  then ϕ ∈ ; if ϕ1 ⊃ ϕ2 ∈  then either ¬ ϕ1 ∈  or ϕ2 ∈ ; if ¬(ϕ1 ⊃ ϕ2 ) ∈  then ϕ1 ∈  and ¬ ϕ2 ∈ ; if ∀xϕ ∈  then [ϕ]tx ∈  for every t ∈ T (fv ()); if ¬(∀xϕ) ∈  then ¬[ϕ]tx ∈  for some t ∈ T (fv ()).
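Read operationally, the closure conditions of Definition 2.43 can be checked for a finite set of formulas against a finite stock of terms. The sketch below is ours: formulas are nested tuples over a single unary predicate p, substitution is naive, and the term universe is passed in explicitly, so it only approximates the definition, whose term clauses range over all terms of the induced signature.

```python
def subst(phi, x, t):
    # substitute term t for variable x in this toy syntax
    if isinstance(phi, str):                 # "⊥" or a variable-free atom
        return phi
    if phi[0] == "p":
        return ("p", t) if phi[1] == x else phi
    if phi[0] == "¬":
        return ("¬", subst(phi[1], x, t))
    if phi[0] == "⊃":
        return ("⊃", subst(phi[1], x, t), subst(phi[2], x, t))
    if phi[0] == "∀":
        return phi if phi[1] == x else ("∀", phi[1], subst(phi[2], x, t))

def atomic(phi):
    return isinstance(phi, str) or phi[0] == "p"

def is_hintikka(H, terms):
    if "⊥" in H:
        return False
    for f in H:
        if atomic(f) and ("¬", f) in H:
            return False
        if not atomic(f) and f[0] == "⊃" and not (("¬", f[1]) in H or f[2] in H):
            return False
        if not atomic(f) and f[0] == "∀" and not all(subst(f[2], f[1], t) in H for t in terms):
            return False
        if not atomic(f) and f[0] == "¬" and not atomic(f[1]):
            g = f[1]
            if g[0] == "¬" and g[1] not in H:
                return False
            if g[0] == "⊃" and not (g[1] in H and ("¬", g[2]) in H):
                return False
            if g[0] == "∀" and not any(("¬", subst(g[2], g[1], t)) in H for t in terms):
                return False
    return True

H = {("∀", "x", ("p", "x")), ("p", "c"), ("p", "d")}
print(is_hintikka(H, terms=["c", "d"]))      # True
```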

Example 2.44 Let  be a signature with F1 = { f } and P1 = { p, q}. Then, the following set  = {¬((∃x p(x)) ⊃ ∀x p( f (x))), ∃x p(x), p(y1 ), ¬ ∀x p( f (x)), ¬ p( f (y2 ))} is an Hintikka set. Observe that  is the signature with F1 = { f } and P1 = { p} and so is different from . Proposition 2.45 Every Hintikka set  is contextually satisfiable by an interpretation structure for  . Proof Let  be a Hintikka set. Consider the interpretation structure I over  defined as follows: • the domain is T (fv ()) when T (fv ()) = ∅, otherwise is {z}; • f I (t1 , . . . , tn ) = f (t1 , . . . , tn ) for every t1 , . . . , tn∈ T (fv ()) and n-ary function symbol f in  ; 1 if either p(t1 , . . . , tn ) ∈  or ¬ p(t1 , . . . , tn ) ∈ / • p I (t1 , . . . , tn ) = 0 otherwise for every t1 , . . . , tn ∈ T (fv ()) and n-ary predicate symbol p in  . Observe that p I is well defined as a function. Consider also an assignment ρ over fv () such that ρ(x) = x whenever x ∈ fv (). Observe that [[t]]Iρ = t for every t ∈ T (fv ()). We show by induction on ψ that If ψ ∈  then I ρ  ψ and If ¬ ψ ∈  then I ρ  ¬ ψ. (Base) (1) ψ is ⊥. (a) The first implication is true since ⊥ ∈ / .


(b) Assume that ¬ ⊥ ∈ . Then I ρ  ¬ ⊥ since I ρ  ⊥. (2) ψ is p(t1 , . . . , tn ). (a) Assume p(t1 , . . . , tn ) ∈ . Then p I (t1 , . . . , tn ) = 1 and so I ρ  ψ. (b) Assume that ¬ p(t1 , . . . , tn ) ∈ . Then p I (t1 , . . . , tn ) = 0. Hence I ρ  ψ and so I ρ  ¬ ψ. (Step) (1) ψ is ¬ δ. (a) Assume ¬ δ ∈ . This case follows immediately by the induction hypothesis. (b) Assume ¬ ¬ δ ∈ . Then δ ∈ ; hence, by the induction hypothesis, I ρ  δ and so I ρ  ¬ ¬ δ. (2) ψ is δ1 ⊃ δ2 . (a) Assume δ1 ⊃ δ2 ∈ . There are two cases to consider: (i) δ2 ∈ . By the induction hypothesis I ρ  δ2 and so I ρ  δ1 ⊃ δ2 ; (ii) ¬ δ1 ∈ . By the induction hypothesis I ρ  ¬ δ1 and so I ρ  δ1 ⊃ δ2 . (b) Assume ¬(δ1 ⊃ δ2 ) ∈ . Then δ1 ∈  and ¬ δ2 ∈  and so, by the induction hypothesis, I ρ  δ1 and I ρ  ¬ δ2 . Hence I ρ  , ¬(δ1 ⊃ δ2 ). (3) ψ is ∀xδ. (a) Assume ∀xδ ∈ . Then [δ]tx ∈  for every t ∈ T (fv ()). By the induction hypothesis, I ρ  [δ]tx for every t ∈ T (fv ()). Let ρ  be an assignment x-equivalent to ρ. Observe that ρ  (x) ∈ T (fv ()). By the Lemma of Substitution (see Proposition 1.88), I ρ  [δ]ρx  (x) iff I ρ   δ. Since I ρ  [δ]ρx  (x) then I ρ   δ. (b) Assume ¬ ∀xδ ∈ . Then ¬[δ]tx ∈  for some t ∈ T (fv ()). By the induction hypothesis, I ρ  ¬[δ]tx for some t ∈ T (fv ()). Let ρ  be the assignment x-equivalent to ρ such that ρ  (x) = t. Then, I ρ   ¬ δ using the Lemma of Substitution (see Proposi tion 1.88). So I ρ  ¬ ∀xδ. Example 2.46 Recall the Hintikka set  = {¬((∃x p(x)) ⊃ ∀x p( f (x))), ∃x p(x), p(y1 ), ¬ ∀x p( f (x)), ¬ p( f (y2 ))}


presented in Example 2.44. Note that fv () = {y1 , y2 } and T ({y1 , y2 }) = {y1 , f (y1 ), f 2 (y1 ), . . .} ∪ {y2 , f (y2 ), f 2 (y2 ), . . .}. Then, the interpretation structure I for  induced by the Hintikka set  is as follows: • the domain is T ({y1 , y2 }); • f I (t) = f (t) for all t ∈ T ({y1 , y2 }); 0 if t is f (y2 ) • p I (t) = 1 for t ∈ T ({y1 , y2 }) \ { f (y2 )}. Then, by Proposition 2.45, I ρ  ¬((∃x p(x)) ⊃ (∀x p( f (x)))) where ρ is an assignment over I such that ρ(yi ) = yi for i = 1, 2. Proposition 2.47 Every Hintikka set  is contextually satisfiable by an interpretation structure for . Proof By Proposition 2.45, I is a model of  with domain T (fv ()) if T (fv ()) = ∅ and with domain {z} otherwise, for  . Let I be an interpretation structure for  with the same domain as I and interpreting the function and the predicate symbols in  as I . Then I   iff I  , by Exercise 1.136,  since I | = I . Example 2.48 Recall the interpretation structure I for  presented in Example 2.46. Following the proof of Proposition 2.47, let I be the interpretation structure for  as follows: • I | = I ; • q I (d) = 1 for every d in the domain. Then, Iρ  ¬((∃x p(x)) ⊃ (∀x p( f (x)))) where ρ is an assignment over I such that ρ(yi ) = yi for i = 1, 2. The main objective now is to show that if a sequent is not derivable then there is an interpretation structure that falsifies it. For this purpose, we must introduce the concept of (deductive) expansion of a sequent.


Definition 2.49 Let  →  be a sequent over . An expansion of  →  is a sequence of sequents: 1 → 1 · · · such that • 1 → 1 is  → ; • for each i, – either i → i has no descendants; – or i → i is the conclusion of an application of a rule and each premise is a sequent  j →  j in the sequence with j > i; and – if i = 1 then i → i is a premise of a rule with conclusion  j →  j with j < i. Example 2.50 Recall signature  presented in Example 2.44. Then, 1 2 3 4

→ (∃x p(x)) ⊃ ∀x p( f (x)) R⊃ : 2 ∃x p(x) → ∀x p( f (x)) L∃ : 3 p(y1 ) → ∀x p( f (x)) R∀ : 4 p(y1 ) → p( f (y2 ))

is an expansion of → (∃x p(x)) ⊃ ∀x p( f (x)). Observe that every derivation is an expansion but not vice versa. From an expansion that is not a derivation, we will define an interpretation structure that falsifies the first sequent in the expansion. We need first to introduce the concept of branch. Definition 2.51 A branch of the expansion 1 → 1 · · · starting at sequent i → i is a subsequence i1 → i1 · · · of the expansion such that • i1 → i1 is i → i ; • for each j > 1, i j → i j is a premise of the rule used in the expansion to justify i j−1 → i j−1 (the conclusion of the rule); • for each j ≥ 1 if i j → i j is the conclusion of a rule in the expansion then there k > j such ik → ik is a premise of the rule in the expansion. Moreover, the depth of a finite branch is the number of sequents in the branch minus 1. Example 2.52 Recall the expansion in Example 2.50. The expansion has only one branch coinciding with the expansion itself and with depth 3. The interpretation structure falsifying the initial sequent of an expansion that is not a derivation is induced by the Hintikka set associated with a particular branch of the expansion. In order to achieve this goal, we need to assume certain properties of expansions.
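The expansions of Definition 2.49 are what a backward proof search produces. As a toy counterpart, ours and restricted to the ⊥/⊃ fragment with sets instead of multisets (so it sidesteps the quantifier and Gödelization issues handled later in Proposition 2.56), the following sketch expands a sequent by applying R⊃ and L⊃ backwards and reports whether every branch closes with an axiom.

```python
def is_axiom(gamma, delta):
    # Ax or L⊥ at this leaf
    return "⊥" in gamma or any(isinstance(f, str) and f in delta for f in gamma)

def derivable(gamma, delta):
    """Backward expansion for sequents of the ⊥/⊃ fragment.
    Formulas are atoms (strings) or implications ("⊃", a, b)."""
    if is_axiom(gamma, delta):
        return True
    for f in delta:                                   # R⊃ applied backwards
        if not isinstance(f, str):
            _, a, b = f
            return derivable(gamma | {a}, (delta - {f}) | {b})
    for f in gamma:                                   # L⊃ applied backwards
        if not isinstance(f, str):
            _, a, b = f
            rest = gamma - {f}
            return derivable(rest, delta | {a}) and derivable(rest | {b}, delta)
    return False                                      # only atoms left: open leaf

peirce = ("⊃", ("⊃", ("⊃", "p", "q"), "p"), "p")      # ((p ⊃ q) ⊃ p) ⊃ p
print(derivable(frozenset(), frozenset({peirce})))    # True
```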


Definition 2.53 We say that a branch of an expansion is open whenever either it is infinite or the leaf sequent is not an axiom. Definition 2.54 A branch b = 1 → 1 · · · of the expansion 1 → 1 · · · where 1 → 1 is 1 → 1 is exhausted whenever • there is no sequent in b which is an axiom; • for each non-atomic formula ϕ in a sequent i → i of b if either ϕ ∈ i and is not of the form ∀x ψ or ϕ ∈ i then there is a sequent  j → j with j ≥ i which is the conclusion of the application of a rule to ϕ; • for each ∀x ϕ in the left-hand side of a sequent i → i , [ϕ]tx occurs on the lefthand side of a sequent in b, for every t ∈ Tb (fv (b)) where b is the sub-signature of  including all the function and predicate symbols that occur in b and fv (b) is the set of variables that occur free in formulas in b. Observe that an exhausted branch is open. Example 2.55 The expansion presented in Example 2.50 has a unique branch which is exhausted. Proposition 2.56 If a sequent s with disjoint sets of free and bound variables is not a theorem in G then there is an expansion of s with an exhausted branch with disjoint sets of free and bound variables. Proof Assume that sequent s is not a theorem and consider a fixed Gödelization of the terms in T (X ) (see [15]). Let ⎧ ⎪ ⎨fv (s) if fv (s) = ∅ Y0 = {x} if fv (s) = ∅, Ts (∅) = ∅ ⎪ ⎩ ∅ otherwise where x does not occur in s. Consider the following expansion e = 1 → 1 · · · of s inductively defined as follows: • 1 → 1 is s; • for each i ≥ 1, where Yi is the set of fresh variables in the branch until step i, – If the sequent i → i is either an axiom or there are no rules to apply or the only rule that can be applied is (L∀) and for each ∀x ϕ ∈ i and t ∈ T1 ∪1 (Y0 ∪ Yi ), [ϕ]tx is in  j for some j ≤ i then the branch ends at this step. – Otherwise, if there is a rule r that can be used other than (L∀) then apply r putting the premise(s) after the last sequent not yet expanded in this step of the construction. If r is (R∀) then the fresh variable is distinct also from all bound variables in i → i . – Otherwise if L∀ is the only rule that can be applied, let ∀x1 ϕ1 , . . . , ∀xk ϕk be all the formulas in i such that there is t j ∈ T1 ∪1 (Y0 ∪ Yi ) not used before in the branch to instantiate ϕ j for each j = 1, . . . , k. Then, for each j pick the


first such term t j (according to the Gödelization). Let i  be the index of the last sequent not yet expanded in this step of the construction. Set i  + j to be x [ϕ]tx11 , . . . , [ϕ]t jj , i and i  + j to be i for each j = 1, . . . , k. Since s is not a theorem then the expansion e has an open branch b. By construction, it is immediate to see that b is exhausted and has disjoint sets of free and bound variables.  We provide an example of the construction described in the proof of Proposition 2.56. Example 2.57 Let  be a signature such that f ∈ F1 and p ∈ P1 and s the sequent ∀y p( f (y)) → p(x). Consider the Gödelization g : T (X ) → N such that g( f n (x)) < g( f n+1 (x)) for n ∈ N. Then 1 2 3 .. .

∀y p( f (y)) → p(x) L∀ : 2 p( f (x)), ∀y p( f (y)) → p(x) L∀ : 3 p( f ( f (x))), p( f (x)), ∀y p( f (y)) → p(x) L∀ : 4 .. .. . .

is an expansion of s following the construction described in the proof of Proposition 2.56. Observe that the expansion has a unique branch which is exhausted. Proposition 2.58 Assume that there is an expansion of  →  with an exhausted branch where the sets of free and bound variables occurring in the branch are disjoint. Then there is an interpretation structure I such that I   → . Proof Let b be the exhausted branch and  the set such that ϕ ∈  whenever ϕ occurs on the left-hand side of a sequent in b and ¬ ϕ ∈  whenever ϕ occurs on the right-hand side of a sequent in b. We now show that  is an Hintikka set. Indeed, (1) ⊥ ∈ / . Assume by contradiction that ⊥ ∈ . Then, ⊥ belongs to the left-hand side of a sequent s in the branch contradicting the fact that b is exhausted since s is an instance of (L⊥). (2) if ϕ ∈  ∩ A then ¬ ϕ ∈ / . Assume that ϕ ∈  ∩ A . Then ϕ occurs on the left-hand side of a sequent in b since ϕ is atomic. By contradiction, assume that ¬ ϕ ∈ . There are two cases: (a) ¬ ϕ occurs on the left-hand side of a sequent s in b. Then, at some point the rule R ¬ had to be applied (since b is exhausted). Hence ϕ occurs at the right-hand side of a sequent s  in b and so either s or s  is an axiom contradicting the fact that b is exhausted; (b) ϕ occurs at the right-hand side of a sequent s  in b. Then either s or s  is an axiom contradicting the fact that b is exhausted.


(3) if ¬ ¬ ϕ ∈  then ϕ ∈ . Assume that ¬ ¬ ϕ ∈ . There are two cases to consider: (a) ¬ ¬ ϕ occurs at the left-hand side of a sequent in b. Since b is exhausted, at some point, rules R ¬ and R ¬ were applied and so ϕ occurs at the left-hand side of some sequent; (b) ¬ ϕ occurs at the right-hand side of a sequent in b. Since b is exhausted, at some point the rule R ¬ was applied and so ϕ appears at the left-hand side of a sequent. (4) if ϕ1 ⊃ ϕ2 ∈  then either ϕ2 ∈  or ¬ ϕ1 ∈ . Assume that ϕ1 ⊃ ϕ2 ∈ . Then ϕ1 ⊃ ϕ2 appears at the left-hand side of a sequent in b. Since b is exhausted, at some point the rule L⊃ was applied. Hence, there are two paths originating from that sequent. One having the formula ϕ2 at the left-hand side of a sequent and the other having ϕ1 at the right-hand side of a sequent. So, either ϕ2 ∈  or ¬ ϕ1 ∈ . (5) if ¬(ϕ1 ⊃ ϕ2 ) ∈  then ϕ1 ∈  and ¬ ϕ2 ∈ . Assume that ¬(ϕ1 ⊃ ϕ2 ) ∈ . There are two cases: (a) ¬(ϕ1 ⊃ ϕ2 ) occurs on the lefthand side of a sequent in b. Since b is exhausted, then the rules R ¬ and R⊃ were applied. Hence, ϕ1 appears on the left-hand side of a sequent and ϕ2 appears on the right-hand side of the same sequent. Therefore, ϕ1 ∈  and ¬ ϕ2 ∈ ; (b) ϕ1 ⊃ ϕ2 occurs on the right-hand side of a sequent. This case is similar to (a). (6) if ∀xϕ ∈  then [ϕ]tx ∈  for every t ∈ T (fv ()). Assume that ∀xϕ ∈ . Then ∀xϕ occurs on the left-hand side of a sequent in b. Since b is exhausted, then [ϕ]tx appears on the left-hand side of a sequent in b for every t ∈ T (fv ()). (7) if ¬(∀xϕ) ∈  then ¬[ϕ]tx ∈  for some t ∈ T (fv ()). Assume that ¬(∀xϕ) ∈ . There are two cases: (a) ∀xϕ belongs to the right-hand side of a sequent in q. Since b is exhausted the rule R∀ was applied with a fresh variable y and [ϕ]xy appears on the right-hand side of a sequent in b. Hence ¬[ϕ]xy ∈ . Finally, y ∈ T (fv ()) since y ∈ fv (); (b) ¬(∀xϕ) is at the left-hand side of a sequent in q. This case follows straightforwardly using (a). By Proposition 2.47, there is an interpretation structure with domain T (fv ()) or {z} over  which falsifies  → .  Example 2.59 Recall the expansion of → (∃x p(x)) ⊃ ∀x p( f (x)) presented in Example 2.50. Observe that the unique branch of the expansion is exhausted and has disjoint sets of free and bound variables. So, by Proposition 2.58, I  → (∃x p(x)) ⊃ ∀x p( f (x)) where I is the interpretation structure defined in Example 2.48.


Example 2.60 Recall the expansion presented in Example 2.57. Observe that the unique branch of the expansion is exhausted and has disjoint sets of free and bound variables. Following the proof of Proposition 2.58, let  = {∀y p( f (y)), ¬ p(x)} ∪ { p( f (x)), p( f 2 (x)), p( f 3 (x)), . . .} be the Hintikka set induced by the branch. Note that fv () = {x}. Then, by Proposition 2.58, Iρ  → ∀y p( f (y)) → p(x) where, by Proposition 2.47, I is the interpretation structure such that • the domain is T ({x}) = {x, f (x), f 2 (x), . . .}; • f I (t) = f (t) for all t ∈ T ({x}); • p I (x) = 0 and p I (t) = 1 for all t ∈ T ({x}) \ {x}; and ρ is an assignment such that ρ(x) = x. Proposition 2.61 If a sequent is valid then it is a theorem in G. Proof Assume that the sequent s is not a theorem in G. Let s  be the sequent obtained from s by replacing the bound variables by distinct variables not occurring either free or bound in s. Then, by Exercise 2.22, s  is not a theorem. Thus, by Proposition 2.56, s  has an expansion with an exhausted branch with disjoint sets of free and bound variables. So, by Proposition 2.58, there is an interpretation structure I such that  I  s  . That is,  s  . Hence, by Exercise 2.34,  s. We now show the Restricted Completeness Theorem for Formulas. Proposition 2.62 Let  be a signature,  ⊆ cL and ϕ ∈ L  . Then, if   ϕ then  G  ϕ. Proof Assume that   ϕ. Then, by Exercise 1.78, there is a finite set  ⊆  such that   ϕ. Hence, by Proposition 2.35,   → ϕ. Thus, by Proposition 2.61, G  G   → ϕ. So, by definition,   ϕ. Example 2.63 Recall that ∼ ∼ S (E1), (E2S ), (E3∼ = ), (S41 ) → S(0) = x ⊃ ¬ x = 0 by Example 2.32. Hence, by Proposition 2.35, ∼ ∼ (E1), (E2S ), (E3∼ = ), (S41 ) S S(0) = x ⊃ ¬ x = 0 and so by Exercise 1.78, AxS S S(0) ∼ = x ⊃¬x ∼ = 0.


Then, by Proposition 2.62, ∼ ∼ AxS G S S(0) = x ⊃ ¬ x = 0 and so

∼ ∼ S G S S(0) = x ⊃ ¬ x = 0.

Relying on the soundness and completeness for sentences of the Gentzen calculus presented, we conclude the section by establishing that an axiomatizable theory is also closed for derivation. Proposition 2.64 Let be a theory and an axiomatization of . Then,  = =   . G

2.4 Cut Elimination The cut rule although very useful is not strictly necessary in the sense that we can prove the same set of theorems without using that rule. That is, cut is eliminable. In this section, after introducing some relevant technical notions, we prove this fact, that is, the well-known Gentzen’s Hauptsatz (see [8]). Definition 2.65 Let D be a derivation in G + Cut where Cut was applied in step i from premises at steps j and k. The level of the cut application at i is the sum of the maximum depth of a branch starting at the premise in j with the maximum depth of a branch starting at the premise in k. Example 2.66 Recall the derivation D in Example 2.21. Observe that the maximum depth of a branch starting at the sequent in position 2 is 3. Moreover, the maximum depth of a branch starting at the sequent in position 6 is also 3. So the level of the cut application at position 1 of D is 6. Definition 2.67 The depth of a formula ϕ over a signature , denoted by |ϕ|, is inductively defined as follows: • |ϕ| = 0 when ϕ ∈ A ; • |ψ1 ⊃ ψ2 | = max(|ψ1 |, |ψ2 |) + 1; • |∀x ψ| = |ψ| + 1. Example 2.68 Recall Example 1.4. Then |∃y S(y) ∼ = S2 (x)| = 3 taking into account Note 1.21.
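Definition 2.67 is a straightforward structural recursion. In the sketch below (ours), formulas are nested tuples; unfolding the usual abbreviations, ¬ψ for ψ ⊃ ⊥ and ∃ for ¬∀¬, reproduces the value |∃y S(y) ≅ S²(x)| = 3 of Example 2.68.

```python
def depth(phi) -> int:
    if isinstance(phi, str):                     # atomic formulas, including ⊥
        return 0
    op = phi[0]
    if op == "⊃":
        return max(depth(phi[1]), depth(phi[2])) + 1
    if op == "∀":                                # ("∀", variable, body)
        return depth(phi[2]) + 1
    raise ValueError(f"unknown constructor {op!r}")

# ∃y S(y) ≅ S²(x) unfolded through the abbreviations ¬ψ := ψ ⊃ ⊥ and ∃x ψ := ¬∀x ¬ψ
exists_unfolded = ("⊃", ("∀", "y", ("⊃", "S(y) ≅ S²(x)", "⊥")), "⊥")
print(depth(exists_unfolded))                    # 3
```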


Definition 2.69 Let D be a derivation in G + Cut where Cut was applied in step i with cut formula ϕ. The rank of the cut application at i is |ϕ| + 1. Example 2.70 Recall Example 2.68 and the derivation in Example 2.21. The rank of the cut in step 1 is 4. Definition 2.71 Let D be a derivation in G + Cut. The cutrank of D, denoted by cr(D), is the maximum of the ranks of the cuts occurring in D (the cutrank of a derivation with no cut applications is 0). Example 2.72 Recall Example 2.70 and the derivation D in Example 2.21. The cutrank of D is 4. The following result shows that the cutrank of a derivation with cuts can always be strictly reduced. + Cut  →  where  →  is Proposition 2.73 Given a derivation D for G  obtained by a cut from derivations D1 and D2 with a lower cutrank than D then G + Cut  →  with a lower cutrank than D. there is a derivation D• for 

Proof Let D be

1  →  Cut : 2, n 2  → , ϕ r D1 n ϕ,  →  r D2

The proof follows by induction on the level of the cut at step 1. (Base) The level is 0. Then we can consider two cases: (1) The cut formula is not needed either in r or in r  . Suppose with no loss of generality that that the cut formula is not needed in r . Then 1→r + Cut is a derivation for G  →  with a lower cutrank than D; 

(2) The cut formula is needed in r and in r  . There are two cases: (a) r and r  are Ax. Then ϕ ∈  since r is Ax. Moreover, ϕ ∈  since r  is Ax. So 1  →  Ax + Cut is a derivation for G  →  with a lower cutrank than D.   (b) r is Ax and r is L⊥. Then ϕ is ⊥. On the other hand, ϕ ∈  since r is Ax. Thus


1  →  L⊥ + Cut is a derivation for G  →  with a lower cutrank than D. 

(Step) There are four cases: (1) The length of D1 is 1. There are two subcases. (a) ϕ is not needed in r . Thus 1→r + Cut is a derivation for G  →  with a lower cutrank than D.  (b) ϕ is needed in r . Then r is Ax. Hence ϕ ∈ . Moreover, ϕ ∈ A . Then, by Proposition 2.17, there is a derivation for  →  with the same structure as D2 and so with a lower cutrank than D. (2) The length of D2 is 1. We only consider the case where ϕ is needed in r  and r  is L⊥. Then ϕ is ⊥. Thus, by Proposition 2.11, there is a derivation for  →  with the same structure as D1 and so with a lower cutrank than D. (3) The lengths of D1 and D2 are greater than 1 and ϕ is not needed either in D1 or in D2 . Assume with no loss of generality that ϕ is not needed in D1 and that D1 ends with an application of a binary rule r1 . That is, D1 is of the form

1  → , ϕ r1 : 2, m 2   →  , ϕ D1 m   →  , ϕ D1 . Taking into account Proposition 2.18, consider the derivations 1 ,   → ,  Cut : 2, m 2 ,   → ,  , ϕ D1 [ → ] m ϕ, ,   → ,  D2 [  →  ] and

1 ,   → ,  Cut : 2, j 2 ,   → ,  , ϕ D1 [ → ] j ϕ, ,   → ,  D2 [  →  ].

Since the level of the topmost cut application in both derivations is less than the level of the topmost cut in D, then, by the induction hypothesis, there are derivations D1•


and D1• for ,   → ,  and ,   → ,  , respectively, each one with a lower cutrank than the original one and so each one with a lower cutrank than D. Hence, 1 ,  → ,  r1 : 2, i 2 ,   → ,  D1• i ,   → ,  D1• is a derivation for ,  → ,  with a lower cutrank than D. Therefore, by Proposition 2.17, there is a derivation for  →  with a lower cutrank than D. (4) The length of D1 and D2 is greater than 1 and ϕ is needed in both D1 and D2 . We have two cases according to the main constructor of ϕ: (a) ϕ is of the form ϕ1 ⊃ ϕ2 . Then D1 is of the form 1  → , ϕ1 ⊃ ϕ2 R ⊃ 2 2 ϕ1 ,  → , ϕ2 D1 and D2 is of the form

Hence,

1 ϕ1 ⊃ ϕ2 ,  →  L ⊃ 2, m 2  → , ϕ1 D1 m ϕ2 ,  →  D2 . → Cut : 1, i  → , ϕ1 D1 i ϕ1 ,  →  Cut : i + 1, j i + 1 ϕ1 ,  → , ϕ2 D1 j ϕ2 , ϕ1 ,  →  D2 [ϕ1 →] 1 2

is a derivation for  →  with a lower cutrank than D. (b) ϕ is of the form ∀x ϕ1 . Then D1 is of the form


1  → , ∀x ϕ1 R∀2 2  → , [ϕ1 ]xy D1 where y is fresh in  ∪  ∪ {∀x ϕ1 } and D2 is of the form 1 ∀x ϕ1 ,  →  L∀2 x 2 [ϕ1 ]t , ∀x ϕ1 ,  →  D2 where t  x : ϕ1 . Let z be a variable fresh in  ∪  ∪ {∀x ϕ1 , [ϕ1 ]tx } and denote by y [D1 ]z the derivation obtained from D1 by replacing each free occurrence of y by z in all the formulas in D1 (see [15]). Consider the derivation 1 [ϕ1 ]tx ,  →  Cut : 2, j x 2 [ϕ1 ]t ,  → , ∀x ϕ1 R∀ : 3 3 [ϕ1 ]tx ,  → , [ϕ1 ]zx y [D1 ]z [[ϕ1 ]tx →] j ∀x ϕ1 , [ϕ1 ]tx ,  →  D2 where the topmost cut has a level less than the level of the topmost cut of D. Then, by the induction hypothesis, there is a derivation D2• for [ϕ1 ]tx ,  →  with a lower cutrank than the original one and so with a lower cutrank than D. Let y1 , . . . , yk be the variables in t used as fresh variables in the applications of rule R∀ in D1 . Assume that z 1 , . . . , z k are variables not occurring in D1 and in t. Thus, 1→ Cut : 2, i x 2  → , [ϕ1 ]t y ...y y [[D1 ]z11...zkk ]t i [ϕ1 ]tx ,  →  D2• is a derivation for  →  with a lower cutrank than D.



We now extend the previous result to derivations not ending with a cut rule application. G + Cut Proposition 2.74 Given a derivation D for   →  with non-null cutrank G + Cut •  →  with a lower cutrank than D. then there is a derivation D for 


Proof Let D be a derivation for  →  such that cr(D) = 0. The proof follows by induction on the number n of cuts with cutrank cr(D). (Base) n = 1. Let D be the subderivation of D starting at the cut application with the maximum cutrank. Then, by Proposition 2.73, there is a derivation D• of the end sequent of D with a lower cutrank. Hence, by replacing D by D• in D we get a derivation for  →  with a lower cutrank than D. (Step) Let D be a subderivation of D starting at a cut application with the maximum cutrank such that the derivation of each premise of the cut has a lower cutrank. Then, by Proposition 2.73, there is a derivation D• of the end sequent of D with a lower cutrank. Consider the derivation D◦ obtained by replacing D by D• in D. So, by the induction hypothesis, there is a derivation for  →  with a lower cutrank than D.  Finally, we are ready to prove Gentzen’s Hauptsatz, that is, the Cut Elimination Theorem. G + Cut Proposition 2.75 Given a derivation D for   → , then there is a derivaG • tion D for   → .

Proof The proof follows by induction on the cutrank of D. (Base) cr(D) = 0. Then take D• to be D. (Step) Assume that cr(D) = 0. Let D• be the derivation for  →  given in Proposition 2.74. Then, the thesis follows by applying the induction hypothesis to D• . 

2.5 Craig Interpolation We now concentrate on interpolation, a classical result on proof theory that was established by William Craig [12, 16, 17]. The theorem was inspired by Beth’s Definability Theorem (see [18, 19]). Herein, we present a symbolic constructive proof, as adopted in [4, 5, 13], starting with sequents and then going on to formulas and consequence. We provide examples of extracting Craig interpolants from deductions. It is also possible to prove Craig interpolation in a semantic way (see [20, 21]). The model-theoretic counterpart is Robinson’s Joint Consistency Theorem (see [22]). Craig interpolation is very relevant in practical applications, namely, in automated reasoning and theorem proving (see [23–25]). Definition 2.76 Let   ,  ,   ,  be finite multisets of formulas over a signature . An interpolant of (  ,  ,   ,  ) is a formula ϕ ∈ L  such that   • G   →  , ϕ; G  •  ϕ,  →  ; • all predicate symbols, constant symbols and free variables in ϕ occur in both   ∪  and   ∪  .
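Definition 2.76 and Proposition 2.78 extract interpolants from derivations. As a complementary and purely semantic illustration, ours and propositional rather than first-order, the sketch below finds by brute force over truth tables a truth function on the shared variable of p ∧ q and q ∨ r that interpolates between them.

```python
from itertools import product

VARS = ["p", "q", "r"]
SHARED = ["q"]
A = lambda v: v["p"] and v["q"]                   # p ∧ q
B = lambda v: v["q"] or v["r"]                    # q ∨ r

def valuations():
    for bits in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, bits))

def holds_everywhere(premise, conclusion):
    # premise entails conclusion over all valuations of VARS
    return all(conclusion(v) for v in valuations() if premise(v))

# enumerate every truth function over the shared variables until one interpolates
keys = list(product([False, True], repeat=len(SHARED)))
for bits in product([False, True], repeat=len(keys)):
    table = dict(zip(keys, bits))
    I = lambda v, table=table: table[tuple(v[x] for x in SHARED)]
    if holds_everywhere(A, I) and holds_everywhere(I, B):
        print("interpolant truth table over", SHARED, "->", table)
        break
```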


We now show the main result of this section stating that there is an interpolant of each partition of the antecedent and the consequent of a derivable sequent. We start by defining partition of a multiset. Definition 2.77 The collection of multisets A1 , . . . , An is a partition of a multiset A whenever A1 ∪ · · · ∪ An = A.   Proposition 2.78 Assume that G   → . Then, for each partition  and  of    and  and  of , (  ,  ,   ,  )

has an interpolant. Proof Let 1 → 1 · · · n → n be a derivation for  →  and 1 and 1 a partition of 1 and 1 and 1 a partition of 1 . The proof follows by induction on n: (Base) n = 1 There are two cases to consider. (1) 1 → 1 is an axiom over an arbitrary formula ψ. There are four subcases: (a) ψ ∈ 1 ∩ 1 . Then an interpolant of (1 , 1 , 1 , 1 ) is ⊥. (b) ψ ∈ 1 ∩ 1 . Then an interpolant of (1 , 1 , 1 , 1 ) is ¬ ⊥. (c) ψ ∈ 1 ∩ 1 . Then an interpolant of (1 , 1 , 1 , 1 ) is ψ. (d) ψ ∈ 1 ∩ 1 . Then an interpolant of (1 , 1 , 1 , 1 ) is ¬ ψ. (2) 1 → 1 is L⊥. Then ⊥ ∈ 1 ∪ 1 . There are two subcases: (a) ⊥ ∈ 1 . Then an interpolant of (1 , 1 , 1 , 1 ) is ⊥. (b) ⊥ ∈ 1 . Then an interpolant of (1 , 1 , 1 , 1 ) is ¬ ⊥. (Step) There are four cases to consider. (1) 1 → 1 was obtained by R⊃ from 2 → 2 over formula ψ1 ⊃ ψ2 ∈ 1 ∪ 1 . Observe that 2 = 1 ∪ {ψ1 } and 2 = (1 \ {ψ1 ⊃ ψ2 }) ∪ {ψ2 }. We assume with no loss of generality that ψ1 ⊃ ψ2 ∈ 1 . Then, by the induction hypothesis, there is an interpolant ϕ2 for (1 , 1 , 1 ∪ {ψ1 }, (1 \ {ψ1 ⊃ ψ2 }) ∪ {ψ2 }). That is,   • G  1 → 1 , ϕ2 ; G •  ϕ2 , ψ1 , 1 → 1 \ {ψ1 ⊃ ψ2 }, ψ2 ;



• all predicate symbols, constant symbols and free variables in ϕ2 occur in both 1 ∪ 1 and 1 ∪ {ψ1 } ∪ (1 \ {ψ1 ⊃ ψ2 }) ∪ {ψ2 }. Then, ϕ2 is an interpolant of (1 , 1 , 1 , 1 ). (2) 1 → 1 was obtained by L⊃ from 2 → 2 and 3 → 3 over formula ψ1 ⊃ ψ2 ∈ 1 ∪ 1 . Observe that 2 = (1 \ {ψ1 ⊃ ψ2 }) ∪ {ψ2 } and 2 = 1 and 3 = (1 \ {ψ1 ⊃ ψ2 }) and 3 = 1 ∪ {ψ1 }. Consider two subcases: (a) ψ1 ⊃ ψ2 ∈ 1 . Then, by the induction hypothesis, there is an interpolant ϕ2 of ((1 \ {ψ1 ⊃ ψ2 }) ∪ {ψ2 }, 1 , 1 , 1 ), that is,   • G  ψ2 , 1 \ {ψ1 ⊃ ψ2 } → 1 , ϕ2 ; G   •  ϕ2 , 1 → 1 ; • all predicate symbols, constant symbols and free variables in ϕ2 occur in both (1 \ {ψ1 ⊃ ψ2 }) ∪ {ψ2 } ∪ 1 and 1 ∪ 1 .

Moreover, there is an interpolant ϕ3 of (1 , 1 , (1 \ {ψ1 ⊃ ψ2 }), 1 ∪ {ψ1 }), that is,   • G  1 → 1 , ϕ3 ; G  •  ϕ3 , 1 \ {ψ1 ⊃ ψ2 } → 1 , ψ1 ; • all predicate symbols, constant symbols and free variables in ϕ3 occur in both (1 \ {ψ1 ⊃ ψ2 }) ∪ 1 ∪ {ψ1 } and 1 ∪ 1 .

Then, ϕ3 ⊃ ϕ2 is an interpolant of (1 , 1 , 1 , 1 ). (b) ψ1 ⊃ ψ2 ∈ 1 . Then, by the induction hypothesis, there is an interpolant ϕ2 of (1 , 1 , (1 \ {ψ1 ⊃ ψ2 }) ∪ {ψ2 }, 1 ), that is,   • G  1 → 1 , ϕ2 ; G •  ϕ3 , ψ2 , 1 \ {ψ1 ⊃ ψ2 } → 1 ; • all predicate symbols, constant symbols and free variables in ϕ2 occur in both 1 ∪ 1 and (1 \ {ψ1 ⊃ ψ2 }) ∪ {ψ2 } ∪ 1 .

Moreover, there is an interpolant ϕ3 of (1 , 1 , (1 \ {ψ1 ⊃ ψ2 }), 1 ∪ {ψ1 }), that is,   • G  1 → 1 , ϕ3 ;



  • G  ϕ3 , 1 \ {ψ1 ⊃ ψ2 } → 1 , ψ1 ; • all predicate symbols, constant symbols and free variables in ϕ3 occur in both 1 ∪ 1 and (1 \ {ψ1 ⊃ ψ2 }) ∪ 1 ∪ {ψ1 }.

Then, ϕ2 ∧ ϕ3 is an interpolant of (1 , 1 , 1 , 1 ). (3) 1 → 1 was obtained by R∀ from 2 → 2 over formula ∀x ψ ∈ 1 ∪ 1 . Observe that 2 = 1 and 2 = (1 \ {∀x ψ}) ∪ {[ψ]xy } for some fresh variable y. We assume with no loss of generality that ∀x ψ ∈ 1 . Then, by the induction hypothesis, there is an interpolant ϕ2 for (1 , (1 \ {∀x ψ}) ∪ {[ψ]xy }, 1 , 1 ). Then, ϕ2 is an interpolant for (1 , 1 , 1 , 1 ). (4) 1 → 1 was obtained by L∀ from 2 → 2 over formula ∀x ψ ∈ 1 ∪ 1 . Observe that 2 = 1 and 2 = 1 ∪ {[ψ]tx } for some term t. Assume with no loss of generality that ∀x ψ ∈ 1 . Then, by the induction hypothesis there is an interpolant ϕ2 of (1 ∪ {[ψ]tx }, 1 , 1 , 1 ). That is,   x • G  [ψ]t , 1 → 1 , ϕ2 ; G   •  ϕ2 , 1 → 1 ; • all predicate symbols, constant symbols and free variables in ϕ2 occur in both 1 ∪ {[ψ]tx } ∪ 1 and 1 ∪ 1 .

There are three subcases: (a) t is z ∈ X and z is fresh in 1 → 1 . Then, an interpolant of (1 , 1 , 1 , 1 ) is ∀z ϕ2 . Indeed,   • G  1 → 1 , ∀z ϕ2 ; G  •  ∀z ϕ2 , 1 → 1 ; • all predicate symbols, constant symbols and free variables in ∀z ϕ2 occur in both   ∪  and   ∪  .

(b) t is z ∈ X and z occurs free in 1 → 1 . Then, an interpolant of (1 , 1 , 1 , 1 ) is



ϕ2 . Indeed,   • G  1 → 1 , ϕ2 ; G  •  ϕ2 , 1 → 1 ; • all predicate symbols, constant symbols and free variables in ϕ2 occur in both   ∪  and   ∪  .

(c) t ∈ / X . Let x1 , . . . , xk be the variables that occur in t and are free in ϕ2 but not in 1 ∪ 1 . Moreover, let c1 , . . . , cm be the constant symbols that occur in t and in ϕ2 but not in 1 ∪ 1 . Let D be a derivation for [ψ]tx , 1 → 1 , ϕ2 and z 1 , . . . , z k+m ...xk c1 ...cm variables not occurring in D . Then, there is a derivation (see [15]) [D1 ]zx11,...,z for k+m ...xk c1 ...cm [ψ]tx , 1 → 1 , [ϕ2 ]zx11,...,z k+m ...xk c1 ...cm where t  = [t]zx11,...,z . Thus k+m   x1 ...xk c1 ...cm G  ∀x ψ, 1 → 1 , ∀z 1 . . . ∀z k+m [ϕ2 ]z 1 ,...,z k+m .

Moreover,

x1 ...xk c1 ...cm   G  ∀z 1 . . . ∀z k+m [ϕ2 ]z 1 ,...,z k+m , 1 → 1 .

Finally, observe that all predicate symbols, constant symbols and free variables in ...xk c1 ...cm ∀z 1 . . . ∀z k+m [ϕ2 ]zx11,...,z k+m ...xk c1 ...cm occur in both   ∪  and   ∪  . Hence, ∀z 1 . . . ∀z k+m [ϕ2 ]zx11,...,z is an interk+m      polant of (1 , 1 , 1 , 1 ).
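The construction in the proof above is effective: an interpolant can be read off a derivation step by step, as the next example illustrates. As a small aside, the base case for axioms is easy to mechanize; the Python fragment below is a sketch only, with formulas represented as strings, negation written with a ¬ prefix, and the partition of the axiom sequent given by four sets whose names are hypothetical.

    def axiom_interpolant(psi, gamma1, delta1, gamma2, delta2):
        """Interpolant prescribed for an axiom sequent under a given partition.

        The partition is (gamma1, delta1, gamma2, delta2) and psi is the formula
        occurring on both sides of the axiom, following the four base subcases.
        """
        if psi in gamma1 and psi in delta1:
            return "⊥"
        if psi in gamma2 and psi in delta2:
            return "¬ ⊥"
        if psi in gamma1 and psi in delta2:
            return psi
        if psi in gamma2 and psi in delta1:
            return "¬ " + psi
        raise ValueError("psi must occur on both sides of the axiom")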

Example 2.79 Assume that ϕ is an interpolant of (Γ′, Δ′, Γ′′ ∪ {ψ}, Δ′′), that is,
• ⊢G Γ′ → Δ′, ϕ;
• ⊢G ϕ, ψ, Γ′′ → Δ′′;
• all predicate symbols, constant symbols and free variables in ϕ occur in both Γ′ ∪ Δ′ and Γ′′ ∪ {ψ} ∪ Δ′′.

Then ϕ is an interpolant of (Γ′, Δ′, Γ′′, Δ′′ ∪ {¬ ψ}). On the other hand, assume that ϕ is an interpolant of



(Γ′ ∪ {ψ}, Δ′, Γ′′, Δ′′), that is,
• ⊢G ψ, Γ′ → Δ′, ϕ;
• ⊢G ϕ, Γ′′ → Δ′′;
• all predicate symbols, constant symbols and free variables in ϕ occur in both Γ′ ∪ {ψ} ∪ Δ′ and Γ′′ ∪ Δ′′.

Then ϕ is an interpolant of (Γ′, Δ′ ∪ {¬ ψ}, Γ′′, Δ′′).
Exercise 2.80 Similar to Example 2.79, derive an interpolant for the rules for ∧ and ∨.
Example 2.81 Recall theory Θ_S in Example 1.155. Consider the derivation D:
1  ¬ S²(x) ≅ 0, (S3) → ∃y S(y) ≅ S²(x)    L∀ : 2
2  (¬ S²(x) ≅ 0) ⊃ ∃y S(y) ≅ S²(x), ¬ S²(x) ≅ 0, (S3) → ∃y S(y) ≅ S²(x)    L⊃ : 3, 4
3  ∃y S(y) ≅ S²(x), ¬ S²(x) ≅ 0, (S3) → ∃y S(y) ≅ S²(x)    Ax
4  ¬ S²(x) ≅ 0, (S3) → ∃y S(y) ≅ S²(x), ¬ S²(x) ≅ 0    Ax

The objective is to find an interpolant β1 of
({¬ S²(x) ≅ 0, (S3)}, ∅, ∅, {∃y S(y) ≅ S²(x)})
following the steps in D. Since step (1) of D is justified by L∀ from step (2), then β1 is β2 where β2 is the interpolant of
({(¬ S²(x) ≅ 0) ⊃ ∃y S(y) ≅ S²(x), ¬ S²(x) ≅ 0, (S3)}, ∅, ∅, {∃y S(y) ≅ S²(x)}).

Since step (2) of D is justified by L⊃ from step (3) and step (4), then β2 is β3 ⊃ β4 where β3 is the interpolant of
(∅, {∃y S(y) ≅ S²(x)}, {¬ S²(x) ≅ 0, (S3)}, {¬ S²(x) ≅ 0})



corresponding to step (4) in the derivation, and β4 is the interpolant of
({∃y S(y) ≅ S²(x), ¬ S²(x) ≅ 0, (S3)}, ∅, ∅, {∃y S(y) ≅ S²(x)})
corresponding to step (3) in the derivation. Hence, β3 is ¬ ⊥ and β4 is ∃y S(y) ≅ S²(x). So (¬ ⊥) ⊃ ∃y S(y) ≅ S²(x) is the interpolant at stake.
We now extend the previous result to the level of formulas. For that, we start with a technical result useful in the proofs of Proposition 2.83 and Proposition 2.85.
Proposition 2.82 Let ϕ be a formula over Σ where all its atomic formulas are ⊥. Then either ⊢G → ϕ or ⊢G ϕ →.
Proof The proof follows by induction on the structure of ϕ.
(Base) ϕ is ⊥. Then ⊢G ⊥ →.
(Step) There are two cases to consider:
(1) ϕ is ϕ1 ⊃ ϕ2. Using the induction hypothesis over ϕ1 and ϕ2 there are four subcases to consider:
(a) ⊢G → ϕ1 and ⊢G → ϕ2. Then it is immediate to see that ⊢G → ϕ1 ⊃ ϕ2.
(b) ⊢G ϕ1 → and ⊢G ϕ2 →. Then it is immediate to see that ⊢G → ϕ1 ⊃ ϕ2.
(c) ⊢G ϕ1 → and ⊢G → ϕ2. Then it is immediate to see that ⊢G → ϕ1 ⊃ ϕ2.
(d) ⊢G → ϕ1 and ⊢G ϕ2 →. Then it is immediate to see that ⊢G ϕ1 ⊃ ϕ2 →.
(2) ϕ is ∀x ϕ1. Using the induction hypothesis over ϕ1 there are two subcases to consider:
(a) ⊢G → ϕ1. Then it is immediate to see that ⊢G → ∀x ϕ1.
(b) ⊢G ϕ1 →. Then it is immediate to see that ⊢G ∀x ϕ1 →. □
The following result is important for the proof of Proposition 2.88.
Proposition 2.83 Assume that ⊢G Γ → Δ and that Γ and Δ do not share any predicate symbols. Then, either ⊢G Γ → or ⊢G → Δ.
Proof Observe that, by Proposition 2.78, (Γ, ∅, ∅, Δ) has an interpolant ϕ. Then
⊢G Γ → ϕ and

⊢G ϕ → Δ

and all atomic formulas in ϕ are ⊥. Therefore, by Proposition 2.82, either ⊢G → ϕ or ⊢G ϕ →. Consider two cases:



(1) ⊢G → ϕ. Then, using Cut, it is immediate to see that ⊢G+Cut → Δ.
(2) ⊢G ϕ →. Then, using Cut, it is immediate to see that ⊢G+Cut Γ →.
Hence, by Proposition 2.75, either ⊢G Γ → or ⊢G → Δ. □



We now introduce the concept of interpolant for formulas with ⊃ as the main constructor.
Definition 2.84 Let ψ1, ψ2 be formulas over a signature Σ such that ⊢G ψ1 ⊃ ψ2. An interpolant of ψ1 ⊃ ψ2 is a formula ϕ ∈ LΣ such that
• ⊢G ψ1 ⊃ ϕ;
• ⊢G ϕ ⊃ ψ2;
• all predicate symbols, constant symbols and free variables in ϕ occur in both ψ1 and ψ2.
We are now ready to prove Craig's Interpolation Theorem for implicational formulas.
Proposition 2.85 Let ψ1, ψ2 be formulas over a signature Σ. Assume that ⊢G ψ1 ⊃ ψ2. Then
• ψ1 ⊃ ψ2 has an interpolant;
• ⊢G ¬ ψ1 or ⊢G ψ2 whenever ψ1 and ψ2 have no common predicate symbols.
Proof Since ⊢G → ψ1 ⊃ ψ2 then ⊢G ψ1 → ψ2. Observe that, by Proposition 2.78, there is an interpolant ϕ for ({ψ1}, ∅, ∅, {ψ2}). That is,
• ⊢G ψ1 → ϕ;
• ⊢G ϕ → ψ2;
• all predicate symbols, constant symbols and free variables in ϕ occur in both ψ1 and ψ2.
Then ⊢G ψ1 ⊃ ϕ and

⊢G ϕ ⊃ ψ2

and ϕ is the interpolant for ψ1 ⊃ ψ2 . For proving the second statement, assume that ψ1 and ψ2 have no common predicate symbols. Then ϕ is such that all its atomic formulas are ⊥. Thus, by Proposition 2.82,



either ⊢G → ϕ or

⊢G ϕ →.

So, using Cut and Proposition 2.18, we have either ⊢G → ψ2 or

⊢G ψ1 →.

That is, either ⊢G ¬ ψ1 or ⊢G ψ2. □



Example 2.86 Recall Example 2.81. Then (¬ ⊥) ⊃ ∃y S(y) ≅ S²(x) is an interpolant of ((¬ S²(x) ≅ 0) ∧ (S3)) ⊃ ∃y S(y) ≅ S²(x).
We now extend the notion of interpolant to deductive consequence.
Definition 2.87 Let Γ ∪ {δ} be a finite set of formulas over a signature Σ such that all formulas in Γ are sentences and Γ ⊢G δ. An interpolant of (Γ, δ) is a formula ϕ ∈ LΣ such that
• Γ ⊢G ϕ;
• ϕ ⊢G δ;
• all predicate symbols, constant symbols and free variables in ϕ occur in both Γ and δ.
Finally, we prove Craig's Interpolation Theorem for deductive consequence, capitalizing on Craig's Interpolation Theorem for formulas (see Proposition 2.85).
Proposition 2.88 Let Γ ∪ {δ} be a finite set of formulas over a signature Σ such that all formulas in Γ are sentences and Γ ⊢G δ. Then
• (Γ, δ) has an interpolant;
• Γ ⊢G ⊥ or ⊢G δ whenever Γ and δ have no common predicate symbols.
Proof Since Γ ⊢G δ then, by Exercise 2.25,



⊢G (∧Γ) ⊃ δ.

Hence, by Proposition 2.85, there is an interpolant ϕ for (∧Γ) ⊃ δ. That is,
• ⊢G (∧Γ) ⊃ ϕ;
• ⊢G ϕ ⊃ δ;
• all predicate symbols, constant symbols and free variables in ϕ occur in both ∧Γ and δ.



Thus, by Exercise 2.25, Γ ⊢G ϕ and ϕ ⊢G δ. Therefore, ϕ is an interpolant for (Γ, δ). With respect to the second statement, assume that Γ and δ have no common predicate symbols. Hence, by Proposition 2.83,

either ⊢G ¬(∧Γ) or ⊢G δ, and so the thesis follows. □

References 1. D. Prawitz, Natural Deduction. A Proof-Theoretical Study. Acta Universitatis Stockholmiensis. Stockholm Studies in Philosophy, vol. 3 (Almqvist & Wiksell, 1965) 2. D. van Dalen, Logic and Structure, 5th edn. (Springer, 2013) 3. J.L. Bell, M. Machover, A Course in Mathematical Logic (North-Holland, 1977) 4. A.S. Troelstra, H. Schwichtenberg, Basic Proof Theory, vol. 43, 2nd edn. (Cambridge University Press, 2000) 5. J. Gallier, Logic for Computer Science: Foundations of Automatic Theorem Proving (Dover, 2015) 6. S.R. Buss, An introduction to proof theory, in Handbook of Proof Theory. Studies in Logic and the Foundations of Mathematics, vol. 137 (North-Holland, 1998), pp. 1–78 7. R.M. Smullyan, First-Order Logic (Springer, 1968) 8. G. Gentzen, The Collected Papers of Gerhard Gentzen, ed. by M.E. Szabo. Studies in Logic and the Foundations of Mathematics (North-Holland, 1969) 9. M. Baaz, A. Leitsch, Methods of Cut-Elimination (Springer, 2013) 10. A. Carbone, S. Semmes, A Graphic Apology for Symmetry and Implicitness (Oxford University Press, 2000) 11. T. Kowalski, H. Ono, Analytic cut and interpolation for bi-intuitionistic logic. Rev. Symb. Logic 10(2), 259–283 (2017) 12. W. Craig, Three uses of the Herbrand-Gentzen theorem in relating model theory and proof theory. J. Symb. Logic 22, 269–285 (1957) 13. S. Maehara, On the interpolation theorem of Craig. Sûgaku 12, 235–237 (1960) 14. J.L. Hein, Discrete Mathematics (Jones & Bartlett Publishers, 2003) 15. A. Sernadas, C. Sernadas, Foundations of Logic and Theory of Computation, 2nd edn. (College Publications, 2012) 16. W. Craig, On axiomatizability within a system. J. Symb. Logic 18, 30–32 (1953) 17. W. Craig, Linear reasoning. A new form of the Herbrand-Gentzen theorem. J. Symb. Logic 22, 250–268 (1957) 18. E.W. Beth, On Padoa’s method in the theory of definition. Indagationes Mathematicae 15, 330–339 (1953) 19. H. Kihara, H. Ono, Interpolation properties, Beth definability properties and amalgamation properties for substructural logics. J. Logic Comput. 20(4), 823–875 (2010) 20. C.C. Chang, H.J. Keisler, Model Theory (Dover, 2012) 21. J.R. Shoenfield, Mathematical Logic (Association for Symbolic Logic, 2001) 22. A. Robinson, A result on consistency and its application to the theory of definition. Indagationes Mathematicae 18, 47–58 (1956) 23. J. Bicarregui, T. Dimitrakos, D. Gabbay, T. Maibaum, Interpolation in practical formal development. Logic J. IGPL 9(2), 231–243 (2001) 24. M.P. Bonacina, M. Johansson, On interpolation in automated theorem proving. J. Autom. Reason. 54(1), 69–97 (2015) 25. J. Harrison, Handbook of Practical Logic and Automated Reasoning (Cambridge University Press, 2009)

Chapter 3

Decidability Results on Theories

In this chapter, after some introductory concepts and results, we present sufficient conditions for a theory to be decidable. We start by considering theories with computable quantifier elimination and show that the decidability of such theories is equivalent to the decidability of their quantifier-free sentence fragment. This technique is illustrated by showing that the theory of real closed ordered fields is decidable, relying on the pioneering work of Tarski [1–3]. Then we use the problem reduction technique, introduced in Sect. A.4 of Appendix, to show that the theory of Euclidean geometry (see [4, 5]) is decidable by reduction to the theory of real closed ordered fields. Afterward, we show that every axiomatizable and complete theory is decidable. Based on this result, we discuss sufficient conditions for completeness of theories, namely, via quantifier elimination and via categoricity [6].

3.1 Preliminaries
We start by defining the important concept of a theory having quantifier elimination.
Definition 3.1 We say that a theory Θ over a signature Σ has quantifier elimination whenever for every formula ϕ ∈ LΣ there is a formula ϕ* such that
• ϕ* ∈ QΣ;
• fvΣ(ϕ*) = fvΣ(ϕ);
• Θ ⊨ ϕ ≡ ϕ*.
Example 3.2 Recall Example 1.155. Consider the formula
∀x∃y (S(x) ≅ S²(y) ⊃ ¬ x ≅ 0).




Observe that this formula is equivalent to
¬ ∃x ¬ ∃y ((¬ S(x) ≅ S²(y)) ∨ ¬ x ≅ 0).
Consider the subformula ∃y ((¬ S(x) ≅ S²(y)) ∨ ¬ x ≅ 0). Note that
Θ_S ⊨ (∃y ((¬ S(x) ≅ S²(y)) ∨ ¬ x ≅ 0)) ≡ ((∃y ¬ S(x) ≅ S²(y)) ∨ (∃y ¬ x ≅ 0)).

Then, taking into account (S41), we have
Θ_S ⊨ (∃y ((¬ S(x) ≅ S²(y)) ∨ ¬ x ≅ 0)) ≡ (x ≅ x ∨ ¬ x ≅ 0). Thus,

Θ_S ⊨ (∀x∃y (S(x) ≅ S²(y) ⊃ ¬ x ≅ 0)) ≡ ∀x x ≅ x

where x ≅ x is a formula that is obviously true. Hence
Θ_S ⊨ (∀x∃y (S(x) ≅ S²(y) ⊃ ¬ x ≅ 0)) ≡ 0 ≅ 0.
Therefore, the quantifier-free formula equivalent to ∀x∃y (S(x) ≅ S²(y) ⊃ ¬ x ≅ 0) is

0 ≅ 0.
Quantifier elimination is preserved by enrichment of theories, as we now show.

Proposition 3.3 Let Θ and Θ′ be theories over a signature Σ such that Θ ⊆ Θ′. Then Θ′ has quantifier elimination whenever Θ has quantifier elimination.
Proof Let ϕ ∈ LΣ. Since Θ has quantifier elimination there is ϕ* ∈ QΣ such that Θ ⊨ ϕ ≡ ϕ* and fvΣ(ϕ*) = fvΣ(ϕ). The thesis follows immediately since Θ′ ⊨ ϕ ≡ ϕ* because Θ ⊆ Θ′. □

3.2 Decidability via Computable Quantifier Elimination
In this section we show that whenever a theory Θ has computable quantifier elimination then the decidability of Θ can be reduced to the decidability of the quantifier-free



sentence fragment of Θ. We start by introducing the notion of computable quantifier elimination.
Definition 3.4 Let Θ be a theory over a signature Σ. We say that Θ has computable quantifier elimination whenever there is a computable map qeΘ : LΣ → QΣ such that for every formula ϕ ∈ LΣ
• fvΣ(qeΘ(ϕ)) = fvΣ(ϕ);
• Θ ⊨ ϕ ≡ qeΘ(ϕ).
Proposition 3.5 Let Θ be a theory over a decidable signature Σ. Assume that Θ has computable quantifier elimination and is axiomatizable. Then Θ is decidable in LΣ whenever Θ ∩ cQΣ is decidable in cQΣ.
Proof Let ϕ ∈ LΣ. Observe that (∀ϕ) ≡ qeΘ(∀ϕ) ∈ Θ and so ϕ ∈ Θ iff ∀ϕ ∈ Θ iff qeΘ(∀ϕ) ∈ Θ. Then χΘ = χΘ∩cQΣ ◦ qeΘ ◦ (ψ ↦ ∀ψ). Hence, χΘ is computable since χΘ∩cQΣ, qeΘ and ψ ↦ ∀ψ are computable maps. □
We observe that Alfred Tarski was a pioneer in proving decidability, namely, by using (symbolic) quantifier elimination. The reader interested in the history of proofs of decidability of theories by Tarski should consult [1, 7]. As an example, recall the theory Θ_RCOF of the real closed ordered fields introduced in Example 1.157.
Proposition 3.6 The theory Θ_RCOF is decidable.
Proof Observe that Θ_RCOF has computable quantifier elimination (see [2, 8, 9]) and is axiomatizable. On the other hand, it is immediate to see that there is an algorithm based on the axioms of Θ_RCOF that, given a literal without variables, returns an equivalent formula either of the form
1 + · · · + 1 ≅ 1 + · · · + 1
(with k occurrences of 1 on the left-hand side and m on the right-hand side) or of the form
1 + · · · + 1 < 1 + · · · + 1
(again with k occurrences of 1 on the left-hand side and m on the right-hand side)



where k, m ∈ N. So any quantifier-free sentence is equivalent to a disjunction of conjunctions of formulas of these forms (see Exercise 1.33). Thus Θ_RCOF ∩ cQΣ_RCOF is decidable in cQΣ_RCOF. Therefore, by Proposition 3.5, Θ_RCOF is decidable. □
The reader interested in the semantic properties of real closed ordered fields should consult [10].
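Proposition 3.5 is, in effect, a recipe for a program: the characteristic function of Θ is the composition of three computable maps. A minimal sketch in Python, where universal_closure, qe and decide_quantifier_free_sentence are assumed to implement ψ ↦ ∀ψ, qeΘ and the decision procedure for Θ ∩ cQΣ, respectively:

    def decide_theory(phi, universal_closure, qe, decide_quantifier_free_sentence):
        """chiTheta as the composition chi_{Theta ∩ cQ} ∘ qeTheta ∘ (psi ↦ ∀psi)."""
        sentence = universal_closure(phi)              # psi ↦ ∀psi
        quantifier_free = qe(sentence)                 # computable quantifier elimination
        return decide_quantifier_free_sentence(quantifier_free)

For Θ_RCOF, the last step amounts to comparing, literal by literal, the two sums of 1's produced in the proof of Proposition 3.6.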

3.3 Decidability via Reduction
The problem reduction technique (see Sect. A.4) can be used for proving decidability of a theory whenever it is possible to reduce it to another decidable theory. We provide now an illustration of this technique by proving that Euclidean geometry is decidable by reduction to the decidable theory Θ_RCOF. We start by defining Euclidean geometry as the FOL theory Θ_EG following the work of Alfred Tarski (see [5], based on the second-order axiomatization proposed by David Hilbert [11]), also taking into account [4]. This means that we axiomatize the relationships between points, lines, segments and rays by choosing appropriate predicate symbols. The goal is to prove that Θ_EG is decidable. The proof relies on the reduction to the theory of real closed ordered fields which was shown to be decidable (see Proposition 3.6).
Definition 3.7 Let Σ_EG = (F, P, τ) be the signature of Euclidean geometry with P2 = {≅}, P3 = {B} and P4 = {E}.
Predicate symbols B and E are meant to be interpreted as "between" and "equidistance", respectively. Hence, the atomic formula B(x, y, z) would mean that y is between x and z, that is, y is in the segment xz. Moreover, the atomic formula E(x, y, z, w) would mean that the distance from x to y is the same as the distance from z to w.
Definition 3.8 The theory Θ_EG over Σ_EG is generated by the set Ax_EG with the following axioms, where we omit, for the sake of simplicity, the universal closure in all formulas:
• (EG1) E(x, y, y, x);
• (EG2) (E(x, y, u, v) ∧ E(u, v, r, w)) ⊃ E(x, y, r, w);
• (EG3) E(x, y, z, z) ⊃ (x ≅ y);
• (EG4) ∃w(B(x, y, w) ∧ E(y, w, u, v));
• (EG5) ((¬ x ≅ y) ∧ B(x, y, z) ∧ B(x′, y′, z′) ∧ E(x, y, x′, y′) ∧ E(y, z, y′, z′) ∧ E(x, u, x′, u′) ∧ E(y, u, y′, u′)) ⊃ E(z, u, z′, u′);
• (EG6) B(x, y, x) ⊃ x ≅ y;



• (EG7) (B(x, u, z) ∧ B(y, v, z)) ⊃ ∃w(B(u, w, y) ∧ B(v, w, x));
• (EG8) ∃x ∃y ∃z ((¬ B(x, y, z)) ∧ (¬ B(y, z, x)) ∧ ¬ B(z, x, y));
• (EG9) ((¬ u ≅ v) ∧ E(x, u, x, v) ∧ E(y, u, y, v) ∧ E(z, u, z, v)) ⊃ (B(x, y, z) ∨ B(y, z, x) ∨ B(z, x, y));
• (EG10) (B(x, u, v) ∧ B(y, u, z) ∧ (¬ x ≅ u)) ⊃ ∃w∃w′(B(x, y, w) ∧ B(x, z, w′) ∧ B(w′, v, w));
• (EG11) (∃w∀x∀y ((ϕ ∧ ψ) ⊃ B(w, x, y))) ⊃ ∃w′∀x∀y ((ϕ ∧ ψ) ⊃ B(x, w′, y)) such that w, w′, y do not occur free in ϕ and w, w′, x do not occur free in ψ.
Axioms (EG1), (EG2) and (EG3) are called reflexivity, transitivity and identity of E, and Axiom (EG6) the identity of B. Axiom (EG4), known as the segment construction axiom, states that if uv is a segment and x is the starting point of a ray and y belongs to that ray then there is w in that ray such that the distance from u to v is the same as the distance from y to w. Axiom (EG5), known as the five-segment axiom, means the following: given two tuples of five segments (xy, yz, xu, yu, zu) and (x′y′, y′z′, x′u′, y′u′, z′u′), if the kth segments of the tuples have the same length for k = 1, . . . , 4 then the 5th segments also have the same length. Axiom (EG7), called the Pasch axiom, means that the two diagonals, from x to v and from u to y, of the polygon based on points x, u, v, y must intersect at some point w. Axiom (EG8) is known as the lower 2-dimensional axiom. The meaning of this axiom is that there are three noncollinear points. Axiom (EG9), called the upper 2-dimensional axiom, imposes that if three points are equidistant from two different points then they must be collinear. Axiom (EG10), known as Euclid's axiom, states that if v is in the angle ∠yxz then there exist points w, w′ such that the line determined by w, w′ intersects both sides of the angle. Finally, we explain the meaning of Axiom (EG11), called the continuity axiom. Assume that ϕ and ψ are formulas over Σ_EG satisfying the conditions of (EG11) such that (1) ϕ and ψ define two sets of points, say Pϕ and Pψ, respectively; and (2) there is a ray with left-hand point w such that every x in Pϕ is always on the left of every y in Pψ. Then, there is w′ that separates x from y for every x and y in Pϕ and Pψ, respectively (it is interesting to note the relationship of this axiom to the Dedekind cuts for defining the real closed field R from the field Q, see [12]).
Example 3.9 Recall the signature Σ_EG in Definition 3.7. Then
I^EG = (R², ∅, {≅^IEG, B^IEG, E^IEG})

where
• ≅^IEG : R² × R² → {0, 1} is such that ≅^IEG((c1, c2), (d1, d2)) = 1 iff c1 is d1 and c2 is d2;
• B^IEG : R² × R² × R² → {0, 1} is such that B^IEG(c, d, e) = 1 iff c, d, e are collinear (see [13]), that is, using the slope formula,
(d2 − c2)(e1 − d1) = (e2 − d2)(d1 − c1)
and d is in between c and e, that is,
(c1 − d1)(d1 − e1) ≥ 0 and (c2 − d2)(d2 − e2) ≥ 0;
• E^IEG : R² × R² × R² × R² → {0, 1} is such that E^IEG(c, d, c′, d′) = 1 iff
(d1 − c1)² + (d2 − c2)² = (d′1 − c′1)² + (d′2 − c′2)²;
is an interpretation structure for Σ_EG.
Example 3.10 Recall the signature Σ_EG in Definition 3.7 and the interpretation structure I^EG in Example 3.9. Then, for instance, I^EG ρ ⊨ E(x, y, z, z) ⊃ x ≅ y for every assignment ρ.
Definition 3.11 Let I = (D, {0^I, 1^I, −^I, +^I, ×^I}, {≅^I, <^I}) be an interpretation structure for Σ_RCOF. The Euclidean geometry interpretation structure EG(I) = (D², ∅, {≅^EG(I), B^EG(I), E^EG(I)}) induced by I is such that
• ≅^EG(I) : D² × D² → {0, 1} is the map such that ≅^EG(I)(c, d) = 1 iff c1 ≅^I d1 and c2 ≅^I d2;
• B^EG(I) : D² × D² × D² → {0, 1} is the map such that B^EG(I)(c, d, e) = 1 iff
(d2 +^I (−^I c2)) ×^I (e1 +^I (−^I d1)) = (e2 +^I (−^I d2)) ×^I (d1 +^I (−^I c1)),
0^I ≤^I (c1 +^I (−^I d1)) ×^I (d1 +^I (−^I e1)) and
0^I ≤^I (c2 +^I (−^I d2)) ×^I (d2 +^I (−^I e2));

• E EG(I ) : D 2 × D 2 × D 2 × D 2 → {0, 1} is the map such that E EG(I ) (c, d, c , d ) = 1 iff (d1 + I (− I c1 ))2 + I (d2 + I (− I c2 ))2 = (d1 + I (− I c1 ))2 + I (d2 + I (− I c2 ))2 . The decidability of the FOL theory EG of Euclidean geometry is obtained by reduction to RCOF following [2]. The decidability of RCOF is shown in Proposition 3.6. Definition 3.12 Let sEG→RCOF : L EG → L RCOF

3.3 Decidability via Reduction

81

be the translation map defined inductively as follows: • sEG→RCOF (x ∼ = y) is the formula (x1 ∼ = y1 ) ∧ (x2 ∼ = y2 ); • sEG→RCOF (B(x, y, z)) is the formula (y2 + (−x2 )) × (z 1 + (−y1 )) ∼ = (z 2 + (−y2 )) × (y1 + (−x1 )) ∧ (0 < (x1 + (−y1 )) × (y1 + (−z 1 )) ∨ (0 ∼ = (x1 + (−y1 )) × (y1 + (−z 1 ))) ∧ (0 < (x2 + (−y2 )) × (y2 + (−z 2 )) ∨ (0 ∼ = (x2 + (−y2 )) × (y2 + (−z 2 ))); • sEG→RCOF (E(x, y, z, w)) is the formula (x1 + (−y1 ))2 + (x2 + (−y2 ))2 ∼ = (z 1 + (−w1 ))2 + (z 2 + (−w2 ))2 ; • sEG→RCOF (¬ ϕ) is the formula ¬ sEG→RCOF (ϕ); • sEG→RCOF (ϕ1 ⊃ ϕ2 ) is the formula sEG→RCOF (ϕ1 ) ⊃ sEG→RCOF (ϕ2 ); • sEG→RCOF (∀x ϕ) is the formula ∀x1 ∀x2 sEG→RCOF (ϕ). The translation of B(x, y, z) reflects the fact that if (x1 , x2 ), (y1 , y2 ) and (z 1 , z 2 ) are collinear then the slope of the lines determined by (x1 , x2 ), (y1 , y2 ) and by (y1 , y2 ), (z 1 , z 2 ) should be the same. Moreover, the translation of B(x, y, z) expresses that y is in between x and z. The translation of E(x, y, z, w) states that the distance from x to y should be the same as the distance from z to w. We are ready to prove that EG is decidable. Proposition 3.13 Theory EG is decidable. Proof The first step is to prove that for each interpretation structure I for RCOF , the following statement holds: EG(I ) EG ϕ iff I RCOF sEG→RCOF (ϕ) for every ϕ ∈ L EG by induction on the structure of ϕ (we omit the proof since it follows straightforwardly). Moreover, EG EG ϕ iff RCOF RCOF sEG→RCOF (ϕ). Thus, the map sEG→RCOF reduces the problem Given ϕ ∈ L EG , does ϕ ∈ EG ? to the problem Given ψ ∈ L RCOF , does ψ ∈ RCOF ? The thesis follows by Proposition A.28 since sEG→RCOF is a computable map and  RCOF is decidable (see Proposition 3.6).

82

3 Decidability Results on Theories

3.4 Decidability via Completeness Another way to prove decidability of a theory involves showing its completeness. Definition 3.14 A set of formulas over a signature  is complete if either  ϕ or  ¬ ϕ for every sentence ϕ. Note that if a set of formulas is not consistent (see Definition 1.81), then it is complete. Moreover, observe that it is possible for a set of formulas to be consistent but not complete. Example 3.15 For each signature , theory ∅ is consistent but not complete. On the other hand, theory L  is complete and not consistent. G

Exercise 3.16 Let I be an interpretation structure for a signature . Show that Th(I ) (recall Exercise 1.148) is a complete theory. The problem of deciding whether a sentence or its negation is a consequence of a complete theory can be addressed using the Gentzen calculus. Proposition 3.17 Let  be a complete theory over a signature  axiomatized by Ax and ϕ ∈ cL . Then either

G  → ϕ or

⊢G Ψ → ϕ or

for some finite set ⊆ Ax. Proof Since  is a complete theory and ϕ is a sentence then either   ϕ or   ¬ ϕ. Thus, either Ax  ϕ or Ax  ¬ ϕ. So, by compactness (see [14]), there are finite sets 1 , 2 ⊆ Ax such that either 1  ϕ or 2  ¬ ϕ. Thus, by monotonicity, either  ϕ or  ¬ ϕ where is 1 ∪ 2 . Hence, by Proposition 2.35, either  → ϕ or  → ¬ ϕ. Therefore, the thesis follows by weak completeness of the Gentzen calculus, see Proposition 2.61.  We now provide a sufficient condition for a theory to be decidable (see Appendix A for computability notions). Proposition 3.18 Every axiomatizable and complete theory over a decidable signature is decidable. Proof Two cases are considered: (1) If theory  over  is not consistent then  = L  . Hence  is decidable (see Exercise 1.28). (2) Otherwise, it is possible to give an algorithm to compute χ using the fact that  is listable by Proposition 1.160 and is complete. Indeed, consider the following program:

3.4 Decidability via Completeness

83

function (w) ( if PχL  (w) == 0 then return 0; k = 0; while Ph (k) = Pa (w) ∧ Ph (k) = Pb (w) do k = k + 1; if Ph (k) == Pa (w) then return 1 else return 0 ) where • Ph is a program that computes a function h : N → L  that enumerates  which exists because  is listable and non-empty; • Pa is a program that computes a = (ψ → (∀ ψ)) : L  → L  ; • Pb is a program that computes b = (ψ → (¬(∀ ψ))) : L  → L  ; • PχL  is a program that computes χ L  . The justification of the algorithm is as follows. Observe that w ∈  if and only if (∀ w) ∈ . Given w, assume that w ∈ L  (if not, the algorithm returns 0 as envisaged). Then, either (∀w) ∈  or (¬(∀w)) ∈  but not both, since  is complete and consistent. Finally, because h is an enumeration of  there exists k such that h(k) = (∀w) if (∀w) ∈  or h(k) = (¬(∀w)) otherwise. In the first case the algorithm returns 1, and in the second case it returns 0.  Many theories are shown to be decidable using the sufficient condition in Proposition 3.18. Hence, it is worthwhile to investigate sufficient conditions for a theory to be complete. We start by providing a characterization of complete set of formulas. Proposition 3.19 A set of formulas is complete if and only if any two models I1 and I2 of satisfy the same sentences. Proof (→) Let I1 and I2 be models of and ϕ a sentence. Then, since is complete, either  ϕ or  ¬ ϕ. Therefore, either I1  ϕ and I2  ϕ or I1  ϕ and I2  ϕ since the formulas are closed. So I1 and I2 satisfy the same sentences. (←) Let ϕ be a sentence. Assume that  ϕ. Thus, there is I ∈ Mod( ) such that I  ϕ and so I  ¬ ϕ since ϕ is a sentence (see Proposition 1.86). Since all models of satisfy the same sentences, I  ¬ ϕ for every I ∈ Mod( ). Hence   ¬ ϕ. In the next sections, we discuss sufficient conditions for completeness based on quantifier elimination and categoricity.

84

3 Decidability Results on Theories

3.5 Completeness via Quantifier Elimination In this section, we show that quantifier elimination can be used for proving completeness. Nevertheless, we start by providing an example establishing that in general quantifier elimination is not sufficient to guarantee completeness of a theory. Definition 3.20 Recall signature f introduced in Example 1.7. We denote by ACF the theory over f for algebraically closed fields generated by the following set AxACF of axioms: (ACF1) (ACF2) (ACF3) (ACF4) (ACF5) (ACF6) (ACF7) (ACF8) (ACF9) (ACF10) (ACF11)

∀x∀y∀z x + (y + z) ∼ = (x + y) + z; ∀x x + 0 ∼ = x; ∀x x + (−x) ∼ = 0; ∀x∀y x + y ∼ = y + x; ∀x∀y∀z x × (y × z) ∼ = (x × y) × z; ∀x x × 1 ∼ = x; ∀x∀y∀z x × (y + z) ∼ = (x × y) + (x × z); ∀x∀y x × y ∼ = y × x; = 1; ¬ 0∼ ∀x ((¬ x ∼ = 0) ⊃ ∃y x × y ∼ = 1); ∀x1 . . . ∀xn ∃y y n + x1 y n−1 + · · · + xn−1 y + xn ∼ = 0, for each n ∈ N+ .

The axioms (ACF1) to (ACF8) state that a field is a commutative ring where × is commutative. Axiom (ACF9) imposes that the theory does not have any trivial model. That is, it should have at least two elements. Axiom (ACF10) asserts the existence of the multiplicative inverse for non-zero elements. Axiom (ACF11) states that every polynomial equation in one variable of degree at least 1, with coefficients in the field, has a root. For instance, the field R of real numbers is not algebraically closed, because the polynomial equation x 2 + 1 = 0 has no root in R. The same argument shows that no subfield of R is algebraically closed. In particular, Q is not algebraically closed. On the other hand, C is an algebraically closed field. Example 3.21 The theory ACF has quantifier elimination (as we show in Proposition 4.5) and is not complete since, for instance, / ACF . 1+1∼ / ACF and ¬ 1 + 1 ∼ =0∈ =0∈ We now present a sufficient condition involving quantifier elimination to prove that a theory is complete.

3.5 Completeness via Quantifier Elimination

85

Proposition 3.22 Let  be a theory over . Assume that •  has quantifier elimination; • for each α ∈ A ∩ cL , either   α or   ¬ α. Then,  is complete. Proof Let ψ be a sentence. We must show that either   ψ or   (¬ ψ). Consider two cases: (1) ψ is a quantifier-free formula. The proof is by induction on ψ. The base holds by hypothesis. For the step let ψ be δ1 ⊃ δ2 where δ1 , δ2 are closed quantifier-free formulas. Then, by the induction hypothesis, either   δi or   ¬ δi for i = 1, 2. Consider two cases: (a)   δ2 or   ¬ δ1 . Then   δ1 ⊃ δ2 . (b)   δ1 and   ¬ δ2 . Then   ¬(δ1 ⊃ δ2 ). (2) ψ is not a quantifier-free formula. Since  has quantifier elimination, there is ψ ∗ such that ψ ∗ is a quantifier-free formula, fv (ψ) = fv (ψ ∗ ) = ∅ and   ψ ≡ ψ ∗ . Hence, using (1) either   ψ ∗ or   ¬ ψ ∗ . Thus, either   ψ or   ¬ ψ using tautological reasoning.  The following result is a direct consequence of Proposition 3.22. Proposition 3.23 Let  be a theory over . Assume that  has quantifier elimination and F0 = ∅. Then,  is complete. Toward another sufficient condition for a theory to be complete, we need two notions: model completeness and initial model. Definition 3.24 A theory  is model complete if every embedding between models of  is elementary (see Definition 1.111). Exercise 3.25 Let  be a model complete theory and I1 and I2 models of . Show that I1 ⊆ I2 if and only if I1  I2 . Definition 3.26 An interpretation structure I is an initial model of a theory  whenever I ∈ Mod() and for every I ∈ Mod() there is an embedding h : I → I . Example 3.27 Recall the theory S introduced in Example 1.155 and the interpretation structure IN defined in Example 1.52. We show that IN |S is an initial model of S . Let I ∈ Mod(S ). Consider the map h : IN |S → I such that • h(0) = 0 I ; • h(k + 1) = S I (h(k)) for every k ∈ N.

86

3 Decidability Results on Theories

We prove that h is an embedding: (1) h is injective. Let k1 , k2 ∈ N. The proof follows by induction on k1 . Assume that h(k1 ) = h(k2 ). (Base) k1 = 0. Then h(k1 ) = 0 I . So h(k2 ) = 0 I . Since I satisfies (S1) then k2 should also be 0; (Step) k1 = 0. Hence k2 = 0, since I satisfies (S1). So h(k1 ) = S I (h(k1 − 1)) and h(k2 ) = S I (h(k2 − 1)). Thus, S I (h(k1 − 1)) = S I (h(k2 − 1)). Since I satisfies (S2) then h(k1 − 1) = h(k2 − 1). Then, by the induction hypothesis, k1 − 1 = k2 − 1 and so k1 = k2 . (2) h is a homomorphism. Indeed, • h(0 IN |S ) = h(0) = 0 I ; • h(S IN |S (k)) = h(k + 1) = S I (h(k)) for every k ∈ N. Proposition 3.28 A theory  is complete whenever  is model complete and has an initial model. Proof Let I1 and I2 be models of  and I an initial model of . Let h 1 : I → I1 and h 2 : I → I2 be embeddings. Since  is model complete, then h 1 and h 2 are elementary. Hence, by Exercise 1.130, I ≡e I1 and I ≡e I2 . Thus, by symmetry and transitivity (see Exercise 1.128), I1 ≡e I2 and, so by Proposition 3.19,  is complete.  Another way to prove completeness involves quantifier elimination since quantifier elimination implies model completeness. Proposition 3.29 A theory  is model complete whenever  has quantifier elimination. Proof Assume that theory  over  has quantifier elimination. Let h : I1 → I2 be an embedding where I1 , I2 ∈ Mod(). Let ϕ ∈ L  and ρ an assignment over I1 . Then, there is a quantifier-free formula ϕ ∗ such that fv (ϕ ∗ ) = fv (ϕ) and (†)

  ϕ ≡ ϕ ∗ .

Then, for every assignment ρ over I1 I1 ρ  ϕ iff I1 ρ  ϕ ∗ (†) ∗ iff I2 h ◦ ρ  ϕ (Proposition 1.105) iff I2 h ◦ ρ  ϕ (†). Therefore, h is elementary.



The following result is an immediate consequence of Propositions 3.29 and 3.28. Proposition 3.30 A theory  is complete whenever  has quantifier elimination and an initial model. We illustrate this result in Chap. 4.

3.6 Completeness via Categoricity

87

3.6 Completeness via Categoricity Another way of proving the completeness of a theory is via categoricity. Toward this end, we start by introducing the notion of finitely satisfiable set of formulas. Definition 3.31 A set of formulas over signature  is finitely satisfiable if every finite set ⊆ has a model. Exercise 3.32 Show that  ⊥ for every finitely satisfiable set of formulas . Exercise 3.33 Prove the Compactness Theorem for sets of formulas: If is a finitely satisfiable set of formulas then is satisfiable. Show also that is a finitely satisfiable set of formulas if and only if it is consistent. Proposition 3.34 Let { k }k∈K be a chain of finitely satisfiable sets of formulas over the same signature. Then

k k∈K

is a finitely satisfiable set of formulas. Proof Let = k∈K k . Let {ϕ1 , . . . , ϕn } ⊆ be a finite set. Then, there are k1 , . . . kn ∈ K such that ϕi ∈ ki for every i = 1, . . . , n. Since { k }k∈K is a chain, there is k ∈ K such that ki ⊆ k for i = 1, . . . , n and so {ϕ1 , . . . , ϕn } ⊆ k . Since k is finitely satisfiable, then there is a model of {ϕ1 , . . . , ϕn }.



We now introduce the concept of extension so that we can state that any finitely satisfiable set of formulas can be extended to a complete set of formulas. Definition 3.35 A set of formulas is an extension of a set of formulas whenever they are both over the same signature and ⊆ . Exercise 3.36 Show that if a set of formulas has no finite models then no extension of has. Proposition 3.37 Each finitely satisfiable set of formulas has a complete and finitely satisfiable extension. Proof Let be a finitely satisfiable set of formulas over signature  and (E( ), ⊆)

88

3 Decidability Results on Theories

the partial order where E( ) is the class of all finitely satisfiable extensions of . We show that E( ) has a maximal element, using Zorn’s Lemma (see [15]). (1) Every chain over E( ) has an upper bound in E( ). Consider the set of formulas K =



k

k∈K

where { k }k∈K is a chain in E( ). Hence, by Proposition 3.34, K is finitely satisfiable. (2) Maximal element in E( ). Taking into account (1), using Zorn’s Lemma, we can conclude that there is a maximal element with respect to inclusion in E( ). Denote by such an element. (3) Either ∪ {ϕ} or ∪ {¬ ϕ} is finitely satisfiable for every sentence ϕ. Assume, without loss of generality, that ∪ {ϕ} is not finitely satisfiable. Then there is a finite set ⊆ ∪ {ϕ} that is not satisfiable. Since is finitely satisfiable then ϕ ∈ and \ {ϕ} is satisfiable. Hence every model that satisfies \ {ϕ} does not satisfy ϕ. So \ {ϕ}  ¬ ϕ. It remains to prove that ∪ {¬ ϕ} is finitely satisfiable. Take a finite set ⊆ . Then ( \ {ϕ}) ∪ is satisfiable and so ( \ {ϕ}) ∪  ¬ ϕ. Therefore, ∪ {¬ ϕ} is satisfiable. The proof that a finite subset of is satisfiable follows by hypothesis. (4) is complete. Let ϕ be a sentence. By (3), either

∪ {ϕ} ∈ E( )

or

∪ {¬ ϕ} ∈ E( ).

Since is maximal in E( ) then either ϕ ∈ or ¬ ϕ ∈ . Thus, is complete. We need to define a symbolic interpretation structure induced by a set of formulas with the witness property. Definition 3.38 Let  be a signature. A set of formulas over  has the (existential) witness property if for every formula ϕ with fv (ϕ) = {x} there is cϕ ∈ F0 such that  (∃xϕ) ⊃ [ϕ]cxϕ . Exercise 3.39 Show that any extension of a given set of formulas with the witness property also enjoys this property.

3.6 Completeness via Categoricity

89

The following exercise is crucial for defining a canonical structure in Proposition 3.41. Exercise 3.40 Given a finitely satisfiable set of formulas over a signature , show that if  p(c1 , . . . , cn ) then  ¬ p(c1 , . . . , cn ) for each c1 , . . . , cn ∈ F0 , p ∈ Pn and n ∈ N+ . Proposition 3.41 Let be a finitely satisfiable and complete set of formulas over  with equality and the witness property. Then the tuple I = (D, { f I } f ∈F , { p I } p∈P ) such that • D = F0 /∼ where ∼ ⊆ F02 such that c1 ∼ c2 whenever  c1 ∼ = c2 ; • for each n-ary function symbol f and x ∈ X f I ([c1 ], . . . , [cn ]) = [c f (c1 ,...,cn )∼ =x ]; • for each n-ary predicate symbol p 1 if  p(c1 , . . . , cn ) p I ([c1 ], . . . , [cn ]) = 0 if  ¬ p(c1 , . . . , cn ); is an interpretation structure for . Proof (1) f I is well defined. Indeed, assume that c j ∈ [c j ] for j = 1, . . . , n. Then,  c j ∼ = cj for j = 1, . . . , n. Moreover, by (E2 f ),  f (c1 , . . . , cn ) ∼ = f (c1 , . . . , cn ). On the other hand,  ∃x f (c1 , . . . , cn ) ∼ = x and  ∃x f (c1 , . . . , cn ) ∼ = x. Furthermore, by the witness property, ∼  f (c1 , . . . , cn ) ∼ = c f (c1 ,...,cn )∼ =x . =x and  f (c1 , . . . , cn ) = c f (c1 ,...,cn )∼ Therefore,

∼  c f (c1 ,...,cn )∼ =x . =x = c f (c1 ,...,cn )∼

90

3 Decidability Results on Theories

(2) p I is well defined. Similar to (1), we can prove p I ([c1 ], . . . , [cn ]) = p I ([c1 ], . . . , [cn ]) whenever c j ∈ [c j ] for j = 1, . . . , n. Moreover, p I is functional by Exercise 3.40 since is finitely satisfiable. Furthermore, it is always defined because is complete.  The following result states that we can internalize in a finitely satisfiable and complete set of formulas with the witness property the denotation in I of terms. Proposition 3.42 Let be a finitely satisfiable and complete set of formulas over  with equality and the witness property, t ∈ T and c ∈ F0 . Then I ρ ,...,xn ∼  [t]cx11,...,c = c if and only if [[t]] = [c] n

where ρ(x j ) = [c j ] for j = 1, . . . , n and var (t) = {x1 , . . . , xn }. Proof The proof follows by induction on t. (Base) There are two cases to consider: (a) t is c ∈ F0 . (→) Assume that  c ∼ = c. Observe that I ρ

[[c ]] = [cc ∼ =x ] and

 ∃x c ∼ = x.

Hence, by the witness property,  c ∼ = cc ∼ =x . Therefore,

 c ∼ = cc ∼ =x I ρ

and so [[c ]] = [c]. (←) This implication follows similarly. (b) t is x1 ∈ X . (→) Assume that  c1 ∼ = c. Observe that I ρ

[[x1 ]] = ρ(x1 ) = [c1 ] = [c]. (←) Assume that [[x1 ]]I ρ = [c]. Then, [c1 ] = ρ(x1 ) = [c]. So the thesis follows. (Step) Let t = f (t1 , . . . , tn ).

3.6 Completeness via Categoricity

(→) Assume that

91

,...,xn ∼  [ f (t1 , . . . , tn )]cx11,...,c = c, n

that is,

,...,xn ,...,xn ∼ , . . . , [tn ]cx11,...,c ) = c.  f ([t1 ]cx11,...,c n n

(†)

Observe that, for every j = 1, . . . , n, ,...,xn ∼  ∃x [t j ]cx11,...,c = x. n

Thus, (‡)

,...,xn ∼ ,...,xn  [t j ]cx11,...,c = c[t j ]cx11,...,c ∼ n n =x

because has the witness property. Then, by congruence, (†) and (‡), ∼ ,...,xn ,...,xn  f (c[t1 ]cx1,...,c ∼ , . . . , c[tn ]cx1 ,...,c ∼ ) = c. n =x n =x 1

Hence,

1

∼ ,...,xn ,...,xn  ∃x f (c[t1 ]cx1,...,c ∼ , . . . , c[tn ]cx1 ,...,c ∼ )= x n =x n =x 1

1

and so, since has the witness property, ∼ ,...,xn ,...,xn  f (c[t1 ]cx1,...,c ∼ , . . . , c[tn ]cx1 ,...,c ∼ ) = c f (c[t n =x n =x 1

1

x1 ,...,xn ,...,c[t ]x1 ,...,xn ∼ )∼ =x =x n c1 ,...,cn =x 1 ]c1 ,...,cn ∼

Therefore,  c f (c[t

x1 ,...,xn ,...,c[t ]x1 ,...,xn ∼ )∼ =x =x n c1 ,...,cn =x 1 ]c1 ,...,cn ∼

∼ = c.

On the other hand, using the induction hypothesis on (‡), we conclude that I ρ

,...,xn [[t j ]] = [c[t j ]cx1,...,c ∼ ]. n =x 1

So

[[t]]I ρ = f I ([[t1 ]]I ρ , . . . , [[tn ]]I ρ ) ,...,xn ,...,xn = f I ([c[t1 ]cx1,...,c ∼ ], . . . , [c[tn ]cx1 ,...,c ∼ ]) n =x n =x 1 1 = [c f (c[t ]x1 ,...,xn ∼ ,...,c[t ]x1 ,...,xn ∼ )∼ ] =x = [c].

1 c1 ,...,cn =x

n c1 ,...,cn =x

I ρ

(←) Suppose that [[t]] = [c]. Observe that ,...,xn ∼  ∃x [t]cx11,...,c = x. n ,...,xn Therefore, by the witness property, there is c[t]cx1,...,c ∼ such that n =x 1

.



,...,xn ,...,xn By (→), [[t]] = [c[t]cx1,...,c ∼ ] and so [c] = [c[t]cx1 ,...,c ∼ ]. Hence,  n =x n =x 1 1 ,...,xn ∼ [t]cx11,...,c c.  = n

In a similar way, we now show that we can internalize in a finitely satisfiable and complete set of formulas with equality and the witness property the contextual satisfaction of formulas in I . Proposition 3.43 Let be a finitely satisfiable and complete set of formulas over  with equality and the witness property and ϕ ∈ L  . Then ,...,xn I ρ  ϕ if and only if  [ϕ]cx11,...,c n

where ρ(x j ) = [c j ] for j = 1, . . . , n and fv (ϕ) = {x1 , . . . , xn }. Proof The proof is by induction on ϕ. (Base) There are two cases to consider. When ϕ is ⊥ then the thesis follows immediately, by Exercise 3.32, since is finitely satisfiable. Otherwise, let ϕ be p(t1 , . . . , tn ). Taking into account that has the witness property denote by ,...,xn c[t j ]cx1,...,c ∼ n =x 1

the constant symbol such that ,...,xn ∼ ,...,xn  [t j ]cx11,...,c = c[t j ]cx11,...,c ∼ n n =x

(†) for each j = 1, . . . , n. Then I ρ

I ρ

I ρ  ϕ iff p I ([[t1 ]] , . . . , [[tn ]] ) = 1 ,...,xn ,...,xn iff p I ([c[t1 ]cx1,...,c ∼ ], . . . , [c[tn ]cx1 ,...,c ∼ ]) = 1 by (†), Proposition 3.42 n =x n =x 1

1

,...,xn ,...,xn iff  p(c[t1 ]cx1,...,c ∼ , . . . , c[tn ]cx1 ,...,c ∼ ) n =x n =x 1

1

,...,xn ,...,xn iff  p([t1 ]cx11,...,c , . . . , [tn ]cx11,...,c ) n n

iff

(E3 p ) on (†)

,...,xn  [ p(t1 , . . . , tn )]cx11,...,c . n

(Step) We just consider the case where ϕ is ∃x ψ. (→) Assume that I ρ  ∃x ψ where fv (ψ) = {x1 , . . . , xn , x}. Then there is σ ≡x ρ such that I σ  ψ. Hence, by the induction hypothesis, ,...,xn ,x  [ψ]cx11,...,c n ,c

assuming that σ (x) = [c]. On the other hand, ,...,xn x ,...,xn  ∃x [ψ]cx11,...,c [ψ]cx11,...,c nc n


and so

93

,...,xn .  ∃x [ψ]cx11,...,c n

The case where fv (ψ) = {x1 , . . . , xn } follows similarly. ,...,xn (←) Assume that  [∃xψ]cx11,...,c . Then, since has the witness property n ,...,xn x  [ψ]cx11,...,c nc

x ,...,xn [ψ]c 1 ,...,cn 1

.

,...,xn ], Hence, I σ  ψ, where σ (xi ) = [ci ] for i = 1, . . . , n and σ (x) = [c[ψ]cx1,...,c n 1  using the induction hypothesis. Therefore, I ρ  ϕ.

Now we put in good use the results above for showing that a finitely satisfiable and complete set of formulas with the witness property has a model with cardinality less or equal to the cardinality of the set of constant symbols. Proposition 3.44 Let be a finitely satisfiable and complete set of formulas over  with equality and the witness property. Then, there is a model of with cardinality at most |F0 |. Proof Observe that (†)

I  ϕ

iff

 ϕ

for every sentence ϕ by Proposition 3.43. We now prove that I is a model for . Let ψ ∈ . Then  ψ and so  ∀ ψ, by Exercises 1.76 and 1.78. Thus, I  ∀ ψ by (†) and so I  ψ. Finally, observe that the domain of I , that is,  F0 /∼ is such that |F0 /∼| ≤ |F0 |. Note 3.45 Given a signature , by

||

we denote |F ∪ P|. We now concentrate on showing that it is always possible to extend a finitely satisfiable set of formulas to a set of formulas with the witness property by appropriately enriching it with at most a denumerable set of symbols. Proposition 3.46 Let be a finitely satisfiable set of formulas over a signature . Then there are a signature  ∗ ⊇  and a finitely satisfiable set of formulas ∗ ⊇ over  ∗ such that any extension of ∗ has the witness property. Moreover, | ∗ | = || + ℵ0 . Proof Consider the following families { j } j∈N and { j } j∈N of signatures and sets of formulas, respectively, inductively defined as follows: • 0 = ;



•  j+1 is the enrichment of  j with the set {cϕ : ϕ ∈ L  j , |fv j (ϕ)| = 1} of new constant symbols; • 0 = ; • j+1 = j ∪ {(∃xϕ) ⊃ [ϕ]cxϕ : ϕ ∈ L  j , fv j (ϕ) = {x}}. We show, by induction, that j is finitely satisfiable for every j. The base case follows immediately by hypothesis. For the step, assume that j is finitely satisfiable. Let j+1 ⊆ j+1 be a finite set. Observe that j+1 = j ∪ {(∃xϕ1 ) ⊃ [ϕ1 ]cxϕ , . . . , (∃xϕn ) ⊃ [ϕn ]cxϕn } 1

where j is the finite set j+1 ∩ j and ϕi ∈ L  j such that fv j (ϕi ) = {x} for i = 1, . . . , n. By the induction hypothesis, there is a model I j of j . Consider an interpretation structure I j+1 such that I j+1 | j = I j and (cϕi ) I j+1 ∈ {ρ(x) : I j ρ  j ϕi } whenever {ρ(x) : I j ρ  j ϕi } = ∅. Otherwise the interpretation of cϕi is any element in D j . Hence I j+1  j+1 (∃xϕi ) ⊃ [ϕi ]cxϕ

i

for i = 1, . . . , n and so I j+1 is a model for j+1 (taking also into account Exercise 1.136). Let



∗ =  j and ∗ = j. j∈N

j∈N

Observe that ∗ is a finitely satisfiable set of formulas over  ∗ , by Proposition 3.34. Moreover, it is straightforward to see that ∗ enjoys the witness property. Indeed, let ϕ ∈ L  ∗ be such that fv ∗ (ϕi ) = {x}. Then, ϕ ∈ L  j for some j ∈ N. Then, by definition, (∃xϕ) ⊃ [ϕ]cxϕ ∈ j+1 . Therefore, (∃xϕ) ⊃ [ϕ]cxϕ ∈ ∗ . Thus, by Exercise 3.39, any extension of ∗ over the same signature has the witness property. It remains to show that | ∗ | = || + ℵ0 . Note that |L  j | = | j | + ℵ0 for each j. We show, by induction, that | j | = || + ℵ0 for every j ≥ 1.



(Base) Note that |1 | = || + |{cϕ : ϕ ∈ L  , |fv (ϕ)| = 1}|. Since |{cϕ : ϕ ∈ L  , |fv (ϕ)| = 1}| = |L  | = || + ℵ0 , then |1 | = 2|| + ℵ0 = || + ℵ0 . (Step) Observe that | j+1 | = | j | + |{cϕ : ϕ ∈ L  j , |fv j (ϕ)| = 1}|. Since |{cϕ : ϕ ∈ L  j , |fv j (ϕ)| = 1}| = |L  j | = | j | + ℵ0 , then | j+1 | = 2| j | +  ℵ0 . By the induction hypothesis, | j+1 | = || + ℵ0 . The following result is a strong version of the well-known Compactness Theorem imposing some cardinality requirements on models of a set of formulas. Proposition 3.47 Let be a finitely satisfiable set of formulas over  with equality. Then there is a model of of cardinality at most |L  |. Proof Let  ∗ and ∗ be as in Proposition 3.46. By Proposition 3.37, there is a complete and finitely satisfiable set of formulas ∗ over  ∗ extending ∗ . Moreover, ∗ enjoys the witness property by Proposition 3.46. Note that |L  | = || + ℵ0 = | ∗ | ≥ |F0∗ |. So, the thesis follows by Proposition 3.44 and because ⊆ ∗ .  We now show that any set of formulas with infinite models has a model of any infinite cardinality. This result together with Proposition 3.47 provides the essential ingredients for proving Vaught’s Test. Proposition 3.48 Let be a set of formulas with equality over a signature  with infinite models and κ a cardinal such that κ ≥ |L  |. Then there is a model of with cardinality κ. Proof Let  be an enrichment of  with F0 = F0 ∪ {cα : α < κ} where each cα is a new constant symbol. Consider the set of formulas = ∪ {¬ cα ∼ = cβ : α, β < κ, α = β}. It is immediate to see that every model of has cardinality at least κ. We prove that

is finitely satisfiable. Let ⊆ be a finite set and I an infinite model of . Denote by C the (finite) set of new constant symbols in and consider an interpretation structure I over  such that I | = I and assigning distinct individuals to the elements of C. Then I  and so is finitely satisfiable. Using Proposition 3.47, has a model of cardinality at most |L  |. Then the set ∀ = {∀ ψ : ψ ∈ }



has a model of cardinality at most |L  |, by Exercise 1.76. Hence ∀ has a model of cardinality κ (see Proposition 1.91). So, once again by Exercise 1.76, has a model of cardinality κ.  We are ready to introduce the important concept of categorical set of formulas. Definition 3.49 Let be a set of formulas and κ a cardinal. Then is κ-categorical if has a model of cardinality κ and any two models of of cardinality κ are isomorphic. Definition 3.50 Let be a set of formulas over signature . Then is categorical if is κ-categorical for some cardinal κ such that κ ≥ |L  |. We need an auxiliary result before discussing the important Ło´s-Vaught Theorem. Proposition 3.51 Let  be a signature with equality and I an interpretation structure for  such that ∼ I is finite if and only if for every interpretation = I is =. Then



structure I over  such that ∼ = I is = if I ≡e I then I and I are isomorphic. Proof (→) Assume that I is a finite interpretation structure. Then, by Exercise 1.143, I and I are isomorphic. (←) Assume that I is infinite. Then by the upward Löwenheim-Skolem Theorem (see [14]) the theory Th(I ) has models of any arbitrary large cardinality. Observe that, by Exercise 3.16, Th(I ) is complete. Hence, by Proposition 3.19, all those models are elementarily equivalent. However, not all of them are isomorphic to I .  Proposition 3.52 (Ło´s-Vaught Theorem) Let  be a categorical theory over the signature . Then  is complete if and only if  has no finite models. Proof (←) Since  is categorical, it is κ-categorical for some infinite κ ≥ |L  |. The proof follows by contradiction. Assume that  is not a complete theory. Then, there is a sentence ϕ such that   ϕ and   ¬ ϕ. Moreover, 0 =  ∪ {ϕ} and 1 =  ∪ {¬ ϕ} are satisfiable (see Exercise 1.87). Since  does not have finite models then 0 and 1 only have infinite models (see Exercise 3.36). Using Proposition 3.48, there are models I0 and I1 of cardinality κ of 0 and 1 , respectively. Hence, I0 and I1 do not satisfy the same sentences and so, by Exercise 1.113, are not isomorphic which contradicts the assumption that  is κ-categorical. (→) Since  is categorical then it is κ-categorical for some κ ≥ |L  |. Therefore,  has an infinite model I because κ is an infinite cardinal. On the other hand, by Proposition 3.19, all models of  are elementarily equivalent. Suppose, by contradiction that  has a finite model. Thus, by Proposition 3.51, this model is isomorphic to the infinite model I which is a contradiction. 



We now illustrate the fact that Ło´s-Vaught Theorem can be used to conclude that a theory is not complete. Example 3.53 The theory of equality over the signature with equality, no function symbols and no predicate symbols except ∼ = generated by the formulas in Definition 1.34 has a finite model and is categorical. Hence, by Proposition 3.52, is not complete. Definition 3.54 Let DLO be the signature with equality, no function symbols and the binary predicate symbol